venue: stringclasses (2 values)
paper_content: stringlengths (7.54k to 83.7k)
prompt: stringlengths (161 to 2.5k)
format: stringclasses (5 values)
review: stringlengths (293 to 9.84k)
NIPS
Title: FrugalML: How to use ML Prediction APIs more accurately and cheaply

Abstract
Prediction APIs offered for a fee are a fast-growing industry and an important part of machine learning as a service. While many such services are available, the heterogeneity in their price and performance makes it challenging for users to decide which API or combination of APIs to use for their own data and budget. We take a first step towards addressing this challenge by proposing FrugalML, a principled framework that jointly learns the strength and weakness of each API on different data, and performs an efficient optimization to automatically identify the best sequential strategy to adaptively use the available APIs within a budget constraint. Our theoretical analysis shows that natural sparsity in the formulation can be leveraged to make FrugalML efficient. We conduct systematic experiments using ML APIs from Google, Microsoft, Amazon, IBM, Baidu and other providers for tasks including facial emotion recognition, sentiment analysis and speech recognition. Across various tasks, FrugalML can achieve up to 90% cost reduction while matching the accuracy of the best single API, or up to 5% better accuracy while matching the best API's cost.

1 Introduction

Machine learning as a service (MLaaS) is a rapidly growing industry. For example, one could use the Google prediction API [9] to classify an image for $0.0015 or to classify the sentiment of a text passage for $0.00025. MLaaS services are appealing because using such APIs reduces the need to develop one's own ML models. The MLaaS market size was estimated at $1 billion in 2019, and it is expected to grow to $8.4 billion by 2025 [1].

Third-party ML APIs come with their own challenges, however. A major challenge is that different companies charge quite different amounts for similar tasks. For example, for image classification, Face++ charges $0.0005 per image [6], which is 67% cheaper than Google [9], while Microsoft charges $0.0010 [11]. Moreover, the prediction APIs of different providers perform better or worse on different types of inputs. For example, accuracy disparities in gender classification were observed for different skin colors [23, 37]. As we will show later in the paper, these APIs' performance also varies by class—for example, we found that on the FER+ dataset, the Face++ API had the best accuracy on surprise images while the Microsoft API had the best performance on neutral images. The more expensive APIs are not uniformly better, and APIs tend to have specific classes of inputs where they perform better than alternatives. This heterogeneity in price and in performance makes it challenging for users to decide which API or combination of APIs to use for their own data and budget.

In this paper, we propose FrugalML, a principled framework to address this challenge. FrugalML jointly learns the strength and weakness of each API on different data, then performs an efficient optimization to automatically identify the best adaptive strategy to use all the available APIs given the user's budget constraint. FrugalML leverages the modular nature of APIs by designing adaptive strategies that can call APIs sequentially. For example, we might first send an input to API A. If A returns the label "dog" with high confidence—and we know A tends to be accurate for dogs—then we stop and report "dog".
But if A returns "hare" with lower confidence, and we have learned that A is less accurate for "hare," then we might adaptively select a second API B to make an additional assessment. FrugalML optimizes such adaptive strategies to substantially improve prediction performance over simpler approaches such as model cascades with a fixed quality threshold (Figure 1). Through experiments with real commercial ML APIs on diverse tasks, we observe that FrugalML typically reduces costs by more than 50% and sometimes by up to 90%.

Adaptive strategies are challenging to learn and optimize, because the choice of the second predictor, if one is chosen, could depend on the prediction and confidence of the first API, and because FrugalML may need to allocate different fractions of its budget to predictions for different classes. We prove that under quite general conditions, there is natural sparsity in this problem that we can leverage to make FrugalML efficient.

Contributions. To sum up, our contributions are:
1. We formulate and study the problem of learning to optimally use commercial ML APIs given a budget. This is a growing area of importance and is under-explored.
2. We propose FrugalML, a framework that jointly learns the strength and weakness of each API, and performs an optimization to identify the best strategy for using those APIs within a budget constraint. By leveraging natural sparsity in this optimization problem, we design an efficient algorithm to solve it with provable guarantees.
3. We evaluate FrugalML using real-world APIs from diverse providers (e.g., Google, Microsoft, Amazon, and Baidu) for classification tasks including facial emotion recognition, text sentiment analysis, and speech recognition. We find that FrugalML can match the accuracy of the best individual API with up to 90% lower cost, or significantly improve on this accuracy, by up to 5%, at the same cost.
4. We release our code and our dataset of 612,139 samples annotated by commercial APIs (https://github.com/lchen001/FrugalML) as a resource to aid future research in this area.

Related Work.

MLaaS: With the growing importance of MLaaS APIs [2, 3, 6, 9, 10, 11], existing research has largely focused on individual APIs, covering performance [57], pricing [26], robustness [31], and applications [23, 32, 44]. FrugalML, on the other hand, aims at finding strategies to select from or combine multiple APIs to reduce costs and increase accuracy.

Ensemble methods: A natural approach to exploiting multiple predictors is ensemble methods [25, 29, 45]. While most ensemble methods, such as stacking [53] and bagging [22], require predictions from all predictors and thus incur a high cost, mixture of experts [35, 34, 58] uses gate functions to select one expert (predictor) per data point and is less expensive. Substantial research has focused on developing gate function models, such as SVMs [27, 56], Gaussian processes [28, 55], and neural networks [47, 46]. However, applying mixture of experts to MLaaS would result in a fixed cost and would not allow users to specify a budget as in FrugalML. As we will show later, FrugalML with a budget constraint can sometimes even outperform mixture-of-experts algorithms while using a smaller budget.

Model Cascades: Cascades consisting of a sequence of models are useful to balance the quality and runtime of inference [49, 50, 24, 36, 48, 51, 54, 38].
While model cascades use the predicted quality score alone to avoid calling computationally expensive models, FrugalML's strategies can utilize both the quality score and the predicted class to select a downstream expensive add-on service. Designing such strategies requires solving a significantly harder optimization problem, e.g., choosing how to divide the available budget between classes (§3), but also improves performance substantially over using the quality score alone (§4).

2 Preliminaries

Notation. In our exposition, we denote matrices and vectors in bold, and scalars, sets, and functions in standard script. We let $\mathbf{1}_m$ denote the $m \times 1$ all-ones vector, while $\mathbf{1}_{n \times m}$ denotes the all-ones $n \times m$ matrix. We define $\mathbf{0}_m, \mathbf{0}_{n \times m}$ analogously. The subscripts are omitted when clear from context. Given a matrix $\mathbf{A} \in \mathbb{R}^{n \times m}$, we let $A_{i,j}$ denote its entry at location $(i,j)$, $\mathbf{A}_{i,\cdot} \in \mathbb{R}^{1 \times m}$ denote its $i$th row, and $\mathbf{A}_{\cdot,j} \in \mathbb{R}^{n \times 1}$ denote its $j$th column. Let $[n]$ denote $\{1, 2, \cdots, n\}$. Let $\mathbb{1}$ represent the indicator function.

ML Tasks. Throughout this paper, we focus on (multiclass) classification tasks, where the goal is to classify a data point $x$ from a distribution $\mathcal{D}$ into $L$ label classes. Many real-world ML APIs aim at such tasks, including facial emotion recognition, where $x$ is a face image and the label classes are emotions (happy, sad, etc.), and text sentiment analysis, where $x$ is a text passage and the label classes are attitude sentiments (either positive or negative).

MLaaS Market. Consider an MLaaS market consisting of $K$ different ML services which aim at the same classification task. Taking a data point $x$ as input, the $k$th service returns to the user a predicted label $y_k(x) \in [L]$ and its quality score $q_k(x) \in [0,1]$, where a larger score indicates higher confidence in its prediction. This is typical for many popular APIs. There is also a unit cost associated with each service. Let the vector $\mathbf{c} \in \mathbb{R}^K$ denote the unit cost of all services. Then $c_k = 0.005$ simply means that users need to pay 0.005 every time they call the $k$th service. We use $y(x)$ to denote $x$'s true label, and let $r_k(x) \triangleq \mathbb{1}_{y_k(x)=y(x)}$ be the reward of using the $k$th service on $x$.

3 FrugalML: a Frugal Approach to Adaptively Leverage ML Services

In this section, we present FrugalML, a formal framework for API calling strategies to obtain accurate and cheap predictions from an MLaaS market. All proofs are left to the supplemental materials. We generalize the scheme in Figure 1 (c) to $K$ ML services and $L$ label classes. Let a tuple $s \triangleq (\mathbf{p}^{[1]}, \mathbf{Q}, \mathbf{P}^{[2]})$ represent a calling strategy produced by FrugalML. Given an input data point $x$, FrugalML first calls a base service, denoted by $A_s^{[1]}$, which with probability $p_i^{[1]}$ is the $i$th service and returns quality score $q_i(x)$ and label $y_i(x)$. Let $D_s$ be the indicator of whether the quality score is smaller than the threshold value $Q_{i, y_i(x)}$. If $D_s = 1$, then FrugalML invokes an add-on service, denoted by $A_s^{[2]}$, which with probability $P^{[2]}_{i, y_i(x), j}$ is the $j$th service and produces $y_j(x)$ as the predicted label $\hat{y}_s(x)$. Otherwise, FrugalML simply returns the label $\hat{y}_s(x) = y_i(x)$ from the base service. This process is summarized in Figure 2. Note that the strategy is adaptive: the choice of the add-on API can depend on the predicted label and quality score of the base model. The set of possible strategies can be parametrized as
$$S \triangleq \left\{(\mathbf{p}^{[1]}, \mathbf{Q}, \mathbf{P}^{[2]}) \mid \mathbf{p}^{[1]} \in \mathbb{R}^K, \mathbf{p}^{[1]} \succeq \mathbf{0}, \mathbf{1}^T\mathbf{p}^{[1]} = 1, \mathbf{Q} \in \mathbb{R}^{K \times L}, \mathbf{0} \preceq \mathbf{Q} \preceq \mathbf{1}, \mathbf{P}^{[2]} \in \mathbb{R}^{K \times L \times K}, \mathbf{P}^{[2]} \succeq \mathbf{0}, \mathbf{1}^T\mathbf{P}^{[2]}_{k,\ell,\cdot} = 1\right\}.$$
Our goal is to choose the optimal strategy $s^*$ that maximizes the expected accuracy while satisfying the user's budget constraint $b$.
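For concreteness, the prediction-time behavior of a strategy tuple $s = (\mathbf{p}^{[1]}, \mathbf{Q}, \mathbf{P}^{[2]})$ can be sketched in a few lines of Python. This is an illustrative sketch only, not the released implementation: the helper `call_service(k, x)`, which queries the $k$th API and returns its predicted label and quality score, is a hypothetical placeholder, and labels are assumed to be 0-indexed.

```python
import numpy as np

def frugalml_predict(x, p1, Q, P2, call_service, rng=np.random.default_rng()):
    """Run one FrugalML strategy s = (p1, Q, P2) on input x.

    p1 : (K,) probabilities of picking each service as the base service.
    Q  : (K, L) per-(service, label) quality-score thresholds.
    P2 : (K, L, K) add-on selection probabilities given the base service and its label.
    call_service(k, x) -> (label, quality_score), a hypothetical API wrapper.
    """
    K = len(p1)
    i = rng.choice(K, p=p1)                # sample the base service
    label_i, q_i = rng_safe = call_service(i, x)  # one paid call to service i
    if q_i >= Q[i, label_i]:               # confident enough: stop (D_s = 0)
        return label_i, [i]
    j = rng.choice(K, p=P2[i, label_i])    # otherwise sample the add-on service (D_s = 1)
    label_j, _ = call_service(j, x)        # second paid call to service j
    return label_j, [i, j]                 # report the add-on service's label
```

The incurred cost on $x$ is $c_i$ when the base service is confident enough and $c_i + c_j$ otherwise; it is this quantity whose expectation is bounded by the budget constraint stated next.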
This optimization problem is formally stated below.

Definition 1. Given a user budget $b$, the optimal FrugalML strategy $s^* = (\mathbf{p}^{[1]*}, \mathbf{Q}^*, \mathbf{P}^{[2]*})$ is
$$s^* \triangleq \operatorname*{argmax}_{s \in S} \ \mathbb{E}[r_s(x)] \quad \text{s.t.} \quad \mathbb{E}[\eta_s(x, \mathbf{c})] \le b, \tag{3.1}$$
where $r_s(x) \triangleq \mathbb{1}_{\hat{y}_s(x)=y(x)}$ is the reward and $\eta_s(x, \mathbf{c})$ the total cost of strategy $s$ on $x$.

Remark 1. The above definition can be generalized to wider settings. For example, instead of the 0-1 loss, the reward can be the negative squared loss to handle regression tasks. We pick the concrete form for demonstration purposes.

The cost of strategy $s$, $\eta_s(x, \mathbf{c})$, is the sum of the costs of all services called on $x$. For example, if services 1 and 2 are called for predicting $x$, then $\eta_s(x, \mathbf{c})$ becomes $c_1 + c_2$. Given the above formulation, a natural question is how to solve it efficiently. In the following, we first highlight an interesting property of the optimal strategy, sparsity, which inspires the design of the efficient solver, and then present the algorithm for the solver.

3.1 Sparsity Structure in the Optimal Strategy

We show that if problem 3.1 is feasible and has a unique optimal solution, then we must have $\|\mathbf{p}^{[1]*}\|_0 \le 2$. In other words, the optimal strategy should choose the base service from at most two services (instead of $K$) in the MLaaS market. This is formally stated in Lemma 1.

Lemma 1. If problem 3.1 is feasible, then there exists an optimal solution $s^* = (\mathbf{p}^{[1]*}, \mathbf{Q}^*, \mathbf{P}^{[2]*})$ such that $\|\mathbf{p}^{[1]*}\|_0 \le 2$.

To see this, let us first expand $\mathbb{E}[r_s(x)]$ and $\mathbb{E}[\eta_s(x, \mathbf{c})]$ by the law of total expectation.

Lemma 2. The expected accuracy is
$$\mathbb{E}[r_s(x)] = \sum_{i=1}^{K} \Pr[A_s^{[1]} = i] \Pr[D_s = 0 \mid A_s^{[1]} = i]\, \mathbb{E}[r_i(x) \mid D_s = 0, A_s^{[1]} = i] + \sum_{i,j=1}^{K} \Pr[A_s^{[1]} = i] \Pr[D_s = 1 \mid A_s^{[1]} = i] \Pr[A_s^{[2]} = j \mid D_s = 1, A_s^{[1]} = i]\, \mathbb{E}[r_j(x) \mid D_s = 1, A_s^{[1]} = i].$$
The expected cost is
$$\mathbb{E}[\eta_s(x, \mathbf{c})] = \sum_{i=1}^{K} \Pr[A_s^{[1]} = i] \Pr[D_s = 0 \mid A_s^{[1]} = i]\, c_i + \sum_{i,j=1}^{K} \Pr[A_s^{[1]} = i] \Pr[D_s = 1 \mid A_s^{[1]} = i] \Pr[A_s^{[2]} = j \mid D_s = 1, A_s^{[1]} = i]\, (c_i + c_j).$$

Note that both $\mathbb{E}[r_s(x)]$ and $\mathbb{E}[\eta_s(x, \mathbf{c})]$ are linear in $\Pr[A_s^{[1]} = i]$, which by definition equals $p_i^{[1]}$. Thus, fixing $\mathbf{Q}$ and $\mathbf{P}^{[2]}$, problem 3.1 becomes a linear program in $\mathbf{p}^{[1]}$. Intuitively, the corner points of its feasible region must be 2-sparse, since apart from $\mathbb{E}[\eta_s(x, \mathbf{c})] \le b$ and $\mathbf{1}^T\mathbf{p}^{[1]} \le 1$, all other constraints ($\mathbf{p}^{[1]} \succeq \mathbf{0}$) force sparsity. As the optimal solution of a linear program is attained at a corner point, $\mathbf{p}^{[1]*}$ must also be 2-sparse.

This sparsity structure helps reduce the computational complexity of solving problem 3.1. In fact, the sparsity structure implies that problem 3.1 is equivalent to a master problem
$$\max_{(i_1, i_2, p_1, p_2, b_1, b_2) \in \mathcal{C}} \ p_1 g_{i_1}(b_1/p_1) + p_2 g_{i_2}(b_2/p_2) \quad \text{s.t.} \quad b_1 + b_2 \le b, \tag{3.2}$$
where $\mathcal{C} = \{(i_1, i_2, p_1, p_2, b_1, b_2) \mid i_1, i_2 \in [K], p_1, p_2 \ge 0, p_1 + p_2 = 1, b_1, b_2 \ge 0\}$, and $g_i(b')$ is the optimal value of the subproblem
$$\max_{\mathbf{Q}, \mathbf{P}^{[2]}: \ s = (\mathbf{e}_i, \mathbf{Q}, \mathbf{P}^{[2]}) \in S} \ \mathbb{E}[r_s(x)] \quad \text{s.t.} \quad \mathbb{E}[\eta_s(x, \mathbf{c})] \le b'. \tag{3.3}$$
Here, the master problem decides which two services $(i_1, i_2)$ can serve as the base service, how often $(p_1, p_2)$ they should be invoked, and how large a budget $(b_1, b_2)$ each is assigned, while for a fixed base service $i$ and budget $b'$, the subproblem maximizes the expected reward.
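The linear-programming view behind this decomposition can be made concrete. With $\mathbf{Q}$ and $\mathbf{P}^{[2]}$ fixed, the objective and the cost of problem 3.1 are linear in $\mathbf{p}^{[1]}$, so the optimal base-service mixture can be found with an off-the-shelf LP solver and is attained at a vertex with at most two nonzero entries. The sketch below assumes the per-base-service expected accuracies `acc` and expected costs `cost` (the conditional expectations of Lemma 2, with $\mathbf{Q}$ and $\mathbf{P}^{[2]}$ held fixed) have already been computed; the numbers in the usage example are made up.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_base_mix(acc, cost, budget):
    """Solve the inner LP over p1 with Q and P2 fixed.

    acc[i]  : expected accuracy of the strategy when service i is the base (assumed precomputed).
    cost[i] : expected cost of the strategy when service i is the base.
    Maximizes sum_i p1[i]*acc[i] s.t. sum_i p1[i]*cost[i] <= budget, sum_i p1[i] = 1, p1 >= 0.
    """
    K = len(acc)
    res = linprog(
        c=-np.asarray(acc),             # linprog minimizes, so negate the accuracy
        A_ub=[cost], b_ub=[budget],     # expected-cost (budget) constraint
        A_eq=[np.ones(K)], b_eq=[1.0],  # p1 lies on the probability simplex
        bounds=[(0, 1)] * K,
        method="highs",
    )
    return res.x                        # a vertex solution: at most two nonzero coordinates

# Made-up example with 3 candidate base services.
p1 = optimal_base_mix(acc=[0.70, 0.80, 0.90], cost=[1.0, 5.0, 10.0], budget=6.0)
print(np.round(p1, 3))
```

Running the example returns a mixture supported on just two of the three services, illustrating the 2-sparsity exploited by Lemma 1.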
3.2 A Practical Algorithm

Now we are ready to give the sparsity-inspired algorithm for generating an approximately optimal strategy $\hat{s}$, summarized in Algorithm 1.

Algorithm 1: FrugalML Strategy Training
Input: $K, M, \mathbf{c}, b, \{y(x_i), \{q_k(x_i), y_k(x_i)\}_{k=1}^{K}\}_{i=1}^{N}$
Output: FrugalML strategy tuple $\hat{s} = (\hat{\mathbf{p}}^{[1]}, \hat{\mathbf{Q}}, \hat{\mathbf{P}}^{[2]})$
1: Estimate $\mathbb{E}[r_i(x) \mid D_s, A_s^{[1]}]$ from the training data $\{y(x_i), \{q_k(x_i), y_k(x_i)\}_{k=1}^{K}\}_{i=1}^{N}$.
2: For $i \in [K]$ and $b'_m \in \{0, \frac{\|2\mathbf{c}\|_\infty}{M}, \cdots, \|2\mathbf{c}\|_\infty\}$, solve problem 3.3 to find the optimal value $g_i(b'_m)$.
3: For $i \in [K]$, construct the function $g_i(\cdot)$ by linear interpolation on $b'_0, b'_1, \cdots, b'_M$.
4: Solve problem 3.2 to find the optimal solution $i_1^*, i_2^*, p_1^*, p_2^*, b_1^*, b_2^*$.
5: For $t \in [2]$, let $i = i_t^*$ and $b' = b_t^*/p_t^*$, and solve problem 3.3 to find the optimal solution $\mathbf{Q}_{[i_t^*]}, \mathbf{P}^{[2]}_{[i_t^*]}$.
6: Set $\hat{\mathbf{p}}^{[1]} = p_1^* \mathbf{e}_{i_1^*} + p_2^* \mathbf{e}_{i_2^*}$, $\hat{\mathbf{Q}} = \mathbf{Q}_{[i_1^*]} + \mathbf{Q}_{[i_2^*]}$, $\hat{\mathbf{P}}^{[2]} = \mathbf{P}^{[2]}_{[i_1^*]} + \mathbf{P}^{[2]}_{[i_2^*]}$.
7: Return $\hat{s} = (\hat{\mathbf{p}}^{[1]}, \hat{\mathbf{Q}}, \hat{\mathbf{P}}^{[2]})$.

Algorithm 1 consists of three main steps. First, the conditional accuracy $\mathbb{E}[r_i(x) \mid D_s, A_s^{[1]}]$ is estimated from the training data (line 1). Next (lines 2 to 4), we find the optimal solution $i_1^*, i_2^*, p_1^*, p_2^*, b_1^*, b_2^*$ to problem 3.2. To do so, we first evaluate $g_i(b')$ for $M+1$ different budget values (line 2), and then construct the functions $g_i(\cdot)$ via linear interpolation (line 3) while enforcing $g_i(b') = 0$ for all $b' \le c_i$. Given the (piecewise linear) $g_i(\cdot)$, problem 3.2 can be solved by enumerating a few linear programs (line 4). Finally, the algorithm seeks the optimal solution in the original domain of the strategy, by solving subproblem 3.3 with the base service being $i_1^*$ and $i_2^*$ separately (line 5), and then aligning those solutions appropriately (line 6). We leave the details of solving subproblem 3.3 to the supplemental materials due to space constraints. Theorem 3 provides the performance analysis of Algorithm 1.

Theorem 3. Suppose $\mathbb{E}[r_i(x) \mid D_s, A_s^{[1]}]$ is Lipschitz continuous with constant $\gamma$ w.r.t. each element in $\mathbf{Q}$. Given $N$ i.i.d. samples $\{y(x_i), \{(y_k(x_i), q_k(x_i))\}_{k=1}^{K}\}_{i=1}^{N}$, the computational cost of Algorithm 1 is $O(NMK^2 + K^3M^3L + MLK^2)$. With probability $1 - \delta$, the produced strategy $\hat{s}$ satisfies
$$\mathbb{E}[r_{\hat{s}}(x)] - \mathbb{E}[r_{s^*}(x)] \ge -O\left(\sqrt{\frac{\log(1/\delta) + \log M + \log K + \log L}{N}} + \frac{\gamma}{M}\right),$$
and $\mathbb{E}[\eta_{\hat{s}}(x, \mathbf{c})] \le b$.

As Theorem 3 suggests, the parameter $M$ balances the computational cost against the accuracy drop of $\hat{s}$. For practical cases where $K$ and $L$ (the number of classes) are around ten and $N$ is more than a few thousand, we have found $M = 10$ to be a good value for achieving good accuracy at small computational cost. Note that the coefficient of the $KL$ terms is small: in experiments, we observe that training takes only a few seconds for $L = 31, M = 40$. For datasets with a very large number of possible labels, we can always cluster those labels into a few "superclasses", or adopt approximation algorithms to reduce $O(ML)$ to $O(M^2)$ (see details in the supplemental materials). In addition, a slight modification of $\hat{s}$ can satisfy a strict budget constraint: if the budget allows, use $\hat{s}$ to pick APIs; otherwise, switch to the cheapest API.
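To make the training procedure concrete, the following is a simplified sketch of the outer loop of Algorithm 1 (lines 2 to 4): evaluate $g_i$ on a grid of budgets, interpolate, and search over the base-service pair and budget split. It assumes a user-supplied routine `solve_subproblem(i, b_prime)` that returns the optimal value $g_i(b')$ of problem 3.3, which is where most of the real work happens; and whereas Algorithm 1 solves the master problem by enumerating a few linear programs, this sketch uses a plain grid search for brevity.

```python
import itertools
import numpy as np

def train_frugalml_outer(K, costs, budget, solve_subproblem, M=10, grid=50):
    """Rough sketch of lines 2-4 of Algorithm 1.

    solve_subproblem(i, b_prime) -> optimal value g_i(b_prime) of problem (3.3)
    for base service i; supplied by the caller (hypothetical here).
    Returns the best (i1, i2, p1, p2, b1, b2) found by grid search over the
    budget split, using linear interpolation of g_i (lines 2-3).
    """
    b_max = 2 * np.max(costs)
    b_grid = np.linspace(0.0, b_max, M + 1)
    g_vals = np.array([[solve_subproblem(i, bp) for bp in b_grid] for i in range(K)])

    def g(i, bp):
        # Piecewise-linear g_i; set to 0 below the unit cost of service i.
        return 0.0 if bp < costs[i] else np.interp(bp, b_grid, g_vals[i])

    best, best_val = None, -np.inf
    for i1, i2 in itertools.product(range(K), repeat=2):
        for p1 in np.linspace(1e-3, 1.0, grid):        # how often i1 is the base service
            p2 = 1.0 - p1
            for b1 in np.linspace(0.0, budget, grid):  # budget assigned to the i1 branch
                b2 = budget - b1
                val = p1 * g(i1, b1 / p1) + (p2 * g(i2, b2 / p2) if p2 > 0 else 0.0)
                if val > best_val:
                    best_val, best = val, (i1, i2, p1, p2, b1, b2)
    return best, best_val
```

Given the selected pair and budget split, the remaining steps (lines 5 and 6 of Algorithm 1) re-solve problem 3.3 for each selected base service and assemble the final strategy tuple.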
4 Experiments

We compare the accuracy and incurred costs of FrugalML to those of real-world ML services for various tasks. Our goal is four-fold: (i) understanding when and why FrugalML can reduce cost without hurting accuracy, (ii) evaluating the cost savings achieved by FrugalML, (iii) investigating the trade-offs between accuracy and cost achieved by FrugalML, and (iv) measuring the effect of training data size on FrugalML's performance.

Tasks, ML services, and Datasets. We focus on three common ML tasks in different application domains: facial emotion recognition (FER) in computer vision, sentiment analysis (SA) in natural language processing, and speech to text (STT) in speech recognition. The ML services used for each task as well as their prices are summarized in Table 1. For each task we also found a small open-source model from GitHub, which is much less expensive to execute per data point than the commercial APIs. Table 2 lists the statistics for all the datasets used for the different tasks. More details can be found in the supplemental materials.

Facial Emotion Recognition: A Case Study. Let us start with facial emotion recognition on the FER+ dataset. We set the budget b = 5, the price of Face++, the cheapest API (except the open-source CNN model from GitHub), and obtain a FrugalML strategy by training on half of FER+. Figure 3 shows the learned FrugalML strategy. Interestingly, as shown in Figure 3(b), FrugalML's accuracy is higher than that of the best ML service (Microsoft Face), while its cost is much lower. This is because the base service's quality score, utilized by FrugalML, is a better signal than the raw image for identifying whether its prediction is trustworthy. Furthermore, the quality-score threshold produced by FrugalML also depends on the label predicted by the base service. This flexibility helps to increase accuracy as well as to reduce cost. For example, using a universal threshold of 0.86 leads to a misclassification in Figure 3(f), while 0.93 causes an unnecessary add-on service call in Figure 3(c).

The learned FrugalML strategy can be interpreted through the varying API accuracy given the labels and quality scores produced by the base service. As shown in Figure 4, the GitHub API achieves the highest accuracy when its predicted label is happy or surprise. Thus, when the prediction is surprise or happy, the base service is sufficient for most of the images, and a substantial part of the budget can be saved.

For comparison, we also train a mixture-of-experts strategy with a softmax gating network and a majority-voting ensemble method. The learned mixture of experts always uses the Microsoft API, leading to the same accuracy (81%) and the same cost ($10). The accuracy of majority voting on the test data is slightly better (82%), but still substantially worse than the performance of FrugalML using a small budget of $5. Majority voting, like other standard ensemble methods, needs to collect the predictions of all services, resulting in a cost ($30) which is 6 times the cost of FrugalML. Moreover, both the mixture of experts and the ensemble method incur a fixed cost, while FrugalML gives users the flexibility to choose a budget.

Analysis of Cost Savings. Next, we evaluate how much cost FrugalML can save while reaching the highest accuracy produced by a single API on different tasks, to obtain a qualitative sense of FrugalML. As shown in Table 3, FrugalML can typically save more than half of the cost. In fact, the cost savings can be as high as 90% on the AUDIOMNIST dataset. This is likely because the base service's quality score is highly correlated with its prediction accuracy, and thus FrugalML only needs to call expensive services for a few difficult data points. A relatively small saving is achieved for SA tasks (e.g., on IMDB). This might be because the quality score of the rule-based SA tool is not highly reliable. Another possible reason is that the SA task has only two labels (positive and negative), limiting the power of FrugalML.
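The per-label conditional accuracies that drive this kind of analysis (Figure 4), and that are estimated in line 1 of Algorithm 1, can be computed directly from a table of API annotations and true labels. The pandas sketch below is illustrative only; the column names (`true_label`, `pred_github`, `conf_github`) are hypothetical and need not match the schema of the released dataset.

```python
import pandas as pd

def conditional_accuracy(df, pred_col, conf_col, threshold):
    """Empirical accuracy of one API, conditioned on its predicted label
    and on whether its quality score clears the given threshold."""
    hit = (df[pred_col] == df["true_label"])
    confident = df[conf_col] >= threshold
    return (
        pd.DataFrame({"pred": df[pred_col], "confident": confident, "hit": hit})
        .groupby(["pred", "confident"])["hit"]
        .mean()
        .unstack("confident")
    )

# Hypothetical usage on an annotation table with one row per image:
# df = pd.read_csv("fer_annotations.csv")
# print(conditional_accuracy(df, pred_col="pred_github", conf_col="conf_github", threshold=0.9))
```

Each cell of the resulting table is the empirical accuracy of the chosen API for one predicted label, split by whether its quality score clears the threshold, which is the per-label signal that FrugalML thresholds on.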
Accuracy and Cost Trade-offs. We now look more closely at the accuracy and cost trade-offs achieved by FrugalML, shown in Figure 5. Here we also compare with two ablations of FrugalML: "Base=GH", where the base service is forced to be the GitHub model, and "QS only", which further forces a universal quality-score threshold across all labels. While using any single ML service incurs a fixed cost, FrugalML allows users to pick any point on its trade-off curve, offering substantial flexibility. In addition to saving cost, FrugalML can sometimes achieve higher accuracy than any of the ML services it calls. For example, on FER+ and AFFECTNET, more than 2% accuracy improvement can be reached at small cost, and on RAFDB, when a large cost is allowed, more than 5% accuracy improvement is gained. It is also worth noting that each component in FrugalML helps improve accuracy. On WAIMAI, for instance, "Base=GH" and "QS only" lead to significant accuracy drops. For speech datasets such as COMMAND, the drop is negligible, as there is no significant accuracy difference between the different labels (utterances). Another interesting observation is that there is no universally "best" service for a fixed task. For the SA task, Baidu NLP achieves the highest accuracy on the WAIMAI and SHOP datasets, but Google NLP has the best performance on YELP and IMDB. Fortunately, FrugalML adaptively learns the optimal strategy.

Effects of Training Sample Size. Finally, we evaluate how the training sample size affects FrugalML's performance, shown in Figure 6. Note that FrugalML only requires a few thousand training data points for the testing accuracy to converge across all datasets evaluated. This is often more sample-efficient and cost-efficient than training a customized model from scratch. It is also worth mentioning that a larger number of labels usually requires more training samples. For example, 1,500 samples might be enough for WAIMAI (#labels=2), but 3,000 samples are needed for AudioMNIST (#labels=10).

5 Conclusion and Open Problems

In this work we proposed FrugalML, a formal framework for identifying the best strategy to call ML APIs given a user's budget. Both theoretical analysis and empirical results demonstrate that FrugalML leads to significant cost reduction and accuracy improvement. FrugalML is also efficient to learn: training typically takes a few minutes on a modern machine. Our research characterized the substantial heterogeneity in cost and performance across available ML APIs, which is useful in its own right and is also leveraged by FrugalML. Extending FrugalML to produce calling strategies for ML tasks beyond classification (e.g., object detection and language translation) is an interesting future direction. Our discussions with practitioners who frequently use ML APIs indicate that handling API updates and performance shift is another open problem. As a resource to stimulate further research in MLaaS, we also release a dataset used to develop FrugalML, consisting of 612,139 samples annotated by the APIs, and our code, available at https://github.com/lchen001/FrugalML.

Acknowledgement

This work was supported in part by a Google PhD Fellowship, NSF CCF 1763191, NSF CAREER 1651570 and 1942926, NIH P30AG059307, NIH U01MH098953, grants from the Chan-Zuckerberg Initiative, and affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Infosys, NEC, and VMware—as well as Cisco and SAP. We also thank anonymous reviewers for helpful discussion and feedback.

Potential Broader Impact

ML as a service is a growing industry with substantial economic and societal impact.
In this paper, we identify the cost and performance heterogeneity across popular ML APIs, which contributes to a broader understanding of this important but under-explored industry. We propose a method to automatically reduce user cost while improving accuracy. FrugalML can broadly contribute to the applied ML ecosystem by reducing the expense and complexity of using prediction APIs. This can have a positive impact by increasing the accessibility of ML APIs for less well-resourced groups. A potential concern about ML APIs in general is that they may be trained on biased data and produce biased predictions that could disadvantage certain sub-groups. To help tackle this challenge, we are releasing our dataset of over 600k images, texts, and utterances that we annotated using commercial APIs. This is a resource the broader community can use to better understand the biases in existing APIs.
1. What is the main contribution of the paper regarding machine learning and API assessment?
2. What are the strengths of the proposed approach, particularly in its application and novelty?
3. What are the weaknesses of the paper, especially regarding the implementation details?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper deals with learning to assess ML APIs in terms of predictive accuracy and quality score, where the latter depicts the confidence of the API. The gist of the learning problem is that each API has an assigned cost, which needs to be kept at a minimum as well. The authors define a novel strategy where a base API is chosen based on learnt conditional accuracies which might be overruled by an add-on API if the quality score is not sufficiently high. The optimal strategy is generated via solving a stated optimization problem. The empirical results on computer vision and NLP datasets with real-world APIs are promising in that the generated strategy reduces costs while achieving high predictive accuracies.
Strengths
* Very sensible application of ML to efficient API-reuse on the Web, which has an impact for the community.
* Sufficiently novel: while there are other works on assessing service APIs using ML, the proposed approach also optimizes budget constraints and takes into account additional service quality data
* Good empirical results with substantial cost savings while achieving high predictive accuracy on diverse datasets.
Weaknesses
* While the approach is sufficiently introduced, details on the implementation of conditional accuracy estimation and "quality of quality score" is missing
NIPS
1. What is the main contribution of the paper regarding machine learning (ML) APIs?
2. What are the strengths of the proposed approach, particularly in terms of its practicality and flexibility?
3. What are the weaknesses of the method, especially concerning its reliance on the label distribution of the initial training set?
4. How might the method be affected by changes in the label distribution over time?
5. Are there any potential strategies for addressing these limitations and improving the method's robustness?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper presents a framework that can learn the reliability of existing ML APIs and make use of them to classify data within a given budget. It requires an initial training set where all true labels and predictions from all APIs are available to learn the strategy. The classification for new data is done in a two-round style, where a base API is first picked, if the confidence is larger than a learned threshold, then the prediction will be returned, otherwise a second API will be selected based on the learned strategy, and its prediction will be returned. The authors provide theoretic guarantees that the best strategy can be learned and conducted extensive experiments to show the effectiveness of their method.
Strengths
This paper studies a practical problem, i.e. how to use ML APIs more accurately and cheaply. The proposed method is novel and well-motivated. It takes budget into account and is flexible enough for practitioners to balance accuracy and cost. I believe it has a wide audience in this community.
Weaknesses
It seems that the best strategy depends on the label distribution of the initial training set. Let's say we are doing tweet sentiment analysis, and it's easy to imagine that the best strategy when most tweets are positive would be different from when most tweets are negative. I believe that the proposed method will work when the real label distribution is invariant and the same as the initial dataset. But is the proposed method robust when they are not the same? What will happen if the label distribution is changing over time? Should we switch the strategy at some point?
NIPS
Title FrugalML: How to use ML Prediction APIs more accurately and cheaply Abstract Prediction APIs offered for a fee are a fast-growing industry and an important part of machine learning as a service. While many such services are available, the heterogeneity in their price and performance makes it challenging for users to decide which API or combination of APIs to use for their own data and budget. We take a first step towards addressing this challenge by proposing FrugalML, a principled framework that jointly learns the strength and weakness of each API on different data, and performs an efficient optimization to automatically identify the best sequential strategy to adaptively use the available APIs within a budget constraint. Our theoretical analysis shows that natural sparsity in the formulation can be leveraged to make FrugalML efficient. We conduct systematic experiments using ML APIs from Google, Microsoft, Amazon, IBM, Baidu and other providers for tasks including facial emotion recognition, sentiment analysis and speech recognition. Across various tasks, FrugalML can achieve up to 90% cost reduction while matching the accuracy of the best single API, or up to 5% better accuracy while matching the best API’s cost. 1 Introduction Machine learning as a service (MLaaS) is a rapidly growing industry. For example, one could use Google prediction API [9] to classify an image for $0.0015 or to classify the sentiment of a text passage for $0.00025. MLaaS services are appealing because using such APIs reduces the need to develop one’s own ML models. The MLaaS market size was estimated at $1 billion in 2019, and it is expected to grow to $8.4 billion by 2025 [1]. Third-party ML APIs come with their own challenges, however. A major challenge is that different companies charge quite different amounts for similar tasks. For example, for image classification, Face++ charges $0.0005 per image [6], which is 67% cheaper than Google [9], while Microsoft charges $0.0010 [11]. Moreover, the prediction APIs of different providers perform better or worse on different types of inputs. For example, accuracy disparities in gender classification were observed for different skin colors [23, 37]. As we will show later in the paper, these APIs’ performance also varies by class—for example, we found that on the FER+ dataset, the Face++ API had the best accuracy on surprise images while the Microsoft API had the best performance on neutral images. The more expensive APIs are not uniformly better; and APIs tend to have specific classes of inputs where they perform better than alternatives. This heterogeneity in price and in performance makes it challenging for users to decide which API or combination of APIs to use for their own data and budget. In this paper, we propose FrugalML, a principled framework to address this challenge. FrugalML jointly learns the strength and weakness of each API on different data, then 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. performs an efficient optimization to automatically identify the best adaptive strategy to use all the available APIs given the user’s budget constraint. FrugalML leverages the modular nature of APIs by designing adaptive strategies that can call APIs sequentially. For example, we might first send an input to API A. If A returns the label “dog” with high confidence—and we know A tends to be accurate for dogs—then we stop and report “dog”. 
But if A returns “hare” with lower confidence, and we have learned that A is less accurate for “hare,” then we might adaptively select a second API B to make an additional assessment. FrugalML optimizes such adaptive strategies to substantially improve prediction performance over simpler approaches such as model cascades with a fixed quality threshold (Figure 1). Through experiments with real commercial ML APIs on diverse tasks, we observe that FrugalML typically reduces costs by more than 50% and sometimes by up to 90%. Adaptive strategies are challenging to learn and optimize, because the choice of the second predictor, if one is chosen, can depend on the prediction and confidence of the first API, and because FrugalML may need to allocate different fractions of its budget to predictions for different classes. We prove that under quite general conditions, there is natural sparsity in this problem that we can leverage to make FrugalML efficient. Contributions To sum up, our contributions are: 1. We formulate and study the problem of learning to optimally use commercial ML APIs given a budget. This is a growing area of importance and is under-explored. 2. We propose FrugalML, a framework that jointly learns the strength and weakness of each API, and performs an optimization to identify the best strategy for using those APIs within a budget constraint. By leveraging natural sparsity in this optimization problem, we design an efficient algorithm to solve it with provable guarantees. 3. We evaluate FrugalML using real-world APIs from diverse providers (e.g., Google, Microsoft, Amazon, and Baidu) for classification tasks including facial emotion recognition, text sentiment analysis, and speech recognition. We find that FrugalML can match the accuracy of the best individual API with up to 90% lower cost, or significantly improve on this accuracy, by up to 5%, at the same cost. 4. We release our code and our dataset of 612,139 samples annotated by commercial APIs (https://github.com/lchen001/FrugalML) as a resource to aid future research in this area. Related Work. MLaaS: With the growing importance of MLaaS APIs [2, 3, 6, 9, 10, 11], existing research has largely focused on individual APIs, studying their performance [57], pricing [26], robustness [31], and applications [23, 32, 44]. FrugalML, in contrast, aims at finding strategies to select from or combine multiple APIs to reduce cost and increase accuracy. Ensemble methods: A natural approach to exploiting multiple predictors is ensemble methods [25, 29, 45]. While most ensemble methods, such as stacking [53] and bagging [22], require predictions from all predictors and thus incur a high cost, mixture of experts [35, 34, 58] uses gate functions to select one expert (predictor) per data point and is less expensive. Substantial research has focused on developing gate function models, such as SVMs [27, 56], Gaussian processes [28, 55], and neural networks [47, 46]. However, applying mixture of experts to MLaaS results in a fixed cost and does not allow users to specify a budget as in FrugalML. As we will show later, FrugalML with a budget constraint can sometimes even outperform mixture-of-experts algorithms while using a smaller budget. Model Cascades: Cascades consisting of a sequence of models are useful to balance the quality and runtime of inference [49, 50, 24, 36, 48, 51, 54, 38].
While model cascades use the predicted quality score alone to avoid calling computationally expensive models, FrugalML’s strategies can utilize both the quality score and the predicted class to select a downstream expensive add-on service. Designing such strategies requires solving a significantly harder optimization problem, e.g., choosing how to divide the available budget between classes (§3), but also improves performance substantially over using the quality score alone (§4). 2 Preliminaries Notation. In our exposition, we denote matrices and vectors in bold, and scalars, sets, and functions in standard script. We let 1m denote the m × 1 all-ones vector, while 1n×m denotes the all-ones n × m matrix. We define 0m, 0n×m analogously. The subscripts are omitted when clear from context. Given a matrix A ∈ Rn×m, we let Ai,j denote its entry at location (i, j), Ai,· ∈ R1×m denote its ith row, and A·,j ∈ Rn×1 denote its jth column. Let [n] denote {1, 2, · · · , n}. Let 1 represent the indicator function. ML Tasks. Throughout this paper, we focus on (multiclass) classification tasks, where the goal is to classify a data point x from a distribution D into L label classes. Many real-world ML APIs aim at such tasks, including facial emotion recognition, where x is a face image and the label classes are emotions (happy, sad, etc.), and text sentiment analysis, where x is a text passage and the label classes are sentiments (positive or negative). MLaaS Market. Consider an MLaaS market consisting of K different ML services which aim at the same classification task. Taking a data point x as input, the kth service returns to the user a predicted label yk(x) ∈ [L] and its quality score qk(x) ∈ [0, 1], where a larger score indicates higher confidence in its prediction. This is typical for many popular APIs. There is also a unit cost associated with each service. Let the vector c ∈ RK denote the unit cost of all services. Then ck = 0.005 simply means that users need to pay 0.005 every time they call the kth service. We use y(x) to denote x’s true label, and let rk(x) := 1{yk(x)=y(x)} be the reward of using the kth service on x. 3 FrugalML: a Frugal Approach to Adaptively Leverage ML Services In this section, we present FrugalML, a formal framework for API calling strategies to obtain accurate and cheap predictions from an MLaaS market. All proofs are left to the supplemental materials. We generalize the scheme in Figure 1 (c) to K ML services and L label classes. Let a tuple s := (p[1], Q, P[2]) represent a calling strategy produced by FrugalML. Given an input x, FrugalML first calls a base service, denoted by A[1]s, which with probability p[1]i is the ith service and returns quality score qi(x) and label yi(x). Let Ds be the indicator of whether the quality score is smaller than the threshold value Qi,yi(x). If Ds = 1, then FrugalML invokes an add-on service, denoted by A[2]s, which with probability P[2]i,yi(x),j is the jth service, and produces yj(x) as the predicted label ŷs(x). Otherwise, FrugalML simply returns the label ŷs(x) = yi(x) from the base service. This process is summarized in Figure 2. Note that the strategy is adaptive: the choice of the add-on API can depend on the predicted label and quality score of the base model. The set of possible strategies can be parametrized as S := {(p[1], Q, P[2]) | p[1] ∈ RK, p[1] ≥ 0, 1Tp[1] = 1, Q ∈ RK×L, 0 ≤ Q ≤ 1 (element-wise), P[2] ∈ RK×L×K, P[2] ≥ 0, 1TP[2]k,ℓ,· = 1}. Our goal is to choose the optimal strategy s∗ that maximizes the expected accuracy while satisfying the user’s budget constraint b.
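To make the execution of such a strategy concrete, the following is a minimal sketch of how a learned tuple s = (p[1], Q, P[2]) could be run at prediction time. The call_api wrapper and all variable names are assumptions for illustration; they are not part of FrugalML's released code.

import numpy as np

def run_frugalml_strategy(x, p1, Q, P2, call_api, rng=None):
    """Execute one FrugalML calling strategy s = (p1, Q, P2) on input x.

    p1: (K,) probabilities over base services.
    Q:  (K, L) quality-score thresholds indexed by (service, predicted label).
    P2: (K, L, K) add-on probabilities given (base service, base label).
    call_api(k, x) -> (label, quality_score, cost) is a hypothetical wrapper
    around the k-th commercial API.
    """
    rng = rng or np.random.default_rng()
    i = rng.choice(len(p1), p=p1)              # sample the base service A[1]
    y_i, q_i, cost = call_api(i, x)
    if q_i >= Q[i, y_i]:                       # quality score clears the threshold
        return y_i, cost                       # return the base prediction
    j = rng.choice(len(p1), p=P2[i, y_i])      # sample the add-on service A[2]
    y_j, _, cost_j = call_api(j, x)
    return y_j, cost + cost_j                  # return the add-on prediction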
This goal is formally stated below. Definition 1. Given a user budget b, the optimal FrugalML strategy s∗ = (p[1]∗, Q∗, P[2]∗) is s∗ := argmax_{s∈S} E[rs(x)] s.t. E[ηs(x, c)] ≤ b, (3.1) where rs(x) := 1{ŷs(x)=y(x)} is the reward and ηs(x, c) is the total cost of strategy s on x. Remark 1. The above definition can be generalized to wider settings. For example, instead of the 0-1 loss, the reward can be the negative squared loss to handle regression tasks. We pick the concrete form for demonstration purposes. The cost of strategy s, ηs(x, c), is the sum of the costs of all services called on x. For example, if services 1 and 2 are called for predicting x, then ηs(x, c) becomes c1 + c2. Given the above formulation, a natural question is how to solve it efficiently. In the following, we first highlight an interesting property of the optimal strategy, sparsity, which inspires the design of the efficient solver, and then present the algorithm for the solver. 3.1 Sparsity Structure in the Optimal Strategy We show that if problem 3.1 is feasible and has a unique optimal solution, then we must have ‖p[1]∗‖0 ≤ 2. In other words, the optimal strategy should only choose the base service from at most two services (instead of K) in the MLaaS market. This is formally stated in Lemma 1. Lemma 1. If problem 3.1 is feasible, then there exists one optimal solution s∗ = (p[1]∗, Q∗, P[2]∗) such that ‖p[1]∗‖0 ≤ 2. To see this, let us first expand E[rs(x)] and E[ηs(x)] by the law of total expectation. Lemma 2. The expected accuracy is E[rs(x)] = Σ_{i=1}^{K} Pr[A[1]s = i] Pr[Ds = 0 | A[1]s = i] E[ri(x) | Ds = 0, A[1]s = i] + Σ_{i,j=1}^{K} Pr[A[1]s = i] Pr[Ds = 1 | A[1]s = i] Pr[A[2]s = j | Ds = 1, A[1]s = i] E[rj(x) | Ds = 1, A[1]s = i]. The expected cost is E[ηs(x)] = Σ_{i=1}^{K} Pr[A[1]s = i] Pr[Ds = 0 | A[1]s = i] ci + Σ_{i,j=1}^{K} Pr[A[1]s = i] Pr[Ds = 1 | A[1]s = i] Pr[A[2]s = j | Ds = 1, A[1]s = i] (ci + cj). Note that both E[rs(x)] and E[ηs(x)] are linear in Pr[A[1]s = i], which by definition equals p[1]i. Thus, fixing Q and P[2], problem 3.1 becomes a linear program in p[1]. Intuitively, the corner points of its feasible region must be 2-sparse, since, apart from E[ηs(x)] ≤ b and 1Tp[1] = 1, all other constraints (p[1] ≥ 0) force sparsity. As the optimal solution of a linear program can be taken to be a corner point, p[1]∗ must also be 2-sparse. This sparsity structure helps reduce the computational complexity of solving problem 3.1. In fact, the sparsity structure implies that problem 3.1 is equivalent to a master problem max_{(i1,i2,p1,p2,b1,b2)∈C} p1 g_{i1}(b1/p1) + p2 g_{i2}(b2/p2) s.t. b1 + b2 ≤ b, (3.2) where C = {(i1, i2, p1, p2, b1, b2) | i1, i2 ∈ [K], p1, p2 ≥ 0, p1 + p2 = 1, b1, b2 ≥ 0}, and gi(b′) is the optimal value of the subproblem max_{Q,P[2]: s=(ei,Q,P[2])∈S} E[rs(x)] s.t. E[ηs(x)] ≤ b′. (3.3) Here, the master problem decides which two services (i1, i2) can be the base service, how often (p1, p2) they should be invoked, and how large the budgets (b1, b2) assigned to them are, while for a fixed base service i and budget b′, the subproblem maximizes the expected reward. 3.2 A Practical Algorithm Now we are ready to give the sparsity-inspired algorithm for generating an approximately optimal strategy ŝ, summarized in Algorithm 1 below.
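To make the decomposition concrete, a brute-force sketch of the master step is given below: once the per-service curves gi(·) from the subproblems are available (e.g., via the interpolation in Algorithm 1), the choice of base-service pair and budget split can be searched directly. This grid search is only an illustration of problem (3.2); the actual algorithm exploits the piecewise-linear structure of gi(·) and solves a small number of linear programs instead. The function names and grid resolution are assumptions, not part of the paper.

import numpy as np

def solve_master_by_grid(g, b, grid=50):
    """Illustrative grid search for problem (3.2).

    g: list of K callables; g[i](b_prime) approximates the best expected
       accuracy achievable when service i is the base service and the
       per-query budget is b_prime (problem (3.3)).
    b: overall per-query budget.
    Returns (i1, i2, p1, b1) approximately maximizing
        p1 * g[i1](b1 / p1) + (1 - p1) * g[i2]((b - b1) / (1 - p1)).
    """
    K = len(g)
    best_val, best_sol = -np.inf, None
    for i1 in range(K):
        for i2 in range(K):
            for p1 in np.linspace(0.05, 1.0, grid):
                p2 = 1.0 - p1
                for b1 in np.linspace(0.0, b, grid):
                    val = p1 * g[i1](b1 / p1)
                    if p2 > 1e-9:
                        val += p2 * g[i2]((b - b1) / p2)
                    if val > best_val:
                        best_val, best_sol = val, (i1, i2, p1, b1)
    return best_sol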
Algorithm 1: FrugalML Strategy Training.
Input: K, M, c, b, training data {y(xi), {qk(xi), yk(xi)}k=1..K}i=1..N.
Output: FrugalML strategy tuple ŝ = (p̂[1], Q̂, P̂[2]).
1: Estimate E[ri(x) | Ds, A[1]s] from the training data.
2: For each i ∈ [K] and each budget value b′m ∈ {0, ‖2c‖∞/M, 2‖2c‖∞/M, · · · , ‖2c‖∞}, solve problem 3.3 to find the optimal value gi(b′m).
3: For each i ∈ [K], construct the function gi(·) by linear interpolation on b′0, b′1, · · · , b′M.
4: Solve problem 3.2 to find the optimal solution i∗1, i∗2, p∗1, p∗2, b∗1, b∗2.
5: For t ∈ [2], let i = i∗t and b′ = b∗t/p∗t, and solve problem 3.3 to find the optimal solution Q[i∗t], P[2][i∗t].
6: Set p̂[1] = p∗1 e_{i∗1} + p∗2 e_{i∗2}, Q̂ = Q[i∗1] + Q[i∗2], P̂[2] = P[2][i∗1] + P[2][i∗2].
7: Return ŝ = (p̂[1], Q̂, P̂[2]).
Algorithm 1 consists of three main steps. First, the conditional accuracy E[ri(x) | Ds, A[1]s] is estimated from the training data (line 1). Next (lines 2 to 4), we find the optimal solution i∗1, i∗2, p∗1, p∗2, b∗1, b∗2 to problem 3.2. To do so, we first evaluate gi(b′) for M + 1 different budget values (line 2), and then construct the functions gi(·) via linear interpolation (line 3) while enforcing gi(b′) = 0 for all b′ ≤ ci. Given the (piecewise-linear) gi(·), problem 3.2 can be solved by enumerating a few linear programs (line 4). Finally, the algorithm recovers the optimal solution in the original domain of the strategy, by solving subproblem 3.3 with the base service being i∗1 and i∗2 separately (line 5), and then aligning those solutions appropriately (line 6). We leave the details of solving subproblem 3.3 to the supplemental materials due to space constraints. Theorem 3 provides the performance analysis of Algorithm 1. Theorem 3. Suppose E[ri(x) | Ds, A[1]s] is Lipschitz continuous with constant γ w.r.t. each element in Q. Given N i.i.d. samples {y(xi), {(yk(xi), qk(xi))}k=1..K}i=1..N, the computational cost of Algorithm 1 is O(NMK² + K³M³L + MLK²). With probability 1 − ε, the produced strategy ŝ satisfies E[rŝ(x)] − E[rs∗(x)] ≥ −O(√((log(1/ε) + log M + log K + log L)/N) + γ/M), and E[ηŝ(x, c)] ≤ b. As Theorem 3 suggests, the parameter M is used to balance between the computational cost and the accuracy drop of ŝ. For practical cases where K and L (the number of classes) are around ten and N is more than a few thousand, we have found that M = 10 gives good accuracy at a small computational cost. Note that the coefficient of the K- and L-dependent terms is small: in experiments, we observe that training takes only a few seconds for L = 31, M = 40. For datasets with a very large number of possible labels, we can always cluster those labels into a few ”superclasses”, or adopt approximation algorithms to reduce the O(ML) factor to O(M²) (see details in the supplemental materials). In addition, a slight modification of ŝ can satisfy a strict budget constraint: if the budget allows, use ŝ to pick APIs; otherwise, switch to the cheapest API. 4 Experiments We compare the accuracy and incurred costs of FrugalML to those of real-world ML services for various tasks. Our goal is four-fold: (i) understanding when and why FrugalML can reduce cost without hurting accuracy, (ii) evaluating the cost savings by FrugalML, (iii) investigating the trade-offs between accuracy and cost achieved by FrugalML, and (iv) measuring the effect of training data size on FrugalML’s performance. Tasks, ML services, and Datasets. We focus on three common ML tasks in different application domains: facial emotion recognition (FER) in computer vision, sentiment analysis (SA) in natural language processing, and speech to text (STT) in speech recognition.
The ML services used for each task as well as their prices are summarized in Table 1. For each task we also found a small open-source model from GitHub, which is much less expensive to execute per data point than the commercial APIs. Table 2 lists the statistics for all the datasets used for the different tasks. More details can be found in the supplemental materials. Facial Emotion Recognition: A Case Study. Let us start with facial emotion recognition on the FER+ dataset. We set the budget b = 5, the price of Face++ (the cheapest commercial API, excluding the open-source CNN model from GitHub), and obtain a FrugalML strategy by training on half of FER+. Figure 3 demonstrates the learned FrugalML strategy. Interestingly, as shown in Figure 3(b), FrugalML’s accuracy is higher than that of the best ML service (Microsoft Face), while its cost is much lower. This is because the base service’s quality score, utilized by FrugalML, is a better signal than the raw image for identifying whether its prediction is trustworthy. Furthermore, the quality score threshold produced by FrugalML also depends on the label predicted by the base service. This flexibility helps to increase accuracy as well as to reduce costs. For example, using a universal threshold of 0.86 leads to misclassification in Figure 3(f), while 0.93 causes an unnecessary add-on service call in Figure 3(c). The learned FrugalML strategy can be interpreted through the varying API accuracy given the labels and quality scores produced by the base service. As shown in Figure 4, the GitHub model achieves the highest accuracy when its predicted label is happy or surprise. Thus, when the prediction is surprise or happy, the base service is sufficient for most of the images, and a substantial part of the budget can be saved. For comparison, we also train a mixture-of-experts strategy with a softmax gating network, and evaluate the majority-voting ensemble method. The learned mixture of experts always uses the Microsoft API, leading to the same accuracy (81%) and the same cost ($10). The accuracy of majority voting on the test data is slightly better at 82%, but substantially worse than the performance of FrugalML using a small budget of $5. Majority voting, like other standard ensemble methods, needs to collect the predictions of all services, resulting in a cost ($30) that is 6 times that of FrugalML. Moreover, both mixture of experts and ensemble methods incur a fixed cost, while FrugalML gives users the flexibility to choose a budget. Analysis of Cost Savings. Next, we evaluate how much cost can be saved by FrugalML to reach the highest accuracy produced by a single API on different tasks, to obtain a qualitative sense of FrugalML. As shown in Table 3, FrugalML can typically save more than half of the cost. In fact, the cost savings can be as high as 90% on the AUDIOMNIST dataset. This is likely because the base service’s quality score is highly correlated with its prediction accuracy, and thus FrugalML only needs to call expensive services for a few difficult data points. A relatively small saving is achieved for SA tasks (e.g., on IMDB). This might be because the quality score of the rule-based SA tool is not highly reliable. Another possible reason is that the SA task has only two labels (positive and negative), limiting the power of FrugalML. Accuracy and Cost Trade-offs. Now we examine in more detail the accuracy and cost trade-offs achieved by FrugalML, shown in Figure 5.
Here we also compare with two ablations of FrugalML: “Base=GH”, where the base service is forced to be the GitHub model, and “QS only”, which further forces a universal quality score threshold across all labels. While using any single ML service incurs a fixed cost, FrugalML allows users to pick any point on its trade-off curve, offering substantial flexibility. In addition to cost saving, FrugalML can sometimes achieve higher accuracy than any of the ML services it calls. For example, on FER+ and AFFECTNET, more than 2% accuracy improvement can be reached with a small cost, and on RAFDB, when a large cost is allowed, more than 5% accuracy improvement is gained. It is also worth noting that each component in FrugalML helps improve the accuracy. On WAIMAI, for instance, “Base=GH” and ”QS only” lead to significant accuracy drops. For speech datasets such as COMMAND, the drop is negligible, as there is no significant accuracy difference between different labels (utterances). Another interesting observation is that there is no universally “best” service for a fixed task. For the SA task, Baidu NLP achieves the highest accuracy on the WAIMAI and SHOP datasets, but Google NLP has the best performance on YELP and IMDB. Fortunately, FrugalML adaptively learns the optimal strategy. Effects of Training Sample Size. Finally, we evaluate how the training sample size affects FrugalML’s performance, shown in Figure 6. Note that FrugalML only requires a few thousand training data points for the testing accuracy to converge across all datasets evaluated. This is often more sample-efficient and cost-efficient than training a customized model from scratch. It is also worth mentioning that a larger number of labels usually requires more training samples. For example, 1500 samples might be enough for WAIMAI (#label=2), but 3000 samples are needed for AudioMNIST (#label=10). 5 Conclusion and Open Problems In this work we proposed FrugalML, a formal framework for identifying the best strategy to call ML APIs given a user’s budget. Both theoretical analysis and empirical results demonstrate that FrugalML leads to significant cost reduction and accuracy improvement. FrugalML is also efficient to learn: it typically takes a few minutes on a modern machine. Our research characterized the substantial heterogeneity in cost and performance across available ML APIs, which is useful in its own right and also leveraged by FrugalML. Extending FrugalML to produce calling strategies for ML tasks beyond classification (e.g., object detection and language translation) is an interesting future direction. Our discussions with practitioners who frequently use ML APIs indicate that handling API updates and performance shift is another open problem. As a resource to stimulate further research in MLaaS, we also release a dataset used to develop FrugalML, consisting of 612,139 samples annotated by the APIs, and our code, available at https://github.com/lchen001/FrugalML. Acknowledgement This work was supported in part by a Google PhD Fellowship, NSF CCF 1763191, NSF CAREER 1651570 and 1942926, NIH P30AG059307, NIH U01MH098953, grants from the Chan-Zuckerberg Initiative, and affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Infosys, NEC, and VMware—as well as Cisco and SAP. We also thank anonymous reviewers for helpful discussion and feedback. Potential Broader Impact ML as a service is a growing industry with substantial economic and societal impact.
In this paper, we identify the cost and performance heterogeneity across popular ML APIs, which contributes to the broader understanding of this important but under-explored industry. We propose a method to automatically reduce user cost while improving accuracy. FrugalML can broadly contribute to the applied ML ecosystem by reducing the expense and complexity of using prediction APIs. This can have a positive impact by increasing the accessibility of ML APIs for less well-resourced groups. A potential concern about ML APIs in general is that they may be trained on biased data and produce biased predictions that could disadvantage certain sub-groups. To tackle this challenge, we are releasing our dataset of over 600k images, text passages, and utterances that we annotated using commercial APIs. This is a resource the broader community can use to better understand the biases in existing APIs.
1. What is the focus and contribution of the paper on combining APIs and open-source APIs for cheaper machine learning? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity and clarity? 3. What are the weaknesses of the paper, especially regarding comparisons with other cascade architectures and robustness? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes FrugalML, a way to combine commercial APIs and an open-source model for cheaper ML. The paper is well written and the idea is simple and clear. I felt the experiments were reasonable. The one theoretical result about the method is solid but not surprising. The authors address an important problem. Strengths The idea proposed is simple, clear, compelling, and useful. The experiments are well done; I wish one could better understand how these results would generalize to other types of datasets. Weaknesses The authors could have pushed a bit further on the comparison with other cascade architectures, and a better understanding of how robust these results are would be nice. For example, can a GAN mess up FrugalML more than it would a high-quality API?
NIPS
Title Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement Abstract This paper presents a new matching-based framework for semi-supervised video object segmentation (VOS). Recently, state-of-the-art VOS performance has been achieved by matching-based algorithms, in which feature banks are created to store features for region matching and classification. However, how to effectively organize information in the continuously growing feature bank remains under-explored, and this leads to an inefficient design of the bank. We introduce an adaptive feature bank update scheme to dynamically absorb new features and discard obsolete features. We also design a new confidence loss and a fine-grained segmentation module to enhance the segmentation accuracy in uncertain regions. On public benchmarks, our algorithm outperforms existing state-of-the-art methods. 1 Introduction Video object segmentation (VOS) is a fundamental step in many video processing tasks, like video editing and video inpainting. In the semi-supervised setting, the first frame annotation is given, which depicts the object of interest in the video sequence. The goal is to segment the mask of that object in the subsequent frames. Many deep learning based methods have been proposed to solve this problem in recent years. When tackling the semi-supervised VOS task, the segmentation performance is affected by two main steps: (1) distinguishing the object regions from the background, and (2) segmenting the object boundary clearly. A key question in VOS is how to learn the cues of target objects. We divide recent works into two categories, implicit learning and explicit learning. Conventional implicit approaches include detection-based and propagation-based methods [3, 27, 10, 32, 2, 13, 23]. They often adopt the fully convolutional network (FCN) [20] pipeline to learn object features implicitly through the network weights; then, before segmenting a new video, these methods often need online learning to fine-tune their weights to learn new object cues from the video. Explicit approaches learn object appearance explicitly. They often formulate the segmentation as pixel-wise classification in a learnt embedding space [31, 4, 24, 11, 33, 18, 12, 25, 17]. These approaches first construct an embedding space to memorize the object appearance, then segment the subsequent frames by computing similarity. Therefore, they are also called matching-based methods. Recently, matching-based methods have achieved state-of-the-art results on VOS benchmarks. A fundamental issue in matching-based VOS segmentation is how to effectively exploit previous frames’ information to segment the new frame. Since the memory size is limited, it is neither possible nor necessary to memorize information from all the previous frames. Most methods [24, 33, 31, 18, 25] only utilize the first and the latest frame or uniformly sample key frames. However, when the given video becomes longer, these methods often either miss sampling some key frames or encounter an out-of-memory crash. ∗Corresponding author. Codes are available at https://github.com/xmlyqing00/AFB-URR. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. To tackle this problem, we propose an adaptive feature bank (AFB) to organize the target object features. This adaptive feature bank absorbs new features by weighted averaging and discards obsolete features according to the least frequently used (LFU) index.
As a result, our model can memorize the characteristics of multiple objects and segment them simultaneously in long videos under a low memory consumption. Besides identifying the target object, clearly segmenting the object boundary is also critical to VOS performance: (1) people are often sensitive to boundary segmentation, and (2) when the estimated masks on some boundary regions are ambiguous and hard to classify, their misclassification easily accumulates over the video. However, most recent VOS methods follow an encoder-decoder mode to estimate the object masks, and the boundary of the object mask becomes vague when it is iteratively upscaled from a lower resolution. Therefore, we propose an uncertain-region refinement (URR) scheme to improve the segmentation quality. It includes a novel classification confidence loss to estimate the ambiguity of the segmentation, and a local fine-grained segmentation to refine the ambiguous regions. Our main contributions are threefold: (1) We propose an adaptive and efficient feature bank to maintain the most useful information for video object segmentation. (2) We introduce a confidence loss to estimate the ambiguity of the segmentation results, and design a local fine-grained segmentation module to refine these ambiguous regions. (3) We demonstrate the effectiveness of our method on segmenting long videos, which are often seen in practical applications. 2 Related Work Recent video object segmentation works can be divided into two categories: implicit learning and explicit learning. The implicit learning approaches include detection-based methods [3, 23], which segment the object mask without using temporal information, and propagation-based methods [27, 10, 32, 2, 13, 9], which use masks computed in previous frames to infer masks in the current frame. These methods often adopt a fully convolutional network (FCN) structure to learn object appearance implicitly through network weights, so they often require online learning to adapt to new objects in the test video. The explicit learning methods first construct an embedding space to memorize the object appearance, then classify each pixel’s label using similarity in that space. Thus, explicit learning is also called the matching-based method. A key issue in matching-based VOS segmentation is how to build the embedding space. DMM [36] only uses the first frame’s information. RGMP [24], FEELVOS [31], RANet [33] and AGSS [18] store information from the first and the latest frames. VideoMatch [11] and WaterNet [17] store information from several of the latest frames using a sliding window. STM [25] stores features every T frames (T = 5 in their experiments). However, when the video to segment is long, these static strategies can encounter out-of-memory crashes or miss sampling key frames. Our proposed adaptive feature bank (AFB) is a first non-uniform frame-sampling strategy in VOS that can more flexibly and dynamically manage objects’ key features in videos. AFB performs dynamic feature merging and removal, and can handle videos of any length effectively. Recent image segmentation techniques introduce fine-grained modules to improve local accuracy. ShapeMask [15] revises the FCN decoder to refine the segmentation. PointRend [14] defines uncertainty on a binary mask, and performs one-pass detection and refinement on uncertain regions. We propose an uncertain-region refinement (URR) strategy to perform boundary refinement in video segmentation.
URR includes (1) a more general multi-object uncertainty score for estimated masks, (2) a novel confidence loss to generate cleaner masks, and (3) a non-local mechanism to more reliably refine uncertain regions. 3 Approach The overview of our framework is illustrated in Fig. 1. First, as shown in the blue region, we use a basic pipeline of matching-based segmentation (Sec. 3.1) to generate initial segmentation masks. In Sec. 3.2, we propose an adaptive feature bank module to dynamically organize the past frame information. In Sec. 3.3, given the initial segmentation, we design a confidence loss to estimate the ambiguity of misclassification, and a fine-grained module to classify the uncertain regions. These two components are marked in the red region in Fig. 1. 3.1 Matching-based Segmentation Given an evaluation video, we encode the first frame and its groundtruth annotation to build a feature bank. Then, we use the feature bank to match and segment the target objects starting from the second frame. The decoder takes the matching results to estimate the frame’s object masks. Encoders. A query encoder is designed to encode the current frame, named the query frame, into its feature map for segmentation. We use ResNet-50 [8] as the backbone and take the output of layer 3 as the feature map Q ∈ R(H/8)×(W/8)×1024, where H and W are the height and width. For segmenting the tth frame, we treat the past frames from 1 to t−1 as reference frames. A reference encoder is designed for memorizing the characteristics of the target objects. Suppose there are L objects of interest; we encode the reference frame object by object and output L feature maps P̂i, i ∈ [1, L]. The reference encoder is a modification of the original ResNet-50. For each object i, it takes both the reference frame and its corresponding mask as inputs, then extracts an object-level feature map P̂i ∈ R(H/8)×(W/8)×1024. Combining the object-level feature maps together, we obtain the feature maps of the reference frame at index j, Pj = {P̂1, P̂2, · · · , P̂L}, where j ∈ [1, t− 1]. Feature map embedding. Traditional matching-based methods directly compare the query feature map with the reference feature maps. Although such a design is good for classification, it lacks the semantic information needed to estimate the object masks. Inspired by STM [25], we utilize a similar feature map embedding module. The feature maps are encoded into two embedding spaces, named key k and value v, by two convolutional modules. Specifically, we match the feature maps by their keys k, while allowing their values v to differ in order to preserve as much semantic information as possible. The feature bank stores pairs of keys and values from the past frames. In the next section, we compare the feature maps of the current frame with the feature bank to estimate the object masks. The details of maintaining the feature bank are elaborated in Section 3.2. Matcher. A query frame is encoded into pairs of key kQ and value vQ through the query encoder and feature map embedding. We maintain L feature banks FBi, i ∈ [1, L], from the past frames, one for each object i. The similarity between the query frame and the feature banks is calculated object by object.
For each point p in the query frame, we use a weighted summation to retrieve the closest value v̂i(p) in the ith object’s feature bank: v̂i(p) = Σ_{(kFB, vFB)∈FBi} g(kQ(p), kFB) vFB, (1) where i ∈ [1, L], and g is the softmax function g(kQ(p), kFB) = exp(kQ(p) • kFB) / Σ_{(kFB′, vFB′)∈FBi} exp(kQ(p) • kFB′), where • represents the dot product between two vectors. We concatenate the query value map with its most similar retrieved value map as yi = [vQ, v̂i], i ∈ [1, L], where yi is the matching result between the query frame and the feature bank for object i. Decoder. The decoder takes the output of the matcher, yi, i ∈ [1, L], to estimate each object mask independently, where yi depicts the semantic information for object i. We follow the refinement module used in [24, 18, 25], which gradually upscales the feature map by a set of residual convolutional blocks. At each stage, the refinement module takes both the output of the previous stage and a feature map from the query encoder at the corresponding scale through skip connections. After the decoder module, we obtain the initial object masks Mi for each object i. We minimize the cross entropy loss Lcls between the object masks and the groundtruth labels C. The loss Lcls is averaged across all pixels p: Lcls(M, C) = −(1/|p|) Σ_p [log(exp(MC) / Σ_i exp(Mi))]_p. (2) 3.2 Adaptive Feature Bank We build a feature bank to store features of each object and classify new pixels/regions in the current frame t. Storing features from all the previous frames {1, . . . , t− 1} is impossible, because it would make the bank prohibitively big (as the length of the video clip grows) and make querying slow. Recent approaches either store the features from every few frames or from the several latest frames. For example, STM [25] uniformly stores one of every K = 5 frames in the feature bank; on an NVIDIA 1080Ti card with 11GB memory, this can only handle a single-object video of at most roughly 350 frames. Practical videos are often longer (e.g., the average YouTube video is about 12 minutes, or 22K frames); for a 10-minute video, STM needs to set K = 300, which will probably miss many important frames and information. Hence, we propose an adaptive feature bank (AFB) to more effectively manage the object’s key features. AFB contains two operations: absorbing new features and removing obsolete ones. Absorbing new features. In video segmentation, although features from the most recent frames are often more important, earlier frames may contain useful information. Therefore, rather than simply ignoring those earlier frames, we keep earlier features and organize all the features by weighted averaging (which has been shown effective for finding optimal data representations, such as in Neural Gas [7]). Fig. 2 illustrates how the adaptive feature bank absorbs new features. Existing features and new features are marked in blue and red, respectively. When a new feature is extracted, if it is close enough to an existing one, we merge them. Such a merge avoids storing redundant information and helps memory efficiency. Meanwhile, it allows a flexible update of the stored features according to the object’s changing appearance. At the beginning of the video segmentation, the feature bank is initialized with the features of the first frame. We build an independent feature bank for each object. Since the object-level feature banks are maintained separately, we omit the object index here to simplify the formulae.
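For reference, the readout in Eqn. (1) is simply a softmax attention over the stored key-value pairs of one object's bank. A minimal PyTorch-style sketch is given below; the tensor shapes and function name are assumptions for illustration only.

import torch
import torch.nn.functional as F

def read_from_bank(k_query, bank_keys, bank_values):
    """Soft readout of Eqn. (1) for a single object.

    k_query:     (HW, Ck) query keys, one row per spatial location p.
    bank_keys:   (N,  Ck) keys stored in this object's feature bank.
    bank_values: (N,  Cv) values stored in this object's feature bank.
    Returns v_hat of shape (HW, Cv): the weighted sum of bank values.
    """
    logits = k_query @ bank_keys.t()      # dot products kQ(p) . kFB
    weights = F.softmax(logits, dim=1)    # softmax over the bank entries
    return weights @ bank_values          # weighted sum of the bank values

# The decoder then consumes y_i = [v_query, v_hat] for each object i.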
Suppose we have estimated the object mask of the (t− 1)th frame; the (t− 1)th frame and the estimated mask are then encoded into features (kPt−1, vPt−1) through the reference encoder and the embedding module. For each new feature a(i) = (kPt−1(i), vPt−1(i)) and each old feature stored in the feature bank, b(j) = (kFB(j), vFB(j)) ∈ FB, we employ the normalized inner product as the similarity function: h(a(i), b(j)) = (kPt−1(i) • kFB(j)) / (‖kPt−1(i)‖ ‖kFB(j)‖). (3) For each new feature a(i), we select the most similar feature b(j′) from the feature bank, with H(a(i)) = max_{b(j)∈FB} h(a(i), b(j)). If H(a(i)) is large enough, the two features are similar and we merge the new one into the feature bank. Specifically, when H(a(i)) > h, kFB(j′) = (1− λp) kFB(j′) + λp kPt−1(i), vFB(j′) = (1− λp) vFB(j′) + λp vPt−1(i), (4) where h = 0.95 controls the merging rate and λp = 0.1 controls the impact of the moving average. Otherwise, for all H(a(i)) ≤ h, because the new features are so distinct from all the existing ones, we append the new features to the feature bank: kFB = kFB ∪ kPt−1(i), vFB = vFB ∪ vPt−1(i). (5) In our experiments, we find that about 90% of the new features satisfy the merging condition, so we only need to add the remaining 10% each time. Removing obsolete features. Though the above updating strategy relieves memory pressure significantly (e.g., 90% less memory consumption), the feature bank still gradually expands as the number of frames grows. Similar to a cache replacement policy, we measure which old features are least likely to be useful and may be eliminated. We build this measurement on the least-frequently used (LFU) index. Each time we use the feature bank to match the query frame in Eqn. 1, if the similarity function g is greater than a threshold l = 10−4, we increase the count of this feature. Specifically, for every (kFB(j), vFB(j)) ∈ FB, the LFU index is counted by cnt(j) := cnt(j) + log(Σ_{(kQ(i), vQ(i))} sgn(g(kQ(i), kFB(j)) > l) + 1), LFU(j) = cnt(j) / l(j), (6) where l(j) is the time span that the feature has stayed in the feature bank and the log function is used to smooth the LFU index. In practice, when the size of the feature bank is about to exceed the predefined budget, we remove the features with the smallest LFU index until the size of the feature bank is below the budget. The LFU index counting and feature removal procedures are very efficient. This adaptive feature bank scheme can be generalized to other matching-based video processing methods to maintain the bank size, making it suitable for handling videos of arbitrary length. 3.3 Uncertain-region Refinement In the decoding stage, the object masks are computed from upscaled low-resolution feature maps. Therefore, the object boundaries of such estimated masks are often ambiguous. The classification accuracy of the boundary regions, however, is critical to the segmentation results. Hence, we propose a new scheme, named uncertain-region refinement (URR), to tackle boundary and other uncertain regions. It includes a new loss to evaluate such uncertainty and a novel local refinement mechanism to adjust fine-grained segmentation. Confidence loss. After decoding and a softmax normalization, we have a set of initial segmentations Mi for each object i, i ∈ [1, L]. The object mask Mi represents the likelihood of each pixel p belonging to object i, where the value range of Mi is [0, 1] and Σ_{i=1}^{L} Mi(p) = 1.
In other words, for each pixel p, there are L values Mi(p) in [0, 1], indicating the likelihood of p being one of these L objects. We can simply pick the index i of the largest value Mi(p) as p’s label. More adaptively, we define a pixel-wise uncertainty map U to measure the classification ambiguity at each pixel, using the ratio of the largest likelihood value M̂1 to the second largest value M̂2: U = exp(1 − M̂1/M̂2), (7) where M̂1/M̂2 ∈ [1, +∞). The uncertainty map U is in (0, 1], where a smaller value means more confidence. The confidence loss Lconf of a set of object masks is defined as Lconf = ‖U‖2. (8) During the training stage, our framework is optimized using the following loss function: L = Lcls + λu Lconf, (9) where λu = 0.5 is a weight scalar. Lcls (Eqn. 2) is the cross entropy loss for pixel-wise classification. Lconf is designed to minimize the ambiguity of the estimated masks, i.e., to push each object mask towards a 0/1 map. Local refinement mechanism. We propose a novel local refinement mechanism to refine the ambiguous regions. Empirically, given two neighboring points in the spatial domain, if they belong to the same object, their features are usually close. The main intuition is that we use the pixels that have high classification confidence to refine the other, uncertain points in their neighborhood. Specifically, for each uncertain pixel p, we compose its local reference features y(p) = {yi(p) | i ∈ [1, L]} from p’s neighborhood, where L is the number of target objects. If the local feature r(p) of p is close to yi(p), we say pixel p is likely to be classified as object i. The local reference feature yi(p) is computed by a weighted average over a small neighborhood N(p): yi(p) = (1 / Σ_{q∈N(p)} Mi(q)) Σ_{q∈N(p)} Mi(q) r(q), (10) where the weight Mi is the object mask for object i. Then, a residual network module fl is designed to learn to predict the local similarity. We assign a local refinement mask e to each pixel p by comparing the similarity between r(p) and yi(p): ei(p) = ci(p) fl(r(p), yi(p)), (11) where ci(p) = max_{q∈N(p)} Mi(q). The ci are confidence scores for adjusting the impact of the local refinement mask. Finally, we obtain the final segmentation Si for each object i by adding the local refinement mask ei to the initial object mask Mi: Si(p) = Mi(p) + U(p) ei(p). (12) Fig. 3 shows the effectiveness of the proposed uncertain-region refinement (URR). The initial segmentation (marked in blue) is ambiguous, and some areas lack confidence (marked in red). As shown in Fig. 3(d)(e), when our model is trained with Lconf, the uncertain-region refinement improves the segmentation quality. 4 Training Details Our model is first pretrained on simulated videos generated from static image datasets. Then, for each benchmark, our model is further trained on its training videos. Pretraining on image datasets. Since we do not introduce any temporal smoothness assumptions, the learnable modules in our model do not require long videos for training. Pretraining on image datasets is widely used in VOS methods [27, 25]; we simulate training videos from static image datasets [5, 29, 16, 19, 6] (136,032 images in total). A synthetic video clip has a first frame and 5 subsequent frames, which are generated from the same image by data augmentation (random affine, color, flip, resize, and crop).
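As an illustration, the uncertainty map and the confidence loss of Eqns. (7)-(8), which enter the training objective used below, take only a few lines to compute. The following is a minimal PyTorch-style sketch; tensor shapes, the epsilon for numerical stability, and the function names are assumptions.

import torch

def uncertainty_map(M, eps=1e-7):
    """Uncertainty of Eqn. (7): ratio of top-1 to top-2 object likelihoods.

    M: (L, H, W) soft masks, assumed to sum to 1 over the object dimension.
    Returns U of shape (H, W) in (0, 1]; larger values mean more ambiguity.
    """
    top2 = M.topk(2, dim=0).values            # (2, H, W), sorted descending
    return torch.exp(1.0 - top2[0] / (top2[1] + eps))

def confidence_loss(M):
    """Confidence loss of Eqn. (8): the L2 norm of the uncertainty map."""
    return uncertainty_map(M).norm(p=2)

# The final masks of Eqn. (12) are then S_i = M_i + U * e_i, where e_i is the
# output of the local refinement module (not sketched here).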
We use the first frame to initialize the feature bank, and the remaining 5 frames form a mini-batch to train our framework by minimizing the loss function L in Eqn. 9. Main training on the benchmark datasets. Similar to the pretraining routine, we randomly select 6 frames per training video as a training sample and apply data augmentation to those frames. The input frames are randomly resized and cropped to 400 × 400 px for all training. For each training sample, we randomly select at most 3 objects for training. We minimize our loss using the AdamW [21] optimizer (β = (0.9, 0.999), eps = 10−8, weight decay 0.01). The initial learning rate is 10−5 for pretraining and 4 × 10−6 for main training. Note that we directly use the network output without post-processing or video-by-video online training. 5 Experiments 5.1 Datasets and Evaluation Metrics We evaluated our model (AFB-URR) on DAVIS17 [28] and YouTube-VOS18 [35], two large-scale VOS benchmarks with multiple objects. DAVIS17 contains 60 training videos and 30 validation videos. YouTube-VOS18 (YV) contains 3,471 training videos and 474 videos for validation. We implemented our framework in PyTorch [26] and conducted experiments on a single NVIDIA 1080Ti GPU. Qualitative results of our framework on the DAVIS17 dataset are shown in Fig. 4. More qualitative comparisons are reported in the supplementary file. We adopted the evaluation metrics from the DAVIS benchmark. The region accuracy J calculates the intersection-over-union (IoU) of the estimated masks and the groundtruth masks. The boundary accuracy F measures the accuracy of the boundaries, via bipartite matching between the boundary pixels. 5.2 Comparison with the State-of-the-art Results on DAVIS benchmarks. We compared our approach with recent implicit and explicit learning methods. Table 1 reports three accuracy scores [28] in percentages: the mean (M) is the average value, recall (R) measures the fraction of sequences scoring higher than a threshold τ = 0.5, and decay (D) measures how the performance changes over time. Our method significantly outperforms existing methods: our J&F mean score is 74.6 without any online fine-tuning. Our model also has better runtime performance than the baseline STM [25]. On DAVIS17, with an NVIDIA 1080Ti, STM achieves 3.4 fps with J&F = 71.6, and ours achieves 4.0 fps with J&F = 74.6. We can also trade accuracy for better efficiency: if we limit the memory usage to under 20%, our model achieves 5.7 fps with J&F = 71.7. Results on YouTube-VOS benchmarks. The validation set contains 474 first-frame-annotated videos. They include objects from 65 training categories and 26 categories unseen in training. Table 2 shows a comparison with previous state-of-the-art methods on the open evaluation server [35]. Our framework achieves the best overall score of 79.6 because the adaptive feature bank improves the robustness and reliability across different scenarios. For those videos whose objects have already been seen in the training videos, STM’s results are somewhat better than ours. The reason could be that their model and ours are evaluated under different memory budgets: STM [25] evaluated their work on an NVIDIA V100 GPU with 16GB memory, while we evaluated ours on a weaker machine (one NVIDIA 1080Ti GPU with 11GB memory). However, our framework is more robust in segmenting unseen objects, 74.1 in J and 82.6 in F, compared with STM’s 72.8 in J and 80.9 in F. Overall, our proposed model has great generalizability and achieves state-of-the-art performance.
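Because both the comparisons above and the long-video experiments that follow depend on keeping the feature bank within a fixed memory budget, a minimal sketch of the absorb-and-remove update of Eqns. (3)-(6) is given below. The bank layout, variable names, and shapes are assumptions for illustration, not the released implementation.

import torch
import torch.nn.functional as F

def absorb(bank_k, bank_v, new_k, new_v, h=0.95, lam=0.1):
    """Absorb step (Eqns. 3-5): merge similar features, append distinct ones.

    bank_k, bank_v: (N, C) stored keys / values of one object's bank.
    new_k,  new_v:  (M, C) keys / values extracted from the latest frame.
    """
    sim = F.normalize(new_k, dim=1) @ F.normalize(bank_k, dim=1).t()  # Eqn. (3)
    best, idx = sim.max(dim=1)               # most similar bank entry per new feature
    merge = best > h
    j = idx[merge]
    bank_k[j] = (1 - lam) * bank_k[j] + lam * new_k[merge]            # Eqn. (4)
    bank_v[j] = (1 - lam) * bank_v[j] + lam * new_v[merge]
    bank_k = torch.cat([bank_k, new_k[~merge]], dim=0)                # Eqn. (5)
    bank_v = torch.cat([bank_v, new_v[~merge]], dim=0)
    return bank_k, bank_v

def remove_obsolete(bank_k, bank_v, lfu, budget):
    """Removal step (Eqn. 6): keep the entries with the largest LFU index."""
    if bank_k.shape[0] > budget:
        keep = torch.argsort(lfu, descending=True)[:budget]
        bank_k, bank_v, lfu = bank_k[keep], bank_v[keep], lfu[keep]
    return bank_k, bank_v, lfu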
5.3 Segmentation of Long Videos The widely used benchmarks DAVIS17 (67 frames per video on average) and YouTube-VOS (132 frames per video on average) only contain trimmed short clips. To better evaluate the performance of our method on long videos from real-world tasks, we also conducted experiments on several long videos (available at https://www.kaggle.com/gvclsu/long-videos). We randomly selected three videos from the Internet that are longer than 1.5K frames and whose main objects appear continuously. Each video has 20 uniformly sampled frames manually annotated for evaluation. Table 3 reports the experimental results of ours and three other state-of-the-art methods, run on an NVIDIA 1080Ti (11GB memory). We used these methods’ released code and their models pretrained on the YouTube-VOS dataset. Note that STM [25] could only store at most 50 frames per video under 11GB of GPU memory, so we set this parameter to 50. RVOS [30] and A-GAME [12] achieved lower scores because they only use information from the first or the latest frame; they failed to segment the object of interest after 1K frames. STM has a J&F score of 79.3. Because the total number of frames that STM can store is fixed, as the video length grows STM has to increase the key-frame interval and has a higher chance of missing important frames and information. In contrast, the proposed AFB mechanism can dynamically manage the key information from previous frames. We achieved the best J&F score of 83.3. 5.4 Ablation Study We conducted an ablation analysis of our framework on the DAVIS17 dataset. In Table 4, the quantitative results show the effectiveness of the proposed key modules. First, we analyzed the impact of the adaptive feature bank (AFB) and evaluated 4 memory management schemes, namely, keeping features from (1) the first frame, (2) the latest frame, (3) the first and the latest frames, and (4) the first and the latest 5 frames. The remaining modules follow the full framework (i.e., with uncertain-region refinement (URR) included). From the first four rows in Table 4, while reference frames help the segmentation, simply adding multiple frames may not further improve the performance. Our adaptive feature bank more effectively organizes the key information of all the previous frames. Consequently, our framework AFB+URR has the best performance. Second, we analyzed the effectiveness of the proposed uncertain-region refinement (URR). We disabled URR by training the framework without the confidence loss Lconf in Eqn. 9 or without local refinement. Our uncertainty evaluation and local refinement significantly improve performance in these regions, because object boundary regions are often ambiguous, and their uncertainty errors easily accumulate and can harm segmentation results. 6 Conclusion We presented a novel framework for semi-supervised video object segmentation. Our framework includes an adaptive feature bank (AFB) module and an uncertain-region refinement (URR) module. The adaptive feature bank effectively organizes key features for segmentation. The uncertain-region refinement is designed for refining ambiguous regions. Our approach outperforms the state-of-the-art methods on two large-scale benchmark datasets. Broader Impact Our framework is designed for the semi-supervised video object segmentation task, also known as one-shot video object segmentation. Given the first frame annotation, our model can segment the object of interest in the subsequent frames.
Due to the great generalizability of our model, the category of the target object is unrestricted. Our adaptive feature bank and matching-based framework can be modified to benefit other video processing tasks in autonomous driving, robot interaction, and video surveillance monitoring that need to handle long videos and appearance-changing content. For example, one application is real-time flood detection and monitoring using surveillance cameras. Flooding constitutes the largest portion of insured losses among all disasters in the world [1]. Nowadays, many cameras in cities, including traffic monitoring and security surveillance cameras, are able to capture time-lapse images and videos. By leveraging our video object segmentation framework, flooding can be located in the videos and the water level can be estimated. The societal impact is immense because such a flood monitoring system can predict and give timely alerts for flooding events caused by rainstorms or hurricanes. Our framework is trained and evaluated on large-scale segmentation datasets, and we do not leverage biases in the data. Acknowledgments and Disclosure of Funding This work is partly supported by Louisiana Board of Regents ITRS LEQSF(2018-21)-RD-B-03 and National Science Foundation of USA OIA-1946231.
1. What is the main contribution of the paper in the field of Visual Object Segmentation? 2. What are the strengths of the proposed method, particularly in its novel approach and technical soundness? 3. What are the weaknesses of the paper regarding the Local Refinement Mechanism and performance analysis? 4. How does the reviewer assess the significance of the paper's contributions and its impact on future research?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes a method for Video Object Segmentation (multi-object, semi-supervised: the first frame annotation is given) that keeps the appearance history of the object in an adaptive way, claiming better organization. The authors base the retrieval from the feature bank on the key-value principle. They also use a score for the segmentation uncertainty and penalize it at the loss level. The experimental results are good, with a clear advantage on unseen objects compared with the previous SOTA on YouTube-VOS18, and SOTA results on DAVIS2017. Strengths The paper is technically sound and the need for an adaptive memory for both the appearance and the semantics is clearly motivated. The “Local Refinement Mechanism” works like a binarization step, forcing the solution to be closer to the final/wanted 0/1 map. The experiments are done on two datasets, with SOTA results. The solution is novel to a certain degree. Mostly, it brings together well-validated principles and ideas from tracking and segmentation and nicely connects them in a differentiable pipeline. Weaknesses The need for the Local Refinement Mechanism is not validated in the ablation. The value of pretraining on the image sets is also not taken into account in the ablation or interpreted in the text (how do the number of images/classes/datasets influence the score?). The solution lacks a running-time performance analysis, which is quite important in the field.
NIPS
Title Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement Abstract This paper presents a new matching-based framework for semi-supervised video object segmentation (VOS). Recently, state-of-the-art VOS performance has been achieved by matching-based algorithms, in which feature banks are created to store features for region matching and classification. However, how to effectively organize information in the continuously growing feature bank remains underexplored, and this leads to an inefficient design of the bank. We introduced an adaptive feature bank update scheme to dynamically absorb new features and discard obsolete features. We also designed a new confidence loss and a finegrained segmentation module to enhance the segmentation accuracy in uncertain regions. On public benchmarks, our algorithm outperforms existing state-of-thearts. 1 Introduction Video object segmentation (VOS) is a fundamental step in many video processing tasks, like video editing and video inpainting. In the semi-supervised setting, the first frame annotation is given, which depicts the objects of interest of the video sequence. The goal is to segment mask of that object in the subsequent frames. Many deep learning based methods have been proposed to solve this problem in recent years. When people tackle the semi-supervised VOS task, the segmentation performance is affected by two main steps: (1) distinguish the object regions from the background, (2) segment the object boundary clearly. A key question in VOS is how to learn the cues of target objects. We divide recent works into two categories, implicit learning and explicit learning. Conventional implicit approaches include detection-based and propagation-based methods [3, 27, 10, 32, 2, 13, 23]. They often adopt the fully convolutional network (FCN) [20] pipeline to learn object features by the network weights implicitly; then, before segmenting a new video, these methods often need an online learning to fine-tune their weights to learn new object cues from the video. Explicit approaches learn object appearance explicitly. They often formulate the segmentation as pixel-wise classification in a learnt embedding space [31, 4, 24, 11, 33, 18, 12, 25, 17]. These approaches first construct an embedding space to memorize the object appearance, then segment the subsequent frames by computing similarity. Therefore, they are also called matching-based methods. Recently, matching-based methods achieve the state-of-the-art results in the VOS benchmark. A fundamental issue in matching-based VOS segmentation is how to effectively exploit previous frames’ information to segment the new frame. Since the memory size is limited, it is not possible and unnecessary to memorize information from all the previous frames. Most methods [24, 33, 31, 18, 25] only utilize the first and the latest frame or uniformly sample key frames. However, when the given video becomes longer, these methods often either miss sampling on some key-frames or encounter ∗Corresponding author. Codes are available at https://github.com/xmlyqing00/AFB-URR. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. out-of-memory crash. To tackle this problem, we propose an adaptive feature bank (AFB) to organize the target object features. This adaptive feature bank absorbs new features by weighted averaging and discards obsolete features according to the least frequently used (LFU) index. 
As a result, our model can memorize the characteristics of multiple objects and segment them simultaneously in long videos with low memory consumption. Besides identifying the target object, clearly segmenting the object boundary is also critical to VOS performance: (1) people are often sensitive to boundary segmentation, and (2) when estimated masks on some boundary regions are ambiguous and hard to classify, their misclassification easily accumulates over the video. However, most recent VOS methods follow an encoder-decoder mode to estimate the object masks, and the boundary of the object mask becomes vague when it is iteratively upscaled from a lower resolution. Therefore, we propose an uncertain-region refinement (URR) scheme to improve the segmentation quality. It includes a novel classification confidence loss to estimate the ambiguity of the segmentation, and a local fine-grained segmentation module to refine the ambiguous regions. Our main contributions are threefold: (1) We propose an adaptive and efficient feature bank to maintain the most useful information for video object segmentation. (2) We introduce a confidence loss to estimate the ambiguity of the segmentation results, and we design a local fine-grained segmentation module to refine these ambiguous regions. (3) We demonstrate the effectiveness of our method on segmenting long videos, which are often seen in practical applications.
2 Related Work Recent video object segmentation works can be divided into two categories: implicit learning and explicit learning. The implicit learning approaches include detection-based methods [3, 23], which segment the object mask without using temporal information, and propagation-based methods [27, 10, 32, 2, 13, 9], which use masks computed in previous frames to infer masks in the current frame. These methods often adopt a fully convolutional network (FCN) structure to learn object appearance through network weights implicitly, so they often require online learning to adapt to new objects in the test video. The explicit learning methods first construct an embedding space to memorize the object appearance, then classify each pixel's label using similarity in that space. Thus, explicit learning methods are also called matching-based methods. A key issue in matching-based VOS segmentation is how to build the embedding space. DMM [36] only uses the first frame's information. RGMP [24], FEELVOS [31], RANet [33] and AGSS [18] store information from the first and the latest frames. VideoMatch [11] and WaterNet [17] store information from several latest frames using a sliding window. STM [25] stores features every T frames (T = 5 in their experiments). However, when the video to segment is long, these static strategies can encounter out-of-memory crashes or miss sampling key frames. Our proposed adaptive feature bank (AFB) is the first non-uniform frame-sampling strategy in VOS that can more flexibly and dynamically manage objects' key features in videos. AFB performs dynamic feature merging and removal, and can handle videos of any length effectively. Recent image segmentation techniques introduce fine-grained modules to improve local accuracy. ShapeMask [15] revises the FCN decoder to refine the segmentation. PointRend [14] defines uncertainty on a binary mask and performs a one-pass detection and refinement on uncertain regions. We propose an uncertain-region refinement (URR) strategy to perform boundary refinement in video segmentation.
URR includes (1) a more general multi-object uncertainty score for estimated masks, (2) a novel confidence loss to generate cleaner masks, and (3) a non-local mechanism to more reliably refine uncertain regions.
3 Approach The overview of our framework is illustrated in Fig. 1. First, as shown in the blue region, we use a basic pipeline of matching-based segmentation (Sec. 3.1) to generate initial segmentation masks. In Sec. 3.2, we propose an adaptive feature bank module to dynamically organize the past frames' information. In Sec. 3.3, given the initial segmentation, we design a confidence loss to estimate the ambiguity of misclassification, and a fine-grained module to classify the uncertain regions. These two components are marked in the red region in Fig. 1.
3.1 Matching-based Segmentation Given an evaluation video, we encode the first frame and its groundtruth annotation to build a feature bank. Then, we use the feature bank to match and segment target objects starting from the second frame. The decoder takes the matching results to estimate the frame's object masks. Encoders. A query encoder is designed to encode the current frame, named the query frame, into its feature map for segmentation. We use ResNet-50 [8] as the backbone and take the output of layer-3 as the feature map $Q \in \mathbb{R}^{(H/8)\times(W/8)\times 1024}$, where $H$ and $W$ are the height and width. For segmenting the $t$-th frame, we treat the past frames $1$ to $t-1$ as reference frames. A reference encoder is designed for memorizing the characteristics of the target objects. Suppose there are $L$ objects of interest; we encode the reference frame object by object and output $L$ feature maps $\hat{P}_i$, $i \in [1, L]$. The reference encoder is a modification of the original ResNet-50. For each object $i$, it takes both the reference frame and its corresponding mask as inputs, then extracts an object-level feature map $\hat{P}_i \in \mathbb{R}^{(H/8)\times(W/8)\times 1024}$. Combining the object-level feature maps together, we obtain the feature maps of the reference frame at index $j$, $P_j = \{\hat{P}_1, \hat{P}_2, \cdots, \hat{P}_L\}$, where $j \in [1, t-1]$. Feature map embedding. Traditional matching-based methods directly compare the query feature map with the reference feature maps. Although such a design is good for classification, it lacks semantic information for estimating the object masks. Inspired by STM [25], we utilize a similar feature map embedding module. The feature maps are encoded into two embedding spaces, named key $k$ and value $v$, by two convolutional modules. Specifically, we match the feature maps by their keys $k$, while allowing their values $v$ to differ in order to preserve as much semantic information as possible. The feature bank stores pairs of keys and values of the past frames. In the next section, we compare the feature maps of the current frame with the feature bank to estimate the object masks. The details of maintaining the feature bank are elaborated in Section 3.2. Matcher. A query frame is encoded into pairs of key $k^Q$ and value $v^Q$ through the query encoder and feature map embedding. We maintain $L$ feature banks $FB_i$, $i \in [1, L]$, built from the past frames, one for each object $i$. The similarity between the query frame and the feature banks is calculated object by object.
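To make the encoders and the key/value embedding described above concrete, here is a minimal PyTorch sketch, to be read before the per-pixel matching in Eqn. (1) below. It is a sketch under stated assumptions, not the authors' implementation: the key/value channel widths (128/512), the use of torchvision's stock ResNet-50 (whose layer-3 output has stride 16, whereas the paper reports stride-8 maps, presumably from a modified backbone), and all names are illustrative.

```python
# Minimal sketch of a query encoder with key/value embedding heads.
# Assumptions: stock torchvision ResNet-50 backbone, key_dim=128, val_dim=512.
import torch
import torch.nn as nn
import torchvision


class QueryEncoder(nn.Module):
    def __init__(self, key_dim=128, val_dim=512):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        # Keep the stages up to and including layer3 (1024 output channels).
        # Note: in stock ResNet-50 this map has stride 16; the paper reports
        # (H/8) x (W/8) maps, which would require a modified backbone.
        self.backbone = nn.Sequential(
            resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
            resnet.layer1, resnet.layer2, resnet.layer3,
        )
        # Two convolutional heads embed the shared feature map into key/value spaces.
        self.key_head = nn.Conv2d(1024, key_dim, kernel_size=3, padding=1)
        self.val_head = nn.Conv2d(1024, val_dim, kernel_size=3, padding=1)

    def forward(self, frame):               # frame: (B, 3, H, W)
        feat = self.backbone(frame)         # (B, 1024, H', W')
        return self.key_head(feat), self.val_head(feat)


encoder = QueryEncoder()
k_q, v_q = encoder(torch.randn(1, 3, 384, 384))
print(k_q.shape, v_q.shape)                 # key and value maps share the spatial size
```

The reference encoder would follow the same pattern but, as described above, additionally takes the object mask as input and is run once per object.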
For each point $p$ in the query frame, we use a weighted summation to retrieve the closest value $\hat{v}_i(p)$ in the $i$-th object feature bank, $\hat{v}_i(p) = \sum_{(k^{FB}, v^{FB}) \in FB_i} g(k^Q(p), k^{FB})\, v^{FB}$, (1) where $i \in [1, L]$ and $g$ is the softmax function $g(k^Q(p), k^{FB}) = \frac{\exp(k^Q(p) \cdot k^{FB})}{\sum_{\forall j} \exp(k^Q(p) \cdot k^{FB}(j))}$, with $\cdot$ denoting the dot product between two vectors. We concatenate the query value map with the retrieved value map as $y_i = [v^Q, \hat{v}_i]$, $i \in [1, L]$, where $y_i$ is the matching result between the query frame and the feature bank for object $i$. Decoder. The decoder takes the output of the matcher, $y_i$, $i \in [1, L]$, to estimate each object mask independently, where $y_i$ carries the semantic information for object $i$. We follow the refinement module used in [24, 18, 25] that gradually upscales the feature map by a set of residual convolutional blocks. At each stage, the refinement module takes both the output of the previous stage and a feature map from the query encoder at the corresponding scale through skip connections. After the decoder module, we obtain the initial object masks $M_i$ for each object $i$. We minimize the cross entropy loss $\mathcal{L}_{cls}$ between the object masks and the groundtruth labels $C$, averaged across all pixels $p$: $\mathcal{L}_{cls}(M, C) = -\frac{1}{|p|} \sum_p \left[ \log\left( \frac{\exp(M_c)}{\sum_i \exp(M_i)} \right) \right]_p$. (2)
3.2 Adaptive Feature Bank We build a feature bank to store features of each object and classify new pixels/regions in the current frame $t$. Storing features from all the previous frames $\{1, \ldots, t-1\}$ is impossible, because it would make the bank prohibitively big (as the length of the video clip grows) and make the query slow. Recent approaches either store the features from every few frames or from the several latest frames. For example, STM [25] uniformly stores every $K = 5$ frames in its feature bank; on an NVIDIA 1080Ti card with 11GB memory, this can only handle a single-object video of at most about 350 frames. Practical videos are often longer (e.g., the average YouTube video has about 12 minutes or 22K frames); for a 10-minute video, STM needs to set $K = 300$, which will probably miss many important frames and much information. Hence, we propose an adaptive feature bank (AFB) to more effectively manage objects' key features. AFB contains two operations: absorbing new features and removing obsolete ones. Absorbing new features. In video segmentation, although features from the most recent frames are often more important, earlier frames may contain useful information. Therefore, rather than simply ignoring those earlier frames, we keep earlier features and organize all the features by weighted averaging (which has been shown effective for finding optimal data representations, e.g., in Neural Gas [7]). Fig. 2 illustrates how the adaptive feature bank absorbs new features. Existing features and new features are marked in blue and red, respectively. When a new feature is extracted, if it is close enough to some existing one, we merge them. Such merging avoids storing redundant information and helps memory efficiency. Meanwhile, it allows a flexible update of the stored features according to the object's changing appearance. At the beginning of the video segmentation, the feature bank is initialized with the features of the first frame. We build an independent feature bank for each object. Since the object-level feature banks are maintained separately, we omit the object symbol here to simplify the formulae.
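Returning to the matching step, the per-pixel retrieval in Eqn. (1) above amounts to softmax attention of each query key over all keys stored in one object's feature bank. The following minimal sketch illustrates it; the bank size, tensor shapes, and function names are assumptions made for the example, and the feature-bank update rules themselves are detailed next.

```python
# Minimal sketch of the key-value retrieval in Eqn. (1).
import torch
import torch.nn.functional as F


def retrieve(query_key, bank_keys, bank_vals):
    """query_key: (Ck, H, W); bank_keys: (N, Ck); bank_vals: (N, Cv)."""
    ck, h, w = query_key.shape
    q = query_key.view(ck, h * w)          # (Ck, HW)
    logits = bank_keys @ q                 # (N, HW): dot products k^Q(p) . k^FB(j)
    attn = F.softmax(logits, dim=0)        # normalize over the N bank entries
    v_hat = bank_vals.t() @ attn           # (Cv, HW): weighted sum of stored values
    return v_hat.view(-1, h, w)            # (Cv, H, W)


# The retrieved map is concatenated with the query value map to form y_i.
q_key, q_val = torch.randn(128, 48, 48), torch.randn(512, 48, 48)
bank_k, bank_v = torch.randn(1000, 128), torch.randn(1000, 512)
y_i = torch.cat([q_val, retrieve(q_key, bank_k, bank_v)], dim=0)   # (1024, 48, 48)
```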
Suppose we have estimated the object mask of the $(t-1)$-th frame; then the $(t-1)$-th frame and the estimated mask are encoded into features $(k^P_{t-1}, v^P_{t-1})$ through the reference encoder and the embedding module. For each new feature $a(i) = (k^P_{t-1}(i), v^P_{t-1}(i))$ and each old feature stored in the feature bank $b(j) = (k^{FB}(j), v^{FB}(j)) \in FB$, we employ a normalized inner product as the similarity function, $h(a(i), b(j)) = \frac{k^P_{t-1}(i) \cdot k^{FB}(j)}{\|k^P_{t-1}(i)\|\, \|k^{FB}(j)\|}$. (3) For each new feature $a(i)$, we select the most similar feature $b(j')$ from the feature bank such that $H(a(i)) = \max_{\forall b(j) \in FB} h(a(i), b(j))$. If $H(a(i))$ is large enough, the two features are similar and we merge the new one into the feature bank. Specifically, when $H(a(i)) > h$, $k^{FB}(j') = (1-\lambda_p)\, k^{FB}(j') + \lambda_p\, k^P_{t-1}(i)$, $v^{FB}(j') = (1-\lambda_p)\, v^{FB}(j') + \lambda_p\, v^P_{t-1}(i)$, (4) where $h = 0.95$ controls the merging rate, and $\lambda_p = 0.1$ controls the impact of the moving average. Otherwise, for all $H(a(i)) \le h$, because the new features are distinct from all the existing ones, we append them to the feature bank, $k^{FB} = k^{FB} \cup k^P_{t-1}(i)$, $v^{FB} = v^{FB} \cup v^P_{t-1}(i)$. (5) In our experiments, we find that about 90% of new features satisfy the merging condition, so we only need to append the remaining 10% each time. Removing obsolete features. Although the above updating strategy relieves memory pressure significantly (e.g., 90% less memory consumption), the feature bank size still gradually expands as the number of frames grows. Similar to a cache replacement policy, we measure which old features are least likely to be useful and may be eliminated. We build this measurement using the least-frequently used (LFU) index. Each time we use the feature bank to match the query frame in Eqn. (1), if the similarity function $g$ is greater than a threshold $l = 10^{-4}$, we increase the count of this feature. Specifically, for all $(k^{FB}(j), v^{FB}(j)) \in FB$, the LFU index is counted by $cnt(j) := cnt(j) + \log\Big( \sum_{\forall (k^Q(i), v^Q(i))} \operatorname{sgn}\big( g(k^Q(i), k^{FB}(j)) > l \big) + 1 \Big)$, $LFU(j) = \frac{cnt(j)}{l(j)}$, (6) where $l(j)$ is the time span that the feature has stayed in the feature bank and the log function is used to smooth the LFU index. In practice, when the size of the feature bank is about to exceed the predefined budget, we remove the features with the lowest LFU index until the size of the feature bank is below the budget. The LFU index counting and feature removal procedures are very efficient. This adaptive feature bank scheme can be generalized to other matching-based video processing methods to maintain the bank size, making it suitable for handling videos of arbitrary length.
3.3 Uncertain-region Refinement In the decoding stage, the object masks are computed from the upscaled low-resolution images. Therefore, the object boundaries of such estimated masks are often ambiguous. The classification accuracy of the boundary regions, however, is critical to the segmentation results. Hence, we propose a new scheme, named uncertain-region refinement (URR), to tackle boundary and other uncertain regions. It includes a new loss to evaluate such uncertainty and a novel local refinement mechanism to adjust the fine-grained segmentation. Confidence loss. After decoding and a softmax normalization, we have a set of initial segmentations $M_i$ for each object $i$, $i \in [1, L]$. The object mask $M_i$ represents the likelihood of each pixel $p$ belonging to object $i$, where the values of $M_i$ lie in $[0, 1]$ and $\sum_{i=1}^{L} M_i(p) = 1$.
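Before turning to the uncertainty map, here is a minimal sketch of the adaptive feature bank update in Eqns. (3)-(6) above. Only the merge threshold $h = 0.95$, the moving-average rate $\lambda_p = 0.1$, and the LFU counting rule follow the text; the class layout, the budget value, and the per-feature Python loop are illustrative assumptions (a real implementation would batch these operations on the GPU).

```python
# Minimal NumPy sketch of the adaptive feature bank: merge, append, and LFU eviction.
import numpy as np


class AdaptiveFeatureBank:
    def __init__(self, h=0.95, lam=0.1, budget=1000):
        self.h, self.lam, self.budget = h, lam, budget
        self.keys, self.vals = [], []      # stored (k^FB, v^FB) pairs
        self.cnt, self.age = [], []        # LFU counters and time spans l(j)

    def absorb(self, new_keys, new_vals):
        for k, v in zip(new_keys, new_vals):
            if self.keys:
                K = np.stack(self.keys)
                sims = K @ k / (np.linalg.norm(K, axis=1) * np.linalg.norm(k) + 1e-8)
                j = int(np.argmax(sims))                    # most similar stored feature
                if sims[j] > self.h:                        # Eqn. (4): moving-average merge
                    self.keys[j] = (1 - self.lam) * self.keys[j] + self.lam * k
                    self.vals[j] = (1 - self.lam) * self.vals[j] + self.lam * v
                    continue
            self.keys.append(k); self.vals.append(v)        # Eqn. (5): append distinct feature
            self.cnt.append(0.0); self.age.append(1)

    def update_lfu(self, hits):
        # hits[j]: number of query pixels whose matching weight for entry j exceeded l.
        for j, n in enumerate(hits):
            self.cnt[j] += np.log(n + 1.0)                  # Eqn. (6): smoothed count
            self.age[j] += 1

    def evict(self):
        while len(self.keys) > self.budget:                 # drop least-frequently-used entries
            lfu = [c / a for c, a in zip(self.cnt, self.age)]
            j = int(np.argmin(lfu))
            for lst in (self.keys, self.vals, self.cnt, self.age):
                lst.pop(j)


bank = AdaptiveFeatureBank(budget=4)
bank.absorb(np.random.randn(8, 128), np.random.randn(8, 512))
bank.update_lfu(hits=[3] * len(bank.keys))
bank.evict()
print(len(bank.keys))                                       # at most the budget
```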
In other words, for each pixel $p$, there are $L$ values $M_i(p)$ in $[0, 1]$, indicating the likelihood of $p$ belonging to each of these $L$ objects. We can simply pick the index $i$ of the largest value $M_i(p)$ as $p$'s label. More adaptively, we define a pixel-wise uncertainty map $U$ to measure the classification ambiguity of each pixel, using the ratio of the largest likelihood value $\hat{M}^1$ to the second largest value $\hat{M}^2$, $U = \exp\big(1 - \frac{\hat{M}^1}{\hat{M}^2}\big)$, (7) where $\frac{\hat{M}^1}{\hat{M}^2} \in [1, +\infty)$. The uncertainty map $U$ lies in $(0, 1]$, where a smaller value means more confidence. The confidence loss $\mathcal{L}_{conf}$ of a set of object masks is defined as $\mathcal{L}_{conf} = \|U\|_2$. (8) During the training stage, our framework is optimized using the following loss function, $\mathcal{L} = \mathcal{L}_{cls} + \lambda_u \mathcal{L}_{conf}$, (9) where $\lambda_u = 0.5$ is a weight scalar. $\mathcal{L}_{cls}$ (Eqn. 2) is the cross entropy loss for pixel-wise classification. $\mathcal{L}_{conf}$ is designed to minimize the ambiguities of the estimated masks, i.e., to push each object mask towards a 0/1 map. Local refinement mechanism. We propose a novel local refinement mechanism to refine the ambiguous regions. Empirically, if two spatially neighboring points belong to the same object, their features are usually close. The main intuition is to use the pixels whose classification is highly confident to refine the uncertain pixels in their neighborhood. Specifically, for each uncertain pixel $p$, we compose its local reference features $y(p) = \{y_i(p) \mid i \in [1, L]\}$ from $p$'s neighborhood, where $L$ is the number of target objects. If the local feature $r(p)$ of $p$ is close to $y_i(p)$, we say pixel $p$ is likely to be classified as object $i$. The local reference feature $y_i(p)$ is computed by a weighted average over a small neighborhood $\mathcal{N}(p)$, $y_i(p) = \frac{1}{\sum_{q \in \mathcal{N}(p)} M_i(q)} \sum_{q \in \mathcal{N}(p)} M_i(q)\, r(q)$, (10) where the weight $M_i$ is the object mask for object $i$. Then, a residual network module $f_l$ is designed to learn to predict the local similarity. We assign a local refinement mask $e$ to each pixel $p$ by comparing the similarity between $r(p)$ and $y_i(p)$, $e_i(p) = c_i(p)\, f_l\big(r(p), y_i(p)\big)$, (11) where $c_i(p) = \max_{q \in \mathcal{N}(p)} M_i(q)$. The $c_i$ are confidence scores that adjust the impact of the local refinement mask. Finally, we obtain the final segmentation $S_i$ for each object $i$ by adding the local refinement mask $e_i$ to the initial object mask $M_i$, $S_i(p) = M_i(p) + U(p)\, e_i(p)$. (12) Fig. 3 shows the effectiveness of the proposed uncertain-region refinement (URR). The initial segmentation (marked in blue) is ambiguous, and some areas lack confidence (marked in red). As shown in Fig. 3(d)(e), when our model is trained with $\mathcal{L}_{conf}$, the uncertain-region refinement improves the segmentation quality.
4 Training Details Our model is first pretrained on simulated videos generated from static image datasets. Then, for different benchmarks, our model is further trained on their training videos. Pretraining on image datasets. Since we do not introduce any temporal smoothness assumptions, the learnable modules in our model do not require long videos for training. Pretraining on image datasets is widely used in VOS methods [27, 25]; we simulate training videos from static image datasets [5, 29, 16, 19, 6] (136,032 images in total). A synthetic video clip has one first frame and 5 subsequent frames, which are generated from the same image by data augmentation (random affine, color, flip, resize, and crop).
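To illustrate the uncertainty map of Eqn. (7), the confidence loss of Eqn. (8), and the combined training objective of Eqn. (9) used below, here is a minimal PyTorch sketch. Tensor shapes are assumptions, and $\|U\|_2$ is implemented as a mean of squared uncertainties so the penalty does not scale with image size, which is an interpretation rather than the authors' exact definition.

```python
# Minimal sketch of the uncertainty map (Eqn. 7), confidence loss (Eqn. 8),
# and combined objective (Eqn. 9).
import torch
import torch.nn.functional as F


def uncertainty_map(masks):
    """masks: (B, L, H, W) softmax likelihoods that sum to 1 over the L objects."""
    top2 = masks.topk(k=2, dim=1).values          # largest and second-largest per pixel
    ratio = top2[:, 0] / (top2[:, 1] + 1e-8)      # M^1 / M^2, lies in [1, inf)
    return torch.exp(1.0 - ratio)                 # U in (0, 1]; smaller = more confident


def total_loss(logits, labels, lambda_u=0.5):
    """logits: (B, L, H, W) decoder outputs; labels: (B, H, W) object indices."""
    masks = F.softmax(logits, dim=1)
    l_cls = F.cross_entropy(logits, labels)       # Eqn. (2), averaged over pixels
    l_conf = uncertainty_map(masks).pow(2).mean() # ||U||_2 as a mean squared penalty
    return l_cls + lambda_u * l_conf


logits = torch.randn(2, 3, 64, 64, requires_grad=True)
labels = torch.randint(0, 3, (2, 64, 64))
total_loss(logits, labels).backward()
```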
We use the first frame to initialize the feature bank, and the remaining 5 frames form a mini-batch to train our framework by minimizing the loss function $\mathcal{L}$ in Eqn. (9). Main training on the benchmark datasets. Similar to the pretraining routine, we randomly select 6 frames per training video as a training sample and apply data augmentation to those frames. The input frames are randomly resized and cropped to 400×400 px for all training. For each training sample, we randomly select at most 3 objects for training. We minimize our loss using the AdamW [21] optimizer ($\beta = (0.9, 0.999)$, $\epsilon = 10^{-8}$, weight decay 0.01). The initial learning rate is $10^{-5}$ for pretraining and $4 \times 10^{-6}$ for main training. Note that we directly use the network output without post-processing or video-by-video online training.
5 Experiments
5.1 Datasets and Evaluation Metrics We evaluated our model (AFB-URR) on DAVIS17 [28] and YouTube-VOS18 [35], two large-scale VOS benchmarks with multiple objects. DAVIS17 contains 60 training videos and 30 validation videos. YouTube-VOS18 (YV) contains 3,471 training videos and 474 videos for validation. We implemented our framework in PyTorch [26] and conducted experiments on a single NVIDIA 1080Ti GPU. Qualitative results of our framework on the DAVIS17 dataset are shown in Fig. 4. More qualitative comparisons are reported in the supplementary file. We adopted the evaluation metrics from the DAVIS benchmark. The region accuracy $\mathcal{J}$ calculates the intersection-over-union (IoU) of the estimated masks and the groundtruth masks. The boundary accuracy $\mathcal{F}$ measures the accuracy of the boundaries, via bipartite matching between the boundary pixels.
5.2 Comparison with the State-of-the-art Results on DAVIS benchmarks. We compared our approach with recent implicit and explicit learning methods. Table 1 reports three accuracy scores [28] in percentages: the mean (M) is the average value, recall (R) measures the fraction of sequences scoring higher than a threshold $\tau = 0.5$, and decay (D) measures how the performance changes over time. Our method significantly outperforms existing methods: our J&F mean score is 74.6 without any online fine-tuning. Our model also has better runtime performance than the baseline STM [25]. On DAVIS17, with an NVIDIA 1080Ti, STM achieves 3.4 fps with J&F = 71.6, and ours achieves 4.0 fps with J&F = 74.6. We can also trade accuracy for better efficiency: if we limit the memory usage to under 20%, our model achieves 5.7 fps with J&F = 71.7. Results on YouTube-VOS benchmarks. The validation set contains 474 first-frame-annotated videos. They include objects from 65 training categories and 26 categories unseen in training. Table 2 shows a comparison with previous state-of-the-art methods on the open evaluation server [35]. Our framework achieves the best overall score of 79.6 because the adaptive feature bank improves the robustness and reliability across different scenarios. For those videos whose objects have already been seen in the training videos, STM's results are somewhat better than ours. The reason could be that their model and ours are evaluated under different memory budgets: STM [25] was evaluated on an NVIDIA V100 GPU with 16GB memory, while we evaluated ours on a weaker machine (one NVIDIA 1080Ti GPU with 11GB memory). However, our framework is more robust in segmenting unseen objects, with 74.1 in J and 82.6 in F, compared with STM's 72.8 in J and 80.9 in F. Overall, our proposed model generalizes well and achieves state-of-the-art performance.
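For reference, the region accuracy $\mathcal{J}$ used in these comparisons is the standard intersection-over-union between an estimated mask and its groundtruth mask. The short sketch below shows the computation on binary per-object masks; how empty masks are scored is an assumption.

```python
# Minimal sketch of the region accuracy J (intersection-over-union).
import numpy as np


def region_accuracy_j(pred, gt):
    """pred, gt: boolean arrays of the same shape (one object's mask)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else inter / union


pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
gt = np.zeros((4, 4), dtype=bool); gt[1:4, 1:4] = True
print(region_accuracy_j(pred, gt))                # 4 / 9, roughly 0.44
```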
5.3 Segmentation of Long Videos The widely used benchmarks DAVIS17 (67 frames per video on average) and YouTube-VOS (132 frames per video on average) only contain trimmed short clips. To better evaluate the performance of our method in processing long videos in real-world tasks, we also conducted experiments on several long videos2. We randomly selected three videos from the Internet that are longer than 1.5K frames and whose main objects appear continuously. Each video has 20 uniformly sampled frames manually annotated for evaluation. Table 3 reports the experimental results of our method and three other state-of-the-art methods, run on an NVIDIA 1080Ti (11GB memory). We used these methods' released codes and their models pretrained on the YouTube-VOS dataset. Note that STM [25] could only store at most 50 frames per video under 11GB of GPU memory, so we set this parameter to 50. RVOS [30] and A-GAME [12] achieved lower scores because they only use information from the first or the latest frame; they failed to segment the object of interest after 1K frames. STM has a J&F score of 79.3. Because the total number of frames that STM can store is fixed, when the video becomes longer, STM has to increase the key-frame interval and has a higher chance of missing important frames and information. In contrast, the proposed AFB mechanism can dynamically manage the key information from previous frames. We achieved the best J&F score of 83.3.
5.4 Ablation Study We conducted an ablation analysis of our framework on the DAVIS17 dataset. In Table 4, the quantitative results show the effectiveness of the proposed key modules. First, we analyzed the impact of the adaptive feature bank (AFB) and evaluated 4 memory management schemes, namely, keeping features from (1) the first frame, (2) the latest frame, (3) the first and the latest frame, and (4) the first and the latest 5 frames. The remaining modules follow the full framework (i.e., with uncertain-region refinement (URR) included). From the first four rows in Table 4, while reference frames help the segmentation, simply adding multiple frames may not further improve the performance. Our adaptive feature bank more effectively organizes the key information of all the previous frames. Consequently, our framework AFB+URR has the best performance. Second, we analyzed the effectiveness of the proposed uncertain-region refinement (URR). We disabled URR by training the framework without the confidence loss $\mathcal{L}_{conf}$ in Eqn. (9) or without the local refinement. Our uncertainty evaluation and local refinement significantly improve performance in these regions, because object boundary regions are often ambiguous and their errors easily accumulate and harm the segmentation results.
6 Conclusion We presented a novel framework for semi-supervised video object segmentation. Our framework includes an adaptive feature bank (AFB) module and an uncertain-region refinement (URR) module. The adaptive feature bank effectively organizes key features for segmentation. The uncertain-region refinement is designed for refining ambiguous regions. Our approach outperforms the state-of-the-art methods on two large-scale benchmark datasets.
2 Long videos link: https://www.kaggle.com/gvclsu/long-videos
Broader Impact Our framework is designed for the semi-supervised video object segmentation task, also known as one-shot video object segmentation. Given the first-frame annotation, our model can segment the objects of interest in the subsequent frames.
Due to the great generalizability of our model, the category of the target object is unrestricted. Our adaptive feature bank and matching-based framework can be modified to benefit other video processing tasks in autonomous driving, robot interaction, and video surveillance monitoring that need to handle long videos and appearance-changing content. For example, one application is real-time flood detection and monitoring using surveillance cameras. Flooding constitutes the largest portion of insured losses among all disasters in the world [1]. Nowadays, many cameras in cities, including traffic monitoring and security surveillance cameras, are able to capture time-lapse images and videos. By leveraging our video object segmentation framework, flooding can be located in the videos and the water level can be estimated. The societal impact is immense because such a flood monitoring system can predict and give timely alerts for flooding events caused by rainstorms or hurricanes. Our framework is trained and evaluated on large-scale segmentation datasets, and we do not leverage biases in the data.
Acknowledgments and Disclosure of Funding This work is partly supported by Louisiana Board of Regents ITRS LEQSF(2018-21)-RD-B-03 and National Science Foundation of USA OIA-1946231.
1. What is the main contribution of the paper, and what are the strengths and weaknesses of the proposed VOS framework? 2. What are the concerns regarding the experiments and comparisons with other works, especially STM? 3. How does the reviewer assess the significance of the feature bank, confidence loss, and local fine-grained segmentation module? 4. Are there any questions or suggestions regarding the related work, particularly on the uncertainty loss and the local fine-grained segmentation? 5. What are the issues with the supplementary material and qualitative results? 6. How does the reviewer view the comparison to STM on the YouTube-VOS unseen classes, and what would be a fair way to conduct this comparison?
Summary and Contributions Strengths Weaknesses
Summary and Contributions In this paper, the authors propose a VOS framework that achieves SOTA on DAVIS17 and on-par performance on YouTube-VOS. The main contributions of this framework are the feature bank, the confidence loss, and the local fine-grained segmentation module. Strengths The proposed model achieves SOTA on DAVIS17 and performance on par with STM on YouTube-VOS. Weaknesses 1. More experiments should be added for further justification. In Table 3, the authors only reported AFB without URR. However, another crucial experiment is AFB with the uncertainty loss but without the local fine-grained segmentation module. 2. Further analysis of AFB should be conducted. According to Table 3, AFB without URR achieves a 70.2 J&F mean and a 68.5 J mean. However, the STM baseline achieves a 72.2 J&F mean and a 69.3 J mean, so the story of AFB does not hold. In lines 35-37, the authors claim that AFB is more efficient than STM because, when the given video becomes longer, STM often misses sampling key frames or encounters an out-of-memory crash. Such a claim should be addressed in the experiments. 3. Related work on both the uncertainty loss and the local fine-grained segmentation module should be discussed. Fine-grained segmentation modules are not new in the 2D image segmentation community; see, for example, [1] and [2]. 4. In the supplementary material, qualitative results comparing to STM are missing. STM is the current SOTA method with a released checkpoint. Qualitative results cannot be omitted. 5. In line 233, the authors discuss the comparison to STM on the YouTube-VOS unseen classes. The terms "somewhat better" and "lack of computation power" are not valid excuses when comparing with a SOTA method. This is a red flag indicating a lack of systematic analysis. With regard to the computational power difference, a fair comparison can be achieved by running the SOTA method under the authors' environment. [1] PointRend: Image Segmentation as Rendering, Kirillov, Alexander and Wu, Yuxin and He, Kaiming and Girshick, Ross. [2] ShapeMask: Learning to Segment Novel Objects by Refining Shape Priors, Kuo, Weicheng and Angelova, Anelia and Malik, Jitendra and Lin, Tsung-Yi.
NIPS
1. What is the focus and contribution of the paper on semi-supervised video object segmentation? 2. What are the strengths of the proposed approach, particularly in terms of memory update rules and boundary accuracy enhancement? 3. What are the weaknesses of the paper regarding time complexity analysis and efficiency comparison with prior methods? 4. Do you have any questions or concerns about the applicability of the proposed URR module to other VOS algorithms?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: In this paper, a new semi-supervised video object segmentation approach is proposed. The paper improves on previous memory-matching-based VOS approaches (e.g., STM [22]). The authors propose an efficient memory update rule, and a loss function and module to enhance boundary accuracy.
Strengths:
- A new memory update rule for memory-based VOS algorithms is proposed. In a previous memory-based approach [22], the memory is maintained through a very simple rule. In this paper, the authors propose a more elegant way to maintain the memory that absorbs new features and removes obsolete features. Specifically, a threshold is set on the similarity between memory slots, and an LFU index is used to remove useless memories.
- Uncertain-region refinement (URR). One of the challenges in VOS is computing sharp boundaries between objects and the background. As shown in Table 3, the proposed URR significantly enhances the accuracy. I wonder if the URR module is general enough to be applicable to other baseline VOS algorithms.
Weaknesses:
- The paper lacks an analysis of time complexity. One of the main contributions is efficient memory management by absorbing new and removing old features. However, there is no analysis of how efficient the proposed memory management is compared to the naive (or previous) approaches. It would be great if the authors provided a more extensive analysis of this aspect.
NIPS
Title Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement Abstract This paper presents a new matching-based framework for semi-supervised video object segmentation (VOS). Recently, state-of-the-art VOS performance has been achieved by matching-based algorithms, in which feature banks are created to store features for region matching and classification. However, how to effectively organize information in the continuously growing feature bank remains underexplored, and this leads to an inefficient design of the bank. We introduce an adaptive feature bank update scheme to dynamically absorb new features and discard obsolete ones. We also design a new confidence loss and a fine-grained segmentation module to enhance the segmentation accuracy in uncertain regions. On public benchmarks, our algorithm outperforms existing state-of-the-art methods. 1 Introduction Video object segmentation (VOS) is a fundamental step in many video processing tasks, like video editing and video inpainting. In the semi-supervised setting, the first-frame annotation is given, which depicts the objects of interest in the video sequence. The goal is to segment the masks of those objects in the subsequent frames. Many deep learning based methods have been proposed to solve this problem in recent years. When tackling the semi-supervised VOS task, segmentation performance is affected by two main steps: (1) distinguishing the object regions from the background, and (2) segmenting the object boundaries clearly. A key question in VOS is how to learn the cues of target objects. We divide recent works into two categories, implicit learning and explicit learning. Conventional implicit approaches include detection-based and propagation-based methods [3, 27, 10, 32, 2, 13, 23]. They often adopt the fully convolutional network (FCN) [20] pipeline to learn object features implicitly through the network weights; then, before segmenting a new video, these methods often need an online learning step to fine-tune their weights to learn new object cues from the video. Explicit approaches learn object appearance explicitly. They often formulate the segmentation as pixel-wise classification in a learnt embedding space [31, 4, 24, 11, 33, 18, 12, 25, 17]. These approaches first construct an embedding space to memorize the object appearance, then segment the subsequent frames by computing similarity. Therefore, they are also called matching-based methods. Recently, matching-based methods have achieved state-of-the-art results on VOS benchmarks. A fundamental issue in matching-based VOS segmentation is how to effectively exploit previous frames' information to segment the new frame. Since the memory size is limited, it is neither possible nor necessary to memorize information from all the previous frames. Most methods [24, 33, 31, 18, 25] only utilize the first and the latest frame or uniformly sample key frames. However, when the given video becomes longer, these methods often either miss sampling some key frames or encounter out-of-memory crashes. (Code is available at https://github.com/xmlyqing00/AFB-URR.) To tackle this problem, we propose an adaptive feature bank (AFB) to organize the target object features. This adaptive feature bank absorbs new features by weighted averaging and discards obsolete features according to the least frequently used (LFU) index.
As a result, our model can memorize the characteristics of multiple objects and segment them simultaneously in long videos with low memory consumption. Besides identifying the target object, clearly segmenting the object boundary is also critical to VOS performance: (1) people are often sensitive to boundary segmentation; (2) when estimated masks in some boundary regions are ambiguous and hard to classify, their misclassification is easily accumulated across the video. However, most recent VOS methods follow an encoder-decoder scheme to estimate the object masks, and the boundary of the object mask becomes vague when it is iteratively upscaled from a lower resolution. Therefore, we propose an uncertain-region refinement (URR) scheme to improve the segmentation quality. It includes a novel classification confidence loss to estimate the ambiguity of the segmentation, and a local fine-grained segmentation to refine the ambiguous regions. Our main contributions are threefold: (1) We propose an adaptive and efficient feature bank to maintain the most useful information for video object segmentation. (2) We introduce a confidence loss to estimate the ambiguity of the segmentation results, and design a local fine-grained segmentation module to refine these ambiguous regions. (3) We demonstrate the effectiveness of our method on segmenting long videos, which are often seen in practical applications. 2 Related Work Recent video object segmentation works can be divided into two categories: implicit learning and explicit learning. The implicit learning approaches include detection-based methods [3, 23], which segment the object mask without using temporal information, and propagation-based methods [27, 10, 32, 2, 13, 9], which use masks computed in previous frames to infer masks in the current frame. These methods often adopt a fully convolutional network (FCN) structure to learn object appearance implicitly through network weights; so they often require an online learning step to adapt to new objects in the test video. The explicit learning methods first construct an embedding space to memorize the object appearance, then classify each pixel's label using similarity in that space. Thus, explicit learning methods are also called matching-based methods. A key issue in matching-based VOS segmentation is how to build the embedding space. DMM [36] only uses the first frame's information. RGMP [24], FEELVOS [31], RANet [33] and AGSS [18] store information from the first and the latest frames. VideoMatch [11] and WaterNet [17] store information from several of the latest frames using a sliding window. STM [25] stores features every T frames (T = 5 in their experiments). However, when the video to segment is long, these static strategies can encounter out-of-memory crashes or miss sampling key frames. Our proposed adaptive feature bank (AFB) is the first non-uniform frame-sampling strategy in VOS that can more flexibly and dynamically manage objects' key features in videos. AFB performs dynamic feature merging and removal, and can handle videos of any length effectively. Recent image segmentation techniques introduce fine-grained modules to improve local accuracy. ShapeMask [15] revises the FCN decoder to refine the segmentation. PointRend [14] defines uncertainty on a binary mask and performs a one-pass detection and refinement of uncertain regions. We propose an uncertain-region refinement (URR) strategy to perform boundary refinement in video segmentation.
URR includes (1) a more general multi-object uncertainty score for estimated masks, (2) a novel confidence loss to generate cleaner masks, and (3) a non-local mechanism to more reliably refine uncertain regions. 3 Approach The overview of our framework is illustrated in Fig. 1. First, as shown in the blue region, we use a basic pipeline of matching-based segmentation (Sec. 3.1) to generate initial segmentation masks. In Sec. 3.2, we propose an adaptive feature bank module to dynamically organize the past-frame information. In Sec. 3.3, given the initial segmentation, we design a confidence loss to estimate the ambiguity of misclassification, and a fine-grained module to classify the uncertain regions. These two components are marked in the red region in Fig. 1. 3.1 Matching-based Segmentation Given an evaluation video, we encode the first frame and its groundtruth annotation to build a feature bank. Then, we use the feature bank to match and segment the target objects starting from the second frame. The decoder takes the matching results to estimate the frame's object masks. Encoders. A query encoder is designed to encode the current frame, called the query frame, into a feature map for segmentation. We use ResNet-50 [8] as the backbone and take the output of layer-3 as the feature map Q ∈ R^{(H/8)×(W/8)×1024}, where H and W are the height and width. For segmenting the t-th frame, we treat the past frames from 1 to t−1 as reference frames. A reference encoder is designed for memorizing the characteristics of the target objects. Suppose there are L objects of interest; we encode the reference frame object by object and output L feature maps P̂i, i ∈ [1, L]. The reference encoder is a modification of the original ResNet-50. For each object i, it takes both the reference frame and its corresponding mask as inputs, then extracts an object-level feature map P̂i ∈ R^{(H/8)×(W/8)×1024}. Combining the object-level feature maps together, we obtain the feature maps of the reference frame at index j, Pj = {P̂1, P̂2, · · · , P̂L}, where j ∈ [1, t−1]. Feature map embedding. Traditional matching-based methods directly compare the query feature map with the reference feature maps. Although such a design is good for classification, it lacks semantic information for estimating the object masks. Inspired by STM [25], we utilize a similar feature map embedding module. The feature maps are encoded into two embedding spaces, named key k and value v, by two convolutional modules. Specifically, we match the feature maps by their keys k, while allowing their values v to differ in order to preserve as much semantic information as possible. The feature bank stores pairs of keys and values of the past frames. In the next section, we compare the feature maps of the current frame with the feature bank to estimate the object masks; a code sketch of the encoders and the key/value embedding is given below. The details of maintaining the feature bank are elaborated in Section 3.2.
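The following is a minimal PyTorch sketch of the query encoder and the key/value embedding described above. It is an illustration rather than the released implementation: the key/value channel widths (128 and 512) are assumptions, the reference encoder (which additionally takes the object mask as an extra input channel) is omitted, and the stock torchvision ResNet-50 produces layer-3 features at 1/16 resolution rather than the 1/8 reported in the paper, so the exact backbone configuration here differs from the authors'.

```python
import torch
import torch.nn as nn
import torchvision

class KeyValueEmbedding(nn.Module):
    """Projects a backbone feature map into a (key, value) pair (Sec. 3.1)."""
    def __init__(self, in_dim=1024, key_dim=128, val_dim=512):  # key/value widths assumed
        super().__init__()
        self.key_conv = nn.Conv2d(in_dim, key_dim, kernel_size=3, padding=1)
        self.val_conv = nn.Conv2d(in_dim, val_dim, kernel_size=3, padding=1)

    def forward(self, feat):                  # feat: (B, 1024, h, w)
        return self.key_conv(feat), self.val_conv(feat)

class QueryEncoder(nn.Module):
    """ResNet-50 truncated after layer3, producing 1024-channel feature maps."""
    def __init__(self):
        super().__init__()
        r = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool,
                                      r.layer1, r.layer2, r.layer3)

    def forward(self, frame):                 # frame: (B, 3, H, W)
        return self.backbone(frame)

# Usage sketch: encode a query frame into key/value maps for matching.
encoder, embedding = QueryEncoder(), KeyValueEmbedding()
k_q, v_q = embedding(encoder(torch.randn(1, 3, 384, 384)))
```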
Matcher. A query frame is encoded into a key map kQ and a value map vQ through the query encoder and feature map embedding. We maintain L feature banks FBi, i ∈ [1, L], built from the past frames, one for each object i. The similarity between the query frame and the feature banks is calculated object by object. For each point p in the query frame, we use a weighted summation to retrieve the closest value v̂i(p) in the i-th object feature bank,
$$\hat{v}_i(p) = \sum_{(k^{FB},\, v^{FB}) \in FB_i} g\big(k^Q(p), k^{FB}\big)\, v^{FB}, \quad (1)$$
where i ∈ [1, L] and g is the softmax function
$$g\big(k^Q(p), k^{FB}\big) = \frac{\exp\big(k^Q(p) \bullet k^{FB}\big)}{\sum_{\forall j} \exp\big(k^Q(p) \bullet k^{FB}_j\big)},$$
where • represents the dot product between two vectors. We concatenate the query value map with its most similar retrieved value map as yi = [vQ, v̂i], i ∈ [1, L], where yi is the matching result between the query frame and the feature banks for object i. Decoder. The decoder takes the output of the matcher yi, i ∈ [1, L], to estimate each object mask independently, where yi depicts the semantic information for object i. We follow the refinement module used in [24, 18, 25] that gradually upscales the feature map by a set of residual convolutional blocks. At each stage, the refinement module takes both the output of the previous stage and a feature map from the query encoder at the corresponding scale through skip connections. After the decoder module, we obtain the initial object masks Mi for each object i. We minimize the cross entropy loss Lcls between the object masks and the groundtruth labels C. The Lcls is averaged across all pixels p,
$$L_{cls}(M, C) = -\frac{1}{|p|} \sum_{p} \left[ \log\left( \frac{\exp(M_c)}{\sum_i \exp(M_i)} \right) \right]_p. \quad (2)$$
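A minimal sketch of the matching step in Eqn. (1) for a single object: every query location attends over the keys stored in that object's feature bank and retrieves a weighted sum of the stored values, which is concatenated with the query value map before decoding. The tensor shapes and the bank layout are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def read_feature_bank(k_q, v_q, bank_keys, bank_vals):
    """
    k_q:       (Ck, H, W)  query key map
    v_q:       (Cv, H, W)  query value map
    bank_keys: (N, Ck)     keys stored in one object's feature bank
    bank_vals: (N, Cv)     values stored in one object's feature bank
    Returns y_i = [v_q, v_hat_i]: (2*Cv, H, W), the matcher output for this object.
    """
    Ck, H, W = k_q.shape
    q = k_q.reshape(Ck, H * W)                 # flatten spatial locations
    logits = bank_keys @ q                     # (N, HW) dot products k_q(p) . k_fb
    attn = F.softmax(logits, dim=0)            # softmax over bank entries (Eqn. 1)
    v_hat = (bank_vals.t() @ attn).reshape(-1, H, W)   # weighted sum of stored values
    return torch.cat([v_q, v_hat], dim=0)      # concatenation passed to the decoder

# Usage sketch with random tensors (bank of 300 entries).
y = read_feature_bank(torch.randn(128, 24, 24), torch.randn(512, 24, 24),
                      torch.randn(300, 128), torch.randn(300, 512))
```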
3.2 Adaptive Feature Bank We build a feature bank to store features of each object and classify new pixels/regions in the current frame t. Storing features from all the previous frames {1, . . . , t−1} is impossible, because it would make the bank prohibitively large (as the length of the video clip grows) and make the query slow. Recent approaches either store features from every few frames or from the several latest frames. For example, STM [25] uniformly stores every K-th frame (K = 5) in its feature bank; on an NVIDIA 1080Ti card with 11GB memory, this can only handle a single-object video of at most about 350 frames. Practical videos are often longer (e.g., the average YouTube video is about 12 minutes long, i.e., roughly 22K frames); for a 10-minute video, STM needs to set K = 300, which will probably miss many important frames and much information. Hence, we propose an adaptive feature bank (AFB) to more effectively manage each object's key features. AFB contains two operations: absorbing new features and removing obsolete ones. Absorbing new features. In video segmentation, although features from the most recent frames are often more important, earlier frames may contain useful information. Therefore, rather than simply ignoring those earlier frames, we keep earlier features and organize all the features by weighted averaging (which has been shown effective for finding optimal data representations, e.g., Neural Gas [7]). Fig. 2 illustrates how the adaptive feature bank absorbs new features. Existing features and new features are marked in blue and red, respectively. When a new feature is extracted, if it is close enough to some existing one, we merge them. Such a merge avoids storing redundant information and helps memory efficiency. Meanwhile, it allows a flexible update of the stored features according to the object's changing appearance. At the beginning of the video segmentation, the feature bank is initialized with the features of the first frame. We build an independent feature bank for each object. Since the object-level feature banks are maintained separately, we omit the object symbol here to simplify the formulae. Suppose we have estimated the object mask of the (t−1)-th frame; the (t−1)-th frame and the estimated mask are then encoded into features (k^P_{t−1}, v^P_{t−1}) through the reference encoder and the embedding module. For each new feature a(i) = (k^P_{t−1}(i), v^P_{t−1}(i)) and the old features stored in the feature bank b(j) = (k^{FB}(j), v^{FB}(j)) ∈ FB, we employ a normalized inner product as the similarity function,
$$h\big(a(i), b(j)\big) = \frac{k^P_{t-1}(i) \bullet k^{FB}(j)}{\|k^P_{t-1}(i)\|\,\|k^{FB}(j)\|}. \quad (3)$$
For each new feature a(i), we select the most similar feature b(j′) from the feature bank such that H(a(i)) = max_{∀b(j) ∈ FB} h(a(i), b(j)). If H(a(i)) is large enough, the two features are similar and we merge the new one into the feature bank. Specifically, when H(a(i)) > h,
$$k^{FB}(j') = (1-\lambda_p)\,k^{FB}(j') + \lambda_p\, k^P_{t-1}(i), \qquad v^{FB}(j') = (1-\lambda_p)\,v^{FB}(j') + \lambda_p\, v^P_{t-1}(i), \quad (4)$$
where h = 0.95 controls the merging rate, and λp = 0.1 controls the impact of the moving average. Otherwise, for all H(a(i)) ≤ h, because the new features are distinct from all the existing ones, we append the new features to the feature bank,
$$k^{FB} = k^{FB} \cup k^P_{t-1}(i), \qquad v^{FB} = v^{FB} \cup v^P_{t-1}(i). \quad (5)$$
In our experiments, we find that about 90% of new features satisfy the merging condition, so we only need to append the remaining 10% each time. Removing obsolete features. Though the above updating strategy relieves memory pressure significantly (e.g., 90% less memory consumption), the feature bank size still gradually grows with the number of frames. Similar to a cache replacement policy, we measure which old features are least likely to be useful and may be eliminated. We build this measurement using the least-frequently-used (LFU) index. Each time we use the feature bank to match the query frame in Eqn. 1, if the similarity function g is greater than a threshold l = 10^{-4}, we increase the count of this feature. Specifically, for every (k^{FB}(j), v^{FB}(j)) ∈ FB, the LFU index is counted by
$$cnt(j) := cnt(j) + \log\Big( \sum_{\forall (k^Q(i),\, v^Q(i))} \mathrm{sgn}\big( g(k^Q(i), k^{FB}(j)) > l \big) + 1 \Big), \qquad LFU(j) = \frac{cnt(j)}{l(j)}, \quad (6)$$
where l(j) is the time span that the feature has stayed in the feature bank and the log function is used to smooth the LFU index. In practice, when the size of the feature bank is about to exceed the predefined budget, we remove the features with the smallest LFU index until the size of the feature bank is below the budget. The LFU index counting and feature removal procedures are very efficient. This adaptive feature bank scheme can be generalized to other matching-based video processing methods to maintain the bank size, making it suitable for handling videos of arbitrary length. A code sketch of the bank update is given below.
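This is a minimal sketch of the adaptive feature bank update (Eqns. 3-6), assuming the bank keeps one (N, C) matrix of keys and one of values per object; the bank budget used here, and the exact bookkeeping, are assumptions and may differ from the released code.

```python
import torch
import torch.nn.functional as F

class AdaptiveFeatureBank:
    """Per-object bank of (key, value) features with merge/append and LFU pruning."""
    def __init__(self, merge_thresh=0.95, lam=0.1, budget=5000, use_thresh=1e-4):
        self.h, self.lam = merge_thresh, lam          # h and lambda_p from Eqn. (4)
        self.budget, self.use_thresh = budget, use_thresh
        self.keys = self.vals = None                  # (N, Ck), (N, Cv)
        self.cnt = self.age = None                    # per-entry LFU statistics

    def initialize(self, keys, vals):                 # features of the first frame
        self.keys, self.vals = keys, vals
        self.cnt = torch.zeros(len(keys))
        self.age = torch.ones(len(keys))

    def update(self, new_keys, new_vals):             # Eqns. (3)-(5)
        sim = F.normalize(new_keys, dim=1) @ F.normalize(self.keys, dim=1).t()
        best_sim, best_idx = sim.max(dim=1)
        merge = best_sim > self.h                     # similar enough: moving-average merge
        idx = best_idx[merge]                         # duplicates overwrite; fine for a sketch
        self.keys[idx] = (1 - self.lam) * self.keys[idx] + self.lam * new_keys[merge]
        self.vals[idx] = (1 - self.lam) * self.vals[idx] + self.lam * new_vals[merge]
        n_new = int((~merge).sum())                   # distinct features are appended
        self.keys = torch.cat([self.keys, new_keys[~merge]])
        self.vals = torch.cat([self.vals, new_vals[~merge]])
        self.cnt = torch.cat([self.cnt, torch.zeros(n_new)])
        self.age = torch.cat([self.age, torch.ones(n_new)])

    def count_usage(self, attn):                      # attn: (N, HW) softmax weights from Eqn. (1)
        hits = (attn > self.use_thresh).sum(dim=1).float()
        self.cnt += torch.log(hits + 1.0)             # smoothed usage count, Eqn. (6)
        self.age += 1.0                               # time span each entry has been stored

    def prune(self):                                  # drop least-frequently-used entries
        if len(self.keys) <= self.budget:
            return
        keep = (self.cnt / self.age).argsort(descending=True)[: self.budget]
        for name in ("keys", "vals", "cnt", "age"):
            setattr(self, name, getattr(self, name)[keep])
```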
3.3 Uncertain-region Refinement In the decoding stage, the object masks are computed by upscaling from a low resolution. Therefore, the object boundaries of such estimated masks are often ambiguous. The classification accuracy in the boundary regions, however, is critical to the segmentation results. Hence, we propose a new scheme, named uncertain-region refinement (URR), to tackle boundary and other uncertain regions. It includes a new loss to evaluate such uncertainty and a novel local refinement mechanism to adjust the fine-grained segmentation. Confidence loss. After decoding and a softmax normalization, we have a set of initial segmentations Mi for each object i, i ∈ [1, L]. The object mask Mi represents the likelihood of each pixel p belonging to object i, where the values of Mi lie in [0, 1] and ∑_{i=1}^{L} Mi(p) = 1. In other words, for each pixel p, there are L values Mi(p) in [0, 1], indicating the likelihood of p belonging to each of these L objects. We can simply pick the index i of the largest value Mi(p) as p's label. More adaptively, we define a pixel-wise uncertainty map U to measure the classification ambiguity at each pixel, using the ratio of the largest likelihood value M̂1 to the second largest value M̂2,
$$U = \exp\Big(1 - \frac{\hat{M}^1}{\hat{M}^2}\Big), \quad (7)$$
where M̂1/M̂2 ∈ [1, +∞). The uncertainty map U lies in (0, 1], where a smaller value means more confidence. The confidence loss Lconf of a set of object masks is defined as
$$L_{conf} = \|U\|_2. \quad (8)$$
During the training stage, our framework is optimized using the following loss function,
$$L = L_{cls} + \lambda_u L_{conf}, \quad (9)$$
where λu = 0.5 is a weight scalar. Lcls (Eqn. 2) is the cross entropy loss for pixel-wise classification. Lconf is designed to minimize the ambiguities of the estimated masks, i.e., to push each object mask towards a 0/1 map. Local refinement mechanism. We propose a novel local refinement mechanism to refine the ambiguous regions. Empirically, given two spatially neighboring points, if they belong to the same object, their features are usually close. The main intuition is that we use the pixels with high classification confidence to refine the uncertain points in their neighborhood. Specifically, for each uncertain pixel p, we compose its local reference features y(p) = {yi(p) | i ∈ [1, L]} from p's neighborhood, where L is the number of target objects. If the local feature r(p) of p is close to yi(p), we say pixel p is likely to be classified as object i. The local reference feature yi(p) is computed by a weighted average over a small neighborhood N(p),
$$y_i(p) = \frac{1}{\sum_{q \in N(p)} M_i(q)} \sum_{q \in N(p)} M_i(q)\, r(q), \quad (10)$$
where the weight Mi is the object mask for object i. Then, a residual network module fl is designed to learn to predict the local similarity. We assign a local refinement mask e to each pixel p by comparing the similarity between r(p) and yi(p),
$$e_i(p) = c_i(p)\, f_l\big(r(p), y_i(p)\big), \quad (11)$$
where ci(p) = max_{q ∈ N(p)} Mi(q). The ci are confidence scores that adjust the impact of the local refinement mask. Finally, we obtain the final segmentation Si for each object i by adding the local refinement mask ei to the initial object mask Mi,
$$S_i(p) = M_i(p) + U(p)\, e_i(p). \quad (12)$$
Fig. 3 shows the effectiveness of the proposed uncertain-region refinement (URR). The initial segmentation (marked in blue) is ambiguous, and some areas lack confidence (marked in red). As shown in Fig. 3(d)(e), when our model is trained with Lconf, the uncertain-region refinement improves the segmentation quality.
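A minimal sketch of the uncertainty map, confidence loss, and total training loss described above (Eqns. 7-9), treating ||U||_2 as the plain L2 norm of the uncertainty map; the handling of the background class and the exact reductions are simplifications, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def uncertainty_map(masks, eps=1e-7):
    """masks: (L, H, W) soft object probabilities that sum to 1 over the L objects."""
    top2 = masks.topk(k=2, dim=0).values        # largest and second-largest M_i(p)
    ratio = top2[0] / (top2[1] + eps)           # M^1 / M^2 >= 1
    return torch.exp(1.0 - ratio)               # U in (0, 1], Eqn. (7)

def training_loss(logits, target, lambda_u=0.5):
    """logits: (L, H, W) decoder outputs; target: (H, W) integer object labels."""
    l_cls = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))   # Eqn. (2)
    u = uncertainty_map(logits.softmax(dim=0))
    l_conf = u.norm(p=2)                        # ||U||_2, Eqn. (8)
    return l_cls + lambda_u * l_conf            # Eqn. (9)

# Usage sketch with random tensors (3 objects, 64x64 pixels).
logits = torch.randn(3, 64, 64, requires_grad=True)
target = torch.randint(0, 3, (64, 64))
training_loss(logits, target).backward()
```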
4 Training Details Our model is first pretrained on simulated videos generated from static image datasets. Then, for different benchmarks, our model is further trained on their training videos. Pretraining on image datasets. Since we do not introduce any temporal smoothness assumptions, the learnable modules in our model do not require long videos for training. Pretraining on image datasets is widely used in VOS methods [27, 25]; we simulate training videos from static image datasets [5, 29, 16, 19, 6] (136,032 images in total). A synthetic video clip has one first frame and 5 subsequent frames, which are generated from the same image by data augmentation (random affine, color, flip, resize, and crop). We use the first frame to initialize the feature bank, and the remaining 5 frames form a mini-batch to train our framework by minimizing the loss function L in Eqn. 9. Main training on the benchmark datasets. Similar to the pretraining routine, we randomly select 6 frames per training video as a training sample and apply data augmentation to those frames. The input frames are randomly resized and cropped to 400×400 px for all training. For each training sample, we randomly select at most 3 objects for training. We minimize our loss using the AdamW [21] optimizer (β = (0.9, 0.999), eps = 10^{-8}, weight decay 0.01). The initial learning rate is 10^{-5} for pretraining and 4×10^{-6} for main training. Note that we directly use the network output without post-processing or video-by-video online training. 5 Experiments 5.1 Datasets and Evaluation Metrics We evaluated our model (AFB-URR) on DAVIS17 [28] and YouTube-VOS18 [35], two large-scale VOS benchmarks with multiple objects. DAVIS17 contains 60 training videos and 30 validation videos. YouTube-VOS18 (YV) contains 3,471 training videos and 474 videos for validation. We implemented our framework in PyTorch [26] and conducted experiments on a single NVIDIA 1080Ti GPU. Qualitative results of our framework on the DAVIS17 dataset are shown in Fig. 4. More qualitative comparisons are reported in the supplementary file. We adopted the evaluation metrics from the DAVIS benchmark. The region accuracy J calculates the intersection-over-union (IoU) between the estimated masks and the groundtruth masks. The boundary accuracy F measures the accuracy of boundaries via bipartite matching between the boundary pixels. 5.2 Comparison with the State-of-the-art Results on DAVIS benchmarks. We compared our approach with recent implicit and explicit learning methods. Table 1 reports three accuracy scores [28] in percentages: the mean (M) is the average value, recall (R) measures the fraction of sequences scoring higher than a threshold τ = 0.5, and decay (D) measures how the performance changes over time. Our method significantly outperforms existing methods: our J&F mean score is 74.6 without any online fine-tuning. Our model also has better runtime performance than the baseline STM [25]. On DAVIS17, with an NVIDIA 1080Ti, STM achieves 3.4 fps with J&F = 71.6, and ours achieves 4.0 fps with J&F = 74.6. We can also trade accuracy for better efficiency: if we limit the memory usage to under 20%, our model achieves 5.7 fps with J&F = 71.7. Results on YouTube-VOS benchmarks. The validation set contains 474 first-frame-annotated videos. They include objects from 65 training categories and 26 categories unseen in training. Table 2 shows a comparison with previous state-of-the-art methods on the open evaluation server [35]. Our framework achieves the best overall score of 79.6 because the adaptive feature bank improves robustness and reliability across different scenarios. For those videos whose objects have already been seen in the training videos, STM's results are somewhat better than ours. The reason could be that their model and ours are evaluated under different memory budgets: STM [25] was evaluated on an NVIDIA V100 GPU with 16GB memory, while we evaluated ours on a weaker machine (one NVIDIA 1080Ti GPU with 11GB memory). However, our framework is more robust in segmenting unseen objects, with 74.1 in J and 82.6 in F, compared with STM's 72.8 in J and 80.9 in F. Overall, our proposed model has great generalizability and achieves state-of-the-art performance.
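For reference, a minimal sketch of the region accuracy J described in Sec. 5.1, i.e., the intersection-over-union between an estimated mask and the groundtruth mask; the boundary accuracy F, which requires bipartite matching of boundary pixels, and the recall/decay statistics are omitted here.

```python
import numpy as np

def region_similarity_J(pred, gt):
    """pred, gt: boolean arrays of the same shape (H, W) for one object."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else inter / union

# Example: a 2x2 prediction against a 2x3 groundtruth region gives J = 4/6.
pred = np.zeros((4, 4), bool); pred[:2, :2] = True
gt = np.zeros((4, 4), bool); gt[:2, :3] = True
print(region_similarity_J(pred, gt))   # 0.666...
```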
5.3 Segmentation of Long Videos The widely used benchmarks DAVIS17 (67 frames per video on average) and YouTube-VOS (132 frames per video on average) only contain trimmed short clips. To better evaluate the performance of our method on long videos in real-world tasks, we also conducted experiments on several long videos (available at https://www.kaggle.com/gvclsu/long-videos). We randomly selected three videos from the Internet that are longer than 1.5K frames and whose main objects appear continuously. Each video has 20 uniformly sampled frames manually annotated for evaluation. Table 3 reports the experimental results of our method and three other state-of-the-art methods, run on an NVIDIA 1080Ti (11GB memory). We used these methods' released code and their models pretrained on the YouTube-VOS dataset. Note that STM [25] could only store at most 50 frames per video under 11GB of GPU memory, so we set this parameter to 50. RVOS [30] and A-GAME [12] achieved lower scores because they only use information from the first or the latest frame; they failed to segment the object of interest after 1K frames. STM has a J&F score of 79.3. Because the total number of frames that STM can store is fixed, when the video length grows, STM has to increase the key-frame interval and has a higher chance of missing important frames and information. In contrast, the proposed AFB mechanism can dynamically manage the key information from previous frames. We achieved the best J&F score of 83.3. 5.4 Ablation Study We conducted an ablation analysis of our framework on the DAVIS17 dataset. In Table 4, the quantitative results show the effectiveness of the proposed key modules. First, we analyzed the impact of the adaptive feature bank (AFB) and evaluated four memory management schemes, namely, keeping features from (1) the first frame, (2) the latest frame, (3) the first and the latest frame, and (4) the first and the latest 5 frames. The remaining modules follow the full framework (i.e., with uncertain-region refinement (URR) included). From the first four rows in Table 4, while reference frames help the segmentation, simply adding multiple frames may not further improve the performance. Our adaptive feature bank organizes the key information of all the previous frames more effectively. Consequently, our framework AFB+URR has the best performance. Second, we analyzed the effectiveness of the proposed uncertain-region refinement (URR). We disabled URR by training the framework without the confidence loss Lconf in Eqn. 9 or without the local refinement module. Our uncertainty evaluation and local refinement significantly improve performance in these regions, because object boundary regions are often ambiguous, and their uncertainty errors are easily accumulated and can harm the segmentation results. 6 Conclusion We presented a novel framework for semi-supervised video object segmentation. Our framework includes an adaptive feature bank (AFB) module and an uncertain-region refinement (URR) module. The adaptive feature bank effectively organizes key features for segmentation. The uncertain-region refinement is designed for refining ambiguous regions. Our approach outperforms the state-of-the-art methods on two large-scale benchmark datasets. Broader Impact Our framework is designed for the semi-supervised video object segmentation task, also known as one-shot video object segmentation. Given the first-frame annotation, our model can segment the object of interest in the subsequent frames.
Due to the great generalizability of our model, the category of the target object is unrestricted. Our adaptive feature bank and matching-based framework can be adapted to benefit other video processing tasks in autonomous driving, robot interaction, and video surveillance monitoring that need to handle long videos and appearance-changing content. For example, one application is real-time flood detection and monitoring using surveillance cameras. Flooding constitutes the largest portion of insured losses among all disasters in the world [1]. Nowadays, many cameras in cities, including traffic monitoring and security surveillance cameras, are able to capture time-lapse images and videos. By leveraging our video object segmentation framework, floods can be located in the videos and the water level can be estimated. The societal impact is immense because such a flood monitoring system can predict flooding events caused by rainstorms or hurricanes and raise alerts in time. Our framework is trained and evaluated on large-scale segmentation datasets, and we do not leverage biases in the data. Acknowledgments and Disclosure of Funding This work is partly supported by Louisiana Board of Regents ITRS LEQSF(2018-21)-RD-B-03 and National Science Foundation of USA OIA-1946231.
1. What is the main contribution of the paper in video object segmentation? 2. What are the strengths of the proposed approach, particularly in terms of feature bank and mask refinement? 3. What are the weaknesses of the paper regarding its literature review and comparisons with other works? 4. How does the reviewer assess the significance of the contributions and the impact of the proposed method compared to prior works? 5. Are there any concerns regarding the runtime complexity of the proposed framework and its components?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes a framework for video object segmentation, which consists of 1) a feature bank that dynamically discards or updates features from past frames and their predictions, and 2) a mask refinement module based on the output uncertainty. Authors evaluate their method on the DAVIS and YouTube VOS benchmarks, achieving state of the art performance compared to previous works. Strengths - The paper is well written and easy to follow. - The method is well motivated and clearly described. - The paper addresses a relevant and challenging problem in the field and achieves better results with respect to previous works. - Authors report results on 2 different well established VOS datasets and achieve superior performance w.r.t. previous works. Weaknesses - The paper is missing a literature review / related work section. While previous works are cited, and authors compare their results w.r.t. them, the paper lacks the context in which the proposed method was built upon. Previous works in the literature (many of which are cited in this paper) have already addressed the problems that this paper aims at solving, namely 1) leveraging information from past frames in the video to make predictions in the current frame, and 2) proposed refinement modules for VOS. Although many of these works are indeed cited, authors do not explicitly mention the relationship between those works and their method, in terms of how they addressed the issues that their approach is trying to solve, and how do their contributions compare to the components of existing approaches designed specifically to address these problems. Although this paper's results are better than those reported in previous works, the scientific contributions are ultimately what matters to the community to build on top of in order to make consistent and grounded progress. - Given the above, I am not fully convinced that this paper's contributions (dynamic feature bank to leverage past information & uncertainty based mask refinement) are having significant impact to the performance that is reported in the paper. There's no empirical evidence suggesting that it's better to use this paper's methodology to refine masks and deal with past frame information instead of those in e.g. [15, 22]. While it is clear that this method achieves better performance on these benchmarks, these comparisons are always performed at the "full model" level, in which we know there can be many other factors involved that have an impact in performance (eg augmentation strategies, choice of hyperparameters, early stopping criteria, etc). For example, one may wonder what would happen if we plug other refinement modules (eg the one from [15]) into the pipeline instead of the one the authors propose? Is *this* refinement method better than others? There are no experiments that suggest so. - No runtime numbers are reported. I would have expected authors to include an analysis on that respect. The proposed architecture uses two separate ResNet based encoders: one for the query and one for previous frames and their predictions. Further, authors mention that the context encoder is used separately on each individual object that needs to be predicted, so I wonder about the complexity of the full pipeline, and how does this method compare to existing ones. Recent works eg [15, 22] report runtime numbers, so I would have expected authors to do the same here.
NIPS
Title MarioNette: Self-Supervised Sprite Learning Abstract Artists and video game designers often construct 2D animations using libraries of sprites—textured patches of objects and characters. We propose a deep learning approach that decomposes sprite-based video animations into a disentangled representation of recurring graphic elements in a self-supervised manner. By jointly learning a dictionary of possibly transparent patches and training a network that places them onto a canvas, we deconstruct sprite-based content into a sparse, consistent, and explicit representation that can be easily used in downstream tasks, like editing or analysis. Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision. Since the early days of machine learning, the accepted unit of image synthesis has been the pixel. But while the pixel grid is a natural representation for display hardware and convolutional generators, it does not easily permit high-level reasoning and editing. In this paper, we take inspiration from animation to consider an atomic unit that is richer and easier to edit than the pixel: the sprite. In sprite-based animation, a popular early technique for drawing cartoons and rendering video games, an artist draws a collection of patches—a sprite sheet—consisting of texture swatches, characters in various poses, static objects, and so on. Then, each frame is assembled by compositing a subset of the patches onto a canvas. By reusing the sprite sheet, authoring new content requires minimal effort and can even be automated procedurally. Our goal is to invert this process, simultaneously tackling unsupervised instance segmentation and dictionary learning. Given an image dataset, e.g., frames from a sprite-based video game, we train a model that jointly learns a 2D sprite dictionary, capturing recurring visual elements in an image collection, and explains each input frame as a combination of these potentially transparent sprites. Whereas standard CNN-based generators hide their feature representation in their intermediate layers, our model wears its representation “on its sleeve”: by explicitly compositing sprites from its learnt dictionary onto a background canvas, rather than synthesizing pixels from hidden neural features, it provides a readily-interpretable visual representation. Our contributions include the following: • We describe a grid-based anchor system along with a learned dictionary of textured patches (with transparency) to extract a sprite-based image representation. • We propose a method to learn the patch dictionary and the grid-based representation jointly, in a differentiable, end-to-end fashion. • We compare to past work on learned disentangled graphics representations for video games. • We show how our method offers promising avenues for further work towards identifying visual patterns in more complex data such as natural images and video.
1 Related Work Decomposing visual content into semantically meaningful parts for analysis, synthesis, and editing is a long-standing problem. We review the most closely related work. Layered decompositions. Wang and Adelson [54] decompose videos into layers undergoing temporally-varying warps for compression. Similarly, Flexible Sprites [24] and Kannan et al. [27] represent videos with full-canvas semi-transparent layers to facilitate editing. Like Flexible Sprites, we adopt translation-only motion but restrict transformations to small neighborhoods around anchors, making inference tractable with many (≥100) sprites. Other methods decompose videos with moving subjects, such as humans, into independent layers, enabling matting [41] and retiming of individual actions [40]; unlike sprite-based techniques, motion and appearance are not disentangled. Sbai et al.
[48] use a layered representation as inductive bias in a GAN with solid colored layers. Automatic decompositions into “soft layers” according to texture, color, or semantic features have been used in image editing [1, 2]. Gandelsman et al. [12] use deep image priors [52] to separate images into layer pairs. Huang and Murphy [19] introduce a recurrent architecture to output multiple layers sequentially. Reddy et al. [46] discover patterns in images via differentiable compositing. Interpretable generators for neural synthesis. Neural networks improve the fidelity and realism of generative models [14, 28] but limit control and interpretability [5, 6, 8, 16]. Several works explore interpretability using differentiable domain-specific functions. Hu et al. [18], Li et al. [31] constrain the generator to sets of parametric image operators. Mildenhall et al. [44] use a ray-marching prior and rendering model to encode a radiance field for novel view synthesis. Neural textures [51] replace RGB textures on 3D meshes with high-dimensional features. Rendering under new views enables view-consistent editing. Lin et al. [33] use spatial transformers in their generator to obtain geometric transformations. We synthesize frames by compositing 2D sprites undergoing rigid motions, enabling direct interpretation and control over appearance and motion. Object-centric representations. Our learned sprites reveal, segment, and track object instances. Similarly, Slot Attention [37] extracts object-centric compositional video representations. However, our sprites are interpretable—motion and appearance are direct outputs—and our model scales to more objects per scene. SCALOR [23] handles up to 100 instances but does not produce a common dictionary or handle diverse sprites. While SPACE [34] decomposes images into object layers, it tends to embed sprites in the background, providing no control. Our method achieves a higher IoU of recurring sprite patterns (see §3.1). Stampnet [53] discovers and localizes objects but focuses on simpler, synthetic datasets. MONet [7] decomposes images into multiple object regions using attention. Earlier attention mechanisms leverage pattern recurrence [9, 30] and motion cues [11] to identify individual objects. Recent works use parametric primitives as image building blocks [32, 49]. Applying our sprite decompositions to video games, we can learn about dynamics and gameplay, benefiting downstream agents [17, 26] and aiding content-authoring for research and game development, as in Procedural Content Generation [50]. GameGAN [29] synthesizes new frames from controller input. They split rendering into static and dynamic components but render full frames, without factorization into parts. Their generator is difficult to interpret: appearance and dynamics are entangled within its parameters. Compression. Appearance consistency and motion compensation are central to video compression [4, 38, 42]. We model videos as compositions of moving sprites, factoring redundancy in the input. This draws inspiration from works like DjVu [15] and Digipaper [20], which compress scanned documents by separating them into a background layer and foreground text. Image epitomes [25] summarize and compress image shape and appearance into a miniature texture. Our sprite dictionary fills a similar role, providing superior editing control. 2 Method We start with an input sequence of n RGB frames {I1, . . . , In} with resolution w×h. 
Our goal is to decompose each frame Ii ∈ R^{3×w×h} into a set of possibly overlapping sprites, organized into ℓ depth layers, selected from a finite-size dictionary. The dictionary is a collection of trainable latent codes {z1, . . . , zm} that are decoded into RGBA sprites using a neural network generator (§2.1). Our training pipeline is illustrated in Figure 1. We first process each input frame with a convolutional encoder to produce ℓ grids of feature vectors, one grid per depth layer (§2.2). The grids are lower resolution than the input frame, with a downsampling factor proportional to the sprite size. We call the center of each grid cell an anchor. We compare each anchor’s feature vector against the dictionary’s latent codes, using a softmax scoring function, to select the best matching sprite per anchor (§2.3). Using our sprite generator, we decode each anchor’s matching sprite. This gives us a grid of sprites for each of the ℓ layers. To factorize image patterns that may not align with our anchor grid, we allow sprites to move in a small neighborhood around anchors (§2.4). We composite the layers from back to front onto the output canvas to obtain our final reconstruction (§2.5). Optionally, the background is modeled as a special learnable sprite that covers the entire canvas. We train the dictionary latent codes, frame encoder, and sprite generator jointly on all frames, comparing our reconstruction to the input (§2.6). This self-supervised procedure yields a representation that is sparse, compact, interpretable, and well-suited for downstream editing and learning applications. 2.1 Dictionary and sprite generator The central component of our representation is a global dictionary of m textured patches or sprites D = {P1, . . . , Pm}, where each Pi ∈ R^{4×k×k} is an RGBA patch. Our sprites have an alpha channel, which allows them to be partially transparent, with possibly irregular (i.e., non-square) boundaries. This is useful for representing animations with multiple depth layers and also allows learning sprites smaller than their maximal resolution, if necessary, by setting alpha to zero around the boundary. The dictionary is shared among all frames; we reconstruct frames using only sprites from the dictionary. Instead of optimizing for RGBA pixel values directly, we represent the dictionary as a set of trainable latent codes {z1, . . . , zm}, with zi ∈ R^d. We decode these codes into RGBA sprites using a fully-connected sprite generator Pi = G(zi). This latent representation allows us to define a similarity metric over the latent space, which we use to pair anchors with dictionary sprites to best reconstruct the input frame (§2.3). At test time, we can forego the sprite generator and edit the RGBA sprites directly. Unless otherwise specified, we set the latent dimension to d = 128 and the patch size to k = 32. We randomly initialize the latent codes from the standard normal distribution. Our sprite generator first applies zero-mean unit-variance normalization—Layer Normalization [3], without an affine transformation—to each latent code zi individually, followed by one fully-connected hidden layer with 8d features, Group Normalization [55], and ReLU activation. We obtain the final sprite using a fully-connected layer with sigmoid activation to keep RGBA values in [0, 1]. Latent code normalization is crucial to stabilize training and keep the latent space in a compact subspace as the optimization progresses. See §3.3 for an ablation study of this and other components.
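A minimal PyTorch sketch of the dictionary and sprite generator described above: trainable latent codes are normalized with LayerNorm (no affine parameters), passed through one hidden layer of width 8d with Group Normalization and ReLU, and decoded to k×k RGBA sprites with a sigmoid. The number of GroupNorm groups is an assumption, since the paper does not specify it.

```python
import torch
import torch.nn as nn

class SpriteGenerator(nn.Module):
    def __init__(self, m=150, d=128, k=32, groups=8):   # groups is an assumed value
        super().__init__()
        self.k = k
        self.codes = nn.Parameter(torch.randn(m, d))           # dictionary latents z_i
        self.norm = nn.LayerNorm(d, elementwise_affine=False)  # zero-mean, unit-variance
        self.hidden = nn.Sequential(
            nn.Linear(d, 8 * d), nn.GroupNorm(groups, 8 * d), nn.ReLU())
        self.out = nn.Sequential(nn.Linear(8 * d, 4 * k * k), nn.Sigmoid())

    def forward(self):
        z = self.norm(self.codes)                  # (m, d) normalized latent codes
        sprites = self.out(self.hidden(z))         # (m, 4*k*k), RGBA values in [0, 1]
        return sprites.view(-1, 4, self.k, self.k)  # m RGBA sprites P_i

# Usage sketch: decode the whole dictionary into sprites.
gen = SpriteGenerator()
print(gen().shape)   # torch.Size([150, 4, 32, 32])
```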
2.2 Layered frame decomposition using sprite anchors We seek a decomposition that best explains each input frame using dictionary sprites. We exploit translation invariance and locality in our representation; our sprites are “attached” to a regular grid of reference points, or anchors, inspired by [13, 47]. Each anchor has at most one sprite; we call it inactive if it has none. We give the sprites freedom of motion around their anchors to factorize structures that may not be aligned with the anchor grid. This local—or, Eulerian—viewpoint makes inference tractable and avoids the pitfalls of tracking the global motion of all the sprites across the canvas (a Lagrangian viewpoint). To enable multiple layers with sprite occlusions, we output ℓ > 1 anchor grids for each frame (ℓ = 2 in our experiments). Figure 2 illustrates our layered anchor grids and local sprite transformations. We use a convolutional encoder E to map the w×h RGB frame Ii to ℓ grids of anchors, with resolution 2w/k × 2h/k. Each anchor j in layer l is represented by a feature vector a^l_j ∈ R^d characterizing local image appearance around the anchor and an active/inactive switch probability p^l_j ∈ [0, 1]. Our frame encoder contains log2(k) − 1 downsampling blocks, which use partial convolutions [36] with kernel size 3 and stride 2 (for downsampling), Group Normalization, and Leaky ReLU. It produces a tensor of intermediate features for each layer, which are normalized with LayerNorm. From these, we obtain the anchor switches with an MLP with one hidden layer of size d followed by Group Normalization and Leaky ReLU. We get anchor features using a linear projection followed by LayerNorm. The encoder architecture is illustrated in Figure 3. 2.3 Per-anchor sprite selection Once we have the layered anchor grids for the input frame, we need to assign sprites to the active anchors. We do this by scoring every dictionary element i against each anchor j at layer l, using a softmax over dot products between dictionary codes and anchor features:
$$s^l_{ij} = \frac{\exp\big(a^l_j \cdot z_i / \sqrt{d}\big)}{\sum_{k=1}^{m} \exp\big(a^l_j \cdot z_k / \sqrt{d}\big)}. \quad (1)$$
Recall that both the anchor features and dictionary latent codes are individually normalized using a Layer Normalization operator. Restricting both latent spaces to a compact subspace helps stabilize the optimization and avoid getting stuck in local optima. During training, each anchor’s sprite is a weighted combination of the dictionary elements, masked by the anchor’s active probability:
$$S^l_j = p^l_j \sum_{i=1}^{m} s^l_{ij} P_i. \quad (2)$$
This soft patch selection allows gradients to propagate to both dictionary and anchor features during training. Except for natural image and video datasets, at test time, we use hard selections, i.e., for each anchor, we pick the sprite (S^l_j := P_i) with the highest score s^l_{ij} and binarize the switches p^l_j ∈ {0, 1}.
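A minimal sketch of per-anchor sprite selection (Eqns. 1-2) for one layer: anchors score every dictionary code with a scaled dot-product softmax, and at training time each anchor's sprite is the score-weighted mixture of decoded patches gated by its switch probability; at test time the argmax sprite is taken and the switch is binarized. The tensor shapes are illustrative, not the released implementation.

```python
import torch
import torch.nn.functional as F

def select_sprites(anchor_feats, switch_probs, codes, sprites):
    """
    anchor_feats: (A, d)        per-anchor feature vectors a_j for one layer
    switch_probs: (A,)          active/inactive probabilities p_j
    codes:        (m, d)        dictionary latent codes z_i
    sprites:      (m, 4, k, k)  decoded RGBA patches P_i
    Returns S:    (A, 4, k, k)  soft per-anchor sprites (Eqn. 2).
    """
    d = codes.shape[1]
    scores = F.softmax(anchor_feats @ codes.t() / d ** 0.5, dim=1)   # Eqn. (1)
    soft = torch.einsum('am,mchw->achw', scores, sprites)            # weighted mixture
    return switch_probs.view(-1, 1, 1, 1) * soft

def select_sprites_hard(anchor_feats, switch_probs, codes, sprites):
    """Hard selection used at test time: argmax sprite, binarized switch."""
    d = codes.shape[1]
    idx = (anchor_feats @ codes.t() / d ** 0.5).argmax(dim=1)
    return (switch_probs > 0.5).float().view(-1, 1, 1, 1) * sprites[idx]
```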
2.4 Local sprite transformations In real animations, sprites rarely align perfectly with our regular anchor grid, so, to avoid learning several copies of the same sprites (e.g., all sub-grid translations of a given image pattern), we allow sprites to move around their anchors. In our implementation, we only allow 2D translations of up to 1/2 the sprite size on each side of the anchor, i.e., T^l_j = (x^l_j, y^l_j) ∈ [−k/2, k/2]^2. We use a convolutional network to predict the translation offsets from the anchor’s sprite and a crop of the input frame centered around the anchor, with identical spatial dimensions. This network follows the architecture of E, followed by an MLP with a single hidden layer of size d, Group Normalization, and Leaky ReLU. Specifically, we concatenate the image crop and the anchor’s sprite S^l_j along the channel dimension and pass this tensor through the network to obtain the x^l_j and y^l_j offsets. An output layer projects to two dimensions (horizontal and vertical shift) and applies tanh to restrict the range. We apply the shifts to the patches using a spatial transformer [21]. 2.5 Compositing and reconstruction Each anchor in our layered representation is now equipped with a sprite S^l_j and a transformation T^l_j. For each layer l, we transform the sprites in their anchor’s local coordinate system and render them onto the layer’s canvas, initialized as fully transparent. Because of the local transformation, neighboring sprites within a layer may overlap. When this happens, we randomly choose an ordering, as in Figure 2. This random permutation encourages our model to either avoid overlapping sprites within the same layer or make the sprite colors agree in the overlap region, since these are the only two options that yield the same rendering regardless of the random z-ordering. Note that sprites on distinct layers are not shuffled. The shuffling prevents the network from abusing the compositing to cover patches with others from the same layer. We optionally learn a background texture to capture elements that cannot be explained using sprites. This can be thought of as a special patch of resolution greater than that of a single frame. For each frame, we learn a (discrete) position offset in the background from which to crop. We represent these offsets as discrete pixel shifts using a softmax classification (independently for each spatial dimension). We found this encoding better behaved than using a continuous offset with a spatial transformer—the discrete encoding allows the gradient signal to propagate to all shifts rather than the weak local gradient from bilinear interpolation (see §3.3 for an ablation). We combine the background and sprite layers via standard alpha compositing [45]. Figure 8 shows a learned background. In some experiments, we use a simpler background model: a fixed solid color, determined by analyzing the data before training. In this variant, we sample 100 random frames, cluster the pixel values into 5 clusters using k-means, and choose the largest cluster center as the background color. 2.6 Training procedure Our pipeline is fully differentiable. We train the dictionary latent codes, sprite generator, frame encoder, transformation predictor, and background layer jointly, minimizing the L2 distance between our reconstructions and the ground-truth frames. We also employ two regularizers: a Beta distribution prior on switches and dictionary element scores favors values close to 0 or 1, and an L1 loss on switches favors a sparser solution without superfluous patches. Our final loss function for a single input is:
$$L(\cdot) = \frac{1}{wh}\,\|O - I\|_2^2 + \frac{k^2}{4\ell wh} \sum_{l=1}^{\ell} \sum_{j=1}^{(2w/k)(2h/k)} \Big[ \lambda_{Beta}\Big( \frac{1}{m}\sum_{i=1}^{m} \mathrm{Beta}(2,2)(s^l_{ij}) + \mathrm{Beta}(2,2)(p^l_j) \Big) + \lambda_{sparse}\,|p^l_j| \Big], \quad (3)$$
where O is the result of compositing the background and sprite layers; we optimize {s^l_{ij}}, {p^l_j}, and O. We set λsparse = 0.005, train for 200,000 steps (∼20 hours) with λBeta = 0.002, and fine-tune for 10,000 steps with λBeta = 0.1. For natural images and video, we set λBeta = 0.
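A minimal sketch of the objective in Eqn. (3). Here Beta(2,2)(x) is read as the Beta(2,2) density 6x(1−x), which peaks at 0.5, so adding it as a penalty pushes scores and switches toward 0 or 1; this reading, the tensor shapes, and the exact reductions are assumptions for illustration.

```python
import torch

def beta22(x):
    return 6.0 * x * (1.0 - x)           # Beta(2,2) pdf; largest at x = 0.5

def marionette_loss(recon, target, scores, switches, k,
                    lam_beta=0.002, lam_sparse=0.005):
    """
    recon, target: (3, h, w)   composited output O and input frame I
    scores:        (l, A, m)   softmax sprite scores s_ij per layer and anchor
    switches:      (l, A)      switch probabilities p_j per layer and anchor
    """
    _, h, w = target.shape
    num_layers = scores.shape[0]
    rec = ((recon - target) ** 2).sum() / (h * w)                  # reconstruction term
    reg = lam_beta * (beta22(scores).mean(dim=2) + beta22(switches)) \
          + lam_sparse * switches.abs()                            # per-anchor regularizers
    reg = reg.sum() * (k ** 2) / (4 * num_layers * h * w)          # prefactor of Eqn. (3)
    return rec + reg

# Usage sketch with random tensors (2 layers, 64 anchors, 150 dictionary entries).
loss = marionette_loss(torch.rand(3, 128, 128), torch.rand(3, 128, 128),
                       torch.rand(2, 64, 150), torch.rand(2, 64), k=32)
```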
We use the AdamW [39] optimizer on a GeForce GTX 1080 GPU, with batch size 4 and learning rate 0.0001, except for the background module (learning rate 0.001 when used). 3 Experimental Results We evaluate our self-supervised decomposition on several real (non-synthetic) datasets, compare to related work, and conduct an ablation study. In figures, we use a checkerboard to show transparency. Dictionary order is determined by sorting along a 1-dimensional t-SNE embedding of the sprite latent codes. We find this ordering tends to group semantically similar sprites, making the dictionary easier to interpret and manipulate. While our models are trained with a dictionary of 150 patches, not all patches end up being used; we only show the used patches. 3.1 Comparisons While to our knowledge no prior works target differentiable unsupervised sprite-based reconstruction, we compare to two state-of-the-art methods that obtain similarly disentangled representations. In Figure 4, we compare to SPACE [34] and Slot Attention [37]. The former decomposes a scene into a foreground layer consisting of several objects as well as a background, segmented into three layers. The latter deconstructs a scene into discrete “slots.” We train both methods to convergence using their default parameters. While both reconstruct the input frames faithfully, SPACE only recognizes a few sprites in its foreground layer, and Slot Attention does not provide a semantically meaningful decomposition. In contrast, not only does our method model the entire scene using learned sprites, but also it factors out the sprites to form a consistent, sparse dictionary shared for the entire sequence. Additionally, we evaluate on a synthetically-generated sprite-based game from [10], which is made of sprites on a solid background. We compare quantitatively to SPACE in Figure 6 and show qualitative results in Figure 5. Since we have a ground truth segmentation of each scene into sprites, we compute a matching between learned dictionary patches and sprites by associating each dictionary patch with the sprite that it most frequently overlaps across the dataset. We visualize dictionary patches next to their respective sprites. We also use this labeling to compute segmentation metrics. In particular, we report mean IoU in the multiclass case (where each sprite is a distinct class) as well is in the binary case (foreground/background). Because SPACE does not learn a common dictionary, we are unable to obtain a labeling for its foreground elements and, consequently, cannot evaluate its multiclass metric. For the binary metric, we obtain a significantly higher value, since SPACE defers many sprites to the background, whereas our method learns the sprites as dictionary elements. To show that our model learns more than simple motion features, we also compare to two conventional (non-learning) baselines. In Figure 7(a), we compare a segmentation of a frame obtained by clustering optical flow directions using k-means (inspired by Liu et al. [35]) to one generated using our learned decomposition. The flow-based approach is unable to capture many of the details in the frame. In (b), we show the normalized dictionary obtained using an online dictionary learning method [43]. Because this method does not have the inductive biases of our model, the resulting dictionary is not easily interpretable or editable. 3.2 Sprite-based game deconstruction We train on Fighting Hero (one level, 5,330 frames), Nintendo Super Mario Bros. 
(one level, 2,220 frames), and ATARI Space Invaders (5,000 frames). We use patch size k = 32 for Mario and Fighting Hero and k = 16 for Space Invaders. For Fighting Hero, we learn a background, as described in §2.5. The sprites, background, and example frame reconstructions are shown in Figure 8. Our model successfully disentangles foreground from background and recovers a reasonable sprite sheet for each game. Having reverse-engineered the games, we can use the decomposition for applications like editing. In Figure 9, we demonstrate a GUI that allows the user to move sprites around the screen. 3.3 Ablation Study
Figure 10: Ablation study on Mario data across five random seeds (reconstruction PSNR, mean ± standard deviation).
  Model                       PSNR
  Smaller patches             28.85 ± 0.95
  Full                        28.04 ± 0.72
  No LayerNorm                26.05 ± 0.45
  Smaller dictionary          23.80 ± 1.38
  Larger patches              23.63 ± 1.05
  Straight-through switches   22.15 ± 0.25
Figure 11: Mario dictionary with 16×16 patches.
We show an ablation study on the Mario data. We train our full model, one with smaller 16×16 patches, another with larger 64×64 patches, a model with a smaller dictionary (25 elements), a model without LayerNorm, and one where we use a straight-through estimator [22] to learn discrete switches p^l_j in lieu of Beta regularization. We train each model with five random seeds and report the reconstruction PSNR means and standard deviations in Figure 10. This experiment verifies the importance of LayerNorm in our architecture and shows that the straight-through trick is ineffective in our setting. Though the smaller patches model achieves slightly higher mean PSNR than our full model, more of the sprites are split across dictionary patches (Figure 11), illustrating how the patch size choice sets an inductive bias for our decomposition. We also justify our choice for learning background shifts via classification (§2.5) rather than regression, i.e., using spatial transformers. Figure 12 shows the background learned using a spatial transformer. In contrast to our full model (Figure 8), the original background is not discovered, and most of the canvas is unused. We suspect that this is due to lack of gradient signal from background pixels that do not get rendered at each training step. 3.4 Future Directions and Limitations While our method is designed with sprite-based animation in mind, it can generalize to natural images and videos. An exciting direction for future work is to incorporate more expressive transformations so as to discover recurring content in generic videos. Here, we obtain preliminary results using our approach and achieve interesting decompositions even without modifications to our sprite-based model. In Figure 14, we show results on a tennis video (4,000 frames). The model learns parts of the player’s body (head, limbs, shirt, etc.) as sprites and captures most of the tennis court in the learned background. By simply selecting the player sprites in the dictionary, we segment the entire video clip. Our model can also discover recurring patterns in a single natural image. We train on random crops of a 768×512 photograph from the 2013 US Presidential Inauguration (AP Photo/Cliff Owen), which contains many repeating elements such as stairs, columns, and people. With a dictionary of 39 32×32 sprites (39,936 pixels), we recover much of the detail of the original 393,216 pixels. We demonstrate further limitations of our approach by applying it to automatic font discovery. We train on random 128×128 crops of six scanned pages of Moby Dick, each of approximately 500×800 resolution.
Figure 15 shows an input text excerpt, our reconstruction, and the learned dictionary. This dataset differs significantly from our other testing datasets. Each input frame consists of many densely packed sprites (∼100 glyphs in each 128×128 crop), and many individual glyphs consist of smaller repeating elements. We hypothesize that because of these issues, combined with a lack of motion cues between frames, we do not achieve a perfect reconstruction, learning certain sprites with multiple glyphs and others with just partial glyphs. Incorporating priors tailored to regularly structured and dense data like text is a direction for future research. 4 Conclusion We present a self-supervised method to jointly learn a patch dictionary and a frame encoder from a video, where the encoder explains frames as compositions of dictionary elements, anchored on a regular grid. By generating layers of alpha-masked sprites and predicting per-sprite local transformation, we recover fine-scale motion and achieve high-quality reconstructions with semantically meaningful, well-separated sprites. Applied to content with significant recurrence, our approach recovers structurally significant patterns. Understanding recurring patterns and their relationships is central to machine learning. Learning to act intelligently in video games or in the physical world requires breaking experiences down into elements between which knowledge can be transferred effectively. Our sprite-based decomposition provides an intuitive basis for this purpose. In this work, we focus on a simplified video domain. In the future, we would like to expand the range of deformations applied to the learned dictionary elements, such as appearance or shape changes. Our work opens significant avenues for future research to explore recurrences and object relationships in more complex domains. Acknowledgements The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grants W911NF2010168 and W911NF2110293, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grants IIS-1838071 and CHS-1955697, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, from an MIT.nano Immersion Lab/NCSOFT Gaming Program seed grant, and from the Skoltech–MIT Next Generation Program. This work was also supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
1. What is the main contribution of the paper, and how does it advance the state of the art?
2. What are the strengths and weaknesses of the proposed technique, particularly regarding its ability to handle various transformations?
3. How does the reviewer interpret the paper's claims and results, and what clarifications were necessary to understand the technique better?
4. What are some potential limitations or areas for improvement in the proposed approach, such as robustness to variations in scale, pose, lighting, etc.?
Summary Of The Paper Review
Summary Of The Paper The paper introduces a technique for unsupervised / self-supervised decomposition of a scene into an arrangement of layered grids of offset sprites drawn from a simultaneously learned sprite sheet. The main contributions are the network architecture and training objective.
Review (An earlier version of this review interpreted the paper as only having effective applications to images that were originally built from a grid of tiles where the ground truth grid size was matched to the grid size used for analysis or a precise multiple of it. A clarification of the data preprocessing pipeline resolved the confusion. Several critical comments that were founded on that observation have been removed here. Main score changed from 3 to 7.) The abstract suggests “Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision,” and very good results are shown for Super Mario Bros. It might seem that the technique relies on a careful coherence between the grid structure it uses for analysis and a tile grid being used to generate the original data used for evaluation. However, the fact that a reasonably effective reconstruction is found for tennis while an ineffective reconstruction is found for the Moby Dick text image shows that the situation is not as simple as being specific to tile-oriented images. The way this approach "wears its representation 'on its sleeve'" seems to be both a strength and a weakness. Reifying the recurring visual patterns as a library of sprite images makes the library easy to audit, but it bakes certain features into the pattern representation that we'd normally like representations to abstract over, e.g. robustness to slight variations in scale, pose, lighting, etc. Scaling the brightness of an input image to 50% wouldn't affect most other approaches, but it would seem to have a major impact on the scene reconstruction ability of this approach. Changing the scale of the images by 20% between train and test might also break this approach. These problems do not seem to be unrecoverable within the proposed approach -- the "transform predictor" could be changed to account for more transformations -- but the evaluations performed in the paper don't probe the limits of the translation-only design choice. Overall, the results do show a clear improvement over approaches like SPACE in settings where translation is the only transformation needed. (SPACE was also evaluated on a 3d-rooms dataset where objects varied in scale and lighting.) The state of the art is being advanced in this paper.
NIPS
Title MarioNette: Self-Supervised Sprite Learning Abstract Artists and video game designers often construct 2D animations using libraries of sprites—textured patches of objects and characters. We propose a deep learning approach that decomposes sprite-based video animations into a disentangled representation of recurring graphic elements in a self-supervised manner. By jointly learning a dictionary of possibly transparent patches and training a network that places them onto a canvas, we deconstruct sprite-based content into a sparse, consistent, and explicit representation that can be easily used in downstream tasks, like editing or analysis. Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision. Since the early days of machine learning, the accepted unit of image synthesis has been the pixel. But while the pixel grid is a natural representation for display hardware and convolutional generators, it does not easily permit high-level reasoning and editing. In this paper, we take inspiration from animation to consider an atomic unit that is richer and easier to edit than the pixel: the sprite. In sprite-based animation, a popular early technique for drawing cartoons and rendering video games, an artist draws a collection of patches—a sprite sheet— consisting of texture swatches, characters in various poses, static objects, and so on. Then, each frame is assembled by compositing a subset of the patches onto a canvas. By reusing the sprite sheet, authoring new content requires minimal effort and can even be automated procedurally. Our goal is to invert this process, simultaneously tackling unsupervised instance segmentation and dictionary learning. Given an image dataset, e.g., frames from a sprite-based video game, we train a model that jointly learns a 2D sprite dictionary, capturing recurring visual elements in an image collection, and explains each input frame as a combination of these potentially transparent sprites. Whereas standard CNN-based generators hide their feature representation in their intermediate layers, our model wears its representation “on its sleeve”: by explicitly compositing sprites from its learnt dictionary onto a background canvas, rather than synthesizing pixels from hidden neural features, it provides a readily-interpretable visual representation. Our contributions include the following: • We describe a grid-based anchor system along with a learned dictionary of textured patches (with transparency) to extract a sprite-based image representation. • We propose a method to learn the patch dictionary and the grid-based representation jointly, in a differentiable, end-to-end fashion. • We compare to past work on learned disentangled graphics representations for video games. • We show how our method offers promising avenues for further work towards identifying visual patterns in more complex data such as natural images and video. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). N/A Artists and video game designers often construct 2D animations using libraries of sprites—textured patches of objects and characters. We propose a deep learning approach that decomposes sprite-based video animations into a disentangled representation of recurring graphic elements in a self-supervised manner. 
By jointly learning a dictionary of possibly transparent patches and training a network that places them onto a canvas, we deconstruct sprite-based content into a sparse, consistent, and explicit representation that can be easily used in downstream tasks, like editing or analysis. Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision. Since the early days of machine learning, the accepted unit of image synthesis has been the pixel. But while the pixel grid is a natural representation for display hardware and convolutional generators, it does not easily permit high-level reasoning and editing. In this paper, we take inspiration from animation to consider an atomic unit that is richer and easier to edit than the pixel: the sprite. In sprite-based animation, a popular early technique for drawing cartoons and rendering video games, an artist draws a collection of patches—a sprite sheet— consisting of texture swatches, characters in various poses, static objects, and so on. Then, each frame is assembled by compositing a subset of the patches onto a canvas. By reusing the sprite sheet, authoring new content requires minimal effort and can even be automated procedurally. Our goal is to invert this process, simultaneously tackling unsupervised instance segmentation and dictionary learning. Given an image dataset, e.g., frames from a sprite-based video game, we train a model that jointly learns a 2D sprite dictionary, capturing recurring visual elements in an image collection, and explains each input frame as a combination of these potentially transparent sprites. Whereas standard CNN-based generators hide their feature representation in their intermediate layers, our model wears its representation “on its sleeve”: by explicitly compositing sprites from its learnt dictionary onto a background canvas, rather than synthesizing pixels from hidden neural features, it provides a readily-interpretable visual representation. Our contributions include the following: • We describe a grid-based anchor system along with a learned dictionary of textured patches (with transparency) to extract a sprite-based image representation. • We propose a method to learn the patch dictionary and the grid-based representation jointly, in a differentiable, end-to-end fashion. • We compare to past work on learned disentangled graphics representations for video games. • We show how our method offers promising avenues for further work towards identifying visual patterns in more complex data such as natural images and video. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). 1 Related Work Decomposing visual content into semantically meaningful parts for analysis, synthesis, and editing is a long-standing problem. We review the most closely related work. Layered decompositions. Wang and Adelson [54] decompose videos into layers undergoing temporally-varying warps for compression. Similarly, Flexible Sprites [24] and Kannan et al. [27] represent videos with full-canvas semi-transparent layers to facilitate editing. Like Flexible Sprites, we adopt translation-only motion but restrict transformations to small neighborhoods around anchors, making inference tractable with many (≥100) sprites. Other methods decompose videos with moving subjects, such as humans, into independent layers, enabling matting [41] and retiming of individual actions [40]; unlike sprite-based techniques, motion and appearance are not disentangled. Sbai et al. 
[48] use a layered representation as inductive bias in a GAN with solid colored layers. Automatic decompositions into “soft layers” according to texture, color, or semantic features have been used in image editing [1, 2]. Gandelsman et al. [12] use deep image priors [52] to separate images into layer pairs. Huang and Murphy [19] introduce a recurrent architecture to output multiple layers sequentially. Reddy et al. [46] discover patterns in images via differentiable compositing. Interpretable generators for neural synthesis. Neural networks improve the fidelity and realism of generative models [14, 28] but limit control and interpretability [5, 6, 8, 16]. Several works explore interpretability using differentiable domain-specific functions. Hu et al. [18], Li et al. [31] constrain the generator to sets of parametric image operators. Mildenhall et al. [44] use a ray-marching prior and rendering model to encode a radiance field for novel view synthesis. Neural textures [51] replace RGB textures on 3D meshes with high-dimensional features. Rendering under new views enables view-consistent editing. Lin et al. [33] use spatial transformers in their generator to obtain geometric transformations. We synthesize frames by compositing 2D sprites undergoing rigid motions, enabling direct interpretation and control over appearance and motion. Object-centric representations. Our learned sprites reveal, segment, and track object instances. Similarly, Slot Attention [37] extracts object-centric compositional video representations. However, our sprites are interpretable—motion and appearance are direct outputs—and our model scales to more objects per scene. SCALOR [23] handles up to 100 instances but does not produce a common dictionary or handle diverse sprites. While SPACE [34] decomposes images into object layers, it tends to embed sprites in the background, providing no control. Our method achieves a higher IoU of recurring sprite patterns (see §3.1). Stampnet [53] discovers and localizes objects but focuses on simpler, synthetic datasets. MONet [7] decomposes images into multiple object regions using attention. Earlier attention mechanisms leverage pattern recurrence [9, 30] and motion cues [11] to identify individual objects. Recent works use parametric primitives as image building blocks [32, 49]. Applying our sprite decompositions to video games, we can learn about dynamics and gameplay, benefiting downstream agents [17, 26] and aiding content-authoring for research and game development, as in Procedural Content Generation [50]. GameGAN [29] synthesizes new frames from controller input. They split rendering into static and dynamic components but render full frames, without factorization into parts. Their generator is difficult to interpret: appearance and dynamics are entangled within its parameters. Compression. Appearance consistency and motion compensation are central to video compression [4, 38, 42]. We model videos as compositions of moving sprites, factoring redundancy in the input. This draws inspiration from works like DjVu [15] and Digipaper [20], which compress scanned documents by separating them into a background layer and foreground text. Image epitomes [25] summarize and compress image shape and appearance into a miniature texture. Our sprite dictionary fills a similar role, providing superior editing control. 2 Method We start with an input sequence of n RGB frames {I1, . . . , In} with resolution w×h. 
Our goal is to decompose each frame I_i ∈ R^{3×w×h} into a set of possibly overlapping sprites, organized into ℓ depth layers, selected from a finite-size dictionary. The dictionary is a collection of trainable latent codes {z_1, . . . , z_m} that are decoded into RGBA sprites using a neural network generator (§2.1). Our training pipeline is illustrated in Figure 1. We first process each input frame with a convolutional encoder to produce ℓ grids of feature vectors, one grid per depth layer (§2.2). The grids are lower resolution than the input frame, with a downsampling factor proportional to the sprite size. We call the center of each grid cell an anchor. We compare each anchor’s feature vector against the dictionary’s latent codes, using a softmax scoring function, to select the best matching sprite per anchor (§2.3). Using our sprite generator, we decode each anchor’s matching sprite. This gives us a grid of sprites for each of the ℓ layers. To factorize image patterns that may not align with our anchor grid, we allow sprites to move in a small neighborhood around anchors (§2.4). We composite the layers from back to front onto the output canvas to obtain our final reconstruction (§2.5). Optionally, the background is modeled as a special learnable sprite that covers the entire canvas. We train the dictionary latent codes, frame encoder, and sprite generator jointly on all frames, comparing our reconstruction to the input (§2.6). This self-supervised procedure yields a representation that is sparse, compact, interpretable, and well-suited for downstream editing and learning applications. 2.1 Dictionary and sprite generator The central component of our representation is a global dictionary of m textured patches or sprites D = {P_1, . . . , P_m}, where each P_i ∈ R^{4×k×k} is an RGBA patch. Our sprites have an alpha channel, which allows them to be partially transparent, with possibly irregular (i.e., non-square) boundaries. This is useful for representing animations with multiple depth layers and also allows us to learn sprites smaller than their maximal resolution, if necessary, by setting alpha to zero around the boundary. The dictionary is shared among all frames; we reconstruct frames using only sprites from the dictionary. Instead of optimizing for RGBA pixel values directly, we represent the dictionary as a set of trainable latent codes {z_1, . . . , z_m}, with z_i ∈ R^d. We decode these codes into RGBA sprites using a fully-connected sprite generator P_i = G(z_i). This latent representation allows us to define a similarity metric over the latent space, which we use to pair anchors with dictionary sprites to best reconstruct the input frame (§2.3). At test time, we can forego the sprite generator and edit the RGBA sprites directly. Unless otherwise specified, we set the latent dimension to d = 128 and patch size to k = 32. We randomly initialize the latent codes from the standard normal distribution. Our sprite generator first applies zero-mean unit-variance normalization—Layer Normalization [3], without an affine transformation—to each latent code z_i individually, followed by one fully-connected hidden layer with 8d features, Group Normalization [55], and ReLU activation. We obtain the final sprite using a fully-connected layer with sigmoid activation to keep RGBA values in [0, 1]. Latent code normalization is crucial to stabilize training and keep the latent space in a compact subspace as the optimization progresses. See §3.3 for an ablation study of this and other components.
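To make the generator architecture above concrete, the following is a small PyTorch sketch of a sprite generator matching the description: per-code LayerNorm without affine parameters, one hidden layer of width 8d with GroupNorm and ReLU, and a sigmoid output reshaped into a 4×k×k RGBA patch. The class name, the number of GroupNorm groups, and the exact reshaping are our assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn

class SpriteGenerator(nn.Module):
    """Decodes a latent code z_i (dimension d) into an RGBA sprite of size 4 x k x k."""

    def __init__(self, d=128, k=32, groups=8):
        super().__init__()
        self.k = k
        self.norm = nn.LayerNorm(d, elementwise_affine=False)  # normalization without affine params
        self.hidden = nn.Linear(d, 8 * d)
        self.gn = nn.GroupNorm(groups, 8 * d)
        self.out = nn.Linear(8 * d, 4 * k * k)

    def forward(self, z):                      # z: (m, d) dictionary latent codes
        x = self.norm(z)
        x = torch.relu(self.gn(self.hidden(x)))
        rgba = torch.sigmoid(self.out(x))      # keep RGBA values in [0, 1]
        return rgba.view(-1, 4, self.k, self.k)

# Toy usage: decode a randomly initialized dictionary of 150 latent codes.
codes = nn.Parameter(torch.randn(150, 128))    # trainable dictionary {z_1, ..., z_m}
sprites = SpriteGenerator()(codes)
print(sprites.shape)                           # torch.Size([150, 4, 32, 32])
```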
2.2 Layered frame decomposition using sprite anchors We seek a decomposition that best explains each input frame using dictionary sprites. We exploit translation invariance and locality in our representation; our sprites are “attached” to a regular grid of reference points, or anchors, inspired by [13, 47]. Each anchor has at most one sprite; we call it inactive if it has none. We give the sprites freedom of motion around their anchors to factorize structures that may not be aligned with the anchor grid. This local—or, Eulerian—viewpoint makes inference tractable and avoids the pitfalls of tracking the global motion of all the sprites across the canvas (a Lagrangian viewpoint). To enable multiple layers with sprite occlusions, we output ℓ > 1 anchor grids for each frame (ℓ = 2 in our experiments). Figure 2 illustrates our layered anchor grids and local sprite transformations. We use a convolutional encoder E to map the w×h RGB frame I_i to ℓ grids of anchors, with resolution 2w/k × 2h/k. Each anchor j in layer l is represented by a feature vector a^l_j ∈ R^d characterizing local image appearance around the anchor and an active/inactive switch probability p^l_j ∈ [0, 1]. Our frame encoder contains log_2(k) − 1 downsampling blocks, which use partial convolutions [36] with kernel size 3 and stride 2 (for downsampling), Group Normalization, and Leaky ReLU. It produces a tensor of intermediate features for each layer, which are normalized with LayerNorm. From these, we obtain the anchor switches with an MLP with one hidden layer of size d followed by Group Normalization and Leaky ReLU. We get anchor features using a linear projection followed by LayerNorm. The encoder architecture is illustrated in Figure 3. 2.3 Per-anchor sprite selection Once we have the layered anchor grids for the input frame, we need to assign sprites to the active anchors. We do this by scoring every dictionary element i against each anchor j at layer l, using a softmax over dot products between dictionary codes and anchor features:
s^l_{ij} = \frac{\exp(a^l_j \cdot z_i / \sqrt{d})}{\sum_{k=1}^{m} \exp(a^l_j \cdot z_k / \sqrt{d})}.   (1)
Recall that both the anchor features and dictionary latent codes are individually normalized using a Layer Normalization operator. Restricting both latent spaces to a compact subspace helps stabilize the optimization and avoid getting stuck in local optima. During training, each anchor’s sprite is a weighted combination of the dictionary elements, masked by the anchor’s active probability:
S^l_j = p^l_j \sum_{i=1}^{m} s^l_{ij} P_i.   (2)
This soft patch selection allows gradients to propagate to both dictionary and anchor features during training. Except for natural image and video datasets, at test time, we use hard selections, i.e., for each anchor, we pick the sprite (S^l_j := P_i) with highest score s^l_{ij} and binarize the switches p^l_j ∈ {0, 1}. 2.4 Local sprite transformations In real animations, sprites rarely perfectly align with our regular anchor grid, so, to avoid learning several copies of the same sprites (e.g., all sub-grid translations of a given image pattern), we allow sprites to move around their anchors. In our implementation, we only allow 2D translations of up to 1/2 the sprite size on each side of the anchor, i.e., T^l_j = (x^l_j, y^l_j) ∈ [−k/2, k/2]^2. We use a convolutional network to predict the translation offsets from the anchor’s sprite and a crop of the input frame centered around the anchor, with identical spatial dimensions.
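Before the text turns to that offset-prediction network, here is a short sketch of the per-anchor selection step of §2.3, i.e., Eqs. (1) and (2), written in PyTorch under our own assumptions about tensor layout; at test time the soft combination would be replaced by an argmax over scores with binarized switches, as described above.

```python
import math
import torch

def select_sprites(anchor_feats, codes, switches, sprites):
    """Soft per-anchor sprite selection (Eqs. 1-2).

    anchor_feats : (L, J, d)    anchor features a^l_j (already LayerNorm-ed)
    codes        : (m, d)       dictionary latent codes z_i (already LayerNorm-ed)
    switches     : (L, J)       active/inactive probabilities p^l_j
    sprites      : (m, 4, k, k) decoded RGBA patches P_i
    """
    d = codes.shape[-1]
    logits = anchor_feats @ codes.t() / math.sqrt(d)          # a^l_j . z_i / sqrt(d)
    scores = torch.softmax(logits, dim=-1)                    # s^l_{ij}, Eq. (1)
    # Weighted combination of patches, masked by the switch probability, Eq. (2).
    mixed = torch.einsum("ljm,mchw->ljchw", scores, sprites)
    return switches[..., None, None, None] * mixed            # S^l_j

# Toy usage: 2 layers, an 8x8 anchor grid, a 150-sprite dictionary of 32x32 patches.
L, J, d, m, k = 2, 64, 128, 150, 32
out = select_sprites(torch.randn(L, J, d), torch.randn(m, d),
                     torch.rand(L, J), torch.rand(m, 4, k, k))
print(out.shape)   # torch.Size([2, 64, 4, 32, 32])
```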
This network follows the architecture of E followed by an MLP with a single hidden layer of size d, Group Normalization, and Leaky ReLU. Specifically, we concatenate the image crop and the anchor’s sprite S^l_j along the channel dimension and pass this tensor through this network to obtain the x^l_j and y^l_j offsets. An output layer projects to two dimensions (horizontal and vertical shift) and applies tanh to restrict the range. We apply the shifts to the patches using a spatial transformer [21]. 2.5 Compositing and reconstruction Each anchor in our layered representation is now equipped with a sprite S^l_j and a transformation T^l_j. For each layer l, we transform the sprites in their anchor’s local coordinate system and render them onto the layer’s canvas, initialized as fully transparent. Because of the local transformation, neighboring sprites within a layer may overlap. When this happens, we randomly choose an ordering, as in Figure 2. This random permutation encourages our model to either avoid overlapping sprites within the same layer or make the sprite colors agree in the overlap region, since these are the only two options that yield the same rendering regardless of the random z-ordering. Note that sprites on distinct layers are not shuffled. The shuffling prevents the network from abusing the compositing to cover patches with others from the same layer. We optionally learn a background texture to capture elements that cannot be explained using sprites. This can be thought of as a special patch of resolution greater than that of a single frame. For each frame, we learn a (discrete) position offset in the background from which to crop. We represent these offsets as discrete pixel shifts using a softmax classification (independently for each spatial dimension). We found this encoding better behaved than using a continuous offset with a spatial transformer—the discrete encoding allows the gradient signal to propagate to all shifts rather than the weak local gradient from bilinear interpolation (see §3.3 for an ablation). We combine the background and sprite layers via standard alpha compositing [45]. Figure 8 shows a learned background. In some experiments, we use a simpler background model: a fixed solid color, determined by analyzing the data before training. In this variant, we sample 100 random frames, cluster the pixel values into 5 clusters using k-means, and choose the largest cluster center as the background color. 2.6 Training procedure Our pipeline is fully differentiable. We train the dictionary of latent codes, sprite generator, frame encoder, transformation predictor, and background layer jointly, minimizing the L2 distance between our reconstructions and ground truth frames. We also employ two regularizers: a Beta distribution prior on switches and dictionary element scores favors values close to 0 or 1, and an L1 loss on switches favors a sparser solution without superfluous patches. Our final loss function for a single input is
\mathcal{L}(\cdot) = \frac{1}{wh}\,\|O - I\|_2^2 + \frac{k^2}{4\ell wh} \sum_{l=1}^{\ell} \sum_{j=1}^{\frac{2w}{k}\times\frac{2h}{k}} \Big[ \lambda_{\mathrm{Beta}} \Big( \frac{1}{m}\sum_{i=1}^{m} \mathrm{Beta}(2,2)(s^l_{ij}) + \mathrm{Beta}(2,2)(p^l_j) \Big) + \lambda_{\mathrm{sparse}}\,|p^l_j| \Big],   (3)
where O is the result of compositing the background and sprite layers; we optimize {s^l_{ij}}, {p^l_j}, and O. We set λ_sparse = 0.005 and train for 200,000 steps (∼20 hours) with λ_Beta = 0.002 and finetune for 10,000 steps with λ_Beta = 0.1. For natural images and video, we set λ_Beta = 0.
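Looking back at the compositing step of §2.5, the sketch below illustrates one way the random within-layer z-ordering could be realized: sprites of a single layer are alpha-composited onto an initially transparent canvas in a freshly shuffled order. Placement by integer top-left corners, the helper names, and the unpremultiplied "over" operator are illustrative choices of ours, not details given in the paper.

```python
import torch

def over(dst_rgb, dst_a, src_rgb, src_a):
    # Standard "over" alpha compositing of src onto dst (straight, non-premultiplied alpha).
    out_a = src_a + dst_a * (1 - src_a)
    out_rgb = (src_rgb * src_a + dst_rgb * dst_a * (1 - src_a)) / out_a.clamp(min=1e-8)
    return out_rgb, out_a

def composite_layer(canvas_hw, sprites, positions):
    """Render RGBA sprites onto a transparent layer canvas in a random z-order.

    canvas_hw : (H, W) layer resolution
    sprites   : list of (4, k, k) RGBA patches, already shifted to their local positions
    positions : list of (top, left) integer placements, which may overlap
    """
    H, W = canvas_hw
    rgb = torch.zeros(3, H, W)
    alpha = torch.zeros(1, H, W)
    order = torch.randperm(len(sprites))            # fresh random z-ordering within the layer
    for idx in order.tolist():
        s, (top, left) = sprites[idx], positions[idx]
        k = s.shape[-1]
        region_rgb = rgb[:, top:top + k, left:left + k]
        region_a = alpha[:, top:top + k, left:left + k]
        new_rgb, new_a = over(region_rgb, region_a, s[:3], s[3:4])
        rgb[:, top:top + k, left:left + k] = new_rgb
        alpha[:, top:top + k, left:left + k] = new_a
    return rgb, alpha

# Toy usage: two overlapping 32x32 sprites on a 64x64 layer.
patches = [torch.rand(4, 32, 32), torch.rand(4, 32, 32)]
rgb, alpha = composite_layer((64, 64), patches, [(0, 0), (16, 16)])
print(rgb.shape, alpha.shape)
```

The rendered result depends on the drawn order only where sprites overlap and disagree in color, which is exactly the ambiguity the shuffling is meant to discourage.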
We use the AdamW [39] optimizer on a GeForce GTX 1080 GPU, with batch size 4 and learning rate 0.0001, except for the background module (learning rate 0.001 when used). 3 Experimental Results We evaluate our self-supervised decomposition on several real (non-synthetic) datasets, compare to related work, and conduct an ablation study. In figures, we use a checkerboard to show transparency. Dictionary order is determined by sorting along a 1-dimensional t-SNE embedding of the sprite latent codes. We find this ordering tends to group semantically similar sprites, making the dictionary easier to interpret and manipulate. While our models are trained with a dictionary of 150 patches, not all patches end up being used; we only show the used patches. 3.1 Comparisons While to our knowledge no prior works target differentiable unsupervised sprite-based reconstruction, we compare to two state-of-the-art methods that obtain similarly disentangled representations. In Figure 4, we compare to SPACE [34] and Slot Attention [37]. The former decomposes a scene into a foreground layer consisting of several objects as well as a background, segmented into three layers. The latter deconstructs a scene into discrete “slots.” We train both methods to convergence using their default parameters. While both reconstruct the input frames faithfully, SPACE only recognizes a few sprites in its foreground layer, and Slot Attention does not provide a semantically meaningful decomposition. In contrast, not only does our method model the entire scene using learned sprites, but also it factors out the sprites to form a consistent, sparse dictionary shared for the entire sequence. Additionally, we evaluate on a synthetically-generated sprite-based game from [10], which is made of sprites on a solid background. We compare quantitatively to SPACE in Figure 6 and show qualitative results in Figure 5. Since we have a ground truth segmentation of each scene into sprites, we compute a matching between learned dictionary patches and sprites by associating each dictionary patch with the sprite that it most frequently overlaps across the dataset. We visualize dictionary patches next to their respective sprites. We also use this labeling to compute segmentation metrics. In particular, we report mean IoU in the multiclass case (where each sprite is a distinct class) as well is in the binary case (foreground/background). Because SPACE does not learn a common dictionary, we are unable to obtain a labeling for its foreground elements and, consequently, cannot evaluate its multiclass metric. For the binary metric, we obtain a significantly higher value, since SPACE defers many sprites to the background, whereas our method learns the sprites as dictionary elements. To show that our model learns more than simple motion features, we also compare to two conventional (non-learning) baselines. In Figure 7(a), we compare a segmentation of a frame obtained by clustering optical flow directions using k-means (inspired by Liu et al. [35]) to one generated using our learned decomposition. The flow-based approach is unable to capture many of the details in the frame. In (b), we show the normalized dictionary obtained using an online dictionary learning method [43]. Because this method does not have the inductive biases of our model, the resulting dictionary is not easily interpretable or editable. 3.2 Sprite-based game deconstruction We train on Fighting Hero (one level, 5,330 frames), Nintendo Super Mario Bros. 
(one level, 2,220 frames), and ATARI Space Invaders (5,000 frames). We use patch size k = 32 for Mario and Fighting Hero and k = 16 for Space Invaders. For Fighting Hero, we learn a background, as described in §2.5. The sprites, background, and example frame reconstructions are shown in Figure 8. Our model successfully disentangles foreground from background and recovers a reasonable sprite sheet for each game. Having reverse-engineered the games, we can use the decomposition for applications like editing. In Figure 9, we demonstrate a GUI that allows the user to move sprites around the screen. 3.3 Ablation Study
Figure 10: Ablation study on Mario data across five random seeds (reconstruction PSNR, mean ± standard deviation).
  Model                       PSNR
  Smaller patches             28.85 ± 0.95
  Full                        28.04 ± 0.72
  No LayerNorm                26.05 ± 0.45
  Smaller dictionary          23.80 ± 1.38
  Larger patches              23.63 ± 1.05
  Straight-through switches   22.15 ± 0.25
Figure 11: Mario dictionary with 16×16 patches.
We show an ablation study on the Mario data. We train our full model, one with smaller 16×16 patches, another with larger 64×64 patches, a model with a smaller dictionary (25 elements), a model without LayerNorm, and one where we use a straight-through estimator [22] to learn discrete switches p^l_j in lieu of Beta regularization. We train each model with five random seeds and report the reconstruction PSNR means and standard deviations in Figure 10. This experiment verifies the importance of LayerNorm in our architecture and shows that the straight-through trick is ineffective in our setting. Though the smaller patches model achieves slightly higher mean PSNR than our full model, more of the sprites are split across dictionary patches (Figure 11), illustrating how the patch size choice sets an inductive bias for our decomposition. We also justify our choice for learning background shifts via classification (§2.5) rather than regression, i.e., using spatial transformers. Figure 12 shows the background learned using a spatial transformer. In contrast to our full model (Figure 8), the original background is not discovered, and most of the canvas is unused. We suspect that this is due to lack of gradient signal from background pixels that do not get rendered at each training step. 3.4 Future Directions and Limitations While our method is designed with sprite-based animation in mind, it can generalize to natural images and videos. An exciting direction for future work is to incorporate more expressive transformations so as to discover recurring content in generic videos. Here, we obtain preliminary results using our approach and achieve interesting decompositions even without modifications to our sprite-based model. In Figure 14, we show results on a tennis video (4,000 frames). The model learns parts of the player’s body (head, limbs, shirt, etc.) as sprites and captures most of the tennis court in the learned background. By simply selecting the player sprites in the dictionary, we segment the entire video clip. Our model can also discover recurring patterns in a single natural image. We train on random crops of a 768×512 photograph from the 2013 US Presidential Inauguration (AP Photo/Cliff Owen), which contains many repeating elements such as stairs, columns, and people. With a dictionary of 39 32×32 sprites (39,936 pixels), we recover much of the detail of the original 393,216 pixels. We demonstrate further limitations of our approach by applying it to automatic font discovery. We train on random 128×128 crops of six scanned pages of Moby Dick, each of approximately 500×800 resolution.
Figure 15 shows an input text excerpt, our reconstruction, and the learned dictionary. This dataset differs significantly from our other testing datasets. Each input frame consists of many densely packed sprites (∼100 glyphs in each 128×128 crop), and many individual glyphs consist of smaller repeating elements. We hypothesize that because of these issues, combined with a lack of motion cues between frames, we do not achieve a perfect reconstruction, learning certain sprites with multiple glyphs and others with just partial glyphs. Incorporating priors tailored to regularly structured and dense data like text is a direction for future research. 4 Conclusion We present a self-supervised method to jointly learn a patch dictionary and a frame encoder from a video, where the encoder explains frames as compositions of dictionary elements, anchored on a regular grid. By generating layers of alpha-masked sprites and predicting per-sprite local transformation, we recover fine-scale motion and achieve high-quality reconstructions with semantically meaningful, well-separated sprites. Applied to content with significant recurrence, our approach recovers structurally significant patterns. Understanding recurring patterns and their relationships is central to machine learning. Learning to act intelligently in video games or in the physical world requires breaking experiences down into elements between which knowledge can be transferred effectively. Our sprite-based decomposition provides an intuitive basis for this purpose. In this work, we focus on a simplified video domain. In the future, we would like to expand the range of deformations applied to the learned dictionary elements, such as appearance or shape changes. Our work opens significant avenues for future research to explore recurrences and object relationships in more complex domains. Acknowledgements The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grants W911NF2010168 and W911NF2110293, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grants IIS-1838071 and CHS-1955697, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, from an MIT.nano Immersion Lab/NCSOFT Gaming Program seed grant, and from the Skoltech–MIT Next Generation Program. This work was also supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
1. What is the focus of the paper regarding sprite-based video animations?
2. What are the strengths of the proposed method, particularly its grid-based modeling approach?
3. What are the weaknesses or concerns of the paper, especially regarding its generalizability to real videos and the necessity of the editing GUI?
4. Do you have any questions regarding the methodology, such as the importance of the regularization terms in Equation 3?
5. Are there any typos or minor issues in the paper that can be addressed?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a method to disentangle sprite-based video animations into recurring visual elements in a self-supervised manner. Specifically, it decomposes a frame into grids and tries to reconstruct the frame by drawing elements from a dictionary of discovered sprites for each grid. By training with this reconstruction objective, the model can discover a set of disentangled sprite elements. The authors show the effectiveness of their method through both qualitative discovery results and quantitative segmentation metrics. With these results, the authors demonstrate that their method works much better than previous methods in terms of discovering these recurring visual elements.
Review The topic of self-supervised discovery of recurring visual elements is very interesting. Though it has been explored before, the proposed method tackles this problem with a very explicit and intuitive grid-based modeling approach. The idea is well executed, and the qualitative results convincingly show that the method works pretty well in the video game animation domain. Also, as the quantitative results demonstrate, the proposed method has a large advantage over previous methods in visual discovery and disentanglement. The paper is well-written and easy to follow. The authors have promised to release the code and models. I appreciate the discussion on the limitations of the proposed approach. Regarding the approach The biggest concern I have about this work is that, though demonstrated to work very well on video game frames, it is largely unknown how well it can generalize to the domain of real videos. Even in the example shown in Fig. 14, the video is relatively simple, with a clean background. It would be nice to see how the method works in more challenging scenarios, e.g., by evaluating segmentation quality on datasets like DAVIS. I understand this paper is trying to make a step towards self-supervised visual discovery and appreciate the choice of video game frames as a controllable and easy-to-source testbed to develop prototypes. With that said, I'm not sure what the utility of the editing GUI demonstrated in Figure 9 is. Since, for video games, we already have full control of all the disentangled elements, why do we need to develop such an application? In Eq. 3, I'm curious how important those regularization terms are. Does the method learn at all without them? Typos In Figure 3, the bottom-right text should be "switches p^l_j"? Overall, I appreciate the direction taken and the approach developed in this paper and believe it would be inspiring for the community.
####### Post rebuttal ####### I thank the authors for their response; it is useful in further clarifying some questions I had about the work. I'd like to maintain my score, which is to accept this paper.
NIPS
Title MarioNette: Self-Supervised Sprite Learning Abstract Artists and video game designers often construct 2D animations using libraries of sprites—textured patches of objects and characters. We propose a deep learning approach that decomposes sprite-based video animations into a disentangled representation of recurring graphic elements in a self-supervised manner. By jointly learning a dictionary of possibly transparent patches and training a network that places them onto a canvas, we deconstruct sprite-based content into a sparse, consistent, and explicit representation that can be easily used in downstream tasks, like editing or analysis. Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision. Since the early days of machine learning, the accepted unit of image synthesis has been the pixel. But while the pixel grid is a natural representation for display hardware and convolutional generators, it does not easily permit high-level reasoning and editing. In this paper, we take inspiration from animation to consider an atomic unit that is richer and easier to edit than the pixel: the sprite. In sprite-based animation, a popular early technique for drawing cartoons and rendering video games, an artist draws a collection of patches—a sprite sheet— consisting of texture swatches, characters in various poses, static objects, and so on. Then, each frame is assembled by compositing a subset of the patches onto a canvas. By reusing the sprite sheet, authoring new content requires minimal effort and can even be automated procedurally. Our goal is to invert this process, simultaneously tackling unsupervised instance segmentation and dictionary learning. Given an image dataset, e.g., frames from a sprite-based video game, we train a model that jointly learns a 2D sprite dictionary, capturing recurring visual elements in an image collection, and explains each input frame as a combination of these potentially transparent sprites. Whereas standard CNN-based generators hide their feature representation in their intermediate layers, our model wears its representation “on its sleeve”: by explicitly compositing sprites from its learnt dictionary onto a background canvas, rather than synthesizing pixels from hidden neural features, it provides a readily-interpretable visual representation. Our contributions include the following: • We describe a grid-based anchor system along with a learned dictionary of textured patches (with transparency) to extract a sprite-based image representation. • We propose a method to learn the patch dictionary and the grid-based representation jointly, in a differentiable, end-to-end fashion. • We compare to past work on learned disentangled graphics representations for video games. • We show how our method offers promising avenues for further work towards identifying visual patterns in more complex data such as natural images and video. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). N/A Artists and video game designers often construct 2D animations using libraries of sprites—textured patches of objects and characters. We propose a deep learning approach that decomposes sprite-based video animations into a disentangled representation of recurring graphic elements in a self-supervised manner. 
By jointly learning a dictionary of possibly transparent patches and training a network that places them onto a canvas, we deconstruct sprite-based content into a sparse, consistent, and explicit representation that can be easily used in downstream tasks, like editing or analysis. Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision. Since the early days of machine learning, the accepted unit of image synthesis has been the pixel. But while the pixel grid is a natural representation for display hardware and convolutional generators, it does not easily permit high-level reasoning and editing. In this paper, we take inspiration from animation to consider an atomic unit that is richer and easier to edit than the pixel: the sprite. In sprite-based animation, a popular early technique for drawing cartoons and rendering video games, an artist draws a collection of patches—a sprite sheet— consisting of texture swatches, characters in various poses, static objects, and so on. Then, each frame is assembled by compositing a subset of the patches onto a canvas. By reusing the sprite sheet, authoring new content requires minimal effort and can even be automated procedurally. Our goal is to invert this process, simultaneously tackling unsupervised instance segmentation and dictionary learning. Given an image dataset, e.g., frames from a sprite-based video game, we train a model that jointly learns a 2D sprite dictionary, capturing recurring visual elements in an image collection, and explains each input frame as a combination of these potentially transparent sprites. Whereas standard CNN-based generators hide their feature representation in their intermediate layers, our model wears its representation “on its sleeve”: by explicitly compositing sprites from its learnt dictionary onto a background canvas, rather than synthesizing pixels from hidden neural features, it provides a readily-interpretable visual representation. Our contributions include the following: • We describe a grid-based anchor system along with a learned dictionary of textured patches (with transparency) to extract a sprite-based image representation. • We propose a method to learn the patch dictionary and the grid-based representation jointly, in a differentiable, end-to-end fashion. • We compare to past work on learned disentangled graphics representations for video games. • We show how our method offers promising avenues for further work towards identifying visual patterns in more complex data such as natural images and video. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). 1 Related Work Decomposing visual content into semantically meaningful parts for analysis, synthesis, and editing is a long-standing problem. We review the most closely related work. Layered decompositions. Wang and Adelson [54] decompose videos into layers undergoing temporally-varying warps for compression. Similarly, Flexible Sprites [24] and Kannan et al. [27] represent videos with full-canvas semi-transparent layers to facilitate editing. Like Flexible Sprites, we adopt translation-only motion but restrict transformations to small neighborhoods around anchors, making inference tractable with many (≥100) sprites. Other methods decompose videos with moving subjects, such as humans, into independent layers, enabling matting [41] and retiming of individual actions [40]; unlike sprite-based techniques, motion and appearance are not disentangled. Sbai et al. 
[48] use a layered representation as inductive bias in a GAN with solid colored layers. Automatic decompositions into “soft layers” according to texture, color, or semantic features have been used in image editing [1, 2]. Gandelsman et al. [12] use deep image priors [52] to separate images into layer pairs. Huang and Murphy [19] introduce a recurrent architecture to output multiple layers sequentially. Reddy et al. [46] discover patterns in images via differentiable compositing. Interpretable generators for neural synthesis. Neural networks improve the fidelity and realism of generative models [14, 28] but limit control and interpretability [5, 6, 8, 16]. Several works explore interpretability using differentiable domain-specific functions. Hu et al. [18], Li et al. [31] constrain the generator to sets of parametric image operators. Mildenhall et al. [44] use a ray-marching prior and rendering model to encode a radiance field for novel view synthesis. Neural textures [51] replace RGB textures on 3D meshes with high-dimensional features. Rendering under new views enables view-consistent editing. Lin et al. [33] use spatial transformers in their generator to obtain geometric transformations. We synthesize frames by compositing 2D sprites undergoing rigid motions, enabling direct interpretation and control over appearance and motion. Object-centric representations. Our learned sprites reveal, segment, and track object instances. Similarly, Slot Attention [37] extracts object-centric compositional video representations. However, our sprites are interpretable—motion and appearance are direct outputs—and our model scales to more objects per scene. SCALOR [23] handles up to 100 instances but does not produce a common dictionary or handle diverse sprites. While SPACE [34] decomposes images into object layers, it tends to embed sprites in the background, providing no control. Our method achieves a higher IoU of recurring sprite patterns (see §3.1). Stampnet [53] discovers and localizes objects but focuses on simpler, synthetic datasets. MONet [7] decomposes images into multiple object regions using attention. Earlier attention mechanisms leverage pattern recurrence [9, 30] and motion cues [11] to identify individual objects. Recent works use parametric primitives as image building blocks [32, 49]. Applying our sprite decompositions to video games, we can learn about dynamics and gameplay, benefiting downstream agents [17, 26] and aiding content-authoring for research and game development, as in Procedural Content Generation [50]. GameGAN [29] synthesizes new frames from controller input. They split rendering into static and dynamic components but render full frames, without factorization into parts. Their generator is difficult to interpret: appearance and dynamics are entangled within its parameters. Compression. Appearance consistency and motion compensation are central to video compression [4, 38, 42]. We model videos as compositions of moving sprites, factoring redundancy in the input. This draws inspiration from works like DjVu [15] and Digipaper [20], which compress scanned documents by separating them into a background layer and foreground text. Image epitomes [25] summarize and compress image shape and appearance into a miniature texture. Our sprite dictionary fills a similar role, providing superior editing control. 2 Method We start with an input sequence of n RGB frames {I1, . . . , In} with resolution w×h. 
Our goal is to decompose each frame I_i ∈ R^{3×w×h} into a set of possibly overlapping sprites, organized into ℓ depth layers, selected from a finite-size dictionary. The dictionary is a collection of trainable latent codes {z_1, . . . , z_m} that are decoded into RGBA sprites using a neural network generator (§2.1). Our training pipeline is illustrated in Figure 1. We first process each input frame with a convolutional encoder to produce ℓ grids of feature vectors, one grid per depth layer (§2.2). The grids are lower resolution than the input frame, with a downsampling factor proportional to the sprite size. We call the center of each grid cell an anchor. We compare each anchor’s feature vector against the dictionary’s latent codes, using a softmax scoring function, to select the best matching sprite per anchor (§2.3). Using our sprite generator, we decode each anchor’s matching sprite. This gives us a grid of sprites for each of the ℓ layers. To factorize image patterns that may not align with our anchor grid, we allow sprites to move in a small neighborhood around anchors (§2.4). We composite the layers from back to front onto the output canvas to obtain our final reconstruction (§2.5). Optionally, the background is modeled as a special learnable sprite that covers the entire canvas. We train the dictionary latent codes, frame encoder, and sprite generator jointly on all frames, comparing our reconstruction to the input (§2.6). This self-supervised procedure yields a representation that is sparse, compact, interpretable, and well-suited for downstream editing and learning applications. 2.1 Dictionary and sprite generator The central component of our representation is a global dictionary of m textured patches or sprites D = {P_1, . . . , P_m}, where each P_i ∈ R^{4×k×k} is an RGBA patch. Our sprites have an alpha channel, which allows them to be partially transparent, with possibly irregular (i.e., non-square) boundaries. This is useful for representing animations with multiple depth layers and also allows us to learn sprites smaller than their maximal resolution, if necessary, by setting alpha to zero around the boundary. The dictionary is shared among all frames; we reconstruct frames using only sprites from the dictionary. Instead of optimizing for RGBA pixel values directly, we represent the dictionary as a set of trainable latent codes {z_1, . . . , z_m}, with z_i ∈ R^d. We decode these codes into RGBA sprites using a fully-connected sprite generator P_i = G(z_i). This latent representation allows us to define a similarity metric over the latent space, which we use to pair anchors with dictionary sprites to best reconstruct the input frame (§2.3). At test time, we can forego the sprite generator and edit the RGBA sprites directly. Unless otherwise specified, we set the latent dimension to d = 128 and patch size to k = 32. We randomly initialize the latent codes from the standard normal distribution. Our sprite generator first applies zero-mean unit-variance normalization—Layer Normalization [3], without an affine transformation—to each latent code z_i individually, followed by one fully-connected hidden layer with 8d features, Group Normalization [55], and ReLU activation. We obtain the final sprite using a fully-connected layer with sigmoid activation to keep RGBA values in [0, 1]. Latent code normalization is crucial to stabilize training and keep the latent space in a compact subspace as the optimization progresses. See §3.3 for an ablation study of this and other components.
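A practical consequence of the design just described is that, once training is done, the generator can be evaluated once on every latent code and then set aside, leaving a plain RGBA sprite sheet that can be edited directly. The sketch below shows such an export step; the function name, the optional used-entry mask, and the stand-in generator are hypothetical.

```python
import torch

@torch.no_grad()
def export_sprite_sheet(codes, generator, used_mask=None):
    """Decode every dictionary latent code once into an editable RGBA sprite sheet.

    codes     : (m, d) trained latent codes {z_1, ..., z_m}
    generator : callable mapping (m, d) codes to (m, 4, k, k) RGBA patches
    used_mask : optional (m,) bool tensor marking dictionary entries that are
                actually selected somewhere in the data (unused ones can be dropped)
    """
    sheet = generator(codes).clamp(0.0, 1.0)          # (m, 4, k, k), values in [0, 1]
    if used_mask is not None:
        sheet = sheet[used_mask]
    return sheet                                       # edit pixels here, then recomposite

# Toy usage with a stand-in generator (a fixed random linear map instead of the trained MLP).
m, d, k = 150, 128, 32
codes = torch.randn(m, d)
weights = torch.randn(d, 4 * k * k)
fake_generator = lambda z: torch.sigmoid(z @ weights).view(-1, 4, k, k)
sheet = export_sprite_sheet(codes, fake_generator, used_mask=torch.rand(m) > 0.5)
print(sheet.shape)
```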
2.2 Layered frame decomposition using sprite anchors

We seek a decomposition that best explains each input frame using dictionary sprites. We exploit translation invariance and locality in our representation; our sprites are "attached" to a regular grid of reference points, or anchors, inspired by [13, 47]. Each anchor has at most one sprite; we call it inactive if it has none. We give the sprites freedom of motion around their anchors to factorize structures that may not be aligned with the anchor grid. This local—or, Eulerian—viewpoint makes inference tractable and avoids the pitfalls of tracking the global motion of all the sprites across the canvas (a Lagrangian viewpoint). To enable multiple layers with sprite occlusions, we output ℓ > 1 anchor grids for each frame (ℓ = 2 in our experiments). Figure 2 illustrates our layered anchor grids and local sprite transformations. We use a convolutional encoder E to map the w×h RGB frame I_i to ℓ grids of anchors, with resolution 2w/k × 2h/k. Each anchor j in layer l is represented by a feature vector a_j^l ∈ R^d characterizing local image appearance around the anchor and an active/inactive switch probability p_j^l ∈ [0, 1]. Our frame encoder contains log_2(k) − 1 downsampling blocks, which use partial convolutions [36] with kernel size 3 and stride 2 (for downsampling), Group Normalization, and Leaky ReLU. It produces a tensor of intermediate features for each layer, which are normalized with LayerNorm. From these, we obtain the anchor switches with an MLP with one hidden layer of size d followed by Group Normalization and Leaky ReLU. We get anchor features using a linear projection followed by LayerNorm. The encoder architecture is illustrated in Figure 3.

2.3 Per-anchor sprite selection

Once we have the layered anchor grids for the input frame, we need to assign sprites to the active anchors. We do this by scoring every dictionary element i against each anchor j at layer l, using a softmax over dot products between dictionary codes and anchor features:

s_{ij}^l = exp(a_j^l · z_i / √d) / Σ_{k=1}^{m} exp(a_j^l · z_k / √d).   (1)

Recall that both the anchor features and dictionary latent codes are individually normalized using a Layer Normalization operator. Restricting both latent spaces to a compact subspace helps stabilize the optimization and avoid getting stuck in local optima. During training, each anchor's sprite is a weighted combination of the dictionary elements, masked by the anchor's active probability:

S_j^l = p_j^l Σ_{i=1}^{m} s_{ij}^l P_i.   (2)

This soft patch selection allows gradients to propagate to both dictionary and anchor features during training. Except for natural image and video datasets, at test time, we use hard selections, i.e., for each anchor, we pick the sprite (S_j^l := P_i) with highest score s_{ij}^l and binarize the switches p_j^l ∈ {0, 1}.

2.4 Local sprite transformations

In real animations, sprites rarely perfectly align with our regular anchor grid, so, to avoid learning several copies of the same sprites (e.g., all sub-grid translations of a given image pattern), we allow sprites to move around their anchors. In our implementation, we only allow 2D translations of up to 1/2 the sprite size on each side of the anchor, i.e., T_j^l = (x_j^l, y_j^l) ∈ [−k/2, k/2]^2. We use a convolutional network to predict the translation offsets from the anchor's sprite and a crop of the input frame centered around the anchor, with identical spatial dimensions.
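To make Eqs. (1)–(2) concrete, below is a short sketch of the per-anchor scoring and soft sprite selection, assuming PyTorch; the tensor shapes and function name are illustrative choices of ours, not taken from the authors' implementation.

```python
import torch

def select_sprites(anchor_feats, switch_probs, codes, sprites):
    """Soft per-anchor sprite selection (a sketch of Eqs. 1-2).

    anchor_feats : (L, A, d)     anchor features a_j^l (A anchors per layer, L layers)
    switch_probs : (L, A)        active/inactive probabilities p_j^l in [0, 1]
    codes        : (m, d)        dictionary latent codes z_i
    sprites      : (m, 4, k, k)  decoded RGBA patches P_i
    returns      : (L, A, 4, k, k) one soft sprite per anchor
    """
    d = codes.shape[-1]
    # Eq. (1): softmax over dot products between anchor features and dictionary codes.
    logits = torch.einsum("lad,md->lam", anchor_feats, codes) / d ** 0.5
    scores = torch.softmax(logits, dim=-1)                       # s_ij^l
    # Eq. (2): weighted combination of dictionary patches, masked by the switch probability.
    soft = torch.einsum("lam,mchw->lachw", scores, sprites)
    return switch_probs[..., None, None, None] * soft

# At test time one would instead take hard selections:
#   best = scores.argmax(-1); hard_sprite = sprites[best]; and binarize the switches.
```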
The transformation predictor follows the architecture of E followed by an MLP with a single hidden layer of size d, Group Normalization, and Leaky ReLU. Specifically, we concatenate the image crop and the anchor's sprite S_j^l along the channel dimension and pass this tensor through the network to obtain the x_j^l and y_j^l offsets. An output layer projects to two dimensions (horizontal and vertical shift) and applies tanh to restrict the range. We apply the shifts to the patches using a spatial transformer [21].

2.5 Compositing and reconstruction

Each anchor in our layered representation is now equipped with a sprite S_j^l and a transformation T_j^l. For each layer l, we transform the sprites in their anchor's local coordinate system and render them onto the layer's canvas, initialized as fully transparent. Because of the local transformation, neighboring sprites within a layer may overlap. When this happens, we randomly choose an ordering, as in Figure 2. This random permutation encourages our model to either avoid overlapping sprites within the same layer or make the sprite colors agree in the overlap region, since these are the only two options that yield the same rendering regardless of the random z-ordering. Note that sprites on distinct layers are not shuffled. The shuffling prevents the network from abusing the compositing to cover patches with others from the same layer. We optionally learn a background texture to capture elements that cannot be explained using sprites. This can be thought of as a special patch of resolution greater than that of a single frame. For each frame, we learn a (discrete) position offset in the background from which to crop. We represent these offsets as discrete pixel shifts using a softmax classification (independently for each spatial dimension). We found this encoding better behaved than using a continuous offset with a spatial transformer—the discrete encoding allows the gradient signal to propagate to all shifts rather than the weak local gradient from bilinear interpolation (see §3.3 for an ablation). We combine the background and sprite layers via standard alpha compositing [45]. Figure 8 shows a learned background. In some experiments, we use a simpler background model: a fixed solid color, determined by analyzing the data before training. In this variant, we sample 100 random frames, cluster the pixel values into 5 clusters using k-means, and choose the largest cluster center as the background color.

2.6 Training procedure

Our pipeline is fully differentiable. We train the dictionary of latent codes, sprite generator, frame encoder, transformation predictor, and background layer jointly, minimizing the L2 distance between our reconstructions and ground truth frames. We also employ two regularizers: a Beta distribution prior on switches and dictionary element scores favors values close to 0 or 1, and an L1 loss on switches favors a sparser solution without superfluous patches. Our final loss function for a single input is:

L(·) = (1/(wh)) ‖O − I‖²_2 + (k²/(4ℓwh)) Σ_{l=1}^{ℓ} Σ_{j=1}^{(2w/k)·(2h/k)} [ λ_Beta ( (1/m) Σ_{i=1}^{m} Beta(2, 2)(s_{ij}^l) + Beta(2, 2)(p_j^l) ) + λ_sparse |p_j^l| ],   (3)

where O is the result of compositing the background and sprite layers; we optimize {s_{ij}^l}, {p_j^l}, and O. We set λ_sparse = 0.005 and train for 200,000 steps (∼20 hours) with λ_Beta = 0.002 and finetune for 10,000 steps with λ_Beta = 0.1. For natural images and video, we set λ_Beta = 0.
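For reference, a hedged sketch of the objective in Eq. (3): an L2 reconstruction term plus a Beta(2, 2) density penalty that pushes scores and switches toward 0 or 1, and an L1 sparsity term on the switches. The helper names and tensor shapes are our assumptions; only the overall form of the loss and the default weights λ_sparse = 0.005 and λ_Beta = 0.002 come from the text.

```python
import torch

def beta22_density(x, eps=1e-6):
    # Density of a Beta(2, 2) distribution: 6 x (1 - x). It is largest at 0.5 and zero at 0 and 1,
    # so using it as a penalty pushes scores and switches toward {0, 1}.
    x = x.clamp(eps, 1 - eps)
    return 6.0 * x * (1.0 - x)

def sprite_loss(recon, target, scores, switches, k, num_layers,
                lam_beta=0.002, lam_sparse=0.005):
    """Sketch of Eq. (3). recon/target: (3, h, w); scores: (L, A, m); switches: (L, A)."""
    w, h = target.shape[-1], target.shape[-2]
    rec = ((recon - target) ** 2).sum() / (w * h)
    per_anchor = lam_beta * (beta22_density(scores).mean(dim=-1)    # average over the dictionary
                             + beta22_density(switches)) \
                 + lam_sparse * switches.abs()
    reg = (k ** 2 / (4 * num_layers * w * h)) * per_anchor.sum()    # sum over layers and anchors
    return rec + reg
```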
We use the AdamW [39] optimizer on a GeForce GTX 1080 GPU, with batch size 4 and learning rate 0.0001, except for the background module (learning rate 0.001 when used).

3 Experimental Results

We evaluate our self-supervised decomposition on several real (non-synthetic) datasets, compare to related work, and conduct an ablation study. In figures, we use a checkerboard to show transparency. Dictionary order is determined by sorting along a 1-dimensional t-SNE embedding of the sprite latent codes. We find this ordering tends to group semantically similar sprites, making the dictionary easier to interpret and manipulate. While our models are trained with a dictionary of 150 patches, not all patches end up being used; we only show the used patches.

3.1 Comparisons

While to our knowledge no prior works target differentiable unsupervised sprite-based reconstruction, we compare to two state-of-the-art methods that obtain similarly disentangled representations. In Figure 4, we compare to SPACE [34] and Slot Attention [37]. The former decomposes a scene into a foreground layer consisting of several objects as well as a background, segmented into three layers. The latter deconstructs a scene into discrete "slots." We train both methods to convergence using their default parameters. While both reconstruct the input frames faithfully, SPACE only recognizes a few sprites in its foreground layer, and Slot Attention does not provide a semantically meaningful decomposition. In contrast, not only does our method model the entire scene using learned sprites, but also it factors out the sprites to form a consistent, sparse dictionary shared for the entire sequence. Additionally, we evaluate on a synthetically-generated sprite-based game from [10], which is made of sprites on a solid background. We compare quantitatively to SPACE in Figure 6 and show qualitative results in Figure 5. Since we have a ground truth segmentation of each scene into sprites, we compute a matching between learned dictionary patches and sprites by associating each dictionary patch with the sprite that it most frequently overlaps across the dataset. We visualize dictionary patches next to their respective sprites. We also use this labeling to compute segmentation metrics. In particular, we report mean IoU in the multiclass case (where each sprite is a distinct class) as well as in the binary case (foreground/background). Because SPACE does not learn a common dictionary, we are unable to obtain a labeling for its foreground elements and, consequently, cannot evaluate its multiclass metric. For the binary metric, we obtain a significantly higher value, since SPACE defers many sprites to the background, whereas our method learns the sprites as dictionary elements. To show that our model learns more than simple motion features, we also compare to two conventional (non-learning) baselines. In Figure 7(a), we compare a segmentation of a frame obtained by clustering optical flow directions using k-means (inspired by Liu et al. [35]) to one generated using our learned decomposition. The flow-based approach is unable to capture many of the details in the frame. In (b), we show the normalized dictionary obtained using an online dictionary learning method [43]. Because this method does not have the inductive biases of our model, the resulting dictionary is not easily interpretable or editable.

3.2 Sprite-based game deconstruction

We train on Fighting Hero (one level, 5,330 frames), Nintendo Super Mario Bros. (one level, 2,220 frames), and ATARI Space Invaders (5,000 frames). We use patch size k = 32 for Mario and Fighting Hero and k = 16 for Space Invaders. For Fighting Hero, we learn a background, as described in §2.5. The sprites, background, and example frame reconstructions are shown in Figure 8. Our model successfully disentangles foreground from background and recovers a reasonable sprite sheet for each game. Having reverse-engineered the games, we can use the decomposition for applications like editing. In Figure 9, we demonstrate a GUI that allows the user to move sprites around the screen.

3.3 Ablation Study

Figure 10: Ablation study on Mario data across five random seeds.
Model                       PSNR
Smaller patches             28.85 ± 0.95
Full                        28.04 ± 0.72
No LayerNorm                26.05 ± 0.45
Smaller dictionary          23.80 ± 1.38
Larger patches              23.63 ± 1.05
Straight-through switches   22.15 ± 0.25

Figure 11: Mario dictionary with 16×16 patches.

We show an ablation study on the Mario data. We train our full model, one with smaller 16×16 patches, another with larger 64×64 patches, a model with a smaller dictionary (25 elements), a model without LayerNorm, and one where we use a straight-through estimator [22] to learn discrete switches p_j^l in lieu of Beta regularization. We train each model with five random seeds and report the reconstruction PSNR means and standard deviations in Figure 10. This experiment verifies the importance of LayerNorm in our architecture and shows that the straight-through trick is ineffective in our setting. Though the smaller patches model achieves slightly higher mean PSNR than our full model, more of the sprites are split across dictionary patches (Figure 11), illustrating how the patch size choice sets an inductive bias for our decomposition. We also justify our choice for learning background shifts via classification (§2.5) rather than regression, i.e., using spatial transformers. Figure 12 shows the background learned using a spatial transformer. In contrast to our full model (Figure 8), the original background is not discovered, and most of the canvas is unused. We suspect that this is due to lack of gradient signal from background pixels that do not get rendered at each training step.

3.4 Future Directions and Limitations

While our method is designed with sprite-based animation in mind, it can generalize to natural images and videos. An exciting direction for future work is to incorporate more expressive transformations so as to discover recurring content in generic videos. Here, we obtain preliminary results using our approach and achieve interesting decompositions even without modifications to our sprite-based model. In Figure 14, we show results on a tennis video (4,000 frames). The model learns parts of the player's body (head, limbs, shirt, etc.) as sprites and captures most of the tennis court in the learned background. By simply selecting the player sprites in the dictionary, we segment the entire video clip. Our model can also discover recurring patterns in a single natural image. We train on random crops of a 768×512 photograph from the 2013 US Presidential Inauguration (AP Photo/Cliff Owen), which contains many repeating elements such as stairs, columns, and people. With a dictionary of 39 32×32 sprites (39,936 pixels), we recover much of the detail of the original 393,216 pixels. We demonstrate further limitations of our approach by applying it to automatic font discovery. We train on random 128×128 crops of six scanned pages of Moby Dick, each of approximately 500×800 resolution.
Figure 15 shows an input text excerpt, our reconstruction, and the learned dictionary. This dataset differs significantly from our other testing datasets. Each input frame consists of many densely packed sprites (∼100 glyphs in each 128×128 crop), and many individual glyphs consist of smaller repeating elements. We hypothesize that because of these issues, combined with a lack of motion cues between frames, we do not achieve a perfect reconstruction, learning certain sprites with multiple glyphs and others with just partial glyphs. Incorporating priors tailored to regularly structured and dense data like text is a direction for future research. 4 Conclusion We present a self-supervised method to jointly learn a patch dictionary and a frame encoder from a video, where the encoder explains frames as compositions of dictionary elements, anchored on a regular grid. By generating layers of alpha-masked sprites and predicting per-sprite local transformation, we recover fine-scale motion and achieve high-quality reconstructions with semantically meaningful, well-separated sprites. Applied to content with significant recurrence, our approach recovers structurally significant patterns. Understanding recurring patterns and their relationships is central to machine learning. Learning to act intelligently in video games or in the physical world requires breaking experiences down into elements between which knowledge can be transferred effectively. Our sprite-based decomposition provides an intuitive basis for this purpose. In this work, we focus on a simplified video domain. In the future, we would like to expand the range of deformations applied to the learned dictionary elements, such as appearance or shape changes. Our work opens significant avenues for future research to explore recurrences and object relationships in more complex domains. Acknowledgements The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grants W911NF2010168 and W911NF2110293, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grants IIS-1838071 and CHS-1955697, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, from an MIT.nano Immersion Lab/NCSOFT Gaming Program seed grant, and from the Skoltech–MIT Next Generation Program. This work was also supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
1. What is the focus and contribution of the paper regarding image generation using sprite-based layers? 2. What are the strengths and weaknesses of the proposed model, particularly in its novelty, technical execution, and potential applications? 3. Do you have any questions or concerns about the partial convolutions used in the model, the determination of the background canvas size, or the lack of motion cues in the model? 4. Are there any recent works related to layered decomposition models that could be added to the paper for further comparison and context?
Summary Of The Paper Review
Summary Of The Paper
The authors propose a generative layered model of images based on sprites — the idea is that there is a dictionary of (possibly transparent) sprites which are arranged on layers which are, together with an optionally learned background layer, composited back-to-front to form a given image. The task is to learn the sprite dictionary and their poses and depth ordering on a given frame. The paper shows successful results on frames from Atari/SNES era video games (which actually did use sprites) and some intriguing (but very much preliminary) one-off results on real images, training on patches sampled at random from those images.

Review
I like several things about this paper. First, there is pretty good execution in that technically the model makes sense, and these models are never easy to get right (it's often easy for these models to "cheat", e.g. by putting all of their information on one layer). It also seems like a clearly novel model and also outperforms some natural competitors, at least on old video game data. On the other hand, the main weakness is that the proposed model certainly does not seem terribly general — though it is interesting that the authors are able to show that it can be a generic dictionary learning approach (even if not a very good one). Overall I'm a bit biased in favor of this paper because the model is elegant and thought provoking, so I would like to see it published, with the admission that the motivation for working on such a model is not particularly compelling and the application space seems narrow.

Some more detailed comments/questions:
Why partial convolutions? I was not familiar with this concept and looked at the reference, but it was not clear what was used as the mask for the partial convolutions in this paper and why it was necessary (is there an ablation for this?)
In general, how does one decide on the background canvas size?
I notice that motion cues are referenced a few times in the paper, but don't seem to actually be used by the model? It does make a lot of sense to assume that sprites move smoothly across the background --- and perhaps have a learnable transition model through the sprites to reflect appearance change of an object.
Other related work (recent layered decomposition models based on similar alpha compositing and reconstruction assumptions) that I recommend adding:
** Omnimatte (https://omnimatte.github.io/)
** Multiplane images (e.g., https://augmentedperception.github.io/deepview/)
** Efficient inference in occlusion-aware generative models of images (https://arxiv.org/abs/1511.06362)
NIPS
Title MarioNette: Self-Supervised Sprite Learning

Abstract
Artists and video game designers often construct 2D animations using libraries of sprites—textured patches of objects and characters. We propose a deep learning approach that decomposes sprite-based video animations into a disentangled representation of recurring graphic elements in a self-supervised manner. By jointly learning a dictionary of possibly transparent patches and training a network that places them onto a canvas, we deconstruct sprite-based content into a sparse, consistent, and explicit representation that can be easily used in downstream tasks, like editing or analysis. Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision.

Since the early days of machine learning, the accepted unit of image synthesis has been the pixel. But while the pixel grid is a natural representation for display hardware and convolutional generators, it does not easily permit high-level reasoning and editing. In this paper, we take inspiration from animation to consider an atomic unit that is richer and easier to edit than the pixel: the sprite. In sprite-based animation, a popular early technique for drawing cartoons and rendering video games, an artist draws a collection of patches—a sprite sheet—consisting of texture swatches, characters in various poses, static objects, and so on. Then, each frame is assembled by compositing a subset of the patches onto a canvas. By reusing the sprite sheet, authoring new content requires minimal effort and can even be automated procedurally. Our goal is to invert this process, simultaneously tackling unsupervised instance segmentation and dictionary learning. Given an image dataset, e.g., frames from a sprite-based video game, we train a model that jointly learns a 2D sprite dictionary, capturing recurring visual elements in an image collection, and explains each input frame as a combination of these potentially transparent sprites. Whereas standard CNN-based generators hide their feature representation in their intermediate layers, our model wears its representation "on its sleeve": by explicitly compositing sprites from its learnt dictionary onto a background canvas, rather than synthesizing pixels from hidden neural features, it provides a readily-interpretable visual representation. Our contributions include the following:
• We describe a grid-based anchor system along with a learned dictionary of textured patches (with transparency) to extract a sprite-based image representation.
• We propose a method to learn the patch dictionary and the grid-based representation jointly, in a differentiable, end-to-end fashion.
• We compare to past work on learned disentangled graphics representations for video games.
• We show how our method offers promising avenues for further work towards identifying visual patterns in more complex data such as natural images and video.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
1. What is the focus of the paper regarding image disentanglement? 2. What are the strengths of the proposed framework, particularly in its ability to handle real-world data? 3. What are the weaknesses of the paper, especially regarding its task's practical usefulness and game specificity? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any concerns or questions regarding the paper's experiments, figures, or minor issues?
Summary Of The Paper Review
Summary Of The Paper
The paper presents a framework for unsupervised disentanglement of images from 2D games into sprites. The framework consists of learning a sprite dictionary, predicting corresponding sprites based on information from the frame, and rendering these sprites. The framework is supervised with a reconstruction loss plus some regularization. Overall the solution makes sense and seems well thought out; however, the task of 2D game decomposition is not that broad.

Review
Positive: The solution to the problem makes sense and seems well thought out. The paper contains an extensive evaluation and ablation study. The paper tries to apply the proposed method to real-world data, e.g. scanned text, presidential photos, and tennis matches.

Negative: The paper considers a 2D game decomposition task in computer vision fashion; however, it is questionable whether this task has any practical usefulness. Indeed, one could take game sprites from the game source code and similarly take their arrangement from there. The proposed method is very game specific; for each game it requires tuning many parameters: depending on the size of sprites in the game, the size of the grid needs to be adapted; the background model needs to be adapted as well, for example in a game like Mario a separate background for each level needs to be learned (here only one level is considered); the number of layers needs to be selected if there are more than 2 layers in the game.

Questions: In Fig. 8 (Mario), the "layer 2" contains some score and letter renderings which are practically unreadable; however, when they are used in the reconstruction they look almost perfect. How did this happen? Is the layer just overlaid with the background for rendering, or is some other post-processing applied? For the "Transform Predictor", why are only shifts considered; would it be possible to use arbitrary affine transformations?

Some minor issues: Figure 10 and Figure 6 should be Table 10 and Table 6.
NIPS
Title Graph Neural Networks as Gradient Flows

Abstract
Dynamical systems minimizing an energy are ubiquitous in geometry and physics. We propose a gradient flow framework for GNNs where the equations follow the direction of steepest descent of a learnable energy. This approach allows us to analyse the GNN evolution from a multi-particle perspective as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric 'channel-mixing' matrix. We perform spectral analysis of the solutions and conclude that gradient flow graph convolutional models can induce a dynamics dominated by the graph high frequencies, which is desirable for heterophilic datasets. We also describe structural constraints on common GNN architectures allowing them to be interpreted as gradient flows. We perform thorough ablation studies corroborating our theoretical analysis and show competitive performance of simple and lightweight models on real-world homophilic and heterophilic datasets.

1 Introduction and motivations

Graph neural networks (GNNs) [38, 20, 21, 36, 7, 15, 27] and in particular their Message Passing formulation (MPNN) [19] have become the standard ML tool for dealing with different types of relations and interactions, ranging from social networks to particle physics and drug design. One of the often cited drawbacks of traditional GNN models is their poor 'explainability', making it hard to know why and how they make certain predictions [46, 47], and in which situations they may work and when they would fail. Limitations of GNNs that have attracted attention are over-smoothing [29, 30, 8], over-squashing and bottlenecks [1, 40], and performance on heterophilic data [31, 51, 13, 4, 45] – where adjacent nodes usually have different labels.

Contributions. We propose a Gradient Flow Framework (GRAFF) where the GNN equations follow the direction of steepest descent of a learnable energy. Thanks to this framework we can (i) interpret GNNs as a multi-particle dynamics where the learned parameters determine pairwise attractive and repulsive potentials in the feature space. This sheds light on how GNNs can adapt to heterophily and explains their performance and the smoothness of the prediction. (ii) GRAFF leads to residual convolutional models where the channel-mixing W is performed by a shared symmetric bilinear form inducing attraction and repulsion via its positive and negative eigenvalues, respectively.
We theoretically investigate the interaction of the graph spectrum with the spectrum of the channel-mixing, proving that if there is more mass on the negative eigenvalues of W, then the dynamics is dominated by the graph high frequencies, which could be desirable on heterophilic graphs. We also extend results of [29, 30, 8] by showing that when we drop the residual connection intrinsic to the gradient flow framework, graph convolutional models always induce a low-frequency dominated dynamics independent of the sign and magnitude of the spectrum of the channel-mixing. We also discuss how simple choices make common architectures fit GRAFF and conduct thorough ablation studies to corroborate the theoretical analysis on the role of the spectrum of W. (iii) We crystallize an instance of our framework into a linear, residual, convolutional model that achieves competitive performance on homophilic and heterophilic real-world graphs whilst being faster than GCN.

Related work. Our analysis is related to studying GNNs as filters on the graph spectrum [15, 24, 2, 25] and over-smoothing [29, 30, 8, 50] and partly adopts techniques similar to [30]. The key difference is that we also consider the spectrum of the 'channel-mixing' matrix. The concept of gradient flows has been a standard tool in physics and geometry [16], from which they were adopted for image processing [26], and recently used in ML [35] for the analysis of Transformers [41] – see also [18] for discussion of loss landscapes. Our continuous-time evolution equations follow the spirit of Neural ODEs [22, 12, 3] and the study of GNNs as continuous dynamical systems [44, 10, 17, 9].

Outline. In Section 2, we review the continuous and discrete Dirichlet energy and the associated gradient flow framework. We formalize the notion of over-smoothing and low(high)-frequency-dominated dynamics to investigate GNNs and study the dominant components in their evolution. We extend the graph Dirichlet energy to allow for a non-trivial norm for the feature edge-gradient. This leads to gradient flow equations that diffuse the features and over-smooth in the limit. Accordingly, in Section 3 we introduce a more general energy with a symmetric channel-mixing matrix W giving rise to attractive and repulsive pairwise terms via its positive and negative eigenvalues and show that the negative spectrum can induce high-frequency-dominant dynamics. In Section 4 we first compare with continuous GNN models and then discretize the equations and provide a 'recipe' for making standard GNN architectures fit a gradient flow framework. We adapt the spectral analysis to discrete time, showing that gradient flow convolutional models can generate a dynamics dominated by the high frequencies via the negative eigenvalues of W, while this is impossible if we drop the residual connection. In Section 5 we corroborate our theoretical analysis on the role of the spectrum of W via ablation studies on graphs with varying homophily. Experiments on real-world datasets show a competitive performance of our model despite its simplicity and reduced number of parameters.

2 Gradient-flow formalism

Notations adopted throughout the paper. Let G = (V, E) be an undirected graph with n nodes. We denote by F ∈ R^{n×d} the matrix of d-dimensional node features, by f_i ∈ R^d its i-th row (transposed), by f^r ∈ R^n its r-th column, and by vec(F) ∈ R^{nd} the vectorization of F obtained by stacking its columns.
Given a symmetric matrix B, we let B + , B denote its most positive and71 negative eigenvalues, respectively, and ⇢B be its spectral radius. If B ⌫ 0, then gap(B) denotes the72 positive smallest eigenvalue of B. ḟ(t) denotes the temporal derivative, ⌦ is the Kronecker product73 and ‘a.e.’ means almost every w.r.t. Lebesgue measure and usually refers to data in the complement74 of some lower dimensional subspace in Rn⇥d. Proofs and additional results appear in the Appendix.75 Starting point: a geometric parallelism. To motivate a gradient-flow approach for GNNs, we start76 from the continuous case (see Appendix A.1 for details). Consider a smooth map f : Rn ! (Rd, h)77 with h a constant metric represented by H ⌫ 0. The Dirichlet energy of f is defined by78 E(f, h) = 1 2 Z Rn krfk2h dx = 1 2 dX q,r=1 nX j=1 Z Rn hqr@jf q@jf r (x)dx (1) and measures the ‘smoothness’ of f . A natural approach to find minimizers of E - called harmonic79 maps - was introduced in [16] and consists in studying the gradient flow of E , wherein a given map80 f(0) = f0 is evolved according to ḟ(t) = rfE(f(t)). These type of evolution equations have81 historically been the core of variational and PDE-based image processing; in particular, gradient82 flows of the Dirichlet energy were shown [26] to recover the Perona-Malik nonlinear diffusion [32].83 Motivation: GNNs for node-classification. We wish to extend the gradient flow formalism to node84 classification on graphs. Assume we have a graph G, node-features F0 and labels {yi} on Vtrain ⇢ V,85 and that we want to predict the labels on Vtest ⇢ V. A GNN typically evolves the features via some86 parametric rule, GNN✓(G,F0), and uses a decoding map for the prediction y = DE(GNN✓(G,F0)).87 In graph convolutional models [15, 27], GNN✓ consists of two operations: applying a shared linear88 transformation to the features (‘channel mixing’) and propagating them along the edges of the graph89 (‘diffusion’). Our goal consists in studying when GNN✓ is the gradient flow of some parametric class90 of energies E✓ : Rn⇥d ! R, which generalize the Dirichlet energy. This means that the parameters91 can be interpreted as ‘finding the right notion of smoothness’ for our task. We evolve the features by92 Ḟ(t) = rFE✓(F(t)) with prediction y = DE(F(T )) for some optimal time T .93 Why a gradient flow? Since Ė✓(F(t)) = ||rFE✓(F(t))||2, the energy dissipates along the gradient94 flow. Accordingly, this framework allows to explain the GNN dynamics as flowing the node features95 in the direction of steepest descent of E✓. Indeed, we find that parametrizing an energy leads to96 equations governed by attractive and repulsive forces that can be controlled via the spectrum of97 symmetric ‘channel-mixing’ matrices. This shows that by learning to distribute more mass over the98 negative (positive) eigenvalues of the channel-mixing, gradient flow models can generate dynamics99 dominated by the higher (respectively, lower) graph frequencies and hence tackle different homophily100 scenarios. The gradient flow framework also leads to sharing of the weights across layers (since we101 parametrize the energy rather than the evolution equations, as usually done in GNNs), allowing us to102 reduce the number of parameters without compromising performance (see Table 1).103 Analysis on graphs: preliminaries. Given a connected graph G with self-loops, its adjacency104 matrix A is defined as aij = 1 if (i, j) 2 E and zero otherwise. We let D = diag(di) be the degree105 matrix and write Ā := D 1/2AD 1/2. 
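For concreteness, here is a minimal NumPy sketch (ours, not from the paper; the toy edge list is an invented example) that builds the adjacency with self-loops, the degree matrix and the normalized adjacency A_bar = D^{-1/2} A D^{-1/2} used throughout, and checks that A_bar is symmetric with eigenvalues in [-1, 1].

import numpy as np

# Toy undirected graph on 4 nodes (arbitrary example).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

A = np.eye(n)                          # self-loops: a_ii = 1
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

deg = A.sum(axis=1)                    # degrees d_i
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_bar = D_inv_sqrt @ A @ D_inv_sqrt    # normalized adjacency

# A_bar is symmetric and its spectrum lies in [-1, 1];
# the Laplacian Delta = I - A_bar introduced next is positive semi-definite.
assert np.allclose(A_bar, A_bar.T)
print(np.linalg.eigvalsh(A_bar))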
Let $F \in \mathbb{R}^{n\times d}$ be the matrix representation of a signal. Its graph gradient is $(\nabla F)_{ij} := f_j/\sqrt{d_j} - f_i/\sqrt{d_i}$. We define the Laplacian as $\Delta := \tfrac{1}{2}\,\mathrm{div}\,\nabla$ (the divergence $\mathrm{div}$ is the adjoint of $\nabla$), represented by $\Delta = I - \bar A \succeq 0$. We refer to the eigenvalues of $\Delta$ as frequencies: the lowest frequency is always $0$ while the highest frequency is $\rho_\Delta \le 2$ [14]. As in the continuum case, the gradient allows us to define a (graph) Dirichlet energy as [49]

$$E^{\mathrm{Dir}}(F) := \frac{1}{4}\sum_i \sum_{j:(i,j)\in E} \|(\nabla F)_{ij}\|^2 \;\equiv\; \frac{1}{4}\sum_{(i,j)\in E} \Big\| \frac{f_i}{\sqrt{d_i}} - \frac{f_j}{\sqrt{d_j}} \Big\|^2 \;=\; \frac{1}{2}\,\mathrm{trace}(F^\top \Delta F), \qquad (2)$$

where the extra $\tfrac{1}{2}$ is for convenience. As for manifolds, $E^{\mathrm{Dir}}$ measures smoothness. If we stack the columns of $F$ into $\mathrm{vec}(F)\in\mathbb{R}^{nd}$, the gradient flow of $E^{\mathrm{Dir}}$ yields the heat equation on each channel:

$$\mathrm{vec}(\dot F(t)) = -\nabla_{\mathrm{vec}(F)} E^{\mathrm{Dir}}(\mathrm{vec}(F(t))) = -(I_d \otimes \Delta)\,\mathrm{vec}(F(t)) \;\Longleftrightarrow\; \dot f^r(t) = -\Delta f^r(t), \qquad (3)$$

for $1 \le r \le d$. Similarly to [8], we rely on $E^{\mathrm{Dir}}$ to assess whether a given dynamics $t \mapsto F(t)$ is a smoothing process. A different choice of Laplacian $L = D - A$ with non-normalized adjacency induces the analogous Dirichlet energy $E^{\mathrm{Dir}}_L(F) = \tfrac{1}{2}\,\mathrm{trace}(F^\top L F)$. Throughout this paper, we rely on the following definitions (see Appendix A.3 for further equivalent formulations and justifications):

Definition 2.1. $\dot F(t) = \mathrm{GNN}_\theta(F(t), t)$ initialized at $F(0)$ is smoothing if $E^{\mathrm{Dir}}(F(t)) \le C + \varphi(t)$, with $C$ a constant only depending on $E^{\mathrm{Dir}}(F(0))$ and $\dot\varphi(t) \le 0$. Over-smoothing occurs if either $E^{\mathrm{Dir}}(F(t)) \to 0$ or $E^{\mathrm{Dir}}_L(F(t)) \to 0$ for $t \to \infty$.

Our notion of 'over-smoothing' is a relaxed version of the definition in [34] – although in the linear case one always finds an exponential decay of $E^{\mathrm{Dir}}$. We note that $E^{\mathrm{Dir}}(F(t)) \to 0$ iff $\Delta f^r(t) \to 0$ for each column $f^r$. As in [30], this corresponds to a loss of separation power along the solution, where nodes with equal degree become indistinguishable since we converge to $\ker(\Delta)$ (if we replaced $\Delta$ with $L$ then we would not even be able to separate nodes with different degrees in the limit).

To motivate the next definition, consider $\dot F(t) = \bar A F(t)$. Despite $\|F(t)\|$ being unbounded for a.e. $F(0)$, the low-frequency components grow the fastest and indeed $F(t)/\|F(t)\| \to F_\infty$ s.t. $\Delta f^r_\infty = 0$ for $1 \le r \le d$. We formalize this scenario – including the opposite case of high-frequency components being dominant – by studying $E^{\mathrm{Dir}}(F(t)/\|F(t)\|)$, i.e. the Rayleigh quotient of $I_d \otimes \Delta$.

Definition 2.2. $\dot F(t) = \mathrm{GNN}_\theta(F(t), t)$ initialized at $F(0)$ is Low/High-Frequency-Dominant (L/HFD) if $E^{\mathrm{Dir}}(F(t)/\|F(t)\|) \to 0$ (respectively, $E^{\mathrm{Dir}}(F(t)/\|F(t)\|) \to \rho_\Delta/2$) for $t \to \infty$.

We report a consequence of Definition 2.2 and refer to Appendix A.3 for additional details and motivations for the characterizations of LFD and HFD.

Lemma 2.3. $\mathrm{GNN}_\theta$ is LFD (HFD) iff for each $t_j \to \infty$ there exist $t_{j_k} \to \infty$ and $F_\infty$ s.t. $F(t_{j_k})/\|F(t_{j_k})\| \to F_\infty$ and $\Delta f^r_\infty = 0$ ($\Delta f^r_\infty = \rho_\Delta f^r_\infty$, respectively).

If a graph is homophilic, adjacent nodes are likely to share the same label and we expect a smoothing or LFD dynamics enhancing the low-frequency components to be successful at node classification tasks [43, 28]. In the opposite case of heterophily, the high-frequency components might contain more relevant information for separating classes [4, 5] – the prototypical example being the eigenvector of $\Delta$ associated with the largest frequency $\rho_\Delta$, which separates a regular bipartite graph. In other words, the class of heterophilic graphs contains instances where signals should be sharpened by increasing $E^{\mathrm{Dir}}$ rather than smoothed out.
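As a numerical sanity check of eq. (2)-(3) and Definition 2.1 (a sketch of ours, with an invented toy graph and random features), the graph Dirichlet energy decays monotonically along explicit-Euler steps of the heat equation, i.e. the flow is smoothing.

import numpy as np

rng = np.random.default_rng(0)

# Toy graph with self-loops (arbitrary example), normalized adjacency and Laplacian.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
n, d = 5, 3
A = np.eye(n)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_bar = D_inv_sqrt @ A @ D_inv_sqrt
Delta = np.eye(n) - A_bar              # positive semi-definite graph Laplacian

def dirichlet(F):
    # E^Dir(F) = 1/2 trace(F^T Delta F), eq. (2)
    return 0.5 * np.trace(F.T @ Delta @ F)

F = rng.standard_normal((n, d))
tau = 0.1                              # explicit-Euler step size
energies = [dirichlet(F)]
for _ in range(50):
    F = F - tau * Delta @ F            # Euler step of the heat equation, eq. (3)
    energies.append(dirichlet(F))

# The Dirichlet energy is non-increasing: the dynamics is smoothing (Definition 2.1).
assert all(e1 <= e0 + 1e-12 for e0, e1 in zip(energies, energies[1:]))
print(energies[0], energies[-1])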
Accordingly, an ideal framework for learning on graphs must accommodate both of these opposite scenarios by being able to induce either an LFD or an HFD dynamics.

Parametric Dirichlet energy: channel-mixing as metric in feature space. In eq. (1) a constant nontrivial metric $h$ in $\mathbb{R}^d$ leads to the mixing of the feature channels. We adapt this idea by considering a symmetric positive semi-definite $H = W^\top W$ with $W \in \mathbb{R}^{d\times d}$ and using it to generalize $E^{\mathrm{Dir}}$ as

$$E^{\mathrm{Dir}}_W(F) := \frac{1}{4}\sum_{q,r=1}^{d}\sum_i \sum_{j:(i,j)\in E} h_{qr}\,(\nabla f^q)_{ij}(\nabla f^r)_{ij} = \frac{1}{4}\sum_{(i,j)\in E}\|W(\nabla F)_{ij}\|^2. \qquad (4)$$

We note the analogy with eq. (1), where the sum over the nodes replaces the integration over the domain and the $j$-th derivative at some point $i$ is replaced by the gradient along the edge $(i,j)\in E$. We generally treat $W$ as learnable weights and study the gradient flow of $E^{\mathrm{Dir}}_W$:

$$\dot F(t) = -\nabla_F E^{\mathrm{Dir}}_W(F(t)) = -\Delta F(t)\, W^\top W. \qquad (5)$$

We see that eq. (5) generalizes eq. (3). Below 'smoothing' is intended as in Definition 2.1.

Proposition 2.4. Let $P^{\ker}_W$ be the projection onto $\ker(W^\top W)$. Equation (5) is smoothing since

$$E^{\mathrm{Dir}}(F(t)) \le e^{-2t\,\mathrm{gap}(W^\top W)\,\mathrm{gap}(\Delta)}\,\|F(0)\|^2 + E^{\mathrm{Dir}}\big((P^{\ker}_W \otimes I_n)\,\mathrm{vec}(F(0))\big), \qquad t \ge 0.$$

In fact $F(t) \to F_\infty$ s.t. there exists $c_\infty \in \mathbb{R}^d$: for each $i \in V$ we have $(f_\infty)_i = \sqrt{d_i}\,c_\infty + P^{\ker}_W f_i(0)$.

Proposition 2.4 implies that no weight matrix $W$ in eq. (5) can separate the limit embeddings $F(\infty)$ of nodes with the same degree and input features. If $W$ has a trivial kernel, then nodes with the same degrees converge to the same representation and over-smoothing occurs as per Definition 2.1. Differently from [29, 30, 8], over-smoothing occurs independently of the spectral radius of the 'channel-mixing' if its eigenvalues are positive – even for equations which lead to residual GNNs when discretized [12]. According to Proposition 2.4, we do not expect eq. (5) to succeed on heterophilic graphs where smoothing processes are generally harmful – this is confirmed in Figure 2 (see the prod-curve). To remedy this problem, we generalize eq. (5) to a gradient flow that can be HFD as per Definition 2.2.

3 A general parametric energy for pairwise interactions

We first rewrite the energy $E^{\mathrm{Dir}}_W$ in eq. (4) as

$$E^{\mathrm{Dir}}_W(F) = \frac{1}{2}\sum_i \langle f_i, W^\top W f_i\rangle - \frac{1}{2}\sum_{i,j} \bar a_{ij}\,\langle f_i, W^\top W f_j\rangle. \qquad (6)$$

We then define a new, more general energy by replacing the occurrences of $W^\top W$ with new symmetric matrices $\Omega, W \in \mathbb{R}^{d\times d}$, since we also want to generate repulsive forces:

$$E^{\mathrm{tot}}(F) := \frac{1}{2}\sum_i \langle f_i, \Omega f_i\rangle - \frac{1}{2}\sum_{i,j} \bar a_{ij}\,\langle f_i, W f_j\rangle \;\equiv\; E^{\mathrm{ext}}_\Omega(F) + E^{\mathrm{pair}}_W(F), \qquad (7)$$

with associated gradient flow of the form (see Appendix B)

$$\dot F(t) = -\nabla_F E^{\mathrm{tot}}(F(t)) = -F(t)\,\Omega + \bar A F(t)\, W. \qquad (8)$$

Note that eq. (8) is the gradient flow of some energy $F \mapsto E^{\mathrm{tot}}(F)$ iff both $\Omega$ and $W$ are symmetric.

A multi-particle system point of view: attraction vs repulsion. Consider the $d$-dimensional node features as particles in $\mathbb{R}^d$ with energy $E^{\mathrm{tot}}$. While the term $E^{\mathrm{ext}}_\Omega$ is independent of the graph topology and represents an external field in the feature space, the second term $E^{\mathrm{pair}}_W$ constitutes a potential energy, with $W$ a bilinear form determining the pairwise interactions of adjacent node representations. Given a symmetric $W$, we write $W = \Theta_+^\top \Theta_+ - \Theta_-^\top \Theta_-$ by decomposing the spectrum of $W$ into positive and negative values. We can rewrite $E^{\mathrm{tot}} = E^{\mathrm{ext}}_{\Omega - W} + E^{\mathrm{Dir}}_{\Theta_+} - E^{\mathrm{Dir}}_{\Theta_-}$, i.e.

$$E^{\mathrm{tot}}(F) = \frac{1}{2}\sum_i \langle f_i, (\Omega - W) f_i\rangle + \frac{1}{4}\sum_{(i,j)\in E}\|\Theta_+(\nabla F)_{ij}\|^2 - \frac{1}{4}\sum_{(i,j)\in E}\|\Theta_-(\nabla F)_{ij}\|^2. \qquad (9)$$

The gradient flow of $E^{\mathrm{tot}}$ minimizes $E^{\mathrm{Dir}}_{\Theta_+}$ and maximizes $E^{\mathrm{Dir}}_{\Theta_-}$.
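A small NumPy illustration of eqs. (7)-(9) (ours; the toy graph, the random symmetric Omega and W and the step size are illustrative assumptions rather than learned quantities): it splits a symmetric W into its positive and negative spectral parts and checks that E^tot decreases along an explicit-Euler discretization of the gradient flow in eq. (8).

import numpy as np

rng = np.random.default_rng(1)

# Toy graph with self-loops and normalized adjacency (arbitrary example).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, d = 4, 3
A = np.eye(n)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_bar = D_inv_sqrt @ A @ D_inv_sqrt

# Random symmetric channel-mixing matrices (illustrative, not trained).
M = rng.standard_normal((d, d))
W = 0.5 * (M + M.T)
Omega = np.diag(rng.standard_normal(d))

def total_energy(F):
    # eq. (7): 1/2 sum_i <f_i, Omega f_i> - 1/2 sum_ij abar_ij <f_i, W f_j>
    return 0.5 * np.trace(F @ Omega @ F.T) - 0.5 * np.trace(A_bar @ F @ W @ F.T)

# Spectral split W = Theta_+^T Theta_+ - Theta_-^T Theta_- as in eq. (9):
# positive eigenvalues act as attraction, negative ones as repulsion.
evals, U = np.linalg.eigh(W)
W_attractive = U @ np.diag(np.maximum(evals, 0.0)) @ U.T
W_repulsive = U @ np.diag(np.maximum(-evals, 0.0)) @ U.T
assert np.allclose(W, W_attractive - W_repulsive)

# Explicit Euler on eq. (8): F(t+tau) = F(t) + tau * (-F Omega + A_bar F W).
F = rng.standard_normal((n, d))
tau = 0.01
energies = [total_energy(F)]
for _ in range(100):
    F = F + tau * (-F @ Omega + A_bar @ F @ W)
    energies.append(total_energy(F))

# E^tot dissipates along the discretized gradient flow.
assert all(e1 <= e0 + 1e-9 for e0, e1 in zip(energies, energies[1:]))
print(energies[0], energies[-1])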
The matrix W encodes repulsive172 pairwise interactions via its negative-definite component ⇥ which lead to terms ||⇥ (rF)ij ||173 increasing along the solution. The latter affords a ‘sharpening’ effect desirable on heterophilic graphs174 where we need to disentangle adjacent node representations and hence ‘magnify’ the edge-gradient.175 Spectral analysis of the channel-mixing. We will now show that eq. (8) can lead to a HFD176 dynamics. To this end, we assume that ⌦ = 0 so that eq. (8) becomes Ḟ(t) = ĀF(t)W. According177 to eq. (9) the negative eigenvalues of W lead to repulsion. We show that the latter can induce HFD178 dynamics as per Definition 2.2. We let P ⇢ W be the orthogonal projection into the eigenspace of179 W ⌦ Ā associated with the eigenvalue ⇢ := | W |(⇢ 1). We define ✏HFD explicitly in eq. (24).180 Proposition 3.1. If ⇢ > W+ , then Ḟ(t) = ĀF(t)W is HFD for a.e. F(0): there exists ✏HFD s.t.181 E Dir (F(t)) = e2t⇢ ⇣⇢ 2 ||P ⇢ W F(0)|| 2 +O(e 2t✏HFD) ⌘ , t 0, and F(t)/||F(t)|| converges to F1 2 Rn⇥d such that fr1 = ⇢ fr1, for 1 r d.182 Proposition 3.1 shows that if enough mass of the spectrum of the ‘channel-mixing’ is distributed over183 the negative eigenvalues, then the evolution is dominated by the graph high frequencies. This analysis184 is made possible in our gradient flow framework where W must be symmetric. The HFD dynamics185 induced by negative eigenvalues of W is confirmed in Figure 2 (neg-prod-curve in the bottom chart).186 A more general energy. Equations with a source term may have better expressive power [44, 11, 39].187 In our framework this means adding an extra energy term of the form Esource W̃ (F) := hF,F(0)W̃i188 to eq. (7) with some learnable and W̃. This leads to the following gradient flow:189 Ḟ(t) = F(t)⌦+ ĀF(t)W F(0)W̃. (10) We also observe that one could replace the fixed matrix Ā with a more general symmetric graph190 vector field A satisfying Aij = 0 if (i, j) /2 E, although in this work we focus on the case A = Ā.191 We also note that when ⌦ = W, then eq. (8) becomes Ḟ(t) = F(t)W. We perform a spectral192 analysis of this case in Appendix B.2.193 Non-linear activations. In Appendix B.3 we discuss non-linear gradient flow equations. Here194 we study what happens if the gradient flow in eq. (10) is activated pointwise by : R ! R. We195 show that although we are no longer a gradient flow, the learnable multi-particle energy Etot is still196 decreasing along the solution, meaning that the interpretation of the channel-mixing W inducing197 attraction and repulsion via its positive and negative eigenvalues respectively is preserved.198 Proposition 3.2. Consider a non-linear map : R ! R such that the function x 7! x (x) 0. If199 t 7! F(t) solves the equation200 Ḟ(t) = ⇣ F(t)⌦+ ĀF(t)W F(0)W̃ ⌘ , where acts elementwise, then201 dEtot(F(t)) dt 0. A proof of this result and more details and discussion are reported in Appendix E. We emphasize202 here that differently from previous results about behaviour of ReLU wrt EDir [30, 8], we deal with a203 much more general energy that can also induce repulsion and a more general family of activation204 functions (that include ReLU, tanh, arctan and many others).205 4 Comparison with GNNs206 In this Section, we study standard GNN models from the perspective of our gradient flow framework.207 4.1 Continuous case208 Continuous GNN models replace layers with continuous time. In contrast with Proposition 3.1,209 we show that three main linearized continuous GNN models are either smoothing or LFD as210 per Definition 2.2. 
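Before turning to these comparisons, a short numerical illustration of Proposition 3.1 (ours; the toy graph and the hand-picked spectra of W are assumptions made for illustration): with Omega = 0, a channel-mixing W with most of its mass on the negative eigenvalues pushes the Rayleigh quotient E^Dir(F/||F||) towards rho_Delta/2 (HFD), whereas a predominantly positive spectrum pushes it towards 0 (LFD).

import numpy as np

rng = np.random.default_rng(2)

# Toy graph with self-loops (arbitrary example); A_bar and Delta = I - A_bar.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
n, d = 6, 4
A = np.eye(n)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_bar = D_inv_sqrt @ A @ D_inv_sqrt
Delta = np.eye(n) - A_bar
rho = np.linalg.eigvalsh(Delta).max()      # largest graph frequency rho_Delta

def rayleigh(F):
    # E^Dir(F / ||F||): the Rayleigh quotient of I_d kron Delta, cf. Definition 2.2.
    return 0.5 * np.trace(F.T @ Delta @ F) / np.linalg.norm(F) ** 2

def evolve(W, steps=400, tau=0.05):
    F = rng.standard_normal((n, d))
    for _ in range(steps):
        F = F + tau * (A_bar @ F @ W)      # Euler steps of F' = A_bar F W (Omega = 0)
        F = F / np.linalg.norm(F)          # renormalize; only the direction matters
    return rayleigh(F)

Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
W_neg = Q @ np.diag([-1.0, -0.8, -0.6, 0.1]) @ Q.T   # mass on the negative eigenvalues
W_pos = Q @ np.diag([1.0, 0.8, 0.6, -0.1]) @ Q.T     # mass on the positive eigenvalues

print("rho_Delta / 2 =", rho / 2)
print("negative spectrum:", evolve(W_neg))   # close to rho_Delta / 2 (HFD)
print("positive spectrum:", evolve(W_pos))   # close to 0 (LFD)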
The linearized PDE-GCND model [17] corresponds to choosing = 0 and211 ⌦ = W = K(t)>K(t) in eq. (10), for some time-dependent family t 7! K(t) 2 Rd⇥d:212 ḞPDE GCND(t) = F(t)K(t) > K(t). The CGNN model [44] can be derived from eq. (10) by setting ⌦ = I ⌦̃,W = W̃ = I, = 1:213 ḞCGNN(t) = F(t) + F(t)⌦̃+ F(0). Finally, in linearized GRAND [10] a row-stochastic matrix A(F(0)) is learned from the encoding214 via an attention mechanism and we have215 ḞGRAND(t) = RWF(t) = (I A(F(0)))F(t). We note that if A is not symmetric, then GRAND is not a gradient flow.216 Proposition 4.1. PDE GCND, CGNN and GRAND satisfy the following:217 (i) PDE GCND is a smoothing model: ĖDir(FPDE GCND (t)) 0.218 (ii) For a.e. F(0) it holds: CGNN is never HFD and if we remove the source term, then219 E Dir (FCGNN(t)/||FCGNN(t)||) e gap( )t.220 (iii) If G is connected, FGRAND(t) ! µ as t ! 1, with µr = mean(fr(0)), 1 r d.221 By (ii) the source-free CGNN-evolution is LFD independent of ⌦̃. Moreover, by (iii), over-smoothing222 occurs for GRAND as per Definition 2.1. On the other hand, Proposition 3.1 shows that the negative223 eigenvalues of W can make the source-free gradient flow in eq. (8) HFD. Experiments in Section 5224 confirm that the gradient flow model outperforms CGNN and GRAND on heterophilic graphs.225 4.2 Discrete case226 We now describe a discrete version of our gradient flow model and compare it to ‘discrete’ GNNs227 where discrete time steps correspond to different layers. In the spirit of [12], we use explicit Euler228 scheme with step size ⌧ 1 to solve eq. (10) and set W̃ = I. In the gradient flow framework we229 parametrize the energy rather than the actual equations, which leads to symmetric channel-mixing230 matrices ⌦,W 2 Rd⇥d that are shared across the layers. Since the matrices are square, an encoding231 block EN : Rn⇥p ! Rn⇥d is used to process input features F0 2 Rn⇥p and generally reduce the232 hidden dimension from p to d. Moreover, the iterations inherently lead to a residual architecture233 because of the explicit Euler discretization:234 F(t+ ⌧) = F(t) + ⌧ F(t)⌦+ ĀF(t)W + F(0) , F(0) = EN(F0), (11) with prediction y = DE(F(T )) produced by a decoder DE : Rn⇥d ! Rn⇥k, where k is the235 number of label classes and T integration time of the form T = m⌧ , so that m 2 N represents the236 number of layers. Although eq. (11) is linear, we can include non-linear activations in EN, DE237 making the entire model generally non-linear. We emphasize two important points:238 • Since the framework is residual, even if the message-passing is linear, this is not equivalent239 to collapsing the dynamics into a single layer with diffusion matrix Ām, with m the number240 of layers, see eq. (27) in the appendix where we derive the expansion of the solution.241 • We could also activate the equations pointwise and maintain the physics interpretation thanks242 to Proposition 3.2 to gain greater expressive power. In the following though, we mainly243 stick to the linear discrete gradient flow unless otherwise stated.244 Are discrete GNNs gradient flows? Given a (learned) symmetric graph vector field A 2 Rn⇥n245 satisfying Aij = 0 if (i, j) /2 E, consider a family of linear GNNs with shared weights of the form246 F(t+ 1) = F(t)⌦+AF(t)W + F(0)W̃, 0 t T. (12) Symmetry is the key requirement to interpret GNNs in eq. (12) in a gradient flow framework.247 Lemma 4.2. Equation (12) is the unit step size discrete gradient flow of EextI ⌦ + E pair A,W E source W̃ ,248 with EpairA,W defined by replacing Ā with A in eq. 
(7), iff ⌦ and W are symmetric.249 Lemma 4.2 provides a recipe for making standard architectures into a gradient flow, with symmetry250 being the key requirement. When eq. (12) is a gradient flow, the underlying GNN dynamics is251 equivalent to minimizing a multi-particle energy by learning attractive and repulsive directions in252 feature space as discussed in Section 3. In Appendix C.2, we show how Lemma 4.2 covers linear253 versions of GCN [27, 43], GAT [42], GraphSAGE [23] and GCNII [11] to name a few.254 Over-smoothing analysis in discrete setting. By Proposition 3.1 we know that the continuous255 version of eq. (11) can be HFD thanks to the negative eigenvalues of W. The next result represents a256 discrete counterpart of Proposition 3.1 and shows that residual, symmetrized graph convolutional257 models can be HFD. Below P ⇢ W is the projection into the eigenspace associated with the eigenvalue258 ⇢ := | W |(⇢ 1) and we report the explicit value of HFD in eq. (28) in Appendix C.3. We let:259 W + (⇢ 1)) 1 < | W | < 2(⌧(2 ⇢ )) 1. (13) Theorem 4.3. Given F(t+ ⌧) = F(t) + ⌧ĀF(t)W, with W symmetric, if eq. (13) holds then260 E Dir (F(m⌧)) = (1 + ⌧⇢ ) 2m ⇢ 2 ||P ⇢ W F(0)|| 2 +O ✓ 1 + ⌧ HFD 1 + ⌧⇢ ◆2m!! , HFD < ⇢ , hence the dynamics is HFD for a.e. F(0) and in fact F(m⌧)/||F(m⌧)|| ! F1 s.t. fr1 = ⇢ fr1.261 Conversely, if G is not bipartite, then for a.e. F(0) the system F(t + ⌧) = ⌧ĀF(t)W, with W262 symmetric, is LFD independent of the spectrum of W.263 Theorem 4.3 shows that linear discrete gradient flows can be HFD due to the negative eigenvalues of264 W. This differs from statements that standard GCNs act as low-pass filters and thus over-smooth in265 the limit. Indeed, in these cases the spectrum of W is generally ignored [43, 11] or required to be266 sufficiently small in terms of singular value decomposition [29, 30, 8] when no residual connection267 is present. On the other hand, Theorem 4.3 emphasizes that the spectrum of W plays a key role to268 enhance the high frequencies when enough mass is distributed over the negative eigenvalues provided269 that a residual connection exists – this is confirmed by the neg-prod-curve in Figure 2.270 The residual connection from a spectral perspective. Given a sufficiently small step-size so271 that the right hand side of inequality 13 is satisfied, F(t+ ⌧) = F(t) + ⌧ĀF(t)W is HFD for a.e.272 F(0) if | W |(⇢ 1) > W+ , i.e. ‘there is more mass’ in the negative spectrum of W than in the273 positive one. This means that differently from [29, 30, 8], there is no requirement on the minimal274 magnitude of the spectral radius of W coming from the graph topology as long as W + is small275 enough. Conversely, without a residual term, the dynamics is LFD for a.e. F(0) independently of the276 sign and magnitude of the eigenvalues of W. This is also confirmed by the GCN-curve in Figure 2.277 Over-smoothing vs LFD. We highlight how in general a linear GCN equation as F(t + ⌧) =278 ⌧ĀF(t)W may avoid over-smoothing in the sense of Definition 2.1, meaning that EDir(F(t)) ! 1279 as soon as there exist i 2 (0, 1) and the spectral radius of W is large enough. However, this280 will not lead to over-separation since the dominating term is the lowest frequency one: in other281 words, once we re-set the scale right as per the normalization in Theorem 4.3, we encounter loss of282 separability even with large (and possibly negative) spectrum of W.283 5 Experiments284 In this section we evaluate the gradient flow framework (GRAFF). 
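The discrete model in eq. (11) that is evaluated below can be sketched as follows (a hedged NumPy sketch, not the authors' reference implementation; the function names, toy sizes, the scalar source weight beta and the random parameter initialization are our own choices): a linear encoder, m residual Euler steps sharing one symmetric W and a diagonal Omega, and a linear decoder producing node logits.

import numpy as np

rng = np.random.default_rng(3)

def normalized_adjacency(edges, n):
    # A_bar = D^{-1/2} A D^{-1/2} with self-loops.
    A = np.eye(n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graff_forward(F0, A_bar, params, m=8, tau=0.5, beta=1.0):
    # Residual explicit-Euler steps of eq. (11):
    #   F <- F + tau * (-F Omega + A_bar F W + beta * F(0)),  F(0) = EN(F0),
    # with Omega and W shared across all m layers.
    W_raw, omega_diag, enc, dec = params
    W = 0.5 * (W_raw + W_raw.T)         # symmetry keeps the update a gradient flow
    Omega = np.diag(omega_diag)
    F = F0 @ enc                         # encoder EN (here a single linear layer)
    F_init = F.copy()
    for _ in range(m):
        F = F + tau * (-F @ Omega + A_bar @ F @ W + beta * F_init)
    return F @ dec                       # decoder DE producing class logits

# Toy sizes (illustrative): p input features, d hidden channels, k classes.
n, p, d, k = 6, 10, 4, 3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
A_bar = normalized_adjacency(edges, n)
params = (0.1 * rng.standard_normal((d, d)),        # W (symmetrized inside)
          0.1 * rng.standard_normal(d),             # diagonal of Omega
          rng.standard_normal((p, d)) / np.sqrt(p), # encoder weights
          rng.standard_normal((d, k)) / np.sqrt(d)) # decoder weights
logits = graff_forward(rng.standard_normal((n, p)), A_bar, params)
print(logits.shape)                      # (6, 3): one score per node and class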
We corroborate the spectral285 analysis using synthetic data with controllable homophily. We confirm that having negative (positive)286 eigenvalues of the channel-mixing W are essential in heterophilic (homophilic) scenarios where the287 gradient flow should align with HFD (LFD) respectively. We show that the gradient flow in eq. (11)288 – a linear, residual, symmetric graph convolutional model – achieves competitive performance on289 heterophilic datasets.290 Methodology. We crystallize GRAFF in the model presented in eq. (11) with EN, DE im-291 plemented as single linear layers or MLPs, and we set ⌦ to be diagonal. For the real-world292 experiments we consider diagonally-dominant (DD), diagonal (D) and time-dependent choices293 for the structure of W that offer explicit control over its spectrum. In the (DD)-case, we consider294 a W0 2 Rd⇥d symmetric with zero diagonal and w 2 Rd defined by w↵ = q↵ P |W 0 ↵ | + r↵,295 and set W = diag(w) + W0. Due to the Gershgorin Theorem the eigenvalues of W belong to296 [w↵ P |W 0 ↵ |,w↵ + P |W 0 ↵ |], so the model ‘can’ easily re-distribute mass in the spectrum of297 W via q↵, r↵. This generalizes the decomposition of W in [11] providing a justification in terms of298 its spectrum and turns out to be more efficient w.r.t. the hidden dimension d as shown in Figure 4 in299 the Appendix. For (D) we take W to be diagonal, with entries sampled U [ 1, 1] and fixed – i.e., we300 do not train over W – and only learn EN, DE. We also include a time-dependent model where Wt301 varies across layers. To investigate the role of the spectrum of W on synthetic graphs, we construct302 three additional variants: W = W0 + W0>, W = ±W0>W0 named sum, prod and neg-prod303 respectively where prod (neg-prod) variants have only non-negative (non-positive) eigenvalues.304 Complexity and number of parameters. If we treat the number of layers as a constant, the discrete305 gradient flow scales as O(|V|pd + |E|d2), where p and d are input feature and hidden dimension306 respectively, with p d usually. Note that GCN has complexity O(|E|pd) and in fact our model is307 faster than GCN as confirmed in Figure 5 in Appendix D. Since EN, DE are single linear layers308 (MLPs), we can bound the number of parameters by pd+ d2 + 3d+ dk, with k the number of label309 classes, in the (DD)-variant while in the (D)-variant we have pd+ 3d+ dk. Further ablation studies310 appear in Figure 4 in the Appendix showing that (DD) outperforms sum and GCN – especially in the311 lower hidden dimension regime – on real-world benchmarks with varying homophily.312 Synthetic experiments and ablation studies.313 To investigate our claims in a controlled environ-314 ment we use the synthetic Cora dataset of [51, Ap-315 pendix G]. Graphs are generated for target levels316 of homophily via preferential attachment – see317 Appendix D.3 for details. Figure 2 confirms the318 spectral analysis and offers a better understanding319 in terms of performance and smoothness of the320 predictions. Each curve – except GCN – repre-321 sents one version of W as in ‘methodology’ and322 we implement eq. (11) with = 0, ⌦ = 0. Fig-323 ure 2 (top) reports the test accuracy vs true label324 homophily. Neg-prod is better than prod on low-325 homophily and viceversa on high-homophily. This326 confirms Proposition 3.1 where we have shown327 that the gradient flow can lead to a HFD dy-328 namics – that are generally desirable with low-329 homophily – through the negative eigenvalues of330 W. 
Conversely, the prod configuration (where we331 have an attraction-only dynamics) struggles in low-332 homophily scenarios even though a residual connection is present. Both prod and neg-prod are333 ‘extreme’ choices and serve the purpose of highlighting that by turning off one side of the spectrum334 this could be the more damaging depending on the underlying homophily. In general though ‘neutral’335 variants like sum and (DD) are indeed more flexible and better performing. In fact, (DD) outperforms336 GCN especially in low-homophily scenarios, confirming Theorem 4.3 where we have shown that337 without a residual connection convolutional models are LFD – and hence more sensitive to underlying338 homophily – irrespectively of the spectrum of W. This is further confirmed in Figure 3.339 In Figure 2 (bottom) we compute the homophily of the prediction (cross) for a given method and we340 compare with the homophily (circle) of the prediction read from the encoding (i.e. graph-agnostic).341 The homophily here is a proxy to assess whether the evolution is smoothing, the goal being explaining342 the smoothness of the prediction via the spectrum of W as per our theoretical analysis. For neg-prod343 the homophily after the evolution is lower than that of the encoding, supporting the analysis that344 negative eigenvalues of W enhance high-frequencies. The opposite behaviour occurs in the case of345 prod and explains that in the low-homophily regime prod is under-performant due to the prediction346 being smoother than the true homophily. (DD) and sum variants adapt better to the true homophily.347 We note how the encoding compensates when the dynamics can only either attract or repulse (i.e. the348 spectrum of W has a sign) by decreasing or increasing the initial homophily respectively.349 Real world experiments. We test GRAFF against a range of datasets with varying homophily350 [37, 33, 31] (see Appendix D.4 for additional details). We use results provided in [45, Table 1],351 which includes standard baselines as GCN [27], GraphSAGE [23], GAT [42], PairNorm [48] and352 recent models tailored towards the heterophilic setting (GGCN [45], Geom-GCN [31], H2GCN353 [51] and GPRGNN [13]). For Sheaf [5], a recent top-performer on heterophilic datasets, we took354 the best performing variant (out of six provided) for each dataset. We also include continuous355 baselines CGNN [44] and GRAND [10] to provide empirical evidence for Proposition 4.1. Splits356 taken from [31] are used in all the comparisons. The GRAFF model discussed in ‘methodology’357 is a very simple architecture with shared parameters across layers and run-time smaller than GCN358 and more recent models like GGCN designed for heterophilic graphs (see Figure 5 in the Appendix).359 Nevertheless, it achieves competitive results on all datasets, performing on par or better than more360 complex recent models. Moreover, comparison with the ‘time-dependent’ (DD) variant confirms361 that by sharing weights across layers we do not lose performance. We note that on heterophilic362 graphs short integration time is usually needed due to the topology being harmful and the negative363 eigenvalues of W leading to exponential behaviour (see Appendix D).364 6 Conclusions365 In this work, we developed a framework for GNNs where the evolution can be interpreted as366 minimizing a multi-particle learnable energy. 
This translates into studying the interaction between367 the spectrum of the graph and the spectrum of the ‘channel-mixing’ leading to a better understanding368 of when and why the induced dynamics is low (high) frequency dominated. From a theoretical369 perspective, we refined existing asymptotic analysis of GNNs to account for the role of the spectrum of370 the channel-mixing as well. From a practical perspective, our framework allows for ‘educated’ choices371 resulting in a simple convolutional model that achieves competitive performance on homophilic372 and heterophilic benchmarks while being faster than GCN. Our results refute the folklore of graph373 convolutional models being too simple for heterophilic benchmarks.374 Limitations and future works. We limited our attention to a constant bilinear form W, which375 might be excessively rigid. It is possible to derive non-constant alternatives that are aware of the376 features or the position in the graph. The main challenge amounts to matching the requirement for377 local ‘heterogeneity’ with efficiency: we reserve this question for future work. Our analysis is also a378 first step into studying the interaction of the graph and ‘channel-mixing’ spectra; we did not explore379 other dynamics that are neither LFD nor HFD as per our definitions. The energy formulation points380 to new models more ‘physics’ inspired; this will be explored in future work.381 Societal impact. Our work sheds light on the actual dynamics of GNNs and could hence improve382 their understanding, which is crucial for assessing their impact on large-scale applications. We also383 show that instances of our framework achieve competitive performance on heterophilic data despite384 being faster than GCN, providing evidence for efficient methods with reduced footprint.385 References386 [1] U. Alon and E. Yahav. On the bottleneck of graph neural networks and its practical implications.387 In International Conference on Learning Representations, 2021.388 [2] M. Balcilar, G. Renton, P. Héroux, B. Gaüzère, S. Adam, and P. Honeine. Analyzing the389 expressive power of graph neural networks in a spectral perspective. In International Conference390 on Learning Representations, 2020.391 [3] M. Biloš, J. Sommer, S. S. Rangapuram, T. Januschowski, and S. Günnemann. Neural flows:392 Efficient alternative to neural odes. In Advances in Neural Information Processing Systems,393 volume 34, 2021.394 [4] D. Bo, X. Wang, C. Shi, and H. Shen. Beyond low-frequency information in graph convolutional395 networks. In AAAI. AAAI Press, 2021.396 [5] C. Bodnar, F. Di Giovanni, B. P. Chamberlain, P. Liò, and M. M. Bronstein. Neural sheaf397 diffusion: A topological perspective on heterophily and oversmoothing in gnns. arXiv preprint398 arXiv:2202.04579, 2022.399 [6] S. Brody, U. Alon, and E. Yahav. How attentive are graph attention networks? arXiv preprint400 arXiv:2105.14491, 2021.401 [7] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected402 networks on graphs. In 2nd International Conference on Learning Representations, ICLR 2014,403 2014.404 [8] C. Cai and Y. Wang. A note on over-smoothing for graph neural networks. arXiv preprint405 arXiv:2006.13318, 2020.406 [9] B. Chamberlain, J. Rowbottom, D. Eynard, F. Di Giovanni, X. Dong, and M. Bronstein. Beltrami407 flow and neural diffusion on graphs. Advances in Neural Information Processing Systems, 34,408 2021.409 [10] B. Chamberlain, J. Rowbottom, M. I. Gorinova, M. Bronstein, S. Webb, and E. Rossi. 
Grand:410 Graph neural diffusion. In International Conference on Machine Learning, pages 1407–1418.411 PMLR, 2021.412 [11] M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li. Simple and deep graph convolutional networks.413 In International Conference on Machine Learning, pages 1725–1735. PMLR, 2020.414 [12] R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential415 equations. Advances in neural information processing systems, 31, 2018.416 [13] E. Chien, J. Peng, P. Li, and O. Milenkovic. Adaptive universal generalized pagerank graph417 neural network. In 9th International Conference on Learning Representations, ICLR 2021,418 2021.419 [14] F. R. Chung and F. C. Graham. Spectral graph theory. Number 92. American Mathematical420 Soc., 1997.421 [15] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs422 with fast localized spectral filtering. Advances in neural information processing systems, 29,423 2016.424 [16] J. Eells and J. H. Sampson. Harmonic mappings of riemannian manifolds. American journal of425 mathematics, 86(1):109–160, 1964.426 [17] M. Eliasof, E. Haber, and E. Treister. Pde-gcn: Novel architectures for graph neural networks427 motivated by partial differential equations. Advances in Neural Information Processing Systems,428 34, 2021.429 [18] M. Geiger, L. Petrini, and M. Wyart. Landscape and training regimes in deep learning. Physics430 Reports, 924:1–18, 2021.431 [19] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing432 for quantum chemistry. In International Conference on Machine Learning, pages 1263–1272.433 PMLR, 2017.434 [20] C. Goller and A. Kuchler. Learning task-dependent distributed representations by backprop-435 agation through structure. In Proceedings of International Conference on Neural Networks436 (ICNN’96), volume 1, pages 347–352. IEEE, 1996.437 [21] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In438 Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 2,439 pages 729–734. IEEE, 2005.440 [22] E. Haber and L. Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34,441 2018.442 [23] W. Hamilton, Z. Ying, and J. Leskovec. Inductive representation learning on large graphs.443 Advances in neural information processing systems, 30, 2017.444 [24] D. K. Hammond, P. Vandergheynst, and R. Gribonval. The spectral graph wavelet transform:445 Fundamental theory and fast computation. In Vertex-Frequency Analysis of Graph Signals,446 pages 141–175. Springer, 2019.447 [25] M. He, Z. Wei, H. Xu, et al. Bernnet: Learning arbitrary graph spectral filters via bernstein448 approximation. Advances in Neural Information Processing Systems, 34, 2021.449 [26] R. Kimmel, N. Sochen, and R. Malladi. From high energy physics to low level vision. In450 International Conference on Scale-Space Theories in Computer Vision, pages 236–247. Springer,451 1997.452 [27] T. N. Kipf and M. Welling. Semi-Supervised Classification with Graph Convolutional Networks.453 In Proceedings of the 5th International Conference on Learning Representations, ICLR ’17,454 2017.455 [28] J. Klicpera, S. Weißenberger, and S. Günnemann. Diffusion improves graph learning. In456 Proceedings of the 33rd International Conference on Neural Information Processing Systems,457 2019.458 [29] H. Nt and T. Maehara. Revisiting graph neural networks: All we have is low-pass filters. 
arXiv459 preprint arXiv:1905.09550, 2019.460 [30] K. Oono and T. Suzuki. Graph neural networks exponentially lose expressive power for node461 classification. In International Conference on Learning Representations, 2020.462 [31] H. Pei, B. Wei, K. C. Chang, Y. Lei, and B. Yang. Geom-gcn: Geometric graph convolutional463 networks. In 8th International Conference on Learning Representations, ICLR 2020, 2020.464 [32] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. PAMI,465 12(7):629–639, 1990.466 [33] B. Rozemberczki, C. Allen, and R. Sarkar. Multi-scale attributed node embedding. Journal of467 Complex Networks, 9(2):cnab014, 2021.468 [34] T. K. Rusch, B. P. Chamberlain, J. Rowbottom, S. Mishra, and M. M. Bronstein. Graph-coupled469 oscillator networks. In International Conference on Machine Learning, 2022.470 [35] M. E. Sander, P. Ablin, M. Blondel, and G. Peyré. Sinkformers: Transformers with doubly471 stochastic attention. In International Conference on Artificial Intelligence and Statistics, pages472 3515–3530. PMLR, 2022.473 [36] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural474 network model. IEEE transactions on neural networks, 20(1):61–80, 2008.475 [37] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classifica-476 tion in network data. AI magazine, 29(3):93–93, 2008.477 [38] A. Sperduti. Encoding labeled graphs by labeling raam. Advances in Neural Information478 Processing Systems, 6, 1993.479 [39] M. Thorpe, T. M. Nguyen, H. Xia, T. Strohmer, A. Bertozzi, S. Osher, and B. Wang. Grand++:480 Graph neural diffusion with a source term. In International Conference on Learning Represen-481 tations, 2021.482 [40] J. Topping, F. Di Giovanni, B. P. Chamberlain, X. Dong, and M. M. Bronstein. Understanding483 over-squashing and bottlenecks on graphs via curvature. International Conference on Learning484 Representations, 2022.485 [41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and486 I. Polosukhin. Attention is all you need. Advances in neural information processing systems,487 30, 2017.488 [42] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention489 networks. In International Conference on Learning Representations, 2018.490 [43] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. Simplifying graph convolutional491 networks. In International conference on machine learning, pages 6861–6871. PMLR, 2019.492 [44] L.-P. Xhonneux, M. Qu, and J. Tang. Continuous graph neural networks. In International493 Conference on Machine Learning, pages 10432–10441. PMLR, 2020.494 [45] Y. Yan, M. Hashemi, K. Swersky, Y. Yang, and D. Koutra. Two sides of the same coin:495 Heterophily and oversmoothing in graph convolutional neural networks. arXiv preprint496 arXiv:2102.06462, 2021.497 [46] Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec. Gnnexplainer: Generating expla-498 nations for graph neural networks. Advances in neural information processing systems, 32,499 2019.500 [47] H. Yuan, H. Yu, S. Gui, and S. Ji. Explainability in graph neural networks: A taxonomic survey.501 arXiv preprint arXiv:2012.15445, 2020.502 [48] L. Zhao and L. Akoglu. Pairnorm: Tackling oversmoothing in gnns. arXiv preprint503 arXiv:1909.12223, 2019.504 [49] D. Zhou and B. Schölkopf. Regularization on discrete spaces. In Joint Pattern Recognition505 Symposium, pages 361–368. Springer, 2005.506 [50] K. Zhou, X. Huang, D. Zha, R. 
Chen, L. Li, S.-H. Choi, and X. Hu. Dirichlet energy constrained507 learning for deep graph neural networks. Advances in Neural Information Processing Systems,508 34:21834–21846, 2021.509 [51] J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra. Beyond homophily in graph510 neural networks: Current limitations and effective designs. Advances in Neural Information511 Processing Systems, 33:7793–7804, 2020.512 [52] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann. Pitfalls of graph neural network513 evaluation. In NIPS workshop, 2018.514 [53] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,515 N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani,516 S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style,517 high-performance deep learning library. In NeurIPS. 2019.518 [54] M. Fey and J. E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR519 Workshop on Representation Learning on Graphs and Manifolds, 2019.520 [55] L. Biewald. Experiment tracking with weights and biases, 2020. Software available from521 wandb.com.522 Checklist523 The checklist follows the references. Please read the checklist guidelines carefully for information on524 how to answer these questions. For each question, change the default [TODO] to [Yes] , [No] , or525 [N/A] . You are strongly encouraged to include a justification to your answer, either by referencing526 the appropriate section of your paper or providing a brief inline description. For example:527 • Did you include the license to the code and datasets? [Yes] See Section ??.528 • Did you include the license to the code and datasets? [No] The code and the data are529 proprietary.530 • Did you include the license to the code and datasets? [N/A]531 Please do not modify the questions and only use the provided macros for your answers. Note that the532 Checklist section does not count towards the page limit. In your paper, please delete this instructions533 block and only keep the Checklist section heading above along with the questions/answers below.534 1. For all authors...535 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s536 contributions and scope? [Yes]537 (b) Did you describe the limitations of your work? [Yes] , in Section 6.538 (c) Did you discuss any potential negative societal impacts of your work? [Yes] in the539 Societal impact paragraph in Section 6.540 (d) Have you read the ethics review guidelines and ensured that your paper conforms to541 them? [Yes]542 2. If you are including theoretical results...543 (a) Did you state the full set of assumptions of all theoretical results? [Yes]544 (b) Did you include complete proofs of all theoretical results? [Yes] in Appendix A,545 Appendix B and Appendix C.546 3. If you ran experiments...547 (a) Did you include the code, data, and instructions needed to reproduce the main exper-548 imental results (either in the supplemental material or as a URL)? [Yes] Code and549 README in SM, dataloaders in code550 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they551 were chosen)? [Yes] Splits and hyperparameters provided in code zip552 (c) Did you report error bars (e.g., with respect to the random seed after running experi-553 ments multiple times)? 
[Yes] Standard deviations are stated in results table554 (d) Did you include the total amount of compute and the type of resources used (e.g., type555 of GPUs, internal cluster, or cloud provider)? [Yes] in appendix D556 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...557 (a) If your work uses existing assets, did you cite the creators? [Yes] datasets and standard558 libraries cited in appendix D559 (b) Did you mention the license of the assets? [Yes] industry standard libraries and560 benchmark datasets were used in accordance with licences561 (c) Did you include any new assets either in the supplemental material or as a URL? [Yes]562 code provided in SM zip563 (d) Did you discuss whether and how consent was obtained from people whose data you’re564 using/curating? [N/A]565 (e) Did you discuss whether the data you are using/curating contains personally identifiable566 information or offensive content? [Yes] no personal data is contained within bench-567 marking datasets568 5. If you used crowdsourcing or conducted research with human subjects...569 (a) Did you include the full text of instructions given to participants and screenshots, if570 applicable? [N/A]571 (b) Did you describe any potential participant risks, with links to Institutional Review572 Board (IRB) approvals, if applicable? [N/A]573 (c) Did you include the estimated hourly wage paid to participants and the total amount574 spent on participant compensation? [N/A]575
1. What is the focus and contribution of the paper on graph neural networks?
2. What are the strengths and weaknesses of the proposed architecture, particularly regarding theoretical insights and experimental limitations?
3. Do you have any concerns about the representation and non-linearity aspects of the method?
4. What are the limitations regarding the shared channel-mixing matrices across layers?
5. How does the paper discuss over-smoothing, and what accuracies were reported for various numbers of layers?
6. What are your opinions on the writing presentation, notation, and language used in the paper?
7. Were there any missing citations or references in the paper that should have been included?
8. Could you clarify the gradient and derivation parts of the paper, specifically the relations between equations (4)-(7)?
9. How do the authors justify the absence of non-linearities in their network definition, and how does this relate to traditional neural networks?
10. Can the authors provide more discussion and evaluation on enforcing symmetry in their approach and its impact on performance?
11. How do the authors respond to the concern that their method negates the fundamental aspect of neural networks, which relies on non-linear activations to express complex functions?
12. What is the significance of choosing a diagonal and random W, and why did the authors not train over it?
13. How do the achieved accuracies stabilize across different runs using a random W, and what influence does this choice have on the results?
14. Given the limited scope of experiments, how confident are the authors about the generalization ability of their conclusion on real-world data sets and tasks?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

The paper suggests a new graph neural network architecture that can be seen as a gradient descent minimization of a learnable energy function. The authors use channel-mixing matrices with mixed eigenvalues to infuse high frequencies into the architecture dynamics. Many existing architectures fall into this framework. Three variants of the new architecture are presented.

Strengths And Weaknesses

Strengths:

The paper is rich in theoretical insights. Unlike previous works, this is the first work that analyzes the channel-mixing matrix.

Weaknesses:

The experiments are rather limited. The authors show only node classification, where it is customary to show more experiments. The work of GCNII, for example, shows PPI and two cases of node classification (semi- and fully-supervised). An example on a large dataset (e.g., OGBN-Arxiv) is also important. Most importantly, it is not clear how well the method works on graph classification tasks (e.g., the TUD datasets) without the non-linearities in the layers. Given the strong claims made by the authors regarding those non-linearities (see below), showing these experiments is essential in my opinion.

I find only a single dataset where GRAFF yields the best performance. Overall, this method does not improve the SOTA.

There is very little discussion of over-smoothing in this work (only in the paragraph in line 249). In the end, the authors do not show whether their method over-smooths or not. Ideally, the authors would provide accuracies for a variety of numbers of layers and show that the accuracy does not degrade (e.g., see GCNII).

There are questionable choices for the architecture (e.g., no non-linearities) that are accompanied by too strong statements without backing them up. See details in the Questions section. In particular, the authors do not show that adding and removing the non-linearity has no influence on the accuracy.

The writing of the paper is hard to follow. The presentation (i.e., notation and language) could be simplified to make the paper more reader-friendly.

I could not find how many layers were used in Table 1.

Missing citation from the previous NeurIPS: Zhou, K., Huang, X., Zha, D., Chen, R., Li, L., Choi, S.-H., and Hu, X. Dirichlet energy constrained learning for deep graph neural networks. Advances in Neural Information Processing Systems, 34, 2021. This paper also discusses the Dirichlet energy throughout the layers.

Missing citation from ICLR 2022: How attentive are graph attention networks? (GATv2). The conclusion of that paper is that a more non-linear (in some sense) attention matrix improves the accuracy and training stability over GAT. How does the conclusion in this work align with the findings in GATv2? Note that there are additional datasets in GATv2, which may be more challenging and require non-linearities in the layers.

Questions

The gradient of Eq. (4) looks like Grad^T W^T W Grad(F). Why is Eq. (5) the way it is? Please clarify in the text.

I am confused by the derivation from (6) to (7). First, if the rest of the paper uses the energies in (7), why introduce the energies in (4)-(6)? How do (6) and (7) relate? This is not clear. More importantly, is it the same W in both equations? It does not seem so. This is very confusing. Please consider revising.

Lines 202-203: Are the authors saying that there is no role for the non-linearities in graph neural networks? That is the most important aspect of a neural network's definition (otherwise, the whole network collapses to a single linear operator). It does not make sense. Maybe try other experiments? CNNs, for example, surely require non-linearities for image classification. Maybe test this hypothesis on graph classification? Maybe shape classification (ModelNet40)?

Line 209: By "linearized" GNNs, do the authors mean with identity activation? There are no activations in the following equations. But this is not the traditional use of the word "linearized", which usually refers to a Taylor approximation. Please revise.

Line 228: Why do the authors introduce tilde{W} and then set it to the identity? Can't tilde{W} be chosen better? And if so, why introduce this matrix at all?

Line 230: Why are Omega and W shared across the layers? Traditionally, at least in CNNs, one learns a variety of channel-mixing convolution operators. What is different here? This negates the common practice of neural networks and requires explanation and evaluation.

Line 235: The authors have T, tau, and the number of layers. How is T or tau chosen? Is it a hyperparameter? Do the authors choose tau small enough to ensure the stability of the Euler scheme?

Line 236: The authors state that they can include the non-linearities in the encoder and decoder layers only. Will that be equivalent to having a non-linearity in Eq. (12)? I do not understand why. Again, that is against the fundamental aspect of neural networks – the use of non-linear activations to express complex functions.

Lines 244-245: Symmetry is presented as a key requirement. When other works learn the channel-mixing operators, they do not enforce symmetry, so this is important only for viewing architectures as a gradient flow. Further, in lines 247-248: indeed (13) can be seen as a generalization of the mentioned methods (with identity activation), but none of these learn symmetric matrices. Also, as the authors note, GAT does not have a symmetric attention matrix. So, is symmetry really important? Will we get better networks if we enforce symmetry in the learning?

In continuation of the previous point, regarding the symmetrization in line 304: in my opinion, this whole question of enforcing symmetry or not should be evaluated in extensive experiments, but I do not see such experiments in the paper.

Lines 282-284: Essentially, when choosing negative eigenvalues for W, you reverse the time integration. But then the Euler method that is used is known to become unstable. Isn't that a problem with the whole approach?

Line 290: The authors state that linear GNNs achieve competitive performance on real-world datasets. Again, having linear layers negates the whole concept of NNs. This cannot be a conclusion that holds for all datasets and tasks. Given the rather limited scope of experiments, this is too strong a statement.

Lines 301-302: The authors choose W to be diagonal and random and do not train over it. Why is that? Won't we get better results if we train over W? This means that there are no non-linearities in the network and no learnable channel-mixing parameters, so, except for the encoding and decoding layers, it is essentially a classical algorithm. How do the authors explain that?

Line 301, the random W: How does the choice of a random W influence the results? Are the achieved accuracies stable, or do the authors see large deviations between different runs?

Line 375: This is yet another strong statement, given the rather limited experimental study in the paper.

Limitations

Yes.
NIPS
Title Graph Neural Networks as Gradient Flows Abstract Dynamical systems minimizing an energy are ubiquitous in geometry and physics. 1 We propose a gradient flow framework for GNNs where the equations follow the 2 direction of steepest descent of a learnable energy. This approach allows to analyse 3 the GNN evolution from a multi-particle perspective as learning attractive and 4 repulsive forces in feature space via the positive and negative eigenvalues of a 5 symmetric ‘channel-mixing’ matrix. We perform spectral analysis of the solutions 6 and conclude that gradient flow graph convolutional models can induce a dynamics 7 dominated by the graph high frequencies, which is desirable for heterophilic 8 datasets. We also describe structural constraints on common GNN architectures 9 allowing to interpret them as gradient flows. We perform thorough ablation studies 10 corroborating our theoretical analysis and show competitive performance of simple 11 and lightweight models on real-world homophilic and heterophilic datasets. 12 N/A Dynamical systems minimizing an energy are ubiquitous in geometry and physics.1 We propose a gradient flow framework for GNNs where the equations follow the2 direction of steepest descent of a learnable energy. This approach allows to analyse3 the GNN evolution from a multi-particle perspective as learning attractive and4 repulsive forces in feature space via the positive and negative eigenvalues of a5 symmetric ‘channel-mixing’ matrix. We perform spectral analysis of the solutions6 and conclude that gradient flow graph convolutional models can induce a dynamics7 dominated by the graph high frequencies, which is desirable for heterophilic8 datasets. We also describe structural constraints on common GNN architectures9 allowing to interpret them as gradient flows. We perform thorough ablation studies10 corroborating our theoretical analysis and show competitive performance of simple11 and lightweight models on real-world homophilic and heterophilic datasets.12 1 Introduction and motivations13 Graph neural networks (GNNs) [38, 20, 21, 36, 7, 15, 27] and in particular their Message Passing14 formulation (MPNN) [19] have become the standard ML tool for dealing with different types of15 relations and interactions, ranging from social networks to particle physics and drug design. One16 of the often cited drawbacks of traditional GNN models is their poor ‘explainability’, making it17 hard to know why and how they make certain predictions [46, 47], and in which situations they18 may work and when they would fail. Limitations of GNNs that have attracted attention are over-19 smoothing [29, 30, 8], over-squashing and bottlenecks [1, 40], and performance on heterophilic data20 [31, 51, 13, 4, 45] – where adjacent nodes usually have different labels.21 Contributions. We propose a Gradient Flow Framework22 (GRAFF) where the GNN equations follow the direction of steep-23 est descent of a learnable energy. Thanks to this framework we can24 (i) interpret GNNs as a multi-particle dynamics where the learned25 parameters determine pairwise attractive and repulsive potentials26 in the feature space. This sheds light on how GNNs can adapt to27 heterophily and explains their performance and the smoothness of28 the prediction. (ii) GRAFF leads to residual convolutional models29 where the channel-mixing W is performed by a shared symmet-30 ric bilinear form inducing attraction and repulsion via its positive31 and negative eigenvalues, respectively. 
We theoretically investi-32 gate the interaction of the graph spectrum with the spectrum of the33 channel-mixing, proving that if there is more mass on the negative34 eigenvalues of W, then the dynamics is dominated by the graph-35 high frequencies, which could be desirable on heterophilic graphs.36 We also extend results of [29, 30, 8] by showing that when we drop37 the residual connection intrinsic to the gradient flow framework,38 graph convolutional models always induce a low-frequency dominated dynamics independent of the39 sign and magnitude of the spectrum of the channel-mixing. We also discuss how simple choices40 make common architectures fit GRAFF and conduct thorough ablation studies to corroborate the the-41 oretical analysis on the role of the spectrum of W. (iii) We crystallize an instance of our framework42 into a linear, residual, convolutional model that achieves competitive performance on homophilic and43 heterophilic real world graphs whilst being faster than GCN.44 Related work. Our analysis is related to studying GNNs as filters on the graph spectrum [15, 24,45 2, 25] and over-smoothing [29, 30, 8, 50] and partly adopts techniques similar to [30]. The key46 difference is that we also consider the spectrum of the ‘channel-mixing’ matrix. The concept of47 gradient flows has been a standard tool in physics and geometry [16], from which they were adopted48 for image processing [26], and recently used in ML [35] for the analysis of Transformers [41] – see49 also [18] for discussion of loss landscapes. Our continuous-time evolution equations follows the spirit50 of Neural ODES [22, 12, 3] and the study of GNNs as continuous dynamical systems [44, 10, 17, 9].51 Outline. In Section 2, we review the continuous and discrete Dirichlet energy and the associated52 gradient flow framework. We formalize the notion of over-smoothing and low(high)-frequency-53 dominated dynamics to investigate GNNs and study the dominant components in their evolution. We54 extend the graph Dirichlet energy to allow for a non-trivial norm for the feature edge-gradient. This55 leads to gradient flow equations that diffuse the features and over-smooth in the limit. Accordingly,56 in Section 3 we introduce a more general energy with a symmetric channel-mixing matrix W giving57 rise to attractive and repulsive pairwise terms via its positive and negative eigenvalues and show58 that the negative spectrum can induce high-frequency-dominant dynamics. In Section 4 we first59 compare with continuous GNN models and then discretize the equations and provide a ‘recipe’ for60 making standard GNN architectures fit a gradient flow framework. We adapt the spectral analysis to61 discrete-time showing that gradient flow convolutional models can generate a dynamics dominated by62 the high frequencies via the negative eigenvalues of W while this is impossible if we drop the residual63 connection. In Section 5 we corroborate our theoretical analysis on the role of the spectrum of W64 via ablation studies on graphs with varying homophily. Experiments on real world datasets show a65 competitive performance of our model despite its simplicity and reduced number of parameters.66 2 Gradient-flow formalism67 Notations adopted throughout the paper. Let G = (V,E) be an undirected graph with n nodes.68 We denote by F 2 Rn⇥d the matrix of d-dimensional node features, by fi 2 Rd its i-th row69 (transposed), by fr 2 Rn its r-th column, and by vec(F) 2 Rnd the vectorization of F obtained70 by stacking its columns. 
Given a symmetric matrix B, we let λ^B_+, λ^B_- denote its most positive and most negative eigenvalues, respectively, and ρ_B its spectral radius. If B ⪰ 0, then gap(B) denotes its smallest positive eigenvalue. Ḟ(t) denotes the temporal derivative, ⊗ is the Kronecker product, and 'a.e.' means almost every w.r.t. the Lebesgue measure and usually refers to data in the complement of some lower-dimensional subspace of ℝ^{n×d}. Proofs and additional results appear in the Appendix.

Starting point: a geometric parallelism. To motivate a gradient-flow approach for GNNs, we start from the continuous case (see Appendix A.1 for details). Consider a smooth map f : ℝ^n → (ℝ^d, h) with h a constant metric represented by H ⪰ 0. The Dirichlet energy of f is defined by

E(f, h) = (1/2) ∫_{ℝ^n} ||∇f||²_h dx = (1/2) Σ_{q,r=1}^d Σ_{j=1}^n ∫_{ℝ^n} h_qr ∂_j f^q ∂_j f^r (x) dx   (1)

and measures the 'smoothness' of f. A natural approach to find minimizers of E – called harmonic maps – was introduced in [16] and consists in studying the gradient flow of E, wherein a given map f(0) = f_0 is evolved according to ḟ(t) = −∇_f E(f(t)). These types of evolution equations have historically been the core of variational and PDE-based image processing; in particular, gradient flows of the Dirichlet energy were shown [26] to recover the Perona-Malik nonlinear diffusion [32].

Motivation: GNNs for node classification. We wish to extend the gradient flow formalism to node classification on graphs. Assume we have a graph G, node features F_0 and labels {y_i} on V_train ⊂ V, and that we want to predict the labels on V_test ⊂ V. A GNN typically evolves the features via some parametric rule, GNN_θ(G, F_0), and uses a decoding map for the prediction y = DE(GNN_θ(G, F_0)). In graph convolutional models [15, 27], GNN_θ consists of two operations: applying a shared linear transformation to the features ('channel mixing') and propagating them along the edges of the graph ('diffusion'). Our goal consists in studying when GNN_θ is the gradient flow of some parametric class of energies E_θ : ℝ^{n×d} → ℝ, which generalize the Dirichlet energy. This means that the parameters can be interpreted as 'finding the right notion of smoothness' for our task. We evolve the features by Ḟ(t) = −∇_F E_θ(F(t)) with prediction y = DE(F(T)) for some optimal time T.

Why a gradient flow? Since Ė_θ(F(t)) = −||∇_F E_θ(F(t))||², the energy dissipates along the gradient flow. Accordingly, this framework allows to explain the GNN dynamics as flowing the node features in the direction of steepest descent of E_θ. Indeed, we find that parametrizing an energy leads to equations governed by attractive and repulsive forces that can be controlled via the spectrum of symmetric 'channel-mixing' matrices. This shows that by learning to distribute more mass over the negative (positive) eigenvalues of the channel-mixing, gradient flow models can generate dynamics dominated by the higher (respectively, lower) graph frequencies and hence tackle different homophily scenarios. The gradient flow framework also leads to sharing of the weights across layers (since we parametrize the energy rather than the evolution equations, as usually done in GNNs), allowing us to reduce the number of parameters without compromising performance (see Table 1).

Analysis on graphs: preliminaries. Given a connected graph G with self-loops, its adjacency matrix A is defined as a_ij = 1 if (i, j) ∈ E and zero otherwise. We let D = diag(d_i) be the degree matrix and write Ā := D^{−1/2} A D^{−1/2}.
Let F ∈ ℝ^{n×d} be the matrix representation of a signal. Its graph gradient is (∇F)_ij := f_j/√d_j − f_i/√d_i. We define the Laplacian as Δ := −(1/2) div ∇ (the divergence div is the adjoint of ∇), represented by Δ = I − Ā ⪰ 0. We refer to the eigenvalues of Δ as frequencies: the lowest frequency is always 0 while the highest frequency is ρ_Δ ≤ 2 [14]. As for the continuum case, the gradient allows to define a (graph) Dirichlet energy as [49]

E^Dir(F) := (1/4) Σ_i Σ_{j:(i,j)∈E} ||(∇F)_ij||² ≡ (1/4) Σ_{(i,j)∈E} ||f_i/√d_i − f_j/√d_j||² = (1/2) trace(F^⊤ Δ F),   (2)

where the extra factor 1/2 is for convenience. As for manifolds, E^Dir measures smoothness. If we stack the columns of F into vec(F) ∈ ℝ^{nd}, the gradient flow of E^Dir yields the heat equation on each channel:

vec(Ḟ(t)) = −∇_{vec(F)} E^Dir(vec(F(t))) = −(I_d ⊗ Δ) vec(F(t))  ⟺  ḟ^r(t) = −Δ f^r(t),   (3)

for 1 ≤ r ≤ d. Similarly to [8], we rely on E^Dir to assess whether a given dynamics t ↦ F(t) is a smoothing process. A different choice of Laplacian L = D − A with non-normalized adjacency induces the analogous Dirichlet energy E^Dir_L(F) = (1/2) trace(F^⊤ L F). Throughout this paper, we rely on the following definitions (see Appendix A.3 for further equivalent formulations and justifications):

Definition 2.1. Ḟ(t) = GNN_θ(F(t), t) initialized at F(0) is smoothing if E^Dir(F(t)) ≤ C + φ(t), with C a constant only depending on E^Dir(F(0)) and φ̇(t) ≤ 0. Over-smoothing occurs if either E^Dir(F(t)) → 0 or E^Dir_L(F(t)) → 0 for t → ∞.

Our notion of 'over-smoothing' is a relaxed version of the definition in [34] – although in the linear case one always finds an exponential decay of E^Dir. We note that E^Dir(F(t)) → 0 iff Δf^r(t) → 0 for each column f^r. As in [30], this corresponds to a loss of separation power along the solution where nodes with equal degree become indistinguishable since we converge to ker(Δ) (if we replaced Δ with L then we would not even be able to separate nodes with different degrees in the limit).

To motivate the next definition, consider Ḟ(t) = ĀF(t). Despite ||F(t)|| being unbounded for a.e. F(0), the low-frequency components are growing the fastest and indeed F(t)/||F(t)|| → F_∞ s.t. Δf^r_∞ = 0 for 1 ≤ r ≤ d. We formalize this scenario – including the opposite case of high-frequency components being dominant – by studying E^Dir(F(t)/||F(t)||), i.e. the Rayleigh quotient of I_d ⊗ Δ.

Definition 2.2. Ḟ(t) = GNN_θ(F(t), t) initialized at F(0) is Low/High-Frequency-Dominant (L/HFD) if E^Dir(F(t)/||F(t)||) → 0 (respectively, E^Dir(F(t)/||F(t)||) → ρ_Δ/2) for t → ∞.

We report a consequence of Definition 2.2 and refer to Appendix A.3 for additional details and motivations for the characterizations of LFD and HFD.

Lemma 2.3. GNN_θ is LFD (HFD) iff for each t_j → ∞ there exist t_{j_k} → ∞ and F_∞ s.t. F(t_{j_k})/||F(t_{j_k})|| → F_∞ and Δf^r_∞ = 0 (Δf^r_∞ = ρ_Δ f^r_∞, respectively).

If a graph is homophilic, adjacent nodes are likely to share the same label and we expect a smoothing or LFD dynamics enhancing the low-frequency components to be successful at node classification tasks [43, 28]. In the opposite case of heterophily, the high-frequency components might contain more relevant information for separating classes [4, 5] – the prototypical example being the eigenvector of Δ associated with the largest frequency ρ_Δ separating a regular bipartite graph. In other words, the class of heterophilic graphs contains instances where signals should be sharpened by increasing E^Dir rather than smoothed out. Accordingly, an ideal framework for learning on graphs must accommodate both of these opposite scenarios by being able to induce either an LFD or an HFD dynamics.
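These graph quantities are easy to compute explicitly. The following minimal PyTorch sketch (the helper names, the toy 4-cycle graph and the feature sizes are ours and purely illustrative, not the paper's code) builds the normalized adjacency Ā = D^{−1/2}AD^{−1/2} with self-loops, the Laplacian Δ = I − Ā, the Dirichlet energy of eq. (2), and the normalized quantity E^Dir(F/||F||) used in Definition 2.2.

```python
import torch

def normalized_adjacency(edge_index, n):
    # dense symmetrically-normalized adjacency with self-loops: A_bar = D^{-1/2} A D^{-1/2}
    A = torch.zeros(n, n)
    A[edge_index[0], edge_index[1]] = 1.0
    A = torch.clamp(A + A.T, max=1.0)   # make the graph undirected
    A.fill_diagonal_(1.0)               # add self-loops
    d_inv_sqrt = A.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def dirichlet_energy(F, Delta):
    # E^Dir(F) = 1/2 trace(F^T Delta F), eq. (2)
    return 0.5 * torch.trace(F.T @ Delta @ F)

# toy graph: a 4-cycle (hypothetical example)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
n, d = 4, 3
A_bar = normalized_adjacency(edge_index, n)
Delta = torch.eye(n) - A_bar            # graph Laplacian; eigenvalues lie in [0, rho_Delta], rho_Delta <= 2

F = torch.randn(n, d)
print(dirichlet_energy(F, Delta))             # smoothness of F
print(dirichlet_energy(F / F.norm(), Delta))  # Rayleigh quotient used for L/HFD (Definition 2.2)
```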
Parametric Dirichlet energy: channel-mixing as metric in feature space. In eq. (1) a constant nontrivial metric h in ℝ^d leads to the mixing of the feature channels. We adapt this idea by considering a symmetric positive semi-definite H = W^⊤W with W ∈ ℝ^{d×d} and using it to generalize E^Dir as

E^Dir_W(F) := (1/4) Σ_{q,r=1}^d Σ_i Σ_{j:(i,j)∈E} h_qr (∇f^q)_ij (∇f^r)_ij = (1/4) Σ_{(i,j)∈E} ||W(∇F)_ij||².   (4)

We note the analogy with eq. (1), where the sum over the nodes replaces the integration over the domain and the j-th derivative at some point i is replaced by the gradient along the edge (i, j) ∈ E. We generally treat W as learnable weights and study the gradient flow of E^Dir_W:

Ḟ(t) = −∇_F E^Dir_W(F(t)) = −ΔF(t)W^⊤W.   (5)

We see that eq. (5) generalizes eq. (3). Below 'smoothing' is intended as in Definition 2.1.

Proposition 2.4. Let P^ker_W be the projection onto ker(W^⊤W). Equation (5) is smoothing since

E^Dir(F(t)) ≤ e^{−2t gap(W^⊤W) gap(Δ)} ||F(0)||² + E^Dir((P^ker_W ⊗ I_n) vec(F(0))),  t ≥ 0.

In fact F(t) → F_∞ s.t. there exists c_∞ ∈ ℝ^d such that for each i ∈ V we have (f_∞)_i = √d_i c_∞ + P^ker_W f_i(0).

Proposition 2.4 implies that no weight matrix W in eq. (5) can separate the limit embeddings F(∞) of nodes with same degree and input features. If W has a trivial kernel, then nodes with same degrees converge to the same representation and over-smoothing occurs as per Definition 2.1. Differently from [29, 30, 8], over-smoothing occurs independently of the spectral radius of the 'channel-mixing' if its eigenvalues are positive – even for equations which lead to residual GNNs when discretized [12]. According to Proposition 2.4, we do not expect eq. (5) to succeed on heterophilic graphs where smoothing processes are generally harmful – this is confirmed in Figure 2 (see prod-curve). To remedy this problem, we generalize eq. (5) to a gradient flow that can be HFD as per Definition 2.2.

3 A general parametric energy for pairwise interactions

We first rewrite the energy E^Dir_W in eq. (4) as

E^Dir_W(F) = (1/2) Σ_i ⟨f_i, W^⊤W f_i⟩ − (1/2) Σ_{i,j} ā_ij ⟨f_i, W^⊤W f_j⟩.   (6)

We then define a new, more general energy by replacing the occurrences of W^⊤W with new symmetric matrices Ω, W ∈ ℝ^{d×d} since we also want to generate repulsive forces:

E^tot(F) := (1/2) Σ_i ⟨f_i, Ω f_i⟩ − (1/2) Σ_{i,j} ā_ij ⟨f_i, W f_j⟩ ≡ E^ext_Ω(F) + E^pair_W(F),   (7)

with associated gradient flow of the form (see Appendix B)

Ḟ(t) = −∇_F E^tot(F(t)) = −F(t)Ω + ĀF(t)W.   (8)

Note that eq. (8) is the gradient flow of some energy F ↦ E^tot(F) iff both Ω and W are symmetric.

A multi-particle system point of view: attraction vs repulsion. Consider the d-dimensional node features as particles in ℝ^d with energy E^tot. While the term E^ext_Ω is independent of the graph topology and represents an external field in the feature space, the second term E^pair_W constitutes a potential energy, with W a bilinear form determining the pairwise interactions of adjacent node representations. Given a symmetric W, we write W = Θ_+^⊤Θ_+ − Θ_-^⊤Θ_-, by decomposing the spectrum of W into positive and negative values. We can rewrite E^tot = E^ext_{Ω−W} + E^Dir_{Θ_+} − E^Dir_{Θ_-}, i.e.

E^tot(F) = (1/2) Σ_i ⟨f_i, (Ω − W) f_i⟩ + (1/4) Σ_{i,j} ||Θ_+(∇F)_ij||² − (1/4) Σ_{i,j} ||Θ_-(∇F)_ij||².   (9)

The gradient flow of E^tot minimizes E^Dir_{Θ_+} and maximizes E^Dir_{Θ_-}.
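As a small sanity check on eqs. (7)–(8), the sketch below (a hypothetical toy graph and random symmetric Ω, W; not the paper's released code) evaluates E^tot via traces and uses automatic differentiation to confirm that the steepest-descent direction −∇_F E^tot coincides with −FΩ + ĀFW.

```python
import torch

torch.manual_seed(0)
n, d = 6, 4
# random symmetric normalized adjacency with self-loops (hypothetical graph)
A = (torch.rand(n, n) < 0.4).float()
A = torch.clamp(A + A.T, max=1.0)
A.fill_diagonal_(1.0)
deg = A.sum(1)
A_bar = A / torch.sqrt(deg[:, None] * deg[None, :])

def sym(M):  # symmetrize: the requirement for the update to be a gradient flow
    return 0.5 * (M + M.T)

Omega, W = sym(torch.randn(d, d)), sym(torch.randn(d, d))
F = torch.randn(n, d, requires_grad=True)

# E_tot(F) = 1/2 sum_i <f_i, Omega f_i> - 1/2 sum_ij a_ij <f_i, W f_j>   (eq. 7)
E_tot = 0.5 * torch.trace(F @ Omega @ F.T) - 0.5 * torch.trace(A_bar @ F @ W @ F.T)
E_tot.backward()

# gradient-flow direction of eq. (8): -grad_F E_tot = -F Omega + A_bar F W
analytic = -F.detach() @ Omega + A_bar @ F.detach() @ W
print(torch.allclose(-F.grad, analytic, atol=1e-5))   # expected: True
```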
The matrix W encodes repulsive pairwise interactions via its negative-definite component Θ_-^⊤Θ_-, which leads to terms ||Θ_-(∇F)_ij|| increasing along the solution. The latter affords a 'sharpening' effect desirable on heterophilic graphs where we need to disentangle adjacent node representations and hence 'magnify' the edge-gradient.

Spectral analysis of the channel-mixing. We will now show that eq. (8) can lead to an HFD dynamics. To this end, we assume that Ω = 0 so that eq. (8) becomes Ḟ(t) = ĀF(t)W. According to eq. (9) the negative eigenvalues of W lead to repulsion. We show that the latter can induce HFD dynamics as per Definition 2.2. We let P^{ρ_-}_W be the orthogonal projection onto the eigenspace of W ⊗ Ā associated with the eigenvalue ρ_- := |λ^W_-|(ρ_Δ − 1). We define ε_HFD explicitly in eq. (24).

Proposition 3.1. If ρ_- > λ^W_+, then Ḟ(t) = ĀF(t)W is HFD for a.e. F(0): there exists ε_HFD s.t.

E^Dir(F(t)) = e^{2tρ_-} ( (ρ_Δ/2) ||P^{ρ_-}_W F(0)||² + O(e^{−2tε_HFD}) ),  t ≥ 0,

and F(t)/||F(t)|| converges to F_∞ ∈ ℝ^{n×d} such that Δf^r_∞ = ρ_Δ f^r_∞, for 1 ≤ r ≤ d.

Proposition 3.1 shows that if enough mass of the spectrum of the 'channel-mixing' is distributed over the negative eigenvalues, then the evolution is dominated by the graph high frequencies. This analysis is made possible in our gradient flow framework where W must be symmetric. The HFD dynamics induced by negative eigenvalues of W is confirmed in Figure 2 (neg-prod-curve in the bottom chart).

A more general energy. Equations with a source term may have better expressive power [44, 11, 39]. In our framework this means adding an extra energy term of the form E^source_{W̃}(F) := β⟨F, F(0)W̃⟩ to eq. (7), with β and W̃ learnable. This leads to the following gradient flow:

Ḟ(t) = −F(t)Ω + ĀF(t)W − βF(0)W̃.   (10)

We also observe that one could replace the fixed matrix Ā with a more general symmetric graph vector field A satisfying A_ij = 0 if (i, j) ∉ E, although in this work we focus on the case A = Ā. We also note that when Ω = W, then eq. (8) becomes Ḟ(t) = −ΔF(t)W. We perform a spectral analysis of this case in Appendix B.2.

Non-linear activations. In Appendix B.3 we discuss non-linear gradient flow equations. Here we study what happens if the gradient flow in eq. (10) is activated pointwise by σ : ℝ → ℝ. We show that although the equation is no longer a gradient flow, the learnable multi-particle energy E^tot is still decreasing along the solution, meaning that the interpretation of the channel-mixing W inducing attraction and repulsion via its positive and negative eigenvalues respectively is preserved.

Proposition 3.2. Consider a non-linear map σ : ℝ → ℝ such that the function x ↦ xσ(x) ≥ 0. If t ↦ F(t) solves the equation

Ḟ(t) = σ(−F(t)Ω + ĀF(t)W − βF(0)W̃),

where σ acts elementwise, then dE^tot(F(t))/dt ≤ 0.

A proof of this result and more details and discussion are reported in Appendix E. We emphasize here that, differently from previous results about the behaviour of ReLU w.r.t. E^Dir [30, 8], we deal with a much more general energy that can also induce repulsion and a more general family of activation functions (including ReLU, tanh, arctan and many others).
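Proposition 3.1 can be illustrated numerically. In the sketch below (toy graph, step size and variable names are ours, chosen for illustration), the source-free flow Ḟ = ĀFW is integrated with a small explicit-Euler step for a channel-mixing with purely non-negative versus purely non-positive spectrum, and the normalized Dirichlet energy of Definition 2.2 is tracked: the attractive case should settle near 0 (LFD) and the repulsive case near ρ_Δ/2 (HFD).

```python
import torch

torch.manual_seed(1)
n, d, tau, steps = 8, 4, 0.02, 500
A = (torch.rand(n, n) < 0.5).float()
A = torch.clamp(A + A.T, max=1.0)
A.fill_diagonal_(1.0)
deg = A.sum(1)
A_bar = A / torch.sqrt(deg[:, None] * deg[None, :])
Delta = torch.eye(n) - A_bar

def normalized_dirichlet(F):
    # E^Dir(F/||F||): near 0 for LFD, near rho_Delta/2 for HFD (Definition 2.2)
    Fn = F / F.norm()
    return 0.5 * torch.trace(Fn.T @ Delta @ Fn)

def evolve(W):
    # explicit Euler for F'(t) = A_bar F(t) W; renormalising each step only rescales,
    # so it does not change the direction F(t)/||F(t)|| of the linear dynamics
    F = torch.randn(n, d)
    for _ in range(steps):
        F = F + tau * A_bar @ F @ W
        F = F / F.norm()
    return normalized_dirichlet(F)

M = torch.randn(d, d)
W_attract = M.T @ M    # non-negative spectrum: attraction only, should end up LFD
W_repulse = -M.T @ M   # non-positive spectrum: repulsion only, should end up HFD (Prop. 3.1)
print(evolve(W_attract), evolve(W_repulse))   # expected: close to 0, close to rho_Delta/2
```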
4 Comparison with GNNs

In this section, we study standard GNN models from the perspective of our gradient flow framework.

4.1 Continuous case

Continuous GNN models replace layers with continuous time. In contrast with Proposition 3.1, we show that three main linearized continuous GNN models are either smoothing or LFD as per Definition 2.2. The linearized PDE-GCN_D model [17] corresponds to choosing β = 0 and Ω = W = K(t)^⊤K(t) in eq. (10), for some time-dependent family t ↦ K(t) ∈ ℝ^{d×d}:

Ḟ_{PDE-GCN_D}(t) = −ΔF(t)K(t)^⊤K(t).

The CGNN model [44] can be derived from eq. (10) by setting Ω = I − Ω̃, W = −W̃ = I, β = 1:

Ḟ_CGNN(t) = −ΔF(t) + F(t)Ω̃ + F(0).

Finally, in linearized GRAND [10] a row-stochastic matrix A(F(0)) is learned from the encoding via an attention mechanism and we have

Ḟ_GRAND(t) = −Δ_RW F(t) = −(I − A(F(0)))F(t).

We note that if A is not symmetric, then GRAND is not a gradient flow.

Proposition 4.1. PDE-GCN_D, CGNN and GRAND satisfy the following:

(i) PDE-GCN_D is a smoothing model: Ė^Dir(F_{PDE-GCN_D}(t)) ≤ 0.
(ii) For a.e. F(0) it holds: CGNN is never HFD and, if we remove the source term, then E^Dir(F_CGNN(t)/||F_CGNN(t)||) ≤ e^{−gap(Δ)t}.
(iii) If G is connected, F_GRAND(t) → μ as t → ∞, with μ^r = mean(f^r(0)), 1 ≤ r ≤ d.

By (ii) the source-free CGNN evolution is LFD independent of Ω̃. Moreover, by (iii), over-smoothing occurs for GRAND as per Definition 2.1. On the other hand, Proposition 3.1 shows that the negative eigenvalues of W can make the source-free gradient flow in eq. (8) HFD. Experiments in Section 5 confirm that the gradient flow model outperforms CGNN and GRAND on heterophilic graphs.

4.2 Discrete case

We now describe a discrete version of our gradient flow model and compare it to 'discrete' GNNs where discrete time steps correspond to different layers. In the spirit of [12], we use an explicit Euler scheme with step size τ ≤ 1 to solve eq. (10) and set W̃ = −I. In the gradient flow framework we parametrize the energy rather than the actual equations, which leads to symmetric channel-mixing matrices Ω, W ∈ ℝ^{d×d} that are shared across the layers. Since the matrices are square, an encoding block EN : ℝ^{n×p} → ℝ^{n×d} is used to process input features F_0 ∈ ℝ^{n×p} and generally reduce the hidden dimension from p to d. Moreover, the iterations inherently lead to a residual architecture because of the explicit Euler discretization:

F(t + τ) = F(t) + τ(−F(t)Ω + ĀF(t)W + βF(0)),  F(0) = EN(F_0),   (11)

with prediction y = DE(F(T)) produced by a decoder DE : ℝ^{n×d} → ℝ^{n×k}, where k is the number of label classes and T is an integration time of the form T = mτ, so that m ∈ ℕ represents the number of layers. Although eq. (11) is linear, we can include non-linear activations in EN, DE, making the entire model generally non-linear. We emphasize two important points:

• Since the framework is residual, even if the message passing is linear, this is not equivalent to collapsing the dynamics into a single layer with diffusion matrix Ā^m, with m the number of layers; see eq. (27) in the appendix where we derive the expansion of the solution.
• We could also activate the equations pointwise and maintain the physics interpretation thanks to Proposition 3.2 to gain greater expressive power. In the following, though, we mainly stick to the linear discrete gradient flow unless otherwise stated.
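Below is a minimal PyTorch sketch of the discrete model in eq. (11). The class name, dimensions and initialization are ours and purely illustrative, while the structural choices (single linear encoder/decoder, diagonal Ω, a single symmetric W shared across all residual Euler steps) follow the description above and in Section 5 ('Methodology').

```python
import torch
import torch.nn as nn

class GRAFFSketch(nn.Module):
    """Discrete gradient flow of eq. (11): F <- F + tau * (-F Omega + A_bar F W + beta * F0)."""
    def __init__(self, in_dim, hid_dim, out_dim, layers=8, tau=0.5):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)    # EN
        self.dec = nn.Linear(hid_dim, out_dim)   # DE
        self.omega = nn.Parameter(torch.zeros(hid_dim))   # diagonal Omega
        self.w_raw = nn.Parameter(torch.randn(hid_dim, hid_dim) / hid_dim ** 0.5)
        self.beta = nn.Parameter(torch.tensor(1.0))
        self.layers, self.tau = layers, tau

    def forward(self, X, A_bar):
        W = 0.5 * (self.w_raw + self.w_raw.T)    # shared symmetric channel-mixing
        F0 = self.enc(X)
        F = F0
        for _ in range(self.layers):             # residual explicit-Euler steps, weights shared across layers
            F = F + self.tau * (-F * self.omega + A_bar @ F @ W + self.beta * F0)
        return self.dec(F)

# hypothetical usage with a dense normalized adjacency A_bar and features X:
# model = GRAFFSketch(in_dim=1433, hid_dim=64, out_dim=7)
# logits = model(X, A_bar)
```

Note that only the encoder, decoder, the diagonal of Ω, one symmetric W and β are trained, which is what keeps the parameter count low compared to per-layer weight matrices.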
Are discrete GNNs gradient flows? Given a (learned) symmetric graph vector field A ∈ ℝ^{n×n} satisfying A_ij = 0 if (i, j) ∉ E, consider a family of linear GNNs with shared weights of the form

F(t + 1) = F(t)Ω + AF(t)W + βF(0)W̃,  0 ≤ t ≤ T.   (12)

Symmetry is the key requirement to interpret GNNs of the form in eq. (12) in a gradient flow framework.

Lemma 4.2. Equation (12) is the unit step size discrete gradient flow of E^ext_{I−Ω} + E^pair_{A,W} − E^source_{W̃}, with E^pair_{A,W} defined by replacing Ā with A in eq. (7), iff Ω and W are symmetric.

Lemma 4.2 provides a recipe for making standard architectures into a gradient flow, with symmetry being the key requirement. When eq. (12) is a gradient flow, the underlying GNN dynamics is equivalent to minimizing a multi-particle energy by learning attractive and repulsive directions in feature space as discussed in Section 3. In Appendix C.2, we show how Lemma 4.2 covers linear versions of GCN [27, 43], GAT [42], GraphSAGE [23] and GCNII [11], to name a few.

Over-smoothing analysis in the discrete setting. By Proposition 3.1 we know that the continuous version of eq. (11) can be HFD thanks to the negative eigenvalues of W. The next result represents a discrete counterpart of Proposition 3.1 and shows that residual, symmetrized graph convolutional models can be HFD. Below P^{ρ_-}_W is the projection onto the eigenspace associated with the eigenvalue ρ_- := |λ^W_-|(ρ_Δ − 1) and we report the explicit value of δ_HFD in eq. (28) in Appendix C.3. We let:

λ^W_+ (ρ_Δ − 1)^{−1} < |λ^W_-| < 2 (τ(2 − ρ_Δ))^{−1}.   (13)

Theorem 4.3. Given F(t + τ) = F(t) + τĀF(t)W, with W symmetric, if eq. (13) holds then

E^Dir(F(mτ)) = (1 + τρ_-)^{2m} ( (ρ_Δ/2) ||P^{ρ_-}_W F(0)||² + O( ((1 + τδ_HFD)/(1 + τρ_-))^{2m} ) ),  δ_HFD < ρ_-,

hence the dynamics is HFD for a.e. F(0) and in fact F(mτ)/||F(mτ)|| → F_∞ s.t. Δf^r_∞ = ρ_Δ f^r_∞. Conversely, if G is not bipartite, then for a.e. F(0) the system F(t + τ) = τĀF(t)W, with W symmetric, is LFD independently of the spectrum of W.

Theorem 4.3 shows that linear discrete gradient flows can be HFD due to the negative eigenvalues of W. This differs from statements that standard GCNs act as low-pass filters and thus over-smooth in the limit. Indeed, in these cases the spectrum of W is generally ignored [43, 11] or required to be sufficiently small in terms of singular value decomposition [29, 30, 8] when no residual connection is present. On the other hand, Theorem 4.3 emphasizes that the spectrum of W plays a key role in enhancing the high frequencies when enough mass is distributed over the negative eigenvalues, provided that a residual connection exists – this is confirmed by the neg-prod-curve in Figure 2.

The residual connection from a spectral perspective. Given a sufficiently small step size so that the right-hand side of inequality (13) is satisfied, F(t + τ) = F(t) + τĀF(t)W is HFD for a.e. F(0) if |λ^W_-|(ρ_Δ − 1) > λ^W_+, i.e. 'there is more mass' in the negative spectrum of W than in the positive one. This means that, differently from [29, 30, 8], there is no requirement on the minimal magnitude of the spectral radius of W coming from the graph topology as long as λ^W_+ is small enough. Conversely, without a residual term, the dynamics is LFD for a.e. F(0) independently of the sign and magnitude of the eigenvalues of W. This is also confirmed by the GCN-curve in Figure 2.

Over-smoothing vs LFD. We highlight how in general a linear GCN equation such as F(t + τ) = τĀF(t)W may avoid over-smoothing in the sense of Definition 2.1, meaning that E^Dir(F(t)) → ∞, as soon as there exist frequencies λ_i ∈ (0, 1) and the spectral radius of W is large enough. However, this will not lead to over-separation since the dominating term is the lowest-frequency one: in other words, once we re-set the scale right as per the normalization in Theorem 4.3, we encounter loss of separability even with a large (and possibly negative) spectrum of W.
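The dichotomy in Theorem 4.3 can also be checked numerically. The sketch below (toy graph and sizes are ours, for illustration only) iterates the residual map F ← F + τĀFW and the non-residual map F ← τĀFW with the same negative-dominated symmetric W and compares the normalized Dirichlet energy of the two trajectories.

```python
import torch

torch.manual_seed(2)
n, d, tau, steps = 8, 4, 0.05, 300
A = (torch.rand(n, n) < 0.5).float()
A = torch.clamp(A + A.T, max=1.0)
A.fill_diagonal_(1.0)
deg = A.sum(1)
A_bar = A / torch.sqrt(deg[:, None] * deg[None, :])
Delta = torch.eye(n) - A_bar

M = torch.randn(d, d)
W = -M.T @ M                 # channel-mixing with all mass on the negative eigenvalues

def normalized_dirichlet(F):
    Fn = F / F.norm()
    return 0.5 * torch.trace(Fn.T @ Delta @ Fn)

F_res = torch.randn(n, d)
F_plain = F_res.clone()
for _ in range(steps):
    F_res = F_res + tau * A_bar @ F_res @ W   # residual map of Theorem 4.3: can be HFD
    F_plain = tau * A_bar @ F_plain @ W       # no residual: LFD regardless of the spectrum of W
    F_res, F_plain = F_res / F_res.norm(), F_plain / F_plain.norm()

# per Theorem 4.3, the first value should drift towards rho_Delta/2, the second towards 0
print(normalized_dirichlet(F_res), normalized_dirichlet(F_plain))
```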
5 Experiments

In this section we evaluate the gradient flow framework (GRAFF). We corroborate the spectral analysis using synthetic data with controllable homophily. We confirm that having negative (positive) eigenvalues of the channel-mixing W is essential in heterophilic (homophilic) scenarios where the gradient flow should align with HFD (LFD) respectively. We show that the gradient flow in eq. (11) – a linear, residual, symmetric graph convolutional model – achieves competitive performance on heterophilic datasets.

Methodology. We crystallize GRAFF in the model presented in eq. (11) with EN, DE implemented as single linear layers or MLPs, and we set Ω to be diagonal. For the real-world experiments we consider diagonally-dominant (DD), diagonal (D) and time-dependent choices for the structure of W that offer explicit control over its spectrum. In the (DD)-case, we consider a symmetric W_0 ∈ ℝ^{d×d} with zero diagonal and w ∈ ℝ^d defined by w_α = q_α Σ_β |W_0,αβ| + r_α, and set W = diag(w) + W_0. Due to the Gershgorin Theorem the eigenvalues of W belong to [w_α − Σ_β |W_0,αβ|, w_α + Σ_β |W_0,αβ|], so the model 'can' easily re-distribute mass in the spectrum of W via q_α, r_α. This generalizes the decomposition of W in [11], providing a justification in terms of its spectrum, and turns out to be more efficient w.r.t. the hidden dimension d as shown in Figure 4 in the Appendix. For (D) we take W to be diagonal, with entries sampled from U[−1, 1] and fixed – i.e., we do not train over W – and only learn EN, DE. We also include a time-dependent model where W_t varies across layers. To investigate the role of the spectrum of W on synthetic graphs, we construct three additional variants: W = W_0 + W_0^⊤ and W = ±W_0^⊤W_0, named sum, prod and neg-prod respectively, where the prod (neg-prod) variants have only non-negative (non-positive) eigenvalues.

Complexity and number of parameters. If we treat the number of layers as a constant, the discrete gradient flow scales as O(|V|pd + |E|d²), where p and d are the input feature and hidden dimension respectively, with p ≫ d usually. Note that GCN has complexity O(|E|pd) and in fact our model is faster than GCN, as confirmed in Figure 5 in Appendix D. Since EN, DE are single linear layers (MLPs), we can bound the number of parameters by pd + d² + 3d + dk, with k the number of label classes, in the (DD)-variant, while in the (D)-variant we have pd + 3d + dk. Further ablation studies appear in Figure 4 in the Appendix showing that (DD) outperforms sum and GCN – especially in the lower hidden dimension regime – on real-world benchmarks with varying homophily.
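For concreteness, here is a short sketch of the (DD) construction described in 'Methodology' (the function and variable names are ours; q and r are learnable parameters in the actual model and are simply sampled here). By Gershgorin's theorem, q_α and r_α control how much spectral mass can sit on either side of zero.

```python
import torch

def build_dd_channel_mixing(d):
    # W = diag(w) + W0 with W0 symmetric, zero diagonal, and w_a = q_a * sum_b |W0_ab| + r_a
    W0 = torch.randn(d, d)
    W0 = 0.5 * (W0 + W0.T)
    W0.fill_diagonal_(0.0)
    q = torch.randn(d)    # learnable in the model; sampled here only for illustration
    r = torch.randn(d)
    w = q * W0.abs().sum(dim=1) + r
    return torch.diag(w) + W0

W = build_dd_channel_mixing(8)
# by Gershgorin, eigenvalues lie in [w_a - sum_b |W0_ab|, w_a + sum_b |W0_ab|]
print(torch.linalg.eigvalsh(W))
```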
Synthetic experiments and ablation studies. To investigate our claims in a controlled environment we use the synthetic Cora dataset of [51, Appendix G]. Graphs are generated for target levels of homophily via preferential attachment – see Appendix D.3 for details. Figure 2 confirms the spectral analysis and offers a better understanding in terms of performance and smoothness of the predictions. Each curve – except GCN – represents one version of W as in 'Methodology' and we implement eq. (11) with β = 0, Ω = 0. Figure 2 (top) reports the test accuracy vs true label homophily. Neg-prod is better than prod on low homophily and vice versa on high homophily. This confirms Proposition 3.1, where we have shown that the gradient flow can lead to an HFD dynamics – generally desirable with low homophily – through the negative eigenvalues of W. Conversely, the prod configuration (where we have an attraction-only dynamics) struggles in low-homophily scenarios even though a residual connection is present. Both prod and neg-prod are 'extreme' choices and serve the purpose of highlighting that turning off one side of the spectrum can be more or less damaging depending on the underlying homophily. In general, though, 'neutral' variants like sum and (DD) are indeed more flexible and better performing. In fact, (DD) outperforms GCN especially in low-homophily scenarios, confirming Theorem 4.3, where we have shown that without a residual connection convolutional models are LFD – and hence more sensitive to the underlying homophily – irrespective of the spectrum of W. This is further confirmed in Figure 3.

In Figure 2 (bottom) we compute the homophily of the prediction (cross) for a given method and compare it with the homophily (circle) of the prediction read from the encoding (i.e. graph-agnostic). The homophily here is a proxy to assess whether the evolution is smoothing, the goal being to explain the smoothness of the prediction via the spectrum of W as per our theoretical analysis. For neg-prod the homophily after the evolution is lower than that of the encoding, supporting the analysis that negative eigenvalues of W enhance the high frequencies. The opposite behaviour occurs in the case of prod and explains why, in the low-homophily regime, prod under-performs: its prediction is smoother than the true homophily warrants. The (DD) and sum variants adapt better to the true homophily. We note how the encoding compensates when the dynamics can only either attract or repulse (i.e. the spectrum of W has a sign) by decreasing or increasing the initial homophily respectively.

Real-world experiments. We test GRAFF against a range of datasets with varying homophily [37, 33, 31] (see Appendix D.4 for additional details). We use results provided in [45, Table 1], which include standard baselines such as GCN [27], GraphSAGE [23], GAT [42], PairNorm [48] and recent models tailored towards the heterophilic setting (GGCN [45], Geom-GCN [31], H2GCN [51] and GPRGNN [13]). For Sheaf [5], a recent top performer on heterophilic datasets, we took the best performing variant (out of six provided) for each dataset. We also include the continuous baselines CGNN [44] and GRAND [10] to provide empirical evidence for Proposition 4.1. Splits taken from [31] are used in all the comparisons. The GRAFF model discussed in 'Methodology' is a very simple architecture with shared parameters across layers and run-time smaller than GCN and than more recent models like GGCN designed for heterophilic graphs (see Figure 5 in the Appendix). Nevertheless, it achieves competitive results on all datasets, performing on par with or better than more complex recent models. Moreover, comparison with the 'time-dependent' (DD) variant confirms that by sharing weights across layers we do not lose performance. We note that on heterophilic graphs a short integration time is usually needed due to the topology being harmful and the negative eigenvalues of W leading to exponential behaviour (see Appendix D).

6 Conclusions

In this work, we developed a framework for GNNs where the evolution can be interpreted as minimizing a multi-particle learnable energy.
This translates into studying the interaction between the spectrum of the graph and the spectrum of the 'channel-mixing', leading to a better understanding of when and why the induced dynamics is low (high) frequency dominated. From a theoretical perspective, we refined existing asymptotic analyses of GNNs to also account for the role of the spectrum of the channel-mixing. From a practical perspective, our framework allows for 'educated' choices resulting in a simple convolutional model that achieves competitive performance on homophilic and heterophilic benchmarks while being faster than GCN. Our results refute the folklore of graph convolutional models being too simple for heterophilic benchmarks.

Limitations and future works. We limited our attention to a constant bilinear form W, which might be excessively rigid. It is possible to derive non-constant alternatives that are aware of the features or the position in the graph. The main challenge amounts to matching the requirement for local 'heterogeneity' with efficiency: we reserve this question for future work. Our analysis is also a first step into studying the interaction of the graph and 'channel-mixing' spectra; we did not explore other dynamics that are neither LFD nor HFD as per our definitions. The energy formulation points to new models that are more 'physics'-inspired; this will be explored in future work.

Societal impact. Our work sheds light on the actual dynamics of GNNs and could hence improve their understanding, which is crucial for assessing their impact on large-scale applications. We also show that instances of our framework achieve competitive performance on heterophilic data despite being faster than GCN, providing evidence for efficient methods with reduced footprint.

References

[1] U. Alon and E. Yahav. On the bottleneck of graph neural networks and its practical implications. In International Conference on Learning Representations, 2021.
[2] M. Balcilar, G. Renton, P. Héroux, B. Gaüzère, S. Adam, and P. Honeine. Analyzing the expressive power of graph neural networks in a spectral perspective. In International Conference on Learning Representations, 2020.
[3] M. Biloš, J. Sommer, S. S. Rangapuram, T. Januschowski, and S. Günnemann. Neural flows: Efficient alternative to neural odes. In Advances in Neural Information Processing Systems, volume 34, 2021.
[4] D. Bo, X. Wang, C. Shi, and H. Shen. Beyond low-frequency information in graph convolutional networks. In AAAI. AAAI Press, 2021.
[5] C. Bodnar, F. Di Giovanni, B. P. Chamberlain, P. Liò, and M. M. Bronstein. Neural sheaf diffusion: A topological perspective on heterophily and oversmoothing in gnns. arXiv preprint arXiv:2202.04579, 2022.
[6] S. Brody, U. Alon, and E. Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.
[7] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. In 2nd International Conference on Learning Representations, ICLR 2014, 2014.
[8] C. Cai and Y. Wang. A note on over-smoothing for graph neural networks. arXiv preprint arXiv:2006.13318, 2020.
[9] B. Chamberlain, J. Rowbottom, D. Eynard, F. Di Giovanni, X. Dong, and M. Bronstein. Beltrami flow and neural diffusion on graphs. Advances in Neural Information Processing Systems, 34, 2021.
[10] B. Chamberlain, J. Rowbottom, M. I. Gorinova, M. Bronstein, S. Webb, and E. Rossi.
Grand:410 Graph neural diffusion. In International Conference on Machine Learning, pages 1407–1418.411 PMLR, 2021.412 [11] M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li. Simple and deep graph convolutional networks.413 In International Conference on Machine Learning, pages 1725–1735. PMLR, 2020.414 [12] R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential415 equations. Advances in neural information processing systems, 31, 2018.416 [13] E. Chien, J. Peng, P. Li, and O. Milenkovic. Adaptive universal generalized pagerank graph417 neural network. In 9th International Conference on Learning Representations, ICLR 2021,418 2021.419 [14] F. R. Chung and F. C. Graham. Spectral graph theory. Number 92. American Mathematical420 Soc., 1997.421 [15] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs422 with fast localized spectral filtering. Advances in neural information processing systems, 29,423 2016.424 [16] J. Eells and J. H. Sampson. Harmonic mappings of riemannian manifolds. American journal of425 mathematics, 86(1):109–160, 1964.426 [17] M. Eliasof, E. Haber, and E. Treister. Pde-gcn: Novel architectures for graph neural networks427 motivated by partial differential equations. Advances in Neural Information Processing Systems,428 34, 2021.429 [18] M. Geiger, L. Petrini, and M. Wyart. Landscape and training regimes in deep learning. Physics430 Reports, 924:1–18, 2021.431 [19] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing432 for quantum chemistry. In International Conference on Machine Learning, pages 1263–1272.433 PMLR, 2017.434 [20] C. Goller and A. Kuchler. Learning task-dependent distributed representations by backprop-435 agation through structure. In Proceedings of International Conference on Neural Networks436 (ICNN’96), volume 1, pages 347–352. IEEE, 1996.437 [21] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In438 Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 2,439 pages 729–734. IEEE, 2005.440 [22] E. Haber and L. Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34,441 2018.442 [23] W. Hamilton, Z. Ying, and J. Leskovec. Inductive representation learning on large graphs.443 Advances in neural information processing systems, 30, 2017.444 [24] D. K. Hammond, P. Vandergheynst, and R. Gribonval. The spectral graph wavelet transform:445 Fundamental theory and fast computation. In Vertex-Frequency Analysis of Graph Signals,446 pages 141–175. Springer, 2019.447 [25] M. He, Z. Wei, H. Xu, et al. Bernnet: Learning arbitrary graph spectral filters via bernstein448 approximation. Advances in Neural Information Processing Systems, 34, 2021.449 [26] R. Kimmel, N. Sochen, and R. Malladi. From high energy physics to low level vision. In450 International Conference on Scale-Space Theories in Computer Vision, pages 236–247. Springer,451 1997.452 [27] T. N. Kipf and M. Welling. Semi-Supervised Classification with Graph Convolutional Networks.453 In Proceedings of the 5th International Conference on Learning Representations, ICLR ’17,454 2017.455 [28] J. Klicpera, S. Weißenberger, and S. Günnemann. Diffusion improves graph learning. In456 Proceedings of the 33rd International Conference on Neural Information Processing Systems,457 2019.458 [29] H. Nt and T. Maehara. Revisiting graph neural networks: All we have is low-pass filters. 
arXiv459 preprint arXiv:1905.09550, 2019.460 [30] K. Oono and T. Suzuki. Graph neural networks exponentially lose expressive power for node461 classification. In International Conference on Learning Representations, 2020.462 [31] H. Pei, B. Wei, K. C. Chang, Y. Lei, and B. Yang. Geom-gcn: Geometric graph convolutional463 networks. In 8th International Conference on Learning Representations, ICLR 2020, 2020.464 [32] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. PAMI,465 12(7):629–639, 1990.466 [33] B. Rozemberczki, C. Allen, and R. Sarkar. Multi-scale attributed node embedding. Journal of467 Complex Networks, 9(2):cnab014, 2021.468 [34] T. K. Rusch, B. P. Chamberlain, J. Rowbottom, S. Mishra, and M. M. Bronstein. Graph-coupled469 oscillator networks. In International Conference on Machine Learning, 2022.470 [35] M. E. Sander, P. Ablin, M. Blondel, and G. Peyré. Sinkformers: Transformers with doubly471 stochastic attention. In International Conference on Artificial Intelligence and Statistics, pages472 3515–3530. PMLR, 2022.473 [36] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural474 network model. IEEE transactions on neural networks, 20(1):61–80, 2008.475 [37] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classifica-476 tion in network data. AI magazine, 29(3):93–93, 2008.477 [38] A. Sperduti. Encoding labeled graphs by labeling raam. Advances in Neural Information478 Processing Systems, 6, 1993.479 [39] M. Thorpe, T. M. Nguyen, H. Xia, T. Strohmer, A. Bertozzi, S. Osher, and B. Wang. Grand++:480 Graph neural diffusion with a source term. In International Conference on Learning Represen-481 tations, 2021.482 [40] J. Topping, F. Di Giovanni, B. P. Chamberlain, X. Dong, and M. M. Bronstein. Understanding483 over-squashing and bottlenecks on graphs via curvature. International Conference on Learning484 Representations, 2022.485 [41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and486 I. Polosukhin. Attention is all you need. Advances in neural information processing systems,487 30, 2017.488 [42] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention489 networks. In International Conference on Learning Representations, 2018.490 [43] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. Simplifying graph convolutional491 networks. In International conference on machine learning, pages 6861–6871. PMLR, 2019.492 [44] L.-P. Xhonneux, M. Qu, and J. Tang. Continuous graph neural networks. In International493 Conference on Machine Learning, pages 10432–10441. PMLR, 2020.494 [45] Y. Yan, M. Hashemi, K. Swersky, Y. Yang, and D. Koutra. Two sides of the same coin:495 Heterophily and oversmoothing in graph convolutional neural networks. arXiv preprint496 arXiv:2102.06462, 2021.497 [46] Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec. Gnnexplainer: Generating expla-498 nations for graph neural networks. Advances in neural information processing systems, 32,499 2019.500 [47] H. Yuan, H. Yu, S. Gui, and S. Ji. Explainability in graph neural networks: A taxonomic survey.501 arXiv preprint arXiv:2012.15445, 2020.502 [48] L. Zhao and L. Akoglu. Pairnorm: Tackling oversmoothing in gnns. arXiv preprint503 arXiv:1909.12223, 2019.504 [49] D. Zhou and B. Schölkopf. Regularization on discrete spaces. In Joint Pattern Recognition505 Symposium, pages 361–368. Springer, 2005.506 [50] K. Zhou, X. Huang, D. Zha, R. 
Chen, L. Li, S.-H. Choi, and X. Hu. Dirichlet energy constrained learning for deep graph neural networks. Advances in Neural Information Processing Systems, 34:21834–21846, 2021.
[51] J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33:7793–7804, 2020.
[52] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann. Pitfalls of graph neural network evaluation. In NIPS workshop, 2018.
[53] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.
[54] M. Fey and J. E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
[55] L. Biewald. Experiment tracking with weights and biases, 2020. Software available from wandb.com.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes], in Section 6.
(c) Did you discuss any potential negative societal impacts of your work? [Yes], in the Societal impact paragraph in Section 6.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes], in Appendix A, Appendix B and Appendix C.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Code and README in SM, dataloaders in code.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Splits and hyperparameters provided in code zip.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
[Yes] Standard deviations are stated in the results table.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes], in Appendix D.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] Datasets and standard libraries cited in Appendix D.
(b) Did you mention the license of the assets? [Yes] Industry-standard libraries and benchmark datasets were used in accordance with their licences.
(c) Did you include any new assets either in the supplemental material or as a URL? [Yes] Code provided in SM zip.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] No personal data is contained within the benchmarking datasets.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the main contribution of the paper, and how does it build upon previous works? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its theoretical characterization? 3. How does the reviewer assess the novelty and limitations of the work, especially regarding its empirical experiments and potential applications? 4. Are there any concerns or questions regarding the energy-based formulation and its implications for the loss landscape and optimization process? 5. What additional experiments or analyses would help further validate and improve the proposed method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this work, the authors present GRAFF, a gradient-flow-based graph neural network in which the evolution of the GNN is represented as minimizing a combination of attractive and repulsive interactions of a multi-particle system. A detailed theoretical characterization in terms of a parametric Dirichlet energy, a general parametric energy and spectral analysis is performed.
Strengths And Weaknesses
The work is well written and clearly presented. It builds on similar ideas as outlined in several previous works such as, for instance, [27]. While the presentation in the work is good, the idea in itself is fairly intuitive and simple and has been discussed in the contexts of neural networks and physics (related work, see: Landscape and training regimes in deep learning, M. Geiger, L. Petrini, M. Wyart, Physics Reports, Vol. 924, p. 1-18, 2021. DOI: 10.1016/j.physrep.2021.04.001). Further, the empirical experiments reveal that the results are comparable with existing approaches, but not necessarily better.
Questions
It has been shown by earlier works that an energy-based formulation results in a loss landscape with a large number of local minima, and hence gradient-based minimization yields a poor solution. In contrast, deep neural networks with a large number of parameters have flat minima and a loss landscape connected by level sets. Reading these together, it is unclear whether the energy-based formulation, with its simpler structure, loses the advantages of the landscape that a deep-learning architecture has. Could this also be the reason why the performance of GRAFF is not superior in comparison to other SOTA models? The authors should investigate this.
Limitations
The experiments have primarily focused on one aspect while studying several datasets with varying hetero/homophily. To evaluate the true performance of the approach, several other experiments on varying downstream tasks are required. In addition, a closer analysis of the loss landscape is required to understand the nature of the minima and saddle points.
Title Graph Neural Networks as Gradient Flows Abstract Dynamical systems minimizing an energy are ubiquitous in geometry and physics. 1 We propose a gradient flow framework for GNNs where the equations follow the 2 direction of steepest descent of a learnable energy. This approach allows to analyse 3 the GNN evolution from a multi-particle perspective as learning attractive and 4 repulsive forces in feature space via the positive and negative eigenvalues of a 5 symmetric ‘channel-mixing’ matrix. We perform spectral analysis of the solutions 6 and conclude that gradient flow graph convolutional models can induce a dynamics 7 dominated by the graph high frequencies, which is desirable for heterophilic 8 datasets. We also describe structural constraints on common GNN architectures 9 allowing to interpret them as gradient flows. We perform thorough ablation studies 10 corroborating our theoretical analysis and show competitive performance of simple 11 and lightweight models on real-world homophilic and heterophilic datasets. 12 N/A Dynamical systems minimizing an energy are ubiquitous in geometry and physics.1 We propose a gradient flow framework for GNNs where the equations follow the2 direction of steepest descent of a learnable energy. This approach allows to analyse3 the GNN evolution from a multi-particle perspective as learning attractive and4 repulsive forces in feature space via the positive and negative eigenvalues of a5 symmetric ‘channel-mixing’ matrix. We perform spectral analysis of the solutions6 and conclude that gradient flow graph convolutional models can induce a dynamics7 dominated by the graph high frequencies, which is desirable for heterophilic8 datasets. We also describe structural constraints on common GNN architectures9 allowing to interpret them as gradient flows. We perform thorough ablation studies10 corroborating our theoretical analysis and show competitive performance of simple11 and lightweight models on real-world homophilic and heterophilic datasets.12 1 Introduction and motivations13 Graph neural networks (GNNs) [38, 20, 21, 36, 7, 15, 27] and in particular their Message Passing14 formulation (MPNN) [19] have become the standard ML tool for dealing with different types of15 relations and interactions, ranging from social networks to particle physics and drug design. One16 of the often cited drawbacks of traditional GNN models is their poor ‘explainability’, making it17 hard to know why and how they make certain predictions [46, 47], and in which situations they18 may work and when they would fail. Limitations of GNNs that have attracted attention are over-19 smoothing [29, 30, 8], over-squashing and bottlenecks [1, 40], and performance on heterophilic data20 [31, 51, 13, 4, 45] – where adjacent nodes usually have different labels.21 Contributions. We propose a Gradient Flow Framework22 (GRAFF) where the GNN equations follow the direction of steep-23 est descent of a learnable energy. Thanks to this framework we can24 (i) interpret GNNs as a multi-particle dynamics where the learned25 parameters determine pairwise attractive and repulsive potentials26 in the feature space. This sheds light on how GNNs can adapt to27 heterophily and explains their performance and the smoothness of28 the prediction. (ii) GRAFF leads to residual convolutional models29 where the channel-mixing W is performed by a shared symmet-30 ric bilinear form inducing attraction and repulsion via its positive31 and negative eigenvalues, respectively. 
We theoretically investi-32 gate the interaction of the graph spectrum with the spectrum of the33 channel-mixing, proving that if there is more mass on the negative34 eigenvalues of W, then the dynamics is dominated by the graph-35 high frequencies, which could be desirable on heterophilic graphs.36 We also extend results of [29, 30, 8] by showing that when we drop37 the residual connection intrinsic to the gradient flow framework,38 graph convolutional models always induce a low-frequency dominated dynamics independent of the39 sign and magnitude of the spectrum of the channel-mixing. We also discuss how simple choices40 make common architectures fit GRAFF and conduct thorough ablation studies to corroborate the the-41 oretical analysis on the role of the spectrum of W. (iii) We crystallize an instance of our framework42 into a linear, residual, convolutional model that achieves competitive performance on homophilic and43 heterophilic real world graphs whilst being faster than GCN.44 Related work. Our analysis is related to studying GNNs as filters on the graph spectrum [15, 24,45 2, 25] and over-smoothing [29, 30, 8, 50] and partly adopts techniques similar to [30]. The key46 difference is that we also consider the spectrum of the ‘channel-mixing’ matrix. The concept of47 gradient flows has been a standard tool in physics and geometry [16], from which they were adopted48 for image processing [26], and recently used in ML [35] for the analysis of Transformers [41] – see49 also [18] for discussion of loss landscapes. Our continuous-time evolution equations follows the spirit50 of Neural ODES [22, 12, 3] and the study of GNNs as continuous dynamical systems [44, 10, 17, 9].51 Outline. In Section 2, we review the continuous and discrete Dirichlet energy and the associated52 gradient flow framework. We formalize the notion of over-smoothing and low(high)-frequency-53 dominated dynamics to investigate GNNs and study the dominant components in their evolution. We54 extend the graph Dirichlet energy to allow for a non-trivial norm for the feature edge-gradient. This55 leads to gradient flow equations that diffuse the features and over-smooth in the limit. Accordingly,56 in Section 3 we introduce a more general energy with a symmetric channel-mixing matrix W giving57 rise to attractive and repulsive pairwise terms via its positive and negative eigenvalues and show58 that the negative spectrum can induce high-frequency-dominant dynamics. In Section 4 we first59 compare with continuous GNN models and then discretize the equations and provide a ‘recipe’ for60 making standard GNN architectures fit a gradient flow framework. We adapt the spectral analysis to61 discrete-time showing that gradient flow convolutional models can generate a dynamics dominated by62 the high frequencies via the negative eigenvalues of W while this is impossible if we drop the residual63 connection. In Section 5 we corroborate our theoretical analysis on the role of the spectrum of W64 via ablation studies on graphs with varying homophily. Experiments on real world datasets show a65 competitive performance of our model despite its simplicity and reduced number of parameters.66 2 Gradient-flow formalism67 Notations adopted throughout the paper. Let G = (V,E) be an undirected graph with n nodes.68 We denote by F 2 Rn⇥d the matrix of d-dimensional node features, by fi 2 Rd its i-th row69 (transposed), by fr 2 Rn its r-th column, and by vec(F) 2 Rnd the vectorization of F obtained70 by stacking its columns. 
Given a symmetric matrix B, we let B + , B denote its most positive and71 negative eigenvalues, respectively, and ⇢B be its spectral radius. If B ⌫ 0, then gap(B) denotes the72 positive smallest eigenvalue of B. ḟ(t) denotes the temporal derivative, ⌦ is the Kronecker product73 and ‘a.e.’ means almost every w.r.t. Lebesgue measure and usually refers to data in the complement74 of some lower dimensional subspace in Rn⇥d. Proofs and additional results appear in the Appendix.75 Starting point: a geometric parallelism. To motivate a gradient-flow approach for GNNs, we start76 from the continuous case (see Appendix A.1 for details). Consider a smooth map f : Rn ! (Rd, h)77 with h a constant metric represented by H ⌫ 0. The Dirichlet energy of f is defined by78 E(f, h) = 1 2 Z Rn krfk2h dx = 1 2 dX q,r=1 nX j=1 Z Rn hqr@jf q@jf r (x)dx (1) and measures the ‘smoothness’ of f . A natural approach to find minimizers of E - called harmonic79 maps - was introduced in [16] and consists in studying the gradient flow of E , wherein a given map80 f(0) = f0 is evolved according to ḟ(t) = rfE(f(t)). These type of evolution equations have81 historically been the core of variational and PDE-based image processing; in particular, gradient82 flows of the Dirichlet energy were shown [26] to recover the Perona-Malik nonlinear diffusion [32].83 Motivation: GNNs for node-classification. We wish to extend the gradient flow formalism to node84 classification on graphs. Assume we have a graph G, node-features F0 and labels {yi} on Vtrain ⇢ V,85 and that we want to predict the labels on Vtest ⇢ V. A GNN typically evolves the features via some86 parametric rule, GNN✓(G,F0), and uses a decoding map for the prediction y = DE(GNN✓(G,F0)).87 In graph convolutional models [15, 27], GNN✓ consists of two operations: applying a shared linear88 transformation to the features (‘channel mixing’) and propagating them along the edges of the graph89 (‘diffusion’). Our goal consists in studying when GNN✓ is the gradient flow of some parametric class90 of energies E✓ : Rn⇥d ! R, which generalize the Dirichlet energy. This means that the parameters91 can be interpreted as ‘finding the right notion of smoothness’ for our task. We evolve the features by92 Ḟ(t) = rFE✓(F(t)) with prediction y = DE(F(T )) for some optimal time T .93 Why a gradient flow? Since Ė✓(F(t)) = ||rFE✓(F(t))||2, the energy dissipates along the gradient94 flow. Accordingly, this framework allows to explain the GNN dynamics as flowing the node features95 in the direction of steepest descent of E✓. Indeed, we find that parametrizing an energy leads to96 equations governed by attractive and repulsive forces that can be controlled via the spectrum of97 symmetric ‘channel-mixing’ matrices. This shows that by learning to distribute more mass over the98 negative (positive) eigenvalues of the channel-mixing, gradient flow models can generate dynamics99 dominated by the higher (respectively, lower) graph frequencies and hence tackle different homophily100 scenarios. The gradient flow framework also leads to sharing of the weights across layers (since we101 parametrize the energy rather than the evolution equations, as usually done in GNNs), allowing us to102 reduce the number of parameters without compromising performance (see Table 1).103 Analysis on graphs: preliminaries. Given a connected graph G with self-loops, its adjacency104 matrix A is defined as aij = 1 if (i, j) 2 E and zero otherwise. We let D = diag(di) be the degree105 matrix and write Ā := D 1/2AD 1/2. 
Analysis on graphs: preliminaries. Given a connected graph G with self-loops, its adjacency matrix A is defined by a_ij = 1 if (i, j) ∈ E and zero otherwise. We let D = diag(d_i) be the degree matrix and write Ā := D^{-1/2} A D^{-1/2}. Let F ∈ R^{n×d} be the matrix representation of a signal. Its graph gradient is (∇F)_ij := f_j/√d_j − f_i/√d_i. We define the Laplacian as Δ := −½ div ∇ (the divergence div is the adjoint of ∇), represented by Δ = I − Ā ⪰ 0. We refer to the eigenvalues of Δ as frequencies: the lowest frequency is always 0, while the highest frequency is ρ_Δ ≤ 2 [14]. As in the continuum case, the gradient allows us to define a (graph) Dirichlet energy as [49]

$$\mathcal{E}^{\mathrm{Dir}}(F) := \frac{1}{4}\sum_i\sum_{j:(i,j)\in E}\|(\nabla F)_{ij}\|^2 \equiv \frac{1}{4}\sum_{(i,j)\in E}\Big\|\frac{f_i}{\sqrt{d_i}}-\frac{f_j}{\sqrt{d_j}}\Big\|^2 = \frac{1}{2}\,\mathrm{trace}(F^\top \Delta F), \qquad (2)$$

where the extra ½ is for convenience. As for manifolds, E^Dir measures smoothness. If we stack the columns of F into vec(F) ∈ R^{nd}, the gradient flow of E^Dir yields the heat equation on each channel:

$$\mathrm{vec}(\dot F(t)) = -\nabla_{\mathrm{vec}(F)}\mathcal{E}^{\mathrm{Dir}}(\mathrm{vec}(F(t))) = -(I_d\otimes\Delta)\,\mathrm{vec}(F(t)) \iff \dot f^r(t) = -\Delta f^r(t), \qquad (3)$$

for 1 ≤ r ≤ d. Similarly to [8], we rely on E^Dir to assess whether a given dynamics t ↦ F(t) is a smoothing process. A different choice of Laplacian, L = D − A with non-normalized adjacency, induces the analogous Dirichlet energy E^Dir_L(F) = ½ trace(Fᵀ L F). Throughout this paper, we rely on the following definitions (see Appendix A.3 for further equivalent formulations and justifications):

Definition 2.1. Ḟ(t) = GNN_θ(F(t), t) initialized at F(0) is smoothing if E^Dir(F(t)) ≤ C + φ(t), with C a constant only depending on E^Dir(F(0)) and φ̇(t) ≤ 0. Over-smoothing occurs if either E^Dir(F(t)) → 0 or E^Dir_L(F(t)) → 0 for t → ∞.

Our notion of 'over-smoothing' is a relaxed version of the definition in [34] – although in the linear case one always finds an exponential decay of E^Dir. We note that E^Dir(F(t)) → 0 iff Δf^r(t) → 0 for each column f^r. As in [30], this corresponds to a loss of separation power along the solution, where nodes with equal degree become indistinguishable since we converge to ker(Δ) (if we replaced Δ with L, then we would not even be able to separate nodes with different degrees in the limit).

To motivate the next definition, consider Ḟ(t) = ĀF(t). Despite ||F(t)|| being unbounded for a.e. F(0), the low-frequency components are growing the fastest, and indeed F(t)/||F(t)|| → F_∞ s.t. Δf^r_∞ = 0 for 1 ≤ r ≤ d. We formalize this scenario – including the opposite case of high-frequency components being dominant – by studying E^Dir(F(t)/||F(t)||), i.e. the Rayleigh quotient of I_d ⊗ Δ.

Definition 2.2. Ḟ(t) = GNN_θ(F(t), t) initialized at F(0) is Low/High-Frequency-Dominant (L/HFD) if E^Dir(F(t)/||F(t)||) → 0 (respectively, E^Dir(F(t)/||F(t)||) → ρ_Δ/2) for t → ∞.

We report a consequence of Definition 2.2 and refer to Appendix A.3 for additional details and motivations for the characterizations of LFD and HFD.

Lemma 2.3. GNN_θ is LFD (HFD) iff for each t_j → ∞ there exist t_{j_k} → ∞ and F_∞ s.t. F(t_{j_k})/||F(t_{j_k})|| → F_∞ and Δf^r_∞ = 0 (respectively, Δf^r_∞ = ρ_Δ f^r_∞).

If a graph is homophilic, adjacent nodes are likely to share the same label, and we expect a smoothing or LFD dynamics enhancing the low-frequency components to be successful at node classification tasks [43, 28]. In the opposite case of heterophily, the high-frequency components might contain more relevant information for separating classes [4, 5] – the prototypical example being the eigenvector of Δ associated with the largest frequency ρ_Δ, which separates a regular bipartite graph. In other words, the class of heterophilic graphs contains instances where signals should be sharpened by increasing E^Dir rather than smoothed out. Accordingly, an ideal framework for learning on graphs must accommodate both of these opposite scenarios by being able to induce either an LFD or a HFD dynamics.
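As a concrete reference for these definitions, the following sketch (ours, not from the paper; the toy graph and helper names are assumptions) builds the normalized adjacency and Laplacian, evaluates the graph Dirichlet energy of eq. (2), and computes the normalized quantity E^Dir(F/||F||) that Definition 2.2 uses to tell LFD from HFD behaviour.

```python
import numpy as np

def normalized_adjacency(A):
    """A_bar = D^{-1/2} A D^{-1/2} for an adjacency matrix with self-loops."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def dirichlet_energy(F, Delta):
    """E^Dir(F) = 1/2 trace(F^T Delta F), as in eq. (2)."""
    return 0.5 * np.trace(F.T @ Delta @ F)

# Toy graph (a path with self-loops); hypothetical example data.
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
A_bar = normalized_adjacency(A)
Delta = np.eye(3) - A_bar                      # frequencies = eigenvalues of Delta, in [0, 2]
rho = np.linalg.eigvalsh(Delta).max()          # largest frequency rho_Delta

F = np.random.randn(3, 2)
rayleigh = dirichlet_energy(F / np.linalg.norm(F), Delta)
# LFD dynamics drive this quantity to 0, HFD dynamics drive it to rho / 2 (Definition 2.2).
print(rho, rayleigh)
```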
Parametric Dirichlet energy: channel-mixing as metric in feature space. In eq. (1) a constant nontrivial metric h in R^d leads to the mixing of the feature channels. We adapt this idea by considering a symmetric positive semi-definite H = WᵀW with W ∈ R^{d×d} and using it to generalize E^Dir as

$$\mathcal{E}^{\mathrm{Dir}}_W(F) := \frac{1}{4}\sum_{q,r=1}^{d}\sum_i\sum_{j:(i,j)\in E} h_{qr}(\nabla f^q)_{ij}(\nabla f^r)_{ij} = \frac{1}{4}\sum_{(i,j)\in E}\|W(\nabla F)_{ij}\|^2. \qquad (4)$$

We note the analogy with eq. (1), where the sum over the nodes replaces the integration over the domain and the j-th derivative at some point i is replaced by the gradient along the edge (i, j) ∈ E. We generally treat W as learnable weights and study the gradient flow of E^Dir_W:

$$\dot F(t) = -\nabla_F \mathcal{E}^{\mathrm{Dir}}_W(F(t)) = -\Delta F(t)\, W^\top W. \qquad (5)$$

We see that eq. (5) generalizes eq. (3). Below, 'smoothing' is intended as in Definition 2.1.

Proposition 2.4. Let P^ker_W be the projection onto ker(WᵀW). Equation (5) is smoothing since

$$\mathcal{E}^{\mathrm{Dir}}(F(t)) \le e^{-2t\,\mathrm{gap}(W^\top W)\,\mathrm{gap}(\Delta)}\|F(0)\|^2 + \mathcal{E}^{\mathrm{Dir}}\big((P^{\mathrm{ker}}_W\otimes I_n)\,\mathrm{vec}(F(0))\big), \quad t \ge 0.$$

In fact F(t) → F_∞ such that there exists v_∞ ∈ R^d with (f_∞)_i = √d_i v_∞ + P^ker_W f_i(0) for each i ∈ V.

Proposition 2.4 implies that no weight matrix W in eq. (5) can separate the limit embeddings F(∞) of nodes with the same degree and input features. If W has a trivial kernel, then nodes with the same degree converge to the same representation and over-smoothing occurs as per Definition 2.1. Differently from [29, 30, 8], over-smoothing occurs independently of the spectral radius of the 'channel-mixing' if its eigenvalues are positive – even for equations which lead to residual GNNs when discretized [12]. According to Proposition 2.4, we do not expect eq. (5) to succeed on heterophilic graphs, where smoothing processes are generally harmful – this is confirmed in Figure 2 (see the prod curve). To remedy this problem, we generalize eq. (5) to a gradient flow that can be HFD as per Definition 2.2.

3 A general parametric energy for pairwise interactions

We first rewrite the energy E^Dir_W in eq. (4) as

$$\mathcal{E}^{\mathrm{Dir}}_W(F) = \frac{1}{2}\sum_i\langle f_i, W^\top W f_i\rangle - \frac{1}{2}\sum_{i,j}\bar a_{ij}\langle f_i, W^\top W f_j\rangle. \qquad (6)$$

We then define a new, more general energy by replacing the occurrences of WᵀW with new symmetric matrices Ω, W ∈ R^{d×d}, since we also want to generate repulsive forces:

$$\mathcal{E}^{\mathrm{tot}}(F) := \frac{1}{2}\sum_i\langle f_i, \Omega f_i\rangle - \frac{1}{2}\sum_{i,j}\bar a_{ij}\langle f_i, W f_j\rangle \equiv \mathcal{E}^{\mathrm{ext}}_\Omega(F) + \mathcal{E}^{\mathrm{pair}}_W(F), \qquad (7)$$

with associated gradient flow of the form (see Appendix B)

$$\dot F(t) = -\nabla_F\mathcal{E}^{\mathrm{tot}}(F(t)) = -F(t)\Omega + \bar A F(t) W. \qquad (8)$$

Note that eq. (8) is the gradient flow of some energy F ↦ E^tot(F) iff both Ω and W are symmetric.

A multi-particle system point of view: attraction vs repulsion. Consider the d-dimensional node features as particles in R^d with energy E^tot. While the term E^ext_Ω is independent of the graph topology and represents an external field in the feature space, the second term E^pair_W constitutes a potential energy, with W a bilinear form determining the pairwise interactions of adjacent node representations. Given a symmetric W, we write W = Θ_+ᵀΘ_+ − Θ_−ᵀΘ_− by decomposing the spectrum of W into positive and negative values. We can rewrite E^tot = E^ext_{Ω−W} + E^Dir_{Θ_+} − E^Dir_{Θ_−}, i.e.

$$\mathcal{E}^{\mathrm{tot}}(F) = \frac{1}{2}\sum_i\langle f_i,(\Omega-W)f_i\rangle + \frac{1}{4}\sum_{i,j}\|\Theta_+(\nabla F)_{ij}\|^2 - \frac{1}{4}\sum_{i,j}\|\Theta_-(\nabla F)_{ij}\|^2. \qquad (9)$$

The gradient flow of E^tot minimizes E^Dir_{Θ_+} and maximizes E^Dir_{Θ_−}. The matrix W encodes repulsive pairwise interactions via its negative-definite component Θ_−ᵀΘ_−, which leads to terms ||Θ_−(∇F)_ij|| increasing along the solution. The latter affords a 'sharpening' effect, desirable on heterophilic graphs where we need to disentangle adjacent node representations and hence 'magnify' the edge-gradient.
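The attraction/repulsion split of eq. (9) can be reproduced numerically; the sketch below (our illustration, with hypothetical helper names) decomposes a symmetric channel-mixing matrix into Θ_+ and Θ_− from its positive and negative eigenvalues and verifies the identity W = Θ_+ᵀΘ_+ − Θ_−ᵀΘ_−.

```python
import numpy as np

def split_spectrum(W):
    """Decompose a symmetric W as Theta_p.T @ Theta_p - Theta_m.T @ Theta_m,
    using its positive / negative eigenvalues (the split underlying eq. (9))."""
    lam, U = np.linalg.eigh(W)
    pos, neg = np.clip(lam, 0, None), np.clip(-lam, 0, None)
    Theta_p = np.diag(np.sqrt(pos)) @ U.T
    Theta_m = np.diag(np.sqrt(neg)) @ U.T
    return Theta_p, Theta_m

W = np.random.randn(3, 3)
W = 0.5 * (W + W.T)                 # the framework requires a symmetric channel-mixing
Theta_p, Theta_m = split_spectrum(W)
assert np.allclose(W, Theta_p.T @ Theta_p - Theta_m.T @ Theta_m)

# In the gradient flow of eq. (8), the Theta_p part pulls adjacent features together
# (attraction) while the Theta_m part pushes them apart (repulsion).
```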
Spectral analysis of the channel-mixing. We will now show that eq. (8) can lead to a HFD dynamics. To this end, we assume that Ω = 0, so that eq. (8) becomes Ḟ(t) = ĀF(t)W. According to eq. (9) the negative eigenvalues of W lead to repulsion. We show that the latter can induce HFD dynamics as per Definition 2.2. We let P^ρ_W be the orthogonal projection onto the eigenspace of W ⊗ Ā associated with the eigenvalue ρ_{W,Δ} := |λ_W^-|(ρ_Δ − 1). We define ε_HFD explicitly in eq. (24).

Proposition 3.1. If ρ_{W,Δ} > λ_W^+, then Ḟ(t) = ĀF(t)W is HFD for a.e. F(0): there exists ε_HFD s.t.

$$\mathcal{E}^{\mathrm{Dir}}(F(t)) = e^{2t\rho_{W,\Delta}}\Big(\frac{\rho_\Delta}{2}\|P^{\rho}_W F(0)\|^2 + O(e^{-2t\epsilon_{\mathrm{HFD}}})\Big), \quad t \ge 0,$$

and F(t)/||F(t)|| converges to F_∞ ∈ R^{n×d} such that Δf^r_∞ = ρ_Δ f^r_∞, for 1 ≤ r ≤ d.

Proposition 3.1 shows that if enough mass of the spectrum of the 'channel-mixing' is distributed over the negative eigenvalues, then the evolution is dominated by the graph high frequencies. This analysis is made possible in our gradient flow framework, where W must be symmetric. The HFD dynamics induced by the negative eigenvalues of W is confirmed in Figure 2 (neg-prod curve in the bottom chart).

A more general energy. Equations with a source term may have better expressive power [44, 11, 39]. In our framework this means adding an extra energy term of the form E^source_W̃(F) := β⟨F, F(0)W̃⟩ to eq. (7), with β and W̃ learnable. This leads to the following gradient flow:

$$\dot F(t) = -F(t)\Omega + \bar A F(t)W - \beta F(0)\tilde W. \qquad (10)$$

We also observe that one could replace the fixed matrix Ā with a more general symmetric graph vector field A satisfying A_ij = 0 if (i, j) ∉ E, although in this work we focus on the case A = Ā. We also note that when Ω = W, eq. (8) becomes Ḟ(t) = −ΔF(t)W. We perform a spectral analysis of this case in Appendix B.2.

Non-linear activations. In Appendix B.3 we discuss non-linear gradient flow equations. Here we study what happens if the gradient flow in eq. (10) is activated pointwise by σ : R → R. We show that although we no longer have a gradient flow, the learnable multi-particle energy E^tot is still decreasing along the solution, meaning that the interpretation of the channel-mixing W inducing attraction and repulsion via its positive and negative eigenvalues, respectively, is preserved.

Proposition 3.2. Consider a non-linear map σ : R → R such that x ↦ xσ(x) ≥ 0. If t ↦ F(t) solves the equation

$$\dot F(t) = \sigma\big(-F(t)\Omega + \bar A F(t)W - \beta F(0)\tilde W\big),$$

where σ acts elementwise, then dE^tot(F(t))/dt ≤ 0.

A proof of this result and more details and discussion are reported in Appendix E. We emphasize here that, differently from previous results about the behaviour of ReLU w.r.t. E^Dir [30, 8], we deal with a much more general energy that can also induce repulsion, and with a more general family of activation functions (including ReLU, tanh, arctan and many others).
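A quick numerical sanity check of Proposition 3.2 can be done as follows (our sketch under the stated assumptions, with σ = tanh and a toy complete graph): integrate the activated equation with explicit Euler and track E^tot, which should be approximately non-increasing for a small step size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3
A = np.ones((n, n))                       # small complete graph with self-loops (toy example)
deg = A.sum(1)
A_bar = A / np.sqrt(np.outer(deg, deg))

Omega = np.diag(rng.uniform(0, 1, d))
W = rng.standard_normal((d, d)); W = 0.5 * (W + W.T)     # symmetric channel-mixing

def E_tot(F):
    # eq. (7): 1/2 sum_i <f_i, Omega f_i> - 1/2 sum_ij a_ij <f_i, W f_j>
    return 0.5 * np.trace(F @ Omega @ F.T) - 0.5 * np.trace(F.T @ A_bar @ F @ W)

F, tau = rng.standard_normal((n, d)), 0.01
energies = []
for _ in range(300):
    F = F + tau * np.tanh(-F @ Omega + A_bar @ F @ W)    # sigma = tanh satisfies x*sigma(x) >= 0
    energies.append(E_tot(F))
print(energies[0], energies[-1])   # for small tau the sequence is (approximately) non-increasing
```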
4 Comparison with GNNs

In this section, we study standard GNN models from the perspective of our gradient flow framework.

4.1 Continuous case

Continuous GNN models replace layers with continuous time. In contrast with Proposition 3.1, we show that three main linearized continuous GNN models are either smoothing or LFD as per Definition 2.2. The linearized PDE-GCN_D model [17] corresponds to choosing β = 0 and Ω = W = K(t)ᵀK(t) in eq. (10), for some time-dependent family t ↦ K(t) ∈ R^{d×d}:

$$\dot F_{\mathrm{PDE\text{-}GCN_D}}(t) = -\Delta F(t)\,K(t)^\top K(t).$$

The CGNN model [44] can be derived from eq. (10) by setting Ω = I − Ω̃, W = W̃ = I, β = −1:

$$\dot F_{\mathrm{CGNN}}(t) = -\Delta F(t) + F(t)\tilde\Omega + F(0).$$

Finally, in linearized GRAND [10] a row-stochastic matrix A(F(0)) is learned from the encoding via an attention mechanism and we have

$$\dot F_{\mathrm{GRAND}}(t) = -\Delta_{\mathrm{RW}} F(t) = -(I - A(F(0)))F(t).$$

We note that if A is not symmetric, then GRAND is not a gradient flow.

Proposition 4.1. PDE-GCN_D, CGNN and GRAND satisfy the following:
(i) PDE-GCN_D is a smoothing model: Ė^Dir(F_{PDE-GCN_D}(t)) ≤ 0.
(ii) For a.e. F(0) it holds that CGNN is never HFD, and if we remove the source term, then E^Dir(F_CGNN(t)/||F_CGNN(t)||) ≤ e^{−gap(Δ)t}.
(iii) If G is connected, F_GRAND(t) → μ as t → ∞, with μ^r = mean(f^r(0)), 1 ≤ r ≤ d.

By (ii) the source-free CGNN evolution is LFD independent of Ω̃. Moreover, by (iii), over-smoothing occurs for GRAND as per Definition 2.1. On the other hand, Proposition 3.1 shows that the negative eigenvalues of W can make the source-free gradient flow in eq. (8) HFD. Experiments in Section 5 confirm that the gradient flow model outperforms CGNN and GRAND on heterophilic graphs.

4.2 Discrete case

We now describe a discrete version of our gradient flow model and compare it to 'discrete' GNNs where discrete time steps correspond to different layers. In the spirit of [12], we use an explicit Euler scheme with step size τ ≤ 1 to solve eq. (10) and set W̃ = −I. In the gradient flow framework we parametrize the energy rather than the actual equations, which leads to symmetric channel-mixing matrices Ω, W ∈ R^{d×d} that are shared across the layers. Since the matrices are square, an encoding block EN : R^{n×p} → R^{n×d} is used to process input features F_0 ∈ R^{n×p} and generally reduce the hidden dimension from p to d. Moreover, the iterations inherently lead to a residual architecture because of the explicit Euler discretization:

$$F(t+\tau) = F(t) + \tau\big(-F(t)\Omega + \bar A F(t)W + \beta F(0)\big), \qquad F(0) = \mathrm{EN}(F_0), \qquad (11)$$

with prediction y = DE(F(T)) produced by a decoder DE : R^{n×d} → R^{n×k}, where k is the number of label classes and T is the integration time of the form T = mτ, so that m ∈ N represents the number of layers. Although eq. (11) is linear, we can include non-linear activations in EN, DE, making the entire model generally non-linear. We emphasize two important points:

• Since the framework is residual, even if the message-passing is linear, this is not equivalent to collapsing the dynamics into a single layer with diffusion matrix Ā^m, with m the number of layers – see eq. (27) in the appendix, where we derive the expansion of the solution.
• We could also activate the equations pointwise and maintain the physics interpretation, thanks to Proposition 3.2, to gain greater expressive power. In the following, though, we mainly stick to the linear discrete gradient flow unless otherwise stated; a minimal sketch of the resulting layer is given right after this list.
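Below is a minimal sketch of eq. (11) as a single PyTorch module (our illustration, not the authors' reference implementation; the class and argument names are hypothetical, EN and DE are plain linear layers, and Ω is taken diagonal).

```python
import torch
import torch.nn as nn

class GRAFFSketch(nn.Module):
    """Minimal sketch of the discrete gradient flow in eq. (11) with a shared,
    symmetric channel-mixing matrix; not the authors' reference implementation."""
    def __init__(self, in_dim, hidden_dim, num_classes, num_layers, tau=0.5):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden_dim)           # EN
        self.dec = nn.Linear(hidden_dim, num_classes)      # DE
        self.W_raw = nn.Parameter(torch.randn(hidden_dim, hidden_dim) / hidden_dim ** 0.5)
        self.omega = nn.Parameter(torch.zeros(hidden_dim)) # diagonal Omega
        self.beta = nn.Parameter(torch.tensor(0.0))
        self.num_layers, self.tau = num_layers, tau

    def forward(self, X, A_bar):
        F0 = self.enc(X)
        F = F0
        W = 0.5 * (self.W_raw + self.W_raw.T)              # symmetry keeps the update a gradient flow
        for _ in range(self.num_layers):                   # weights shared across layers
            F = F + self.tau * (-F * self.omega + A_bar @ F @ W + self.beta * F0)
        return self.dec(F)
```

W is stored unconstrained and symmetrized on the fly, so the update stays within the gradient flow framework while remaining freely trainable.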
Are discrete GNNs gradient flows? Given a (learned) symmetric graph vector field A ∈ R^{n×n} satisfying A_ij = 0 if (i, j) ∉ E, consider a family of linear GNNs with shared weights of the form

$$F(t+1) = F(t)\Omega + A F(t) W + F(0)\tilde W, \qquad 0 \le t \le T. \qquad (12)$$

Symmetry is the key requirement to interpret the GNNs in eq. (12) in a gradient flow framework.

Lemma 4.2. Equation (12) is the unit-step-size discrete gradient flow of E^ext_{I−Ω} + E^pair_{A,W} − E^source_{W̃}, with E^pair_{A,W} defined by replacing Ā with A in eq. (7), iff Ω and W are symmetric.

Lemma 4.2 provides a recipe for making standard architectures into a gradient flow, with symmetry being the key requirement. When eq. (12) is a gradient flow, the underlying GNN dynamics is equivalent to minimizing a multi-particle energy by learning attractive and repulsive directions in feature space, as discussed in Section 3. In Appendix C.2 we show how Lemma 4.2 covers linear versions of GCN [27, 43], GAT [42], GraphSAGE [23] and GCNII [11], to name a few.

Over-smoothing analysis in the discrete setting. By Proposition 3.1 we know that the continuous version of eq. (11) can be HFD thanks to the negative eigenvalues of W. The next result is a discrete counterpart of Proposition 3.1 and shows that residual, symmetrized graph convolutional models can be HFD. Below, P^ρ_W is the projection onto the eigenspace associated with the eigenvalue ρ_{W,Δ} := |λ_W^-|(ρ_Δ − 1), and we report the explicit value of δ_HFD in eq. (28) in Appendix C.3. We assume:

$$\lambda_W^+\,(\rho_\Delta - 1)^{-1} < |\lambda_W^-| < 2\big(\tau(2-\rho_\Delta)\big)^{-1}. \qquad (13)$$

Theorem 4.3. Given F(t+τ) = F(t) + τĀF(t)W, with W symmetric, if eq. (13) holds then

$$\mathcal{E}^{\mathrm{Dir}}(F(m\tau)) = (1+\tau\rho_{W,\Delta})^{2m}\bigg(\frac{\rho_\Delta}{2}\|P^{\rho}_W F(0)\|^2 + O\bigg(\Big(\frac{1+\tau\delta_{\mathrm{HFD}}}{1+\tau\rho_{W,\Delta}}\Big)^{2m}\bigg)\bigg), \qquad \delta_{\mathrm{HFD}} < \rho_{W,\Delta},$$

hence the dynamics is HFD for a.e. F(0), and in fact F(mτ)/||F(mτ)|| → F_∞ s.t. Δf^r_∞ = ρ_Δ f^r_∞. Conversely, if G is not bipartite, then for a.e. F(0) the system F(t+τ) = τĀF(t)W, with W symmetric, is LFD independent of the spectrum of W.

Theorem 4.3 shows that linear discrete gradient flows can be HFD due to the negative eigenvalues of W. This differs from statements that standard GCNs act as low-pass filters and thus over-smooth in the limit. Indeed, in those cases the spectrum of W is generally ignored [43, 11] or required to be sufficiently small in terms of the singular value decomposition [29, 30, 8] when no residual connection is present. On the other hand, Theorem 4.3 emphasizes that the spectrum of W plays a key role in enhancing the high frequencies when enough mass is distributed over the negative eigenvalues, provided that a residual connection exists – this is confirmed by the neg-prod curve in Figure 2.

The residual connection from a spectral perspective. Given a sufficiently small step size so that the right-hand side of inequality (13) is satisfied, F(t+τ) = F(t) + τĀF(t)W is HFD for a.e. F(0) if |λ_W^-|(ρ_Δ − 1) > λ_W^+, i.e. if 'there is more mass' in the negative spectrum of W than in the positive one. This means that, differently from [29, 30, 8], there is no requirement on the minimal magnitude of the spectral radius of W coming from the graph topology, as long as λ_W^+ is small enough. Conversely, without a residual term, the dynamics is LFD for a.e. F(0) independently of the sign and magnitude of the eigenvalues of W. This is also confirmed by the GCN curve in Figure 2.

Over-smoothing vs LFD. We highlight how in general a linear GCN equation such as F(t+τ) = τĀF(t)W may avoid over-smoothing in the sense of Definition 2.1, meaning that E^Dir(F(t)) → ∞, as soon as there exist frequencies λ_i ∈ (0, 1) and the spectral radius of W is large enough. However, this will not lead to over-separation, since the dominating term is the lowest-frequency one: in other words, once we re-set the scale as per the normalization in Theorem 4.3, we encounter loss of separability even with a large (and possibly negative) spectrum of W.
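To see the role of the residual connection numerically, the sketch below (ours, purely illustrative) runs the two discrete systems of Theorem 4.3 on a small cycle graph with self-loops and the same negative-dominated W: with the residual term the normalized Dirichlet energy approaches ρ_Δ/2 (HFD), while without it the dynamics collapses onto the lowest frequency (LFD).

```python
import numpy as np

n = 10
C = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)   # 10-cycle adjacency
A = C + np.eye(n)                                                    # add self-loops
d = A.sum(1)
A_bar = A / np.sqrt(np.outer(d, d))
Delta = np.eye(n) - A_bar
rho = np.linalg.eigvalsh(Delta).max()                                # highest frequency rho_Delta

W = np.diag([-1.0, -0.7, 0.05])      # more mass on the negative eigenvalues of W
tau = 0.2
rng = np.random.default_rng(1)
F_res = rng.standard_normal((n, 3)); F_nores = F_res.copy()

def rayleigh(F):
    F = F / np.linalg.norm(F)
    return 0.5 * np.trace(F.T @ Delta @ F)          # E^Dir(F / ||F||)

for _ in range(400):
    F_res = F_res + tau * (A_bar @ F_res @ W)       # residual update: HFD regime of Theorem 4.3
    F_nores = tau * (A_bar @ F_nores @ W)           # no residual: LFD for a.e. F(0)
    F_res /= np.linalg.norm(F_res); F_nores /= np.linalg.norm(F_nores)

print(rayleigh(F_res), rayleigh(F_nores), rho / 2)  # expected: first close to rho/2, second close to 0
```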
5 Experiments

In this section we evaluate the gradient flow framework (GRAFF). We corroborate the spectral analysis using synthetic data with controllable homophily. We confirm that having negative (positive) eigenvalues of the channel-mixing W is essential in heterophilic (homophilic) scenarios, where the gradient flow should align with HFD (respectively LFD) dynamics. We show that the gradient flow in eq. (11) – a linear, residual, symmetric graph convolutional model – achieves competitive performance on heterophilic datasets.

Methodology. We crystallize GRAFF in the model presented in eq. (11), with EN, DE implemented as single linear layers or MLPs, and we set Ω to be diagonal. For the real-world experiments we consider diagonally-dominant (DD), diagonal (D) and time-dependent choices for the structure of W that offer explicit control over its spectrum. In the (DD) case, we consider a W_0 ∈ R^{d×d} symmetric with zero diagonal and w ∈ R^d defined by w_α = q_α Σ_β |(W_0)_{αβ}| + r_α, and set W = diag(w) + W_0. By the Gershgorin Theorem, the eigenvalues of W belong to [w_α − Σ_β |(W_0)_{αβ}|, w_α + Σ_β |(W_0)_{αβ}|], so the model can easily re-distribute mass in the spectrum of W via q_α, r_α. This generalizes the decomposition of W in [11], providing a justification in terms of its spectrum, and turns out to be more efficient w.r.t. the hidden dimension d, as shown in Figure 4 in the Appendix. For (D) we take W to be diagonal, with entries sampled from U[−1, 1] and kept fixed – i.e., we do not train over W – and only learn EN, DE. We also include a time-dependent model where W_t varies across layers. To investigate the role of the spectrum of W on synthetic graphs, we construct three additional variants: W = W_0 + W_0ᵀ and W = ±W_0ᵀW_0, named sum, prod and neg-prod respectively, where the prod (neg-prod) variant has only non-negative (non-positive) eigenvalues.
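The (DD) parametrization can be written in a few lines; the sketch below (ours, with hypothetical names) builds W = diag(w) + W_0 and checks the Gershgorin bounds that make the spectral mass controllable through q and r.

```python
import numpy as np

def build_dd_W(W0, q, r):
    """(DD) parametrization: W = diag(w) + W0 with w_a = q_a * sum_b |W0_ab| + r_a.
    A sketch of the construction described above; variable names are ours."""
    W0 = 0.5 * (W0 + W0.T)
    np.fill_diagonal(W0, 0.0)                  # symmetric with zero diagonal
    row_abs = np.abs(W0).sum(axis=1)
    w = q * row_abs + r
    return np.diag(w) + W0, row_abs

d = 4
rng = np.random.default_rng(0)
W, row_abs = build_dd_W(rng.standard_normal((d, d)),
                        q=rng.uniform(-1, 1, d), r=rng.uniform(-1, 1, d))
eigs = np.linalg.eigvalsh(W)
w = np.diag(W)

# Gershgorin: every eigenvalue lies in a disc [w_a - row_abs_a, w_a + row_abs_a],
# so q and r control how much spectral mass ends up positive vs negative.
assert eigs.min() >= (w - row_abs).min() - 1e-8
assert eigs.max() <= (w + row_abs).max() + 1e-8
```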
Complexity and number of parameters. If we treat the number of layers as a constant, the discrete gradient flow scales as O(|V|pd + |E|d²), where p and d are the input-feature and hidden dimensions respectively, with p ≫ d usually. Note that GCN has complexity O(|E|pd), and in fact our model is faster than GCN, as confirmed in Figure 5 in Appendix D. Since EN, DE are single linear layers (MLPs), we can bound the number of parameters by pd + d² + 3d + dk, with k the number of label classes, in the (DD) variant, while in the (D) variant we have pd + 3d + dk. Further ablation studies appear in Figure 4 in the Appendix, showing that (DD) outperforms sum and GCN – especially in the lower hidden-dimension regime – on real-world benchmarks with varying homophily.

Synthetic experiments and ablation studies. To investigate our claims in a controlled environment we use the synthetic Cora dataset of [51, Appendix G]. Graphs are generated for target levels of homophily via preferential attachment – see Appendix D.3 for details. Figure 2 confirms the spectral analysis and offers a better understanding in terms of performance and smoothness of the predictions. Each curve – except GCN – represents one version of W as in 'Methodology', and we implement eq. (11) with β = 0, Ω = 0. Figure 2 (top) reports the test accuracy vs the true label homophily. Neg-prod is better than prod on low homophily and vice versa on high homophily. This confirms Proposition 3.1, where we have shown that the gradient flow can lead to a HFD dynamics – generally desirable with low homophily – through the negative eigenvalues of W. Conversely, the prod configuration (where we have an attraction-only dynamics) struggles in low-homophily scenarios even though a residual connection is present. Both prod and neg-prod are 'extreme' choices and serve the purpose of highlighting that turning off one side of the spectrum can be damaging, depending on the underlying homophily. In general, 'neutral' variants like sum and (DD) are more flexible and better performing. In fact, (DD) outperforms GCN especially in low-homophily scenarios, confirming Theorem 4.3, where we have shown that without a residual connection convolutional models are LFD – and hence more sensitive to the underlying homophily – irrespective of the spectrum of W. This is further confirmed in Figure 3.

In Figure 2 (bottom) we compute the homophily of the prediction (cross) for a given method and compare it with the homophily (circle) of the prediction read directly from the encoding (i.e. graph-agnostic). The homophily here is a proxy to assess whether the evolution is smoothing, the goal being to explain the smoothness of the prediction via the spectrum of W as per our theoretical analysis. For neg-prod the homophily after the evolution is lower than that of the encoding, supporting the analysis that negative eigenvalues of W enhance the high frequencies. The opposite behaviour occurs in the case of prod and explains why in the low-homophily regime prod underperforms: its prediction is smoother than the true homophily warrants. The (DD) and sum variants adapt better to the true homophily. We note how the encoding compensates when the dynamics can only either attract or repulse (i.e. when the spectrum of W has a definite sign) by decreasing or increasing the initial homophily, respectively.
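The homophily proxy used in Figure 2 (bottom) is simply the fraction of edges whose endpoints receive the same label; a small sketch (ours, with a hypothetical helper name) is given below.

```python
import numpy as np

def edge_homophily(edge_index, labels):
    """Fraction of edges whose endpoints share a label (edge_index = (src, dst) arrays)."""
    src, dst = edge_index
    return float(np.mean(labels[src] == labels[dst]))

# e.g. compare edge_homophily(E, preds_after_evolution) with edge_homophily(E, preds_from_encoding)
```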
Real-world experiments. We test GRAFF on a range of datasets with varying homophily [37, 33, 31] (see Appendix D.4 for additional details). We use the results provided in [45, Table 1], which include standard baselines such as GCN [27], GraphSAGE [23], GAT [42], PairNorm [48] and recent models tailored towards the heterophilic setting (GGCN [45], Geom-GCN [31], H2GCN [51] and GPRGNN [13]). For Sheaf [5], a recent top performer on heterophilic datasets, we took the best performing variant (out of the six provided) for each dataset. We also include the continuous baselines CGNN [44] and GRAND [10] to provide empirical evidence for Proposition 4.1. Splits taken from [31] are used in all the comparisons. The GRAFF model discussed in 'Methodology' is a very simple architecture with shared parameters across layers and a run-time smaller than GCN and than more recent models like GGCN designed for heterophilic graphs (see Figure 5 in the Appendix). Nevertheless, it achieves competitive results on all datasets, performing on par with or better than more complex recent models. Moreover, the comparison with the 'time-dependent' (DD) variant confirms that by sharing weights across layers we do not lose performance. We note that on heterophilic graphs a short integration time is usually needed, due to the topology being harmful and the negative eigenvalues of W leading to exponential behaviour (see Appendix D).

6 Conclusions

In this work, we developed a framework for GNNs where the evolution can be interpreted as minimizing a multi-particle learnable energy. This translates into studying the interaction between the spectrum of the graph and the spectrum of the 'channel-mixing', leading to a better understanding of when and why the induced dynamics is low (high) frequency dominated. From a theoretical perspective, we refined existing asymptotic analyses of GNNs to also account for the role of the spectrum of the channel-mixing. From a practical perspective, our framework allows for 'educated' choices resulting in a simple convolutional model that achieves competitive performance on homophilic and heterophilic benchmarks while being faster than GCN. Our results refute the folklore of graph convolutional models being too simple for heterophilic benchmarks.

Limitations and future works. We limited our attention to a constant bilinear form W, which might be excessively rigid. It is possible to derive non-constant alternatives that are aware of the features or of the position in the graph. The main challenge amounts to matching the requirement for local 'heterogeneity' with efficiency: we reserve this question for future work. Our analysis is also a first step in studying the interaction of the graph and 'channel-mixing' spectra; we did not explore other dynamics that are neither LFD nor HFD as per our definitions. The energy formulation points to new, more 'physics'-inspired models; this will be explored in future work.

Societal impact. Our work sheds light on the actual dynamics of GNNs and could hence improve their understanding, which is crucial for assessing their impact on large-scale applications. We also show that instances of our framework achieve competitive performance on heterophilic data despite being faster than GCN, providing evidence for efficient methods with a reduced footprint.

References

[1] U. Alon and E. Yahav. On the bottleneck of graph neural networks and its practical implications. In International Conference on Learning Representations, 2021.
[2] M. Balcilar, G. Renton, P. Héroux, B. Gaüzère, S. Adam, and P. Honeine. Analyzing the expressive power of graph neural networks in a spectral perspective. In International Conference on Learning Representations, 2020.
[3] M. Biloš, J. Sommer, S. S. Rangapuram, T. Januschowski, and S. Günnemann. Neural flows: Efficient alternative to neural ODEs. In Advances in Neural Information Processing Systems, volume 34, 2021.
[4] D. Bo, X. Wang, C. Shi, and H. Shen. Beyond low-frequency information in graph convolutional networks. In AAAI. AAAI Press, 2021.
[5] C. Bodnar, F. Di Giovanni, B. P. Chamberlain, P. Liò, and M. M. Bronstein. Neural sheaf diffusion: A topological perspective on heterophily and oversmoothing in GNNs. arXiv preprint arXiv:2202.04579, 2022.
[6] S. Brody, U. Alon, and E. Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.
[7] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. In 2nd International Conference on Learning Representations, ICLR 2014, 2014.
[8] C. Cai and Y. Wang. A note on over-smoothing for graph neural networks. arXiv preprint arXiv:2006.13318, 2020.
[9] B. Chamberlain, J. Rowbottom, D. Eynard, F. Di Giovanni, X. Dong, and M. Bronstein. Beltrami flow and neural diffusion on graphs. Advances in Neural Information Processing Systems, 34, 2021.
[10] B. Chamberlain, J. Rowbottom, M. I. Gorinova, M. Bronstein, S. Webb, and E. Rossi. GRAND: Graph neural diffusion. In International Conference on Machine Learning, pages 1407–1418. PMLR, 2021.
[11] M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li. Simple and deep graph convolutional networks. In International Conference on Machine Learning, pages 1725–1735. PMLR, 2020.
[12] R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 31, 2018.
[13] E. Chien, J. Peng, P. Li, and O. Milenkovic. Adaptive universal generalized PageRank graph neural network. In 9th International Conference on Learning Representations, ICLR 2021, 2021.
[14] F. R. Chung and F. C. Graham. Spectral graph theory. Number 92. American Mathematical Society, 1997.
[15] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in Neural Information Processing Systems, 29, 2016.
[16] J. Eells and J. H. Sampson. Harmonic mappings of Riemannian manifolds. American Journal of Mathematics, 86(1):109–160, 1964.
[17] M. Eliasof, E. Haber, and E. Treister. PDE-GCN: Novel architectures for graph neural networks motivated by partial differential equations. Advances in Neural Information Processing Systems, 34, 2021.
[18] M. Geiger, L. Petrini, and M. Wyart. Landscape and training regimes in deep learning. Physics Reports, 924:1–18, 2021.
[19] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pages 1263–1272. PMLR, 2017.
[20] C. Goller and A. Kuchler. Learning task-dependent distributed representations by backpropagation through structure. In Proceedings of the International Conference on Neural Networks (ICNN'96), volume 1, pages 347–352. IEEE, 1996.
[21] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, volume 2, pages 729–734. IEEE, 2005.
[22] E. Haber and L. Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34, 2018.
[23] W. Hamilton, Z. Ying, and J. Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.
[24] D. K. Hammond, P. Vandergheynst, and R. Gribonval. The spectral graph wavelet transform: Fundamental theory and fast computation. In Vertex-Frequency Analysis of Graph Signals, pages 141–175. Springer, 2019.
[25] M. He, Z. Wei, H. Xu, et al. BernNet: Learning arbitrary graph spectral filters via Bernstein approximation. Advances in Neural Information Processing Systems, 34, 2021.
[26] R. Kimmel, N. Sochen, and R. Malladi. From high energy physics to low level vision. In International Conference on Scale-Space Theories in Computer Vision, pages 236–247. Springer, 1997.
[27] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, 2017.
[28] J. Klicpera, S. Weißenberger, and S. Günnemann. Diffusion improves graph learning. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019.
[29] H. Nt and T. Maehara. Revisiting graph neural networks: All we have is low-pass filters. arXiv preprint arXiv:1905.09550, 2019.
[30] K. Oono and T. Suzuki. Graph neural networks exponentially lose expressive power for node classification. In International Conference on Learning Representations, 2020.
[31] H. Pei, B. Wei, K. C. Chang, Y. Lei, and B. Yang. Geom-GCN: Geometric graph convolutional networks. In 8th International Conference on Learning Representations, ICLR 2020, 2020.
[32] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629–639, 1990.
[33] B. Rozemberczki, C. Allen, and R. Sarkar. Multi-scale attributed node embedding. Journal of Complex Networks, 9(2):cnab014, 2021.
[34] T. K. Rusch, B. P. Chamberlain, J. Rowbottom, S. Mishra, and M. M. Bronstein. Graph-coupled oscillator networks. In International Conference on Machine Learning, 2022.
[35] M. E. Sander, P. Ablin, M. Blondel, and G. Peyré. Sinkformers: Transformers with doubly stochastic attention. In International Conference on Artificial Intelligence and Statistics, pages 3515–3530. PMLR, 2022.
[36] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
[37] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93–93, 2008.
[38] A. Sperduti. Encoding labeled graphs by labeling RAAM. Advances in Neural Information Processing Systems, 6, 1993.
[39] M. Thorpe, T. M. Nguyen, H. Xia, T. Strohmer, A. Bertozzi, S. Osher, and B. Wang. GRAND++: Graph neural diffusion with a source term. In International Conference on Learning Representations, 2021.
[40] J. Topping, F. Di Giovanni, B. P. Chamberlain, X. Dong, and M. M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. International Conference on Learning Representations, 2022.
[41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[42] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.
[43] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. Simplifying graph convolutional networks. In International Conference on Machine Learning, pages 6861–6871. PMLR, 2019.
[44] L.-P. Xhonneux, M. Qu, and J. Tang. Continuous graph neural networks. In International Conference on Machine Learning, pages 10432–10441. PMLR, 2020.
[45] Y. Yan, M. Hashemi, K. Swersky, Y. Yang, and D. Koutra. Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks. arXiv preprint arXiv:2102.06462, 2021.
[46] Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec. GNNExplainer: Generating explanations for graph neural networks. Advances in Neural Information Processing Systems, 32, 2019.
[47] H. Yuan, H. Yu, S. Gui, and S. Ji. Explainability in graph neural networks: A taxonomic survey. arXiv preprint arXiv:2012.15445, 2020.
[48] L. Zhao and L. Akoglu. PairNorm: Tackling oversmoothing in GNNs. arXiv preprint arXiv:1909.12223, 2019.
[49] D. Zhou and B. Schölkopf. Regularization on discrete spaces. In Joint Pattern Recognition Symposium, pages 361–368. Springer, 2005.
[50] K. Zhou, X. Huang, D. Zha, R. Chen, L. Li, S.-H. Choi, and X. Hu. Dirichlet energy constrained learning for deep graph neural networks. Advances in Neural Information Processing Systems, 34:21834–21846, 2021.
[51] J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. Advances in Neural Information Processing Systems, 33:7793–7804, 2020.
[52] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann. Pitfalls of graph neural network evaluation. In NIPS Workshop, 2018.
[53] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.
[54] M. Fey and J. E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
[55] L. Biewald. Experiment tracking with Weights and Biases, 2020. Software available from wandb.com.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] See Section 6.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See the Societal impact paragraph in Section 6.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes] In Appendix A, Appendix B and Appendix C.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Code and README in the supplemental material; dataloaders in the code.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Splits and hyperparameters are provided in the code zip.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Standard deviations are stated in the results table.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] In Appendix D.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] Datasets and standard libraries are cited in Appendix D.
(b) Did you mention the license of the assets? [Yes] Industry-standard libraries and benchmark datasets were used in accordance with their licences.
(c) Did you include any new assets either in the supplemental material or as a URL? [Yes] Code provided in the supplemental material zip.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [Yes] No personal data is contained within the benchmarking datasets.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the focus of the paper regarding GNNs, and how does it provide a new perspective on them?
2. What are the strengths of the proposed approach, particularly in its ability to explain the issues with traditional GNNs?
3. What are the weaknesses of the paper, especially concerning computational feasibility?
4. Do you have any concerns about the stability and boundedness of features during the evolution of the dynamic system?
5. How do the authors address the limitations and potential societal impacts of their work?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper, the evolution of a GNN is explained as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric 'channel-mixing' matrix. According to the spectral analysis of the solutions, gradient flow graph convolutional models can induce a dynamics dominated by the graph high frequencies, which is desirable for heterophilic datasets. Moreover, the authors present structural constraints on common GNN architectures that allow them to be interpreted as gradient flows, and they perform extensive ablation studies to verify the theoretical analysis and demonstrate the competitive performance of simple and lightweight models on real-world homophilic and heterophilic datasets.
Strengths And Weaknesses
Strengths: The authors give a new perspective on GNNs in terms of a particle system, which explains why the original GNN does not work well on heterophilic datasets, and they also analyse how the Dirichlet energy changes in the dynamical system. The structure of the whole paper is clear and easy to follow. Adequate experiments are used to verify the authors' statements.
Weaknesses: Is this GNN still computationally feasible when the graph size increases? As the model involves the eigendecomposition of the graph Laplacian, this should become hard to compute as the graph grows.
Questions
If two particles (two nodes) are repulsive to each other, will both features blow up (go to infinity) as the time of the dynamical system increases? How can you ensure the features of every node stay bounded while the system evolves?
Limitations
The authors discussed the limitations of their work and its societal impact.
NIPS
Title
Graph Neural Networks as Gradient Flows
Abstract
Dynamical systems minimizing an energy are ubiquitous in geometry and physics. We propose a gradient flow framework for GNNs where the equations follow the direction of steepest descent of a learnable energy. This approach allows to analyse the GNN evolution from a multi-particle perspective as learning attractive and repulsive forces in feature space via the positive and negative eigenvalues of a symmetric 'channel-mixing' matrix. We perform spectral analysis of the solutions and conclude that gradient flow graph convolutional models can induce a dynamics dominated by the graph high frequencies, which is desirable for heterophilic datasets. We also describe structural constraints on common GNN architectures allowing to interpret them as gradient flows. We perform thorough ablation studies corroborating our theoretical analysis and show competitive performance of simple and lightweight models on real-world homophilic and heterophilic datasets.

1 Introduction and motivations

Graph neural networks (GNNs) [38, 20, 21, 36, 7, 15, 27], and in particular their Message Passing formulation (MPNN) [19], have become the standard ML tool for dealing with different types of relations and interactions, ranging from social networks to particle physics and drug design. One of the often cited drawbacks of traditional GNN models is their poor 'explainability', making it hard to know why and how they make certain predictions [46, 47], and in which situations they may work and when they would fail. Limitations of GNNs that have attracted attention are over-smoothing [29, 30, 8], over-squashing and bottlenecks [1, 40], and performance on heterophilic data [31, 51, 13, 4, 45] – where adjacent nodes usually have different labels.

Contributions. We propose a Gradient Flow Framework (GRAFF) where the GNN equations follow the direction of steepest descent of a learnable energy. Thanks to this framework we can (i) interpret GNNs as a multi-particle dynamics where the learned parameters determine pairwise attractive and repulsive potentials in the feature space. This sheds light on how GNNs can adapt to heterophily and explains their performance and the smoothness of the prediction. (ii) GRAFF leads to residual convolutional models where the channel-mixing W is performed by a shared symmetric bilinear form inducing attraction and repulsion via its positive and negative eigenvalues, respectively.
We theoretically investi-32 gate the interaction of the graph spectrum with the spectrum of the33 channel-mixing, proving that if there is more mass on the negative34 eigenvalues of W, then the dynamics is dominated by the graph-35 high frequencies, which could be desirable on heterophilic graphs.36 We also extend results of [29, 30, 8] by showing that when we drop37 the residual connection intrinsic to the gradient flow framework,38 graph convolutional models always induce a low-frequency dominated dynamics independent of the39 sign and magnitude of the spectrum of the channel-mixing. We also discuss how simple choices40 make common architectures fit GRAFF and conduct thorough ablation studies to corroborate the the-41 oretical analysis on the role of the spectrum of W. (iii) We crystallize an instance of our framework42 into a linear, residual, convolutional model that achieves competitive performance on homophilic and43 heterophilic real world graphs whilst being faster than GCN.44 Related work. Our analysis is related to studying GNNs as filters on the graph spectrum [15, 24,45 2, 25] and over-smoothing [29, 30, 8, 50] and partly adopts techniques similar to [30]. The key46 difference is that we also consider the spectrum of the ‘channel-mixing’ matrix. The concept of47 gradient flows has been a standard tool in physics and geometry [16], from which they were adopted48 for image processing [26], and recently used in ML [35] for the analysis of Transformers [41] – see49 also [18] for discussion of loss landscapes. Our continuous-time evolution equations follows the spirit50 of Neural ODES [22, 12, 3] and the study of GNNs as continuous dynamical systems [44, 10, 17, 9].51 Outline. In Section 2, we review the continuous and discrete Dirichlet energy and the associated52 gradient flow framework. We formalize the notion of over-smoothing and low(high)-frequency-53 dominated dynamics to investigate GNNs and study the dominant components in their evolution. We54 extend the graph Dirichlet energy to allow for a non-trivial norm for the feature edge-gradient. This55 leads to gradient flow equations that diffuse the features and over-smooth in the limit. Accordingly,56 in Section 3 we introduce a more general energy with a symmetric channel-mixing matrix W giving57 rise to attractive and repulsive pairwise terms via its positive and negative eigenvalues and show58 that the negative spectrum can induce high-frequency-dominant dynamics. In Section 4 we first59 compare with continuous GNN models and then discretize the equations and provide a ‘recipe’ for60 making standard GNN architectures fit a gradient flow framework. We adapt the spectral analysis to61 discrete-time showing that gradient flow convolutional models can generate a dynamics dominated by62 the high frequencies via the negative eigenvalues of W while this is impossible if we drop the residual63 connection. In Section 5 we corroborate our theoretical analysis on the role of the spectrum of W64 via ablation studies on graphs with varying homophily. Experiments on real world datasets show a65 competitive performance of our model despite its simplicity and reduced number of parameters.66 2 Gradient-flow formalism67 Notations adopted throughout the paper. Let G = (V,E) be an undirected graph with n nodes.68 We denote by F 2 Rn⇥d the matrix of d-dimensional node features, by fi 2 Rd its i-th row69 (transposed), by fr 2 Rn its r-th column, and by vec(F) 2 Rnd the vectorization of F obtained70 by stacking its columns. 
Given a symmetric matrix B, we let B + , B denote its most positive and71 negative eigenvalues, respectively, and ⇢B be its spectral radius. If B ⌫ 0, then gap(B) denotes the72 positive smallest eigenvalue of B. ḟ(t) denotes the temporal derivative, ⌦ is the Kronecker product73 and ‘a.e.’ means almost every w.r.t. Lebesgue measure and usually refers to data in the complement74 of some lower dimensional subspace in Rn⇥d. Proofs and additional results appear in the Appendix.75 Starting point: a geometric parallelism. To motivate a gradient-flow approach for GNNs, we start76 from the continuous case (see Appendix A.1 for details). Consider a smooth map f : Rn ! (Rd, h)77 with h a constant metric represented by H ⌫ 0. The Dirichlet energy of f is defined by78 E(f, h) = 1 2 Z Rn krfk2h dx = 1 2 dX q,r=1 nX j=1 Z Rn hqr@jf q@jf r (x)dx (1) and measures the ‘smoothness’ of f . A natural approach to find minimizers of E - called harmonic79 maps - was introduced in [16] and consists in studying the gradient flow of E , wherein a given map80 f(0) = f0 is evolved according to ḟ(t) = rfE(f(t)). These type of evolution equations have81 historically been the core of variational and PDE-based image processing; in particular, gradient82 flows of the Dirichlet energy were shown [26] to recover the Perona-Malik nonlinear diffusion [32].83 Motivation: GNNs for node-classification. We wish to extend the gradient flow formalism to node84 classification on graphs. Assume we have a graph G, node-features F0 and labels {yi} on Vtrain ⇢ V,85 and that we want to predict the labels on Vtest ⇢ V. A GNN typically evolves the features via some86 parametric rule, GNN✓(G,F0), and uses a decoding map for the prediction y = DE(GNN✓(G,F0)).87 In graph convolutional models [15, 27], GNN✓ consists of two operations: applying a shared linear88 transformation to the features (‘channel mixing’) and propagating them along the edges of the graph89 (‘diffusion’). Our goal consists in studying when GNN✓ is the gradient flow of some parametric class90 of energies E✓ : Rn⇥d ! R, which generalize the Dirichlet energy. This means that the parameters91 can be interpreted as ‘finding the right notion of smoothness’ for our task. We evolve the features by92 Ḟ(t) = rFE✓(F(t)) with prediction y = DE(F(T )) for some optimal time T .93 Why a gradient flow? Since Ė✓(F(t)) = ||rFE✓(F(t))||2, the energy dissipates along the gradient94 flow. Accordingly, this framework allows to explain the GNN dynamics as flowing the node features95 in the direction of steepest descent of E✓. Indeed, we find that parametrizing an energy leads to96 equations governed by attractive and repulsive forces that can be controlled via the spectrum of97 symmetric ‘channel-mixing’ matrices. This shows that by learning to distribute more mass over the98 negative (positive) eigenvalues of the channel-mixing, gradient flow models can generate dynamics99 dominated by the higher (respectively, lower) graph frequencies and hence tackle different homophily100 scenarios. The gradient flow framework also leads to sharing of the weights across layers (since we101 parametrize the energy rather than the evolution equations, as usually done in GNNs), allowing us to102 reduce the number of parameters without compromising performance (see Table 1).103 Analysis on graphs: preliminaries. Given a connected graph G with self-loops, its adjacency104 matrix A is defined as aij = 1 if (i, j) 2 E and zero otherwise. We let D = diag(di) be the degree105 matrix and write Ā := D 1/2AD 1/2. 
Let F 2 Rn⇥d be the matrix representation of a signal. Its106 graph gradient is (rF)ij := fj/ p dj fi/ p di. We define the Laplacian as := 12divr (the107 divergence div is the adjoint of r), represented by = I Ā ⌫ 0. We refer to the eigenvalues of108 as frequencies: the lowest frequency is always 0 while the highest frequency is ⇢ 2 [14]. As109 for the continuum case, the gradient allows to define a (graph) Dirichlet energy as [49]110 E Dir (F) := 1 4 X i X j:(i,j)2E ||(rF)ij || 2 ⌘ 1 4 X (i,j)2E || fi p di fjp dj || 2 = 1 2 trace(F > F), (2) where the extra 1 2 is for convenience. As for manifolds, EDir measures smoothness. If we stack the111 columns of F into vec(F) 2 Rnd, the gradient flow of EDir yields the heat equation on each channel:112 vec(Ḟ(t)) = rvec(F)E Dir (vec(F(t))) = (Id ⌦ )vec(F(t)) () ḟ r (t) = fr(t), (3) for 1 r d. Similarly to [8], we rely on EDir to assess whether a given dynamics t 7! F(t) is a113 smoothing process. A different choice of Laplacian L = D A with non-normalized adjacency114 induces the analogous Dirichlet energy EDirL (F) = 1 2 trace(F > LF). Throughout this paper, we rely115 on the following definitions (see Appendix A.3 for further equivalent formulations and justifications):116 Definition 2.1. Ḟ(t) = GNN✓(F(t), t) initialized at F(0) is smoothing if EDir(F(t)) C + '(t),117 with C a constant only depending on EDir(F(0)) and '̇(t) 0. Over-smoothing occurs if either118 E Dir (F(t)) ! 0 or EDirL (F(t)) ! 0 for t ! 1.119 Our notion of ‘over-smoothing’ is a relaxed version of the definition in [34] – although in the linear120 case one always finds an exponential decay of EDir. We note that EDir(F(t)) ! 0 iff fr(t) ! 0 for121 each column fr. As in [30], this corresponds to a loss of separation power along the solution where122 nodes with equal degree become indistinguishable since we converge to ker( ) (if we replaced 123 with L then we would not even be able to separate nodes with different degrees in the limit).124 To motivate the next definition, consider Ḟ(t) = ĀF(t). Despite ||F(t)|| being unbounded for a.e.125 F(0), the low-frequency components are growing the fastest and indeed F(t)/||F(t)|| ! F1 s.t.126 f r 1 = 0 for 1 r d. We formalize this scenario – including the opposite case of high-frequency127 components being dominant – by studying EDir(F(t)/||F(t)||), i.e. the Rayleigh quotient of Id ⌦ .128 Definition 2.2. Ḟ(t) = GNN✓(F(t), t) initialized at F(0) is Low/High-Frequency-Dominant129 (L/HFD) if EDir(F(t)/||F(t)||) ! 0 (respectively, EDir(F(t)/||F(t)||) ! ⇢ /2) for t ! 1.130 We report a consequence of Definition 2.2 and refer to Appendix A.3 for additional details and131 motivations for the characterizations of LFD and HFD.132 Lemma 2.3. GNN✓ is LFD (HFD) iff for each tj ! 1 there exist tjk ! 1 and F1 s.t.133 F(tjk)/||F(tjk)|| ! F1 and fr1 = 0 ( fr1 = ⇢ fr1, respectively).134 If a graph is homophilic, adjacent nodes are likely to share the same label and we expect a smoothing135 or LFD dynamics enhancing the low-frequency components to be successful at node classification136 tasks [43, 28]. In the opposite case of heterophily, the high-frequency components might contain more137 relevant information for separating classes [4, 5] – the prototypical example being the eigenvector of138 associated with largest frequency ⇢ separating a regular bipartite graph. In other words, the class139 of heterophilic graphs contain instances where signals should be sharpened by increasing EDir rather140 than smoothed out. 
Accordingly, an ideal framework for learning on graphs must accommodate both141 of these opposite scenarios by being able to induce either an LFD or a HFD dynamics.142 Parametric Dirichlet energy: channel-mixing as metric in feature space. In eq. (1) a constant143 nontrivial metric h in Rd leads to the mixing of the feature channels. We adapt this idea by considering144 a symmetric positive semi-definite H = W>W with W 2 Rd⇥d and using it to generalize EDir as145 E Dir W (F) := 1 4 dX q,r=1 X i X j:(i,j)2E hqr(rf q )ij(rf r )ij = 1 4 X (i,j)2E ||W(rF)ij || 2. (4) We note the analogy with eq. (1), where the sum over the nodes replaces the integration over the146 domain and the j-th derivative at some point i is replaced by the gradient along the edge (i, j) 2 E.147 We generally treat W as learnable weights and study the gradient flow of EDirW :148 Ḟ(t) = rFE Dir W (F(t)) = F(t)W > W. (5) We see that eq. (5) generalizes eq. (3). Below ‘smoothing’ is intended as in Definition 2.1.149 Proposition 2.4. Let P kerW be the projection onto ker(W > W). Equation (5) is smoothing since150 E Dir (F(t)) e 2tgap(W >W)gap( ) ||F(0)|| 2 + E Dir ((P kerW ⌦ In)vec(F(0))), t 0. In fact F(t) ! F1 s.t. 9 1 2 Rd: for each i 2 V we have (f1)i = p di 1 + P ker W fi(0).151 Proposition 2.4 implies that no weight matrix W in eq. (5) can separate the limit embeddings F(1)152 of nodes with same degree and input features. If W has a trivial kernel, then nodes with same degrees153 converge to the same representation and over-smoothing occurs as per Definition 2.1. Differently154 from [29, 30, 8], over-smoothing occurs independently of the spectral radius of the ‘channel-mixing’155 if its eigenvalues are positive – even for equations which lead to residual GNNs when discretized156 [12]. According to Proposition 2.4, we do not expect eq. (5) to succeed on heterophilic graphs where157 smoothing processes are generally harmful – this is confirmed in Figure 2 (see prod-curve). To158 remedy this problem, we generalize eq. (5) to a gradient flow that can be HFD as per Definition 2.2.159 3 A general parametric energy for pairwise interactions160 We first rewrite the energy EDirW in eq. (4) as161 E Dir W (F) = 1 2 X i hfi,W > Wfii 1 2 X i,j āijhfi,W > Wfji. (6) We then define a new, more general energy by replacing the occurrences of W>W with new162 symmetric matrices ⌦,W 2 Rd⇥d since we also want to generate repulsive forces:163 E tot (F) := 1 2 X i hfi,⌦fii 1 2 X i,j āijhfi,Wfji ⌘ E ext ⌦ (F) + E pair W (F), (7) with associated gradient flow of the form (see Appendix B)164 Ḟ(t) = rFE tot (F(t)) = F(t)⌦+ ĀF(t)W. (8) Note that eq. (8) is gradient flow of some energy F 7! Etot(F) iff both ⌦ and W are symmetric.165 A multi-particle system point of view: attraction vs repulsion. Consider the d-dimensional166 node-features as particles in Rd with energy Etot. While the term Eext⌦ is independent of the graph167 topology and represents an external field in the feature space, the second term EpairW constitutes a168 potential energy, with W a bilinear form determining the pairwise interactions of adjacent node169 representations. Given a symmetric W, we write W = ⇥> + ⇥+ ⇥ > ⇥ , by decomposing the170 spectrum of W in positive and negative values.We can rewrite Etot = Eext⌦ W + E Dir ⇥+ E Dir ⇥ , i.e.171 E tot (F) = 1 2 X i hfi, (⌦ W)fii+ 1 4 X i,j ||⇥+(rF)ij || 2 1 4 X i,j ||⇥ (rF)ij || 2. (9) The gradient flow of Etot minimizes EDir⇥+ and maximizes E Dir ⇥ . 
The matrix W encodes repulsive172 pairwise interactions via its negative-definite component ⇥ which lead to terms ||⇥ (rF)ij ||173 increasing along the solution. The latter affords a ‘sharpening’ effect desirable on heterophilic graphs174 where we need to disentangle adjacent node representations and hence ‘magnify’ the edge-gradient.175 Spectral analysis of the channel-mixing. We will now show that eq. (8) can lead to a HFD176 dynamics. To this end, we assume that ⌦ = 0 so that eq. (8) becomes Ḟ(t) = ĀF(t)W. According177 to eq. (9) the negative eigenvalues of W lead to repulsion. We show that the latter can induce HFD178 dynamics as per Definition 2.2. We let P ⇢ W be the orthogonal projection into the eigenspace of179 W ⌦ Ā associated with the eigenvalue ⇢ := | W |(⇢ 1). We define ✏HFD explicitly in eq. (24).180 Proposition 3.1. If ⇢ > W+ , then Ḟ(t) = ĀF(t)W is HFD for a.e. F(0): there exists ✏HFD s.t.181 E Dir (F(t)) = e2t⇢ ⇣⇢ 2 ||P ⇢ W F(0)|| 2 +O(e 2t✏HFD) ⌘ , t 0, and F(t)/||F(t)|| converges to F1 2 Rn⇥d such that fr1 = ⇢ fr1, for 1 r d.182 Proposition 3.1 shows that if enough mass of the spectrum of the ‘channel-mixing’ is distributed over183 the negative eigenvalues, then the evolution is dominated by the graph high frequencies. This analysis184 is made possible in our gradient flow framework where W must be symmetric. The HFD dynamics185 induced by negative eigenvalues of W is confirmed in Figure 2 (neg-prod-curve in the bottom chart).186 A more general energy. Equations with a source term may have better expressive power [44, 11, 39].187 In our framework this means adding an extra energy term of the form Esource W̃ (F) := hF,F(0)W̃i188 to eq. (7) with some learnable and W̃. This leads to the following gradient flow:189 Ḟ(t) = F(t)⌦+ ĀF(t)W F(0)W̃. (10) We also observe that one could replace the fixed matrix Ā with a more general symmetric graph190 vector field A satisfying Aij = 0 if (i, j) /2 E, although in this work we focus on the case A = Ā.191 We also note that when ⌦ = W, then eq. (8) becomes Ḟ(t) = F(t)W. We perform a spectral192 analysis of this case in Appendix B.2.193 Non-linear activations. In Appendix B.3 we discuss non-linear gradient flow equations. Here194 we study what happens if the gradient flow in eq. (10) is activated pointwise by : R ! R. We195 show that although we are no longer a gradient flow, the learnable multi-particle energy Etot is still196 decreasing along the solution, meaning that the interpretation of the channel-mixing W inducing197 attraction and repulsion via its positive and negative eigenvalues respectively is preserved.198 Proposition 3.2. Consider a non-linear map : R ! R such that the function x 7! x (x) 0. If199 t 7! F(t) solves the equation200 Ḟ(t) = ⇣ F(t)⌦+ ĀF(t)W F(0)W̃ ⌘ , where acts elementwise, then201 dEtot(F(t)) dt 0. A proof of this result and more details and discussion are reported in Appendix E. We emphasize202 here that differently from previous results about behaviour of ReLU wrt EDir [30, 8], we deal with a203 much more general energy that can also induce repulsion and a more general family of activation204 functions (that include ReLU, tanh, arctan and many others).205 4 Comparison with GNNs206 In this Section, we study standard GNN models from the perspective of our gradient flow framework.207 4.1 Continuous case208 Continuous GNN models replace layers with continuous time. In contrast with Proposition 3.1,209 we show that three main linearized continuous GNN models are either smoothing or LFD as210 per Definition 2.2. 
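A quick numerical check of Proposition 3.1 (our own toy experiment, not one from the paper): integrating Ḟ = ĀFW on a small bipartite graph with the negative-definite choice W = −W₀^⊤W₀ (the ‘neg-prod’ variant of Section 5) pushes the normalized Rayleigh quotient E^Dir(F/‖F‖) of Definition 2.2 towards ρ_Δ/2, while the positive-definite choice +W₀^⊤W₀ (‘prod’) drives it towards 0.

```python
import torch

def laplacian_and_adj(A: torch.Tensor):
    d = A.sum(dim=1).clamp(min=1e-12)
    s = d.rsqrt()
    A_bar = s[:, None] * A * s[None, :]
    return torch.eye(A.shape[0]) - A_bar, A_bar

def rayleigh(F: torch.Tensor, Delta: torch.Tensor) -> torch.Tensor:
    """E^Dir(F / ||F||) = 0.5 * <F, Delta F> / ||F||_F^2, the quantity in Definition 2.2."""
    return 0.5 * torch.sum(F * (Delta @ F)) / torch.sum(F * F)

def run_flow(W, A_bar, Delta, steps=400, tau=0.05):
    """Explicit Euler for dF/dt = A_bar F W; F is renormalized each step to avoid overflow
    (rescaling commutes with the linear update, so the limiting direction is unchanged)."""
    F = torch.randn(A_bar.shape[0], W.shape[0])
    for _ in range(steps):
        F = F + tau * A_bar @ F @ W
        F = F / F.norm()
    return rayleigh(F, Delta)

torch.manual_seed(0)
A = torch.tensor([[0., 1, 1, 0], [1., 0, 0, 1], [1., 0, 0, 1], [0., 1, 1, 0]])  # 4-cycle, bipartite
Delta, A_bar = laplacian_and_adj(A)
W0 = torch.randn(4, 4)
print("neg-prod:", float(run_flow(-W0.T @ W0, A_bar, Delta)))  # ~ rho_Delta / 2 = 1 here (HFD)
print("prod    :", float(run_flow(+W0.T @ W0, A_bar, Delta)))  # ~ 0 (LFD)
```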
The linearized PDE-GCND model [17] corresponds to choosing = 0 and211 ⌦ = W = K(t)>K(t) in eq. (10), for some time-dependent family t 7! K(t) 2 Rd⇥d:212 ḞPDE GCND(t) = F(t)K(t) > K(t). The CGNN model [44] can be derived from eq. (10) by setting ⌦ = I ⌦̃,W = W̃ = I, = 1:213 ḞCGNN(t) = F(t) + F(t)⌦̃+ F(0). Finally, in linearized GRAND [10] a row-stochastic matrix A(F(0)) is learned from the encoding214 via an attention mechanism and we have215 ḞGRAND(t) = RWF(t) = (I A(F(0)))F(t). We note that if A is not symmetric, then GRAND is not a gradient flow.216 Proposition 4.1. PDE GCND, CGNN and GRAND satisfy the following:217 (i) PDE GCND is a smoothing model: ĖDir(FPDE GCND (t)) 0.218 (ii) For a.e. F(0) it holds: CGNN is never HFD and if we remove the source term, then219 E Dir (FCGNN(t)/||FCGNN(t)||) e gap( )t.220 (iii) If G is connected, FGRAND(t) ! µ as t ! 1, with µr = mean(fr(0)), 1 r d.221 By (ii) the source-free CGNN-evolution is LFD independent of ⌦̃. Moreover, by (iii), over-smoothing222 occurs for GRAND as per Definition 2.1. On the other hand, Proposition 3.1 shows that the negative223 eigenvalues of W can make the source-free gradient flow in eq. (8) HFD. Experiments in Section 5224 confirm that the gradient flow model outperforms CGNN and GRAND on heterophilic graphs.225 4.2 Discrete case226 We now describe a discrete version of our gradient flow model and compare it to ‘discrete’ GNNs227 where discrete time steps correspond to different layers. In the spirit of [12], we use explicit Euler228 scheme with step size ⌧ 1 to solve eq. (10) and set W̃ = I. In the gradient flow framework we229 parametrize the energy rather than the actual equations, which leads to symmetric channel-mixing230 matrices ⌦,W 2 Rd⇥d that are shared across the layers. Since the matrices are square, an encoding231 block EN : Rn⇥p ! Rn⇥d is used to process input features F0 2 Rn⇥p and generally reduce the232 hidden dimension from p to d. Moreover, the iterations inherently lead to a residual architecture233 because of the explicit Euler discretization:234 F(t+ ⌧) = F(t) + ⌧ F(t)⌦+ ĀF(t)W + F(0) , F(0) = EN(F0), (11) with prediction y = DE(F(T )) produced by a decoder DE : Rn⇥d ! Rn⇥k, where k is the235 number of label classes and T integration time of the form T = m⌧ , so that m 2 N represents the236 number of layers. Although eq. (11) is linear, we can include non-linear activations in EN, DE237 making the entire model generally non-linear. We emphasize two important points:238 • Since the framework is residual, even if the message-passing is linear, this is not equivalent239 to collapsing the dynamics into a single layer with diffusion matrix Ām, with m the number240 of layers, see eq. (27) in the appendix where we derive the expansion of the solution.241 • We could also activate the equations pointwise and maintain the physics interpretation thanks242 to Proposition 3.2 to gain greater expressive power. In the following though, we mainly243 stick to the linear discrete gradient flow unless otherwise stated.244 Are discrete GNNs gradient flows? Given a (learned) symmetric graph vector field A 2 Rn⇥n245 satisfying Aij = 0 if (i, j) /2 E, consider a family of linear GNNs with shared weights of the form246 F(t+ 1) = F(t)⌦+AF(t)W + F(0)W̃, 0 t T. (12) Symmetry is the key requirement to interpret GNNs in eq. (12) in a gradient flow framework.247 Lemma 4.2. Equation (12) is the unit step size discrete gradient flow of EextI ⌦ + E pair A,W E source W̃ ,248 with EpairA,W defined by replacing Ā with A in eq. 
(7), iff ⌦ and W are symmetric.249 Lemma 4.2 provides a recipe for making standard architectures into a gradient flow, with symmetry250 being the key requirement. When eq. (12) is a gradient flow, the underlying GNN dynamics is251 equivalent to minimizing a multi-particle energy by learning attractive and repulsive directions in252 feature space as discussed in Section 3. In Appendix C.2, we show how Lemma 4.2 covers linear253 versions of GCN [27, 43], GAT [42], GraphSAGE [23] and GCNII [11] to name a few.254 Over-smoothing analysis in discrete setting. By Proposition 3.1 we know that the continuous255 version of eq. (11) can be HFD thanks to the negative eigenvalues of W. The next result represents a256 discrete counterpart of Proposition 3.1 and shows that residual, symmetrized graph convolutional257 models can be HFD. Below P ⇢ W is the projection into the eigenspace associated with the eigenvalue258 ⇢ := | W |(⇢ 1) and we report the explicit value of HFD in eq. (28) in Appendix C.3. We let:259 W + (⇢ 1)) 1 < | W | < 2(⌧(2 ⇢ )) 1. (13) Theorem 4.3. Given F(t+ ⌧) = F(t) + ⌧ĀF(t)W, with W symmetric, if eq. (13) holds then260 E Dir (F(m⌧)) = (1 + ⌧⇢ ) 2m ⇢ 2 ||P ⇢ W F(0)|| 2 +O ✓ 1 + ⌧ HFD 1 + ⌧⇢ ◆2m!! , HFD < ⇢ , hence the dynamics is HFD for a.e. F(0) and in fact F(m⌧)/||F(m⌧)|| ! F1 s.t. fr1 = ⇢ fr1.261 Conversely, if G is not bipartite, then for a.e. F(0) the system F(t + ⌧) = ⌧ĀF(t)W, with W262 symmetric, is LFD independent of the spectrum of W.263 Theorem 4.3 shows that linear discrete gradient flows can be HFD due to the negative eigenvalues of264 W. This differs from statements that standard GCNs act as low-pass filters and thus over-smooth in265 the limit. Indeed, in these cases the spectrum of W is generally ignored [43, 11] or required to be266 sufficiently small in terms of singular value decomposition [29, 30, 8] when no residual connection267 is present. On the other hand, Theorem 4.3 emphasizes that the spectrum of W plays a key role to268 enhance the high frequencies when enough mass is distributed over the negative eigenvalues provided269 that a residual connection exists – this is confirmed by the neg-prod-curve in Figure 2.270 The residual connection from a spectral perspective. Given a sufficiently small step-size so271 that the right hand side of inequality 13 is satisfied, F(t+ ⌧) = F(t) + ⌧ĀF(t)W is HFD for a.e.272 F(0) if | W |(⇢ 1) > W+ , i.e. ‘there is more mass’ in the negative spectrum of W than in the273 positive one. This means that differently from [29, 30, 8], there is no requirement on the minimal274 magnitude of the spectral radius of W coming from the graph topology as long as W + is small275 enough. Conversely, without a residual term, the dynamics is LFD for a.e. F(0) independently of the276 sign and magnitude of the eigenvalues of W. This is also confirmed by the GCN-curve in Figure 2.277 Over-smoothing vs LFD. We highlight how in general a linear GCN equation as F(t + ⌧) =278 ⌧ĀF(t)W may avoid over-smoothing in the sense of Definition 2.1, meaning that EDir(F(t)) ! 1279 as soon as there exist i 2 (0, 1) and the spectral radius of W is large enough. However, this280 will not lead to over-separation since the dominating term is the lowest frequency one: in other281 words, once we re-set the scale right as per the normalization in Theorem 4.3, we encounter loss of282 separability even with large (and possibly negative) spectrum of W.283 5 Experiments284 In this section we evaluate the gradient flow framework (GRAFF). 
We corroborate the spectral285 analysis using synthetic data with controllable homophily. We confirm that having negative (positive)286 eigenvalues of the channel-mixing W are essential in heterophilic (homophilic) scenarios where the287 gradient flow should align with HFD (LFD) respectively. We show that the gradient flow in eq. (11)288 – a linear, residual, symmetric graph convolutional model – achieves competitive performance on289 heterophilic datasets.290 Methodology. We crystallize GRAFF in the model presented in eq. (11) with EN, DE im-291 plemented as single linear layers or MLPs, and we set ⌦ to be diagonal. For the real-world292 experiments we consider diagonally-dominant (DD), diagonal (D) and time-dependent choices293 for the structure of W that offer explicit control over its spectrum. In the (DD)-case, we consider294 a W0 2 Rd⇥d symmetric with zero diagonal and w 2 Rd defined by w↵ = q↵ P |W 0 ↵ | + r↵,295 and set W = diag(w) + W0. Due to the Gershgorin Theorem the eigenvalues of W belong to296 [w↵ P |W 0 ↵ |,w↵ + P |W 0 ↵ |], so the model ‘can’ easily re-distribute mass in the spectrum of297 W via q↵, r↵. This generalizes the decomposition of W in [11] providing a justification in terms of298 its spectrum and turns out to be more efficient w.r.t. the hidden dimension d as shown in Figure 4 in299 the Appendix. For (D) we take W to be diagonal, with entries sampled U [ 1, 1] and fixed – i.e., we300 do not train over W – and only learn EN, DE. We also include a time-dependent model where Wt301 varies across layers. To investigate the role of the spectrum of W on synthetic graphs, we construct302 three additional variants: W = W0 + W0>, W = ±W0>W0 named sum, prod and neg-prod303 respectively where prod (neg-prod) variants have only non-negative (non-positive) eigenvalues.304 Complexity and number of parameters. If we treat the number of layers as a constant, the discrete305 gradient flow scales as O(|V|pd + |E|d2), where p and d are input feature and hidden dimension306 respectively, with p d usually. Note that GCN has complexity O(|E|pd) and in fact our model is307 faster than GCN as confirmed in Figure 5 in Appendix D. Since EN, DE are single linear layers308 (MLPs), we can bound the number of parameters by pd+ d2 + 3d+ dk, with k the number of label309 classes, in the (DD)-variant while in the (D)-variant we have pd+ 3d+ dk. Further ablation studies310 appear in Figure 4 in the Appendix showing that (DD) outperforms sum and GCN – especially in the311 lower hidden dimension regime – on real-world benchmarks with varying homophily.312 Synthetic experiments and ablation studies.313 To investigate our claims in a controlled environ-314 ment we use the synthetic Cora dataset of [51, Ap-315 pendix G]. Graphs are generated for target levels316 of homophily via preferential attachment – see317 Appendix D.3 for details. Figure 2 confirms the318 spectral analysis and offers a better understanding319 in terms of performance and smoothness of the320 predictions. Each curve – except GCN – repre-321 sents one version of W as in ‘methodology’ and322 we implement eq. (11) with = 0, ⌦ = 0. Fig-323 ure 2 (top) reports the test accuracy vs true label324 homophily. Neg-prod is better than prod on low-325 homophily and viceversa on high-homophily. This326 confirms Proposition 3.1 where we have shown327 that the gradient flow can lead to a HFD dy-328 namics – that are generally desirable with low-329 homophily – through the negative eigenvalues of330 W. 
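The following PyTorch sketch (ours; class names, hyperparameters and the sign/scale of the source term βF(0) are our reading of eq. (10)–(11), not the authors' released code) puts the pieces of this paragraph together: the (DD) parametrization of W with learnable q and r, a diagonal Ω, weights shared across layers, and the residual explicit-Euler update of eq. (11) wrapped between a linear encoder EN and decoder DE.

```python
import torch
import torch.nn as nn

class DiagonallyDominantW(nn.Module):
    """(DD) parametrization: W = diag(w) + W0 with W0 symmetric and zero-diagonal, and
    w_a = q_a * sum_b |W0_ab| + r_a. By Gershgorin's theorem every eigenvalue of W lies in
    [w_a - sum_b |W0_ab|, w_a + sum_b |W0_ab|] for some a, so q, r let the model move spectral
    mass towards negative (repulsive / HFD) or positive (attractive / LFD) eigenvalues."""
    def __init__(self, d: int):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(d, d) / d ** 0.5)
        self.q = nn.Parameter(torch.zeros(d))
        self.r = nn.Parameter(torch.zeros(d))

    def forward(self) -> torch.Tensor:
        W0 = 0.5 * (self.raw + self.raw.T)
        W0 = W0 - torch.diag(torch.diagonal(W0))          # symmetric, zero diagonal
        w = self.q * W0.abs().sum(dim=1) + self.r
        return torch.diag(w) + W0

class GRAFF(nn.Module):
    """Discrete gradient flow of eq. (11): residual explicit-Euler updates, shared weights."""
    def __init__(self, in_dim, hid_dim, out_dim, layers=8, tau=0.5, beta=1.0):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)             # EN
        self.dec = nn.Linear(hid_dim, out_dim)            # DE
        self.omega = nn.Parameter(torch.zeros(hid_dim))   # diagonal Omega, as in 'Methodology'
        self.W = DiagonallyDominantW(hid_dim)
        self.layers, self.tau, self.beta = layers, tau, beta

    def forward(self, X, A_bar):
        W = self.W()
        F0 = self.enc(X)
        F = F0
        for _ in range(self.layers):
            # F(t + tau) = F(t) + tau * ( -F(t) Omega + A_bar F(t) W + beta * F(0) )
            F = F + self.tau * (-F * self.omega + A_bar @ F @ W + self.beta * F0)
        return self.dec(F)

# Toy usage on a 4-node path graph with 5 input features and 3 classes.
A = torch.tensor([[0., 1, 0, 0], [1., 0, 1, 0], [0., 1, 0, 1], [0., 0, 1, 0]])
d = A.sum(1).clamp(min=1e-12).rsqrt()
A_bar = d[:, None] * A * d[None, :]
model = GRAFF(in_dim=5, hid_dim=16, out_dim=3)
print(model(torch.randn(4, 5), A_bar).shape)   # torch.Size([4, 3])
```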
Conversely, the prod configuration (where we331 have an attraction-only dynamics) struggles in low-332 homophily scenarios even though a residual connection is present. Both prod and neg-prod are333 ‘extreme’ choices and serve the purpose of highlighting that by turning off one side of the spectrum334 this could be the more damaging depending on the underlying homophily. In general though ‘neutral’335 variants like sum and (DD) are indeed more flexible and better performing. In fact, (DD) outperforms336 GCN especially in low-homophily scenarios, confirming Theorem 4.3 where we have shown that337 without a residual connection convolutional models are LFD – and hence more sensitive to underlying338 homophily – irrespectively of the spectrum of W. This is further confirmed in Figure 3.339 In Figure 2 (bottom) we compute the homophily of the prediction (cross) for a given method and we340 compare with the homophily (circle) of the prediction read from the encoding (i.e. graph-agnostic).341 The homophily here is a proxy to assess whether the evolution is smoothing, the goal being explaining342 the smoothness of the prediction via the spectrum of W as per our theoretical analysis. For neg-prod343 the homophily after the evolution is lower than that of the encoding, supporting the analysis that344 negative eigenvalues of W enhance high-frequencies. The opposite behaviour occurs in the case of345 prod and explains that in the low-homophily regime prod is under-performant due to the prediction346 being smoother than the true homophily. (DD) and sum variants adapt better to the true homophily.347 We note how the encoding compensates when the dynamics can only either attract or repulse (i.e. the348 spectrum of W has a sign) by decreasing or increasing the initial homophily respectively.349 Real world experiments. We test GRAFF against a range of datasets with varying homophily350 [37, 33, 31] (see Appendix D.4 for additional details). We use results provided in [45, Table 1],351 which includes standard baselines as GCN [27], GraphSAGE [23], GAT [42], PairNorm [48] and352 recent models tailored towards the heterophilic setting (GGCN [45], Geom-GCN [31], H2GCN353 [51] and GPRGNN [13]). For Sheaf [5], a recent top-performer on heterophilic datasets, we took354 the best performing variant (out of six provided) for each dataset. We also include continuous355 baselines CGNN [44] and GRAND [10] to provide empirical evidence for Proposition 4.1. Splits356 taken from [31] are used in all the comparisons. The GRAFF model discussed in ‘methodology’357 is a very simple architecture with shared parameters across layers and run-time smaller than GCN358 and more recent models like GGCN designed for heterophilic graphs (see Figure 5 in the Appendix).359 Nevertheless, it achieves competitive results on all datasets, performing on par or better than more360 complex recent models. Moreover, comparison with the ‘time-dependent’ (DD) variant confirms361 that by sharing weights across layers we do not lose performance. We note that on heterophilic362 graphs short integration time is usually needed due to the topology being harmful and the negative363 eigenvalues of W leading to exponential behaviour (see Appendix D).364 6 Conclusions365 In this work, we developed a framework for GNNs where the evolution can be interpreted as366 minimizing a multi-particle learnable energy. 
This translates into studying the interaction between367 the spectrum of the graph and the spectrum of the ‘channel-mixing’ leading to a better understanding368 of when and why the induced dynamics is low (high) frequency dominated. From a theoretical369 perspective, we refined existing asymptotic analysis of GNNs to account for the role of the spectrum of370 the channel-mixing as well. From a practical perspective, our framework allows for ‘educated’ choices371 resulting in a simple convolutional model that achieves competitive performance on homophilic372 and heterophilic benchmarks while being faster than GCN. Our results refute the folklore of graph373 convolutional models being too simple for heterophilic benchmarks.374 Limitations and future works. We limited our attention to a constant bilinear form W, which375 might be excessively rigid. It is possible to derive non-constant alternatives that are aware of the376 features or the position in the graph. The main challenge amounts to matching the requirement for377 local ‘heterogeneity’ with efficiency: we reserve this question for future work. Our analysis is also a378 first step into studying the interaction of the graph and ‘channel-mixing’ spectra; we did not explore379 other dynamics that are neither LFD nor HFD as per our definitions. The energy formulation points380 to new models more ‘physics’ inspired; this will be explored in future work.381 Societal impact. Our work sheds light on the actual dynamics of GNNs and could hence improve382 their understanding, which is crucial for assessing their impact on large-scale applications. We also383 show that instances of our framework achieve competitive performance on heterophilic data despite384 being faster than GCN, providing evidence for efficient methods with reduced footprint.385 References386 [1] U. Alon and E. Yahav. On the bottleneck of graph neural networks and its practical implications.387 In International Conference on Learning Representations, 2021.388 [2] M. Balcilar, G. Renton, P. Héroux, B. Gaüzère, S. Adam, and P. Honeine. Analyzing the389 expressive power of graph neural networks in a spectral perspective. In International Conference390 on Learning Representations, 2020.391 [3] M. Biloš, J. Sommer, S. S. Rangapuram, T. Januschowski, and S. Günnemann. Neural flows:392 Efficient alternative to neural odes. In Advances in Neural Information Processing Systems,393 volume 34, 2021.394 [4] D. Bo, X. Wang, C. Shi, and H. Shen. Beyond low-frequency information in graph convolutional395 networks. In AAAI. AAAI Press, 2021.396 [5] C. Bodnar, F. Di Giovanni, B. P. Chamberlain, P. Liò, and M. M. Bronstein. Neural sheaf397 diffusion: A topological perspective on heterophily and oversmoothing in gnns. arXiv preprint398 arXiv:2202.04579, 2022.399 [6] S. Brody, U. Alon, and E. Yahav. How attentive are graph attention networks? arXiv preprint400 arXiv:2105.14491, 2021.401 [7] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected402 networks on graphs. In 2nd International Conference on Learning Representations, ICLR 2014,403 2014.404 [8] C. Cai and Y. Wang. A note on over-smoothing for graph neural networks. arXiv preprint405 arXiv:2006.13318, 2020.406 [9] B. Chamberlain, J. Rowbottom, D. Eynard, F. Di Giovanni, X. Dong, and M. Bronstein. Beltrami407 flow and neural diffusion on graphs. Advances in Neural Information Processing Systems, 34,408 2021.409 [10] B. Chamberlain, J. Rowbottom, M. I. Gorinova, M. Bronstein, S. Webb, and E. Rossi. 
Grand:410 Graph neural diffusion. In International Conference on Machine Learning, pages 1407–1418.411 PMLR, 2021.412 [11] M. Chen, Z. Wei, Z. Huang, B. Ding, and Y. Li. Simple and deep graph convolutional networks.413 In International Conference on Machine Learning, pages 1725–1735. PMLR, 2020.414 [12] R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential415 equations. Advances in neural information processing systems, 31, 2018.416 [13] E. Chien, J. Peng, P. Li, and O. Milenkovic. Adaptive universal generalized pagerank graph417 neural network. In 9th International Conference on Learning Representations, ICLR 2021,418 2021.419 [14] F. R. Chung and F. C. Graham. Spectral graph theory. Number 92. American Mathematical420 Soc., 1997.421 [15] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs422 with fast localized spectral filtering. Advances in neural information processing systems, 29,423 2016.424 [16] J. Eells and J. H. Sampson. Harmonic mappings of riemannian manifolds. American journal of425 mathematics, 86(1):109–160, 1964.426 [17] M. Eliasof, E. Haber, and E. Treister. Pde-gcn: Novel architectures for graph neural networks427 motivated by partial differential equations. Advances in Neural Information Processing Systems,428 34, 2021.429 [18] M. Geiger, L. Petrini, and M. Wyart. Landscape and training regimes in deep learning. Physics430 Reports, 924:1–18, 2021.431 [19] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing432 for quantum chemistry. In International Conference on Machine Learning, pages 1263–1272.433 PMLR, 2017.434 [20] C. Goller and A. Kuchler. Learning task-dependent distributed representations by backprop-435 agation through structure. In Proceedings of International Conference on Neural Networks436 (ICNN’96), volume 1, pages 347–352. IEEE, 1996.437 [21] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In438 Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 2,439 pages 729–734. IEEE, 2005.440 [22] E. Haber and L. Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34,441 2018.442 [23] W. Hamilton, Z. Ying, and J. Leskovec. Inductive representation learning on large graphs.443 Advances in neural information processing systems, 30, 2017.444 [24] D. K. Hammond, P. Vandergheynst, and R. Gribonval. The spectral graph wavelet transform:445 Fundamental theory and fast computation. In Vertex-Frequency Analysis of Graph Signals,446 pages 141–175. Springer, 2019.447 [25] M. He, Z. Wei, H. Xu, et al. Bernnet: Learning arbitrary graph spectral filters via bernstein448 approximation. Advances in Neural Information Processing Systems, 34, 2021.449 [26] R. Kimmel, N. Sochen, and R. Malladi. From high energy physics to low level vision. In450 International Conference on Scale-Space Theories in Computer Vision, pages 236–247. Springer,451 1997.452 [27] T. N. Kipf and M. Welling. Semi-Supervised Classification with Graph Convolutional Networks.453 In Proceedings of the 5th International Conference on Learning Representations, ICLR ’17,454 2017.455 [28] J. Klicpera, S. Weißenberger, and S. Günnemann. Diffusion improves graph learning. In456 Proceedings of the 33rd International Conference on Neural Information Processing Systems,457 2019.458 [29] H. Nt and T. Maehara. Revisiting graph neural networks: All we have is low-pass filters. 
arXiv459 preprint arXiv:1905.09550, 2019.460 [30] K. Oono and T. Suzuki. Graph neural networks exponentially lose expressive power for node461 classification. In International Conference on Learning Representations, 2020.462 [31] H. Pei, B. Wei, K. C. Chang, Y. Lei, and B. Yang. Geom-gcn: Geometric graph convolutional463 networks. In 8th International Conference on Learning Representations, ICLR 2020, 2020.464 [32] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. PAMI,465 12(7):629–639, 1990.466 [33] B. Rozemberczki, C. Allen, and R. Sarkar. Multi-scale attributed node embedding. Journal of467 Complex Networks, 9(2):cnab014, 2021.468 [34] T. K. Rusch, B. P. Chamberlain, J. Rowbottom, S. Mishra, and M. M. Bronstein. Graph-coupled469 oscillator networks. In International Conference on Machine Learning, 2022.470 [35] M. E. Sander, P. Ablin, M. Blondel, and G. Peyré. Sinkformers: Transformers with doubly471 stochastic attention. In International Conference on Artificial Intelligence and Statistics, pages472 3515–3530. PMLR, 2022.473 [36] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural474 network model. IEEE transactions on neural networks, 20(1):61–80, 2008.475 [37] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classifica-476 tion in network data. AI magazine, 29(3):93–93, 2008.477 [38] A. Sperduti. Encoding labeled graphs by labeling raam. Advances in Neural Information478 Processing Systems, 6, 1993.479 [39] M. Thorpe, T. M. Nguyen, H. Xia, T. Strohmer, A. Bertozzi, S. Osher, and B. Wang. Grand++:480 Graph neural diffusion with a source term. In International Conference on Learning Represen-481 tations, 2021.482 [40] J. Topping, F. Di Giovanni, B. P. Chamberlain, X. Dong, and M. M. Bronstein. Understanding483 over-squashing and bottlenecks on graphs via curvature. International Conference on Learning484 Representations, 2022.485 [41] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and486 I. Polosukhin. Attention is all you need. Advances in neural information processing systems,487 30, 2017.488 [42] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention489 networks. In International Conference on Learning Representations, 2018.490 [43] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. Simplifying graph convolutional491 networks. In International conference on machine learning, pages 6861–6871. PMLR, 2019.492 [44] L.-P. Xhonneux, M. Qu, and J. Tang. Continuous graph neural networks. In International493 Conference on Machine Learning, pages 10432–10441. PMLR, 2020.494 [45] Y. Yan, M. Hashemi, K. Swersky, Y. Yang, and D. Koutra. Two sides of the same coin:495 Heterophily and oversmoothing in graph convolutional neural networks. arXiv preprint496 arXiv:2102.06462, 2021.497 [46] Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec. Gnnexplainer: Generating expla-498 nations for graph neural networks. Advances in neural information processing systems, 32,499 2019.500 [47] H. Yuan, H. Yu, S. Gui, and S. Ji. Explainability in graph neural networks: A taxonomic survey.501 arXiv preprint arXiv:2012.15445, 2020.502 [48] L. Zhao and L. Akoglu. Pairnorm: Tackling oversmoothing in gnns. arXiv preprint503 arXiv:1909.12223, 2019.504 [49] D. Zhou and B. Schölkopf. Regularization on discrete spaces. In Joint Pattern Recognition505 Symposium, pages 361–368. Springer, 2005.506 [50] K. Zhou, X. Huang, D. Zha, R. 
Chen, L. Li, S.-H. Choi, and X. Hu. Dirichlet energy constrained507 learning for deep graph neural networks. Advances in Neural Information Processing Systems,508 34:21834–21846, 2021.509 [51] J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra. Beyond homophily in graph510 neural networks: Current limitations and effective designs. Advances in Neural Information511 Processing Systems, 33:7793–7804, 2020.512 [52] O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann. Pitfalls of graph neural network513 evaluation. In NIPS workshop, 2018.514 [53] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,515 N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani,516 S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style,517 high-performance deep learning library. In NeurIPS. 2019.518 [54] M. Fey and J. E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR519 Workshop on Representation Learning on Graphs and Manifolds, 2019.520 [55] L. Biewald. Experiment tracking with weights and biases, 2020. Software available from521 wandb.com.522 Checklist523 The checklist follows the references. Please read the checklist guidelines carefully for information on524 how to answer these questions. For each question, change the default [TODO] to [Yes] , [No] , or525 [N/A] . You are strongly encouraged to include a justification to your answer, either by referencing526 the appropriate section of your paper or providing a brief inline description. For example:527 • Did you include the license to the code and datasets? [Yes] See Section ??.528 • Did you include the license to the code and datasets? [No] The code and the data are529 proprietary.530 • Did you include the license to the code and datasets? [N/A]531 Please do not modify the questions and only use the provided macros for your answers. Note that the532 Checklist section does not count towards the page limit. In your paper, please delete this instructions533 block and only keep the Checklist section heading above along with the questions/answers below.534 1. For all authors...535 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s536 contributions and scope? [Yes]537 (b) Did you describe the limitations of your work? [Yes] , in Section 6.538 (c) Did you discuss any potential negative societal impacts of your work? [Yes] in the539 Societal impact paragraph in Section 6.540 (d) Have you read the ethics review guidelines and ensured that your paper conforms to541 them? [Yes]542 2. If you are including theoretical results...543 (a) Did you state the full set of assumptions of all theoretical results? [Yes]544 (b) Did you include complete proofs of all theoretical results? [Yes] in Appendix A,545 Appendix B and Appendix C.546 3. If you ran experiments...547 (a) Did you include the code, data, and instructions needed to reproduce the main exper-548 imental results (either in the supplemental material or as a URL)? [Yes] Code and549 README in SM, dataloaders in code550 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they551 were chosen)? [Yes] Splits and hyperparameters provided in code zip552 (c) Did you report error bars (e.g., with respect to the random seed after running experi-553 ments multiple times)? 
[Yes] Standard deviations are stated in results table554 (d) Did you include the total amount of compute and the type of resources used (e.g., type555 of GPUs, internal cluster, or cloud provider)? [Yes] in appendix D556 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...557 (a) If your work uses existing assets, did you cite the creators? [Yes] datasets and standard558 libraries cited in appendix D559 (b) Did you mention the license of the assets? [Yes] industry standard libraries and560 benchmark datasets were used in accordance with licences561 (c) Did you include any new assets either in the supplemental material or as a URL? [Yes]562 code provided in SM zip563 (d) Did you discuss whether and how consent was obtained from people whose data you’re564 using/curating? [N/A]565 (e) Did you discuss whether the data you are using/curating contains personally identifiable566 information or offensive content? [Yes] no personal data is contained within bench-567 marking datasets568 5. If you used crowdsourcing or conducted research with human subjects...569 (a) Did you include the full text of instructions given to participants and screenshots, if570 applicable? [N/A]571 (b) Did you describe any potential participant risks, with links to Institutional Review572 Board (IRB) approvals, if applicable? [N/A]573 (c) Did you include the estimated hourly wage paid to participants and the total amount574 spent on participant compensation? [N/A]575
1. What is the focus and contribution of the paper on graph neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its ability to handle heterophilic graphs?
3. What are the weaknesses of the paper, especially regarding its empirical evaluation and expressive power analysis?
4. How does the reviewer assess the clarity and notation usage in the paper?
5. What additional questions or suggestions does the reviewer have regarding the paper's content and comparisons with other works?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper This paper introduces a new family of models, GRAFF, on graphs wherein graph features are transformed according to a dynamical system given by the negative gradient of an energy functional, which is parameterized and learned. This amounts to a re-parameterization to focus on an energy function describing a discretized iterative update, instead of parameterizing the iterative update itself, as the most widely used GNN architectures do. This relation is properly studied in Section 4, where they show that GRAFF still includes many prior GNN model (up to the perhaps critically important matter of the non-linearity). However, this re-parameterization appears to offer more than just a reinterpretation of existing models. The primary value added explored in this work is to the analysis (and empirics) of the ability to handle heterophilious graphs. Strengths And Weaknesses First, a congratulations to the authors on a nice piece of work that offers some new perspectives, and promising new directions. I enjoyed reading your work and certainly felt that I learned something in the process. Below I discuss some of the things I especially liked in this work, as well as some of the concerns I have about certain aspects. Overall, I think this work contains some great conceptual components, but leaves open so quite important questions, particularly revolving around expressive power. The empirical evaluation is also relatively weak, and leaves me uncertain whether gradient flow models would enjoy widespread adoption. I will explain why I came to each of these beliefs in more detail below. Strong aspects: The community is moving towards a number of candidate approaches for circumnavigating the weaknesses of message passing networks. Although many ideas have already been given, the debate remains open. The idea of models following gradient flows is creative, and immediately prompted an “aha!” feeling. The community is in need of creative ideas like this, as you never know which will end up having a decisive impact. On a technical level, it was pleasing to see the amenability of gradient flows to analysis of smoothing properties. The result in Section 3 on the Dirichlet energy functional were particularly interesting. It is quite unfortunate, however unfortunately typical, that the analysis doesn’t extent to non-linear activations (line 201). Weak aspects: A major missing piece of the picture is an understanding of the expressive power of the proposed gradient flow models. I found it particularly perplexing that line 202 mentions that no non-linearity is used in experiments. This raises a number of questions: are these models then of comparable expressive power to linear GNNs? Given that having no non-linearity doesn’t hurt performance does this just suggest that the empirical benchmarks considered just aren’t that challenging? The paper claims several times that GRAFF models are “explainable”. The basis for this is that the model predictions can be understood by probing property of the energy functional. While this may turn out to be a useful point, the paper does not properly substantiate the claim. Indeed, there are no examples of any such “explanation” in practice. I would ask the author to either drop the “explainable” claim entirely (which isn’t critical in any-case, despite it’s prominent position in the explanation of ”why a gradient flow?”) or to clearly substantiate it, probably via an example. Experimental evaluation is fairly limited. 
Table 1 is the main seat of comparisons to other models on node level classification tasks for varying levels of homophily. As billed, the strengths of GRAFF seem to emerge primarily in low homophily (high heterphily) graphs. However, the heterophilic graphs are very small: half only have a few hundred nodes, and the biggest graph considered—“Films”- has 7,600 nodes, and an MLP is a fairly competitive baseline on Films. All this means that the possible benefits to empirical methodology in the immediate future from seem unclear. To conclude, although the empirics leave a number of question marks over the immediate empirical viability, the idea of graph models via gradient flows along an energy functional is elegant and thought provoking for me. The idea itself, plus the good exploration of the connection to smoothing, is enough to put me on the side of acceptance, but the limitations mentioned keep it only marginally so. Miscellaneous: Clarity in certain places could be improved. For instance, Propositions 2.4 and 3.1 give explicit rates for the energy functional. Since (unless I am missing something!) the key point in both cases is that the models are high-frequency dominant, the rate itself seems to be more of an intermediate step towards this final HFD conclusion. Maybe it is a matter of personal taste but I would have hidden the gory details I the appendix. More generally, the paper is pretty notation heavy Questions What component(s) of GRAFF explain the inference speedup vis-à-vis GCN? Is it due to parameter sharing? Also, no details are given as to how this comparison was decided. Right now I cannot be sure that the comparison is apples to apples; maybe the GCN is a really massive model and the GRAFF is much smaller. More details on this would be great. Why just compare inference time? What about comparison of training time? It seems remiss not to include this, especially since the main paper simply mentions “run-time smaller than GCN” (line 359). The connection to spectral GNNs is interesting. Perhaps this suggests a path to developing expressive power results. Limitations Yes.
NIPS
Title Deep Supervised Summarization: Algorithm and Application to Learning Instructions Abstract We address the problem of finding representative points of datasets by learning from multiple datasets and their ground-truth summaries. We develop a supervised subset selection framework, based on the facility location utility function, which learns to map datasets to their ground-truth representatives. To do so, we propose to learn representations of data so that the input of transformed data to the facility location recovers their ground-truth representatives. Given the NP-hardness of the utility function, we consider its convex relaxation based on sparse representation and investigate conditions under which the solution of the convex optimization recovers ground-truth representatives of each dataset. We design a loss function whose minimization over the parameters of the data representation network leads to satisfying the theoretical conditions, hence guaranteeing recovering groundtruth summaries. Given the non-convexity of the loss function, we develop an efficient learning scheme that alternates between representation learning by minimizing our proposed loss given the current assignments of points to ground-truth representatives and updating assignments given the current data representation. By experiments on the problem of learning key-steps (subactivities) of instructional videos, we show that our proposed framework improves the state-of-the-art supervised subset selection algorithms. 1 Introduction Subset selection, which is the task of finding a small subset of most informative points from a large dataset, is a fundamental machine learning task with many applications, including, procedure learning [1, 2, 3], image, video, speech and document summarization [4, 5, 6, 7, 8, 9, 10, 11], data clustering [12, 13, 14, 15], feature and model selection [16, 17, 18, 19], social network marketing [20], product recommendation [21] and sensor placement [22, 23]. Subset selection involves design and optimization of utility functions that characterize the informativeness of selected data points, referred to as representatives. Different criteria have been studied in the literature, including (sequential) facility location [24, 2, 1] maximum cut [25, 26], maximum marginal relevance [27], sparse coding [28, 29] and DPPs [11, 30, 31]. Given that almost all subset selection criteria are, in general, nonconvex and NP-hard, approximate methods, such as greedy algorithms for optimizing graph-cuts and (sequential) facility location [24, 32, 2], sampling from Determinantal Point Process (DPP) [11, 31] and convex relaxation-based methods [12, 33, 29, 34, 35, 36] have been studied in the literature. Existing work on subset selection can be divided into the two main categories of unsupervised and supervised methods. The majority of existing research on subset selection falls into the unsupervised category, where one finds representatives of a dataset by optimizing the above criteria [5, 6, 7, 8, 9, 10, 11, 15, 22, 12, 28, 29] or others, such as diversity or coverage [37, 38, 39, 40, 41], importance 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. [42, 43, 5, 6, 44, 45, 46] and relevance [47, 39, 42, 48, 49]. The results are subsequently evaluated qualitatively or quantitatively against ground-truth representatives. Supervised Subset Selection. 
Humans perform remarkably well in summarization of video and speech data, e.g., describe the content of a long complex video by a few sentences or by selecting a few frames/segments. This has motivated the development and study of supervised subset selection techniques that learn from human, with the goal of bringing high-level reasoning and incorporating user preferences into subset selection. More formally, in the supervised setting, given datasets and their ground-truth representatives, one tries to train subset selection to recover the ground-truth summary of each training dataset and to generalize to new datasets. Despite its importance, supervised subset selection has only been more recently studied in the literature [8, 50, 51, 52, 53, 54, 55, 56, 30]. One difficulty is that supervised subset selection cannot be naively treated as classification, since, whether an item receives the label ‘representative’ or ‘non-representative’ depends on its relationships to the entire data. For example, a representative car image among images of cars, once considered in a dataset of face images will become nonrepresentative. To address the problem, [50, 52] try to learn a combination of different criteria, i.e., weights of a mixture of submodular functions. However, deciding about which submodular functions and how many to combine is a non-trivial problem, which affects the performance. On the other hand, [8, 51, 53, 54, 30, 55] learn a DPP kernel or adapt it to test videos, by maximizing the likelihood of the ground-truth summary under the DPP kernel. However, maximizing the summary likelihood for the ground-truth does not necessarily decrease the likelihood of non-ground-truth subsets. Deep Supervised Facility Location. In this paper, we address the problem of supervised subset selection based on representation learning for a convex relaxation of the uncapacitated facility location function. Facility location is a clustering-based subset selection that finds a set of representatives for which the sum of dissimilarities from every point to the closest representative is minimized [24, 32]. Given the NP-hardness of the problem, different approaches such as convex relaxation [57, 29, 12, 35, 36] and greedy submodular maximization [24, 32] have been proposed to efficiently optimize this utility function. We use convex relaxation because of an appealing property that we exploit: we show conditions under which the sparse convex relaxation recovers ground-truth representatives. We use these conditions to design a loss function to learn representation of data so that inputing each transformed dataset to the facility location leads to finding ground-truth representatives. Our loss function consists of three terms, a medoid loss that enforces each ground-truth representative be the medoid of its associated cluster, an inter-cluster loss that makes sure there is sufficient margin between points in different clusters induced by ground-truth representatives and an intra-cluster loss that enforces the distances between points in each cluster be smaller than a margin. The latter two loss functions are based on a margin that depends on the regularization parameter of the uncapacitated facility location and the number of points in induced clusters. The conditions and our proposed loss function require knowing the clustering of the data based on assignments to ground-truth representatives. However, computing the assignments requires access to the optimal representation, which is not available. 
Thus, we propose an optimization scheme that alternates between updating the representation by minimizing our proposed loss given the current assignments of points to ground-truth representatives and updating the assignments given the current representation. We perform experiments on the problem of supervised instructional video summarization, where each video consists of a set of key-steps (subactivities), needed to achieve a given task. In this case, each training video comes with a list of representative segments/frames, without knowing the labels of representatives and without knowing which representatives across different videos correspond to the same key-step (subactivity), making the supervised subset selection extremely more challenging than classification. Our experiments on two large datasets of ProceL [1] and Breakfast [58] show the effectiveness of our framework. Remark 1 Our setting is different than interactive subset selection [59, 60] that incorporates human supervision interactively, i.e., as we run subset selection, we receive and incorporate human feedback to improve subset selection. In our case, we do not have human in the loop interactively. Also, our setting is different than weakly supervised video summarization [61, 62] that use the name of the video categories or additional web data to perform summarization. We assume each dataset has ground-truth summary and do not use additional web data. Finally, [63] uses facility location for metric learning. However, this requires knowledge about assignments of points to predefined categories, which is a stronger requirement than only knowing the ground-truth representatives. Remark 2 To the best of our knowledge, this is the first work on supervised subset selection that derives conditions for the exactness of a subset selection utility function (i.e., conditions under which subset selection recovers ground-truth representatives) and employs these conditions to design a loss function for representation learning, e.g., via DNNs. In fact, this work takes a major step towards a theoretically motivated supervised subset selection framework. Paper Organization. The paper is organized as follows. In Section 2, we review the facility location and convex relaxation to solve the subset selection efficiently. In Section 3, we show conditions for the equivalence of the two problems, design a new loss function for representation learning whose minimum satisfies the conditions, hence, guaranteeing to obtain ground-truth representatives, and propose an efficient learning algorithm. In Section 4, we show experimental results on the ProceL and Breakfast datasets for instructional video summarization. Finally, Section 5 concludes the paper. 2 Background on Subset Selection Facility Location. Facility location is a clustering-based subset selection utility function, in which each point is assigned to one representative, hence, performing both representative selection and clustering [24]. More specifically, assume we have a dataset Y = {y1, . . . ,yN} consisting of N points, for which we are given dissimilarities between pairs of points. Let di,j = d(yi,yj) denote the dissimilarity between points yi and yj , with d(·, ·) being the dissimilarity function. The smaller the di,j is, the better yi represents yj . We assume that dissimilarities are non-negative, provide a partial ordering of data and we have djj < dij for every i 6= j. In order to find representatives, the facility location selects a subset S ⊆ {1, . . . 
, N} of the data points and assigns each point in Y to the representative point in S with minimum dissimilarity. In particular, the uncapacitated facility location [64, 65] tries to find a subset S with a sufficiently small cardinality that gives the best encoding of the dataset, i.e.,

$$\min_{S\subseteq\{1,\ldots,N\}} \;\; \lambda|S| + \sum_{j=1}^{N}\min_{i\in S} d_{ij}, \qquad (1)$$

where λ ≥ 0 is a regularization parameter that sets a trade-off between the number of representatives, |S|, and the encoding quality via S. When λ is zero, every point will be a representative of itself.

Sparse Convex Relaxation. Optimizing the facility location in (1) is NP-hard, as it requires searching over all possible subsets of the dataset. This has motivated efficient algorithms, including forward-backward greedy submodular maximization with worst-case performance guarantees [66] as well as sparse convex relaxation [12]. To obtain the convex relaxation, which we use in the paper, one first defines assignment variables z_{ij} ∈ {0, 1}, where z_{ij} is 1 when y_j is represented by y_i and is zero otherwise. We can rewrite (1) as an equivalent optimization on the assignment variables as

$$\min_{\{z_{ij}\}} \;\; \lambda\sum_{i=1}^{N}\mathbb{I}\big(\|[z_{i1}\,\cdots\,z_{iN}]\|_p\big) + \sum_{i,j=1}^{N} d_{ij}z_{ij} \quad \text{s.t.} \quad z_{ij}\in\{0,1\},\;\; \sum_{i=1}^{N} z_{ij}=1,\;\; \forall\, i,j, \qquad (2)$$

where I(·) is an indicator function, which is one when its argument is nonzero and is zero otherwise. Thus, the first term of the objective function measures the number of representatives, since [z_{i1} ··· z_{iN}] is nonzero when y_i represents some of the data points and becomes zero otherwise. The second term measures the encoding cost, while the constraints ensure that each point is represented by only one representative. Notice that (2), which is equivalent to (1), is still an NP-hard problem. Also, (2) is a group-sparse optimization where ideally a few vectors [z_{i1} ··· z_{iN}] must be nonzero for a few i’s that would correspond to the representative points. To obtain an efficient convex relaxation based on group-sparsity (for p ≥ 1) [12, 29], we drop the indicator function and relax the binary constraints to z_{ij} ∈ [0, 1], hence we solve

$$\min_{\{z_{ij}\}} \;\; \lambda\sum_{i=1}^{N}\big\|[z_{i1}\,\cdots\,z_{iN}]\big\|_p + \sum_{i,j=1}^{N} d_{ij}z_{ij} \quad \text{s.t.} \quad z_{ij}\ge 0,\;\; \sum_{i=1}^{N} z_{ij}=1,\;\; \forall\, i,j. \qquad (3)$$

We then obtain the set of representatives R as the points y_i for which z_{ij} is nonzero for some j. Moreover, we obtain a clustering of the data according to assignments of points to representatives, where for every representative i ∈ R, we obtain its cluster G_i = {j ∈ {1, . . . , N} | z_{ij} = 1} as the set of all points assigned to i.

3 Supervised Facility Location

In this section, we present our proposed approach for supervised subset selection. We discuss conditions under which (3), which is the practical and efficient algorithm for solving the uncapacitated facility location, recovers ground-truth representatives from datasets. We use these conditions to design a loss function for representation learning so that, for the transformed data obtained by minimizing the loss, (3) and equivalently (1) will select the ground-truth summaries of training datasets. We then present an efficient learning framework to optimize our proposed loss function.

3.1 Problem Setting

Assume we have L datasets and their ground-truth representatives, {(Y_ℓ, R_ℓ)}_{ℓ=1}^{L}, where Y_ℓ = {y_{ℓ,1}, . . . , y_{ℓ,N_ℓ}} denotes the N_ℓ data points in the ℓ-th dataset and R_ℓ ⊆ {1, . . . , N_ℓ} denotes the associated set of indices of ground-truth representatives.
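As a concrete illustration of objective (1) (our own sketch, and deliberately simple: a forward greedy stand-in in the spirit of the greedy approaches of [24, 32], not the ADMM-based convex solver of [12] used in the paper's experiments), the snippet below adds representatives while they decrease λ|S| + Σ_j min_{i∈S} d_{ij} and returns the selected set together with the induced clustering.

```python
import numpy as np

def facility_location_greedy(D: np.ndarray, lam: float):
    """Greedy (approximate) minimization of eq. (1): lam * |S| + sum_j min_{i in S} d_ij.

    D[i, j] is the dissimilarity of point i to point j (how badly i represents j).
    Representatives are added as long as they strictly decrease the objective.
    """
    N = D.shape[0]
    S = []
    best_cost = np.inf
    enc = np.full(N, np.inf)                       # current encoding cost of every point
    while True:
        gains = []
        for i in range(N):
            if i in S:
                gains.append(np.inf)
                continue
            new_enc = np.minimum(enc, D[i])        # encoding cost if i were added
            gains.append(lam * (len(S) + 1) + new_enc.sum())
        i_best = int(np.argmin(gains))
        if gains[i_best] >= best_cost:
            break
        best_cost = gains[i_best]
        S.append(i_best)
        enc = np.minimum(enc, D[i_best])
    # Assign every point to its closest selected representative (the clustering induced by S).
    assign = np.array([S[int(np.argmin([D[i, j] for i in S]))] for j in range(N)])
    return S, assign, best_cost

# Toy example: five points on a line forming two clusters; Euclidean distances as dissimilarities.
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
D = np.abs(y[:, None] - y[None, :])
print(facility_location_greedy(D, lam=0.5))
```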
The goal of supervised subset selection is to train a subset selection method so that the input of each dataset Y_ℓ to the trained model leads to obtaining the ground-truth representatives R_ℓ. In the paper, we fix the subset selection method to the uncapacitated facility location in (1) and consider p = ∞ in (2) and (3). We cast the supervised subset selection problem as learning a transformation f_Θ(·) on the input data so that running the convex algorithm (3) on f_Θ(Y_ℓ) leads to obtaining R_ℓ. We use a deep neural network, parametrized by Θ, for representation learning and use the Euclidean distance as the measure of dissimilarity, i.e., we define

$$ d^{\ell}_{i,j} \;:=\; \big\| f_\Theta(y_{\ell,i}) - f_\Theta(y_{\ell,j}) \big\|_2. \qquad (4)$$

Notice that we can use other dissimilarities as well (the theory and learning algorithm below work for other dissimilarities); however, the Euclidean distance results in obtaining an embedding space where points are gathered around ground-truth representatives according to ℓ2 distances. To learn the parameters Θ, we design a loss function using conditions that guarantee the performance of (3) for obtaining ground-truth representatives across datasets.

3.2 Proposed Learning Framework

We investigate conditions under which the convex algorithm in (3) recovers a given set of points as representatives of the transformed data {f_Θ(y_{ℓ,1}), . . . , f_Θ(y_{ℓ,N_ℓ})}. We show that under these conditions, the solution of the convex algorithm in (3), which has the constraint z_{i,j} ∈ [0, 1], will be integer. As a result, the convex relaxation will recover the same solution as the NP-hard non-convex uncapacitated facility location, i.e., the optimality gap between the non-convex and convex formulations vanishes. We then use these conditions to design a loss function for learning the representation parameters Θ.

Theorem 1 Consider the convex relaxation of the uncapacitated facility location in (3), with a fixed λ and p = ∞. Let R_ℓ be the set of ground-truth representatives from the ℓ-th dataset {f_Θ(y_{ℓ,1}), . . . , f_Θ(y_{ℓ,N_ℓ})} and let G^ℓ_i denote the cluster associated with the representative i ∈ R_ℓ, i.e.,

$$ \mathcal{G}^{\ell}_i = \Big\{\, j \;\big|\; i = \arg\min_{i'} d^{\ell}_{i',j} = \arg\min_{i'} \big\|f_\Theta(y_{\ell,i'}) - f_\Theta(y_{\ell,j})\big\|_2 \,\Big\}. \qquad (5)$$

The optimization (3) recovers R_ℓ as the set of representatives if the following conditions hold:

1. ∀ i ∈ R_ℓ, ∀ i′ ∈ G^ℓ_i, we have Σ_{j∈G^ℓ_i} d^ℓ_{i,j} ≤ Σ_{j∈G^ℓ_i} d^ℓ_{i′,j};
2. ∀ i ∈ R_ℓ, ∀ j ∈ G^ℓ_i, ∀ i′ ∉ G^ℓ_i, we have λ/|G^ℓ_i| + d^ℓ_{i,j} < d^ℓ_{i′,j};
3. ∀ i ∈ R_ℓ, ∀ i′, j ∈ G^ℓ_i, we have d^ℓ_{i′,j} ≤ λ/|G^ℓ_i| + d^ℓ_{i,j}.

The first condition (medoid condition) states that for points assigned to the cluster of i ∈ R_ℓ, the representative point i must achieve the minimum encoding cost. The second condition (inter-cluster condition) states that the closest point to each cluster from other groups must be sufficiently far from it. The third condition (intra-cluster condition) states that points in the same cluster must not be far from each other. For both the inter- and intra-cluster conditions, the separation margin is given by λ/|G^ℓ_i|, depending on the regularization parameter and the number of points in each cluster, i.e., we have a margin adaptive to each cluster. Under the conditions of Theorem 1, we can show that there is no gap between the NP-hard non-convex formulation in (1) and its convex relaxation in (3).

Algorithm 1: Supervised Facility Location Learning
Input: Datasets {Y_ℓ}_{ℓ=1}^{L} and ground-truth representatives {R_ℓ}_{ℓ=1}^{L}.
1: Initialize Θ by using a pretrained network;
2: while (Not Converged) do
3:   For fixed Θ, compute G^ℓ_1, G^ℓ_2, . . . for each dataset ℓ via (5);
4:   For fixed {G^ℓ_1, G^ℓ_2, . . .}_{ℓ=1}^{L}, update Θ by minimizing the loss function (7);
5: end while
Output: Optimal parameters Θ.

Corollary 1 Under the assumptions of Theorem 1, the convex relaxation in (3) is equivalent to the non-convex uncapacitated facility location optimization in (1), both recovering the same integer solution, where for Y_ℓ we recover R_ℓ as the representative set.

We can also show similar results for p = 2 (see the supplementary file). Next, we use the above result to design a loss function for supervised subset selection using the uncapacitated facility location. In fact, if we find a representation Θ for which the conditions of Theorem 1 are satisfied, then not only does the combinatorial optimization in (1) recover the ground-truth representatives from each dataset, but we also obtain the same solution using the efficient sparse optimization in (3). To find the desired Θ, we propose a loss function that penalizes violation of the conditions of Theorem 1. More specifically, we define three loss functions corresponding to the conditions of the theorem, as

$$ \mathcal{L}^{\ell}_{\mathrm{medoid}}(\Theta) := \sum_{i\in\mathcal{R}_\ell}\sum_{i'\in\mathcal{G}^{\ell}_i} \Big( \sum_{j\in\mathcal{G}^{\ell}_i} d^{\ell}_{i,j} - \sum_{j\in\mathcal{G}^{\ell}_i} d^{\ell}_{i',j} \Big)_+ , $$

$$ \mathcal{L}^{\ell}_{\mathrm{inter}}(\Theta) := \sum_{i\in\mathcal{R}_\ell}\sum_{j\in\mathcal{G}^{\ell}_i}\sum_{i'\notin\mathcal{G}^{\ell}_i} \Big( \frac{\lambda}{|\mathcal{G}^{\ell}_i|} + d^{\ell}_{i,j} - d^{\ell}_{i',j} \Big)_+ , $$

$$ \mathcal{L}^{\ell}_{\mathrm{intra}}(\Theta) := \sum_{i\in\mathcal{R}_\ell}\sum_{i',j\in\mathcal{G}^{\ell}_i} \Big( d^{\ell}_{i',j} - d^{\ell}_{i,j} - \frac{\lambda}{|\mathcal{G}^{\ell}_i|} \Big)_+ , \qquad (6)$$

where (x)_+ := max{0, x} is the non-negative thresholding (or ReLU) operator, and L^ℓ_medoid, L^ℓ_inter, L^ℓ_intra measure and penalize violation of the medoid, inter-cluster and intra-cluster conditions, respectively, for the dataset ℓ. Putting the three loss functions together, we propose to minimize the following cost function, defined over the L datasets,

$$ \min_{\Theta} \;\; \mathcal{L}(\Theta) := \sum_{\ell=1}^{L} \Big( \mathcal{L}^{\ell}_{\mathrm{medoid}}(\Theta) + \rho_{\mathrm{inter}}\,\mathcal{L}^{\ell}_{\mathrm{inter}}(\Theta) + \rho_{\mathrm{intra}}\,\mathcal{L}^{\ell}_{\mathrm{intra}}(\Theta) \Big), \qquad (7)$$

where ρ_inter, ρ_intra ≥ 0 are regularization parameters that set a trade-off between the three terms. To minimize L, we need the clustering of points in every dataset Y_ℓ according to assignments of points to the ground-truth representative set R_ℓ, which requires computing the G^ℓ_i's. However, computing such a clustering via (5) requires knowledge of the optimal representation of the data Θ, which is not available. To address the problem, we propose an efficient learning algorithm that alternates between updating the representation parameters Θ by minimizing the proposed loss given the current assignments of points to ground-truth representatives, and updating the assignments given the current representation. Algorithm 1 shows the steps of our learning algorithm. Notice that the loss functions naively require considering every representative and every pair of points in the same or different clusters. Given the redundancy of points, this is often not needed and we can sample only a few pairs of points in the same or different clusters to compute each loss.

Adaptive Margin. It is important to note that our derived conditions and the loss function make use of a margin λ/|G_i| that depends on the facility location hyperparameter and the number of points in each cluster G_i. In other words, the margin is different for different clusters and during different iterations of our learning scheme. More specifically, for a representative that has few points assigned to it, the size of the cluster would be small, hence incurring a larger margin than clusters with a larger number of points.
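To make eq. (6)–(7) and this per-cluster margin concrete, here is a PyTorch sketch of the three hinge losses for a single dataset (our own reading of the equations, not the authors' released code; the assignment of points to representatives is assumed given, as in step 3 of Algorithm 1, and in practice pairs are subsampled as described above).

```python
import torch

def supervised_fl_loss(Z, reps, assign, lam, rho_inter=1.0, rho_intra=1.0):
    """Hinge penalties of eq. (6)-(7) for a single dataset.

    Z:      (N, d) transformed features f_Theta(Y).
    reps:   list of indices of the ground-truth representatives R.
    assign: (N,) tensor; assign[j] = k means point j belongs to the cluster of reps[k] (eq. (5)).
    """
    D = torch.cdist(Z, Z, p=2)                       # d_{i,j} of eq. (4)
    L_med = Z.new_zeros(())
    L_inter = Z.new_zeros(())
    L_intra = Z.new_zeros(())
    for k, i in enumerate(reps):
        members = (assign == k).nonzero(as_tuple=True)[0]
        others = (assign != k).nonzero(as_tuple=True)[0]
        margin = lam / max(len(members), 1)          # adaptive margin lambda / |G_i|
        # Medoid condition: i must encode its cluster at least as well as any member i'.
        enc_rep = D[i, members].sum()
        enc_members = D[members][:, members].sum(dim=1)
        L_med = L_med + torch.relu(enc_rep - enc_members).sum()
        # Inter-cluster condition: for j in G_i and i' outside, need margin + d_{i,j} < d_{i',j}.
        if len(others) > 0:
            viol = margin + D[i, members].unsqueeze(1) - D[others][:, members].T
            L_inter = L_inter + torch.relu(viol).sum()
        # Intra-cluster condition: for i', j in G_i, need d_{i',j} <= margin + d_{i,j}.
        viol = D[members][:, members] - D[i, members].unsqueeze(0) - margin
        L_intra = L_intra + torch.relu(viol).sum()
    return L_med + rho_inter * L_inter + rho_intra * L_intra

# Toy usage: 12 points embedded by a (hypothetical) representation network, two ground-truth
# representatives; in the alternating scheme of Algorithm 1, `assign` would be recomputed from
# the current embedding before each parameter update.
Z = torch.randn(12, 16, requires_grad=True)
reps = [0, 6]
assign = torch.tensor([0] * 6 + [1] * 6)
loss = supervised_fl_loss(Z, reps, assign, lam=0.1)
loss.backward()
print(float(loss))
```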
Adaptive Margin. It is important to note that our derived conditions and the loss function make use of a margin $\lambda / |\mathcal{G}_i|$ that depends on the facility-location hyperparameter and the number of points in each cluster $\mathcal{G}_i$. In other words, the margin is different for different clusters and changes across iterations of our learning scheme. More specifically, for a representative that has few points assigned to it, the size of the cluster is small, hence incurring a larger margin than clusters with more points. This adaptive margin has the following effect: when a cluster has a small number of points, it can be considered under-sampled; hence, to generalize better to test data, we need a better separation from other clusters, i.e., a larger margin. On the other hand, for a cluster with a large number of points, the margin can be smaller, as the chance of changing the distances between and within clusters by adding more samples to it is low. This is in contrast to contrastive loss functions, which use a fixed margin for pairs of dissimilar items while reducing the distances of similar items as much as possible. Another difference with respect to contrastive loss functions is that in our loss we compare the encoding quality of each representative point to non-representative points, whereas in contrastive loss one uses all pairs of similar and dissimilar items.

Remark 3 While [35, 36] have shown the integrality of the convex relaxation for cardinality-constrained facility location, we showed equivalence conditions for the uncapacitated problem. Moreover, the nature of our conditions, as opposed to asymptotic results, allowed us to design the loss in (7). Also, we learn to effectively use a common λ across different datasets, which cannot be done in the cardinality-constrained case, where the number of ground-truth representatives is already given.

4 Experiments

In this section, we evaluate the performance of our method, which we refer to as Supervised Facility Location (SupFL), as well as other algorithms for learning key-steps (subactivities) of instructional videos by learning from ground-truth summaries. Notice that each training dataset comes with a list of representative segments/frames, without knowing the labels of the representatives and without knowing which representatives across different videos correspond to the same key-step (subactivity). This makes supervised subset selection different from, and considerably more challenging than, classification.

4.1 Experimental Setting

Dataset. We perform experiments on the ProceL [1] and Breakfast [58] datasets. ProceL is a large multimodal dataset of 12 diverse tasks, such as ‘install Chromecast’, ‘assemble Clarinet’ and ‘perform CPR’. Each task consists of about 60 videos and has a grammar of key-steps; e.g., ‘perform CPR’ consists of ‘call emergency’, ‘check pulse’, ‘open airway’, ‘give compression’ and ‘give breath’. Each video is annotated with the key-steps. Breakfast is another large dataset of 10 cooking activities performed by 52 individuals in 18 different kitchens. The videos are captured using multiple cameras with different viewpoints. Each activity has approximately 200 videos, corresponding to different views of each person doing the same task, for a total of 1,989 videos in the dataset. Similar to ProceL, each task consists of multiple key-steps (subactivities) required to achieve the task. For example, ‘making cereal’ consists of ‘take a bowl’, ‘pour cereals’, ‘pour milk’, ‘stir cereals’ and ‘sil’ (for background frames at the beginning and the end). For the experiments on ProceL, we split the videos of each task into 70% for training, 15% for validation and 15% for testing. For Breakfast, we split the videos of each activity into 60% for training, 20% for validation and 20% for testing. We use the middle segment of each subactivity as the ground-truth representative.
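As a small illustration of how the training pairs $(Y_\ell, \mathcal{R}_\ell)$ can be assembled from the annotations, the sketch below takes per-segment key-step labels of one video and returns the index of the middle segment of each key-step as its ground-truth representative. The label format and key-step names are hypothetical, and each key-step is assumed to form a single contiguous span; the datasets' actual annotation files may be organized differently.

```python
def ground_truth_representatives(segment_labels):
    """Index of the middle segment of each key-step, used as its ground-truth representative.

    segment_labels: per-segment key-step labels of one video, in temporal order,
                    e.g. ["sil", "take_bowl", "take_bowl", "pour_cereals", ...].
    Returns a dict mapping each key-step to the index of its middle segment.
    """
    spans = {}
    for idx, label in enumerate(segment_labels):
        spans.setdefault(label, []).append(idx)
    # Middle segment of each key-step's span (ties broken toward the earlier segment).
    return {label: idxs[(len(idxs) - 1) // 2] for label, idxs in spans.items()}

# Example: ground-truth representatives for a toy labeling of one video.
labels = ["sil", "take_bowl", "take_bowl", "take_bowl", "pour_cereals", "pour_cereals", "sil"]
reps = sorted(ground_truth_representatives(labels).values())   # segment indices forming R_ell
```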
Feature Extraction and Learning. Given the similarity of consecutive frames, we perform subset selection at the segment level. For ProceL, we use the segments provided in the dataset, and for Breakfast, we divide each video into 16-frame segments with an 8-frame overlap between two consecutive segments. We use the C3D network [67] for feature extraction in each segment and use the 4,096-dimensional feature obtained by the first dense layer after the convolutional layers. We consider two variants of our method: i) SupFL(L), where we learn a linear transformation on the C3D features; ii) SupFL(N), where we learn the parameters of a neural network applied to the C3D features. We use the Euclidean distance for pairwise dissimilarities.

Algorithms and Baselines. We compare the two variants of our method, SupFL(L) and SupFL(N), discussed above, against SubmodMix [52], which learns the weights of a mixture of submodular functions, dppLSTM [54], which learns to select representatives using a bidirectional LSTM combined with the DPP kernel, and FCSN [68], which learns the weights of a fully convolutional network by treating subset selection as classification of each segment into representative vs. non-representative. To show the effectiveness of learning, we also compare with two unsupervised baselines: Uniform, which selects representatives uniformly at random from all segments, and UFL, which corresponds to running the uncapacitated facility location via the forward-backward greedy method on dissimilarities computed from C3D features. This in particular allows us to investigate the effectiveness of our method in taking advantage of ground-truth summaries.

Evaluation metric. Following [58], we report the segment-wise precision (P), action-wise recall (R) and F1 score (F). These metrics measure the performance of finding a representative for each key-step and the correctness of the video segmentation based on the assignments of segments to representatives. More specifically, for a video with $N_s$ segments and $N_a$ ground-truth key-steps, after running subset selection we assign each segment to its closest recovered representative and compute
$$P = \frac{\hat{N}_s}{N_s}, \qquad R = \frac{\hat{N}_a}{N_a}, \qquad F = \frac{2PR}{P + R}, \qquad (8)$$
where $\hat{N}_s$ is the number of segments that are correctly assigned to representatives, given the ground-truth assignment labels, and $\hat{N}_a$ is the number of key-steps in the video that are recovered via the representatives. The F1 score is the harmonic mean of the segment-wise precision and action-wise recall and lies between 0 and 1. We report the average of each score over the videos of each task. A small sketch of this metric computation is given below.

Implementation details. We implemented our framework in PyTorch and used the ADMM framework in [12] for subset selection via UFL and our SupFL. We train a model for each individual activity. For SupFL(L), we set the dimension of the transformed data to 1000 and 500 for ProceL and Breakfast, respectively, while for SupFL(N) we set the network dimensions to 4096 × 1000 × 1000 and 4096 × 1000 × 500 for ProceL and Breakfast, respectively, with ReLU activations for the second layer. We train our model with mini-batch stochastic gradient descent, using 5 videos in each mini-batch and the Adam optimizer with a learning rate of 1e-4 and a weight decay of 5e-4. We train for at most 50 epochs. To reduce the training time, after we compute the assignments of points to each representative in our alternating algorithm, we randomly sample 10 points from each group and use them to form the loss functions in (6).
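The following is a minimal sketch of the metric in (8). It assumes a simple matching convention (a segment counts as correct when its assigned representative carries the segment's own key-step label, and a key-step counts as recovered when some representative carries that label); the exact matching protocol of [58] may differ in details.

```python
def evaluate_video(seg_labels, rep_indices, assignment):
    """Segment-wise precision, action-wise recall and F1 score of (8) for one video.

    seg_labels:  ground-truth key-step label of every segment.
    rep_indices: indices of the recovered representative segments.
    assignment:  for every segment, the index (into rep_indices) of its representative.
    """
    N_s, N_a = len(seg_labels), len(set(seg_labels))
    rep_labels = [seg_labels[r] for r in rep_indices]
    # A segment is correct if it is assigned to a representative of its own key-step.
    correct = sum(seg_labels[j] == rep_labels[assignment[j]] for j in range(N_s))
    recovered = len(set(rep_labels))            # key-steps covered by the representatives
    P, R = correct / N_s, recovered / N_a
    F = 2 * P * R / (P + R) if P + R > 0 else 0.0
    return P, R, F

# Toy usage: 6 segments, two key-steps, representatives at segments 1 and 4.
P, R, F = evaluate_video(["a", "a", "a", "b", "b", "b"], [1, 4], [0, 0, 0, 1, 1, 1])
```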
Our method has three hyperparameters (λ, ρinter, ρintra), where λ is the regularization parameter of the UFL in (3), while ρinter and ρintra are the regularization parameters of our loss function in (7). We set the values of the hyperparameters using the validation set (we did not perform heavy hyperparameter tuning). In the experiments, we show the effect of the regularization parameters on the performance. To have a fair comparison, we run all methods to select the same number of representatives as the number of ground-truth key-steps in the grammar of the task.

4.2 Experimental Results

Table 1 shows the average F1 score (%) of different methods on each task in the ProceL dataset. Notice that our method outperforms the other algorithms, obtaining 67.8% and 67.0% F1 score via SupFL(N) and SupFL(L), respectively, over the entire dataset. Compared to UFL, which is the unsupervised version of our framework, we obtain significant improvement in all tasks, e.g., improving the F1 score by 8.4% and 7.3% for ‘tie a tie’ and ‘change iPhone battery’, respectively. dppLSTM, which is supervised, does not do as well as our method and the other two supervised algorithms. This comes from the fact that dppLSTM often selects multiple segments from one key-step and from the background, due to their appearance diversity, while missing some key-steps entirely (see Figure 3). While SubmodMix and FCSN perform better than the other baselines, their overall performance is about 4% lower than our method. This comes from the fact that SubmodMix has limited learning capacity, depending on which submodular functions are included in the mixture, while FCSN treats supervised subset selection as classification, hence embeds ground-truth representative segments (class 1) close to each other and far from non-representative segments (class 0), which is not desired, as a representative and a non-representative segment could be very similar.

Table 2 shows the average F1 score (%) on the Breakfast dataset (FCSN on Breakfast produced significantly lower F1 scores than all other baselines). While both versions of our method outperform the other algorithms, in contrast to ProceL, SupFL(L) generally does better than SupFL(N). Moreover, the gap between the performance of UFL and SupFL is smaller. This comes from the fact that the C3D features already capture discriminative information for separating different key-steps (subactivities); hence, learning a linear transformation generally does better than a nonlinear one, and less improvement is to be expected from learning from ground-truth summaries.

Figure 1 shows the average F1 score improvement over not learning the data representation on the test videos of the four tasks ‘perform CPR’, ‘change iPhone battery’, ‘make coffee’ and ‘change tire’ in ProceL, as a function of the number of training epochs. Notice that, generally, as training continues the F1 score improves, obtaining between 4% and 10% improvement, depending on the task, over using the raw C3D features.

Hyperparameter Effect. We also analyze the performance of our method as a function of the regularization parameters (λ, ρinter, ρintra), where λ corresponds to the regularization parameter of the uncapacitated FL utility function in (3), while ρinter and ρintra correspond to the hyperparameters that set a trade-off between the three terms of our loss function in (7). Figure 2 shows the F1 score on the ProceL dataset, where, to see the effect of each hyperparameter, we have fixed the values of the other two (these fixed values depend on the task). Notice that the F1 score is relatively stable with respect to the hyperparameter change.
In particular, changing λ from 0.001 to 0.1 changes the F1 score over the dataset by at most 1.2%, while changing ρinter and ρintra from 0.01 to 10 changes the performance by at most 0.6% and 2.1%, respectively.

Ablation Studies. To show the effectiveness of using all three loss functions in our proposed cost function in (7), we perform ablation studies. Table 3 shows the average precision, recall and F1 scores on the ProceL dataset. Notice that when we use only one loss or a combination of two loss functions, we achieve relatively similar low scores, about 7% lower than using the three loss functions together. This shows that, as expected from the theoretical results, we need to use all loss functions corresponding to the three theoretical conditions in order to effectively learn from ground-truth summaries. Also, notice that the medoid loss alone, or its combination with either of the two other losses, obtains slightly better performance than using the inter-cluster or intra-cluster loss or their combination. This is expected, as the medoid loss tries to center points around each ground-truth representative. Finally, the combination of the inter-cluster and intra-cluster losses, which weakly resembles the contrastive loss, does not do well in the supervised subset selection problem.

Qualitative Results. Figure 3 shows qualitative results of running different methods on two videos from the two tasks ‘change iPhone battery’ and ‘make smoke salmon sandwich’ from the ProceL dataset, where all methods choose the same number of representatives (for clarity, we do not show representatives obtained from the background). Notice that for ‘make smoke salmon sandwich’ our method correctly finds representatives from all key-steps, while the other methods miss one of the key-steps. Similarly, for ‘change iPhone battery’, our method is more successful than the baselines, which miss 5 or 6 key-steps. Our method in general does better in obtaining diverse representative segments, while the other supervised baselines often obtain multiple redundant representatives from the same key-step.

5 Conclusions

We addressed the problem of supervised subset selection by generalizing facility location to learn from ground-truth summaries. We considered an efficient sparse optimization of the uncapacitated facility location and investigated conditions under which it recovers ground-truth representatives and becomes equivalent to the original NP-hard problem. We designed a loss function and an efficient framework to learn representations of data so that the input of the transformed data to the facility location satisfies the theoretical conditions, hence recovering ground-truth summaries. We showed the effectiveness of our method for recovering key-steps of instructional videos. To the best of our knowledge, this is the first work on supervised subset selection that derives conditions under which subset selection recovers ground-truth representatives and employs them to design a loss function for deep representation learning. We believe that this work takes a major step towards a theoretically motivated supervised subset selection framework.

Acknowledgements

This work is supported by DARPA Young Faculty Award (D18AP00050), NSF (IIS-1657197), ONR (N000141812132) and ARO (W911NF1810300).
Chengguang Xu would like to thank Dat Huynh and Zwe Naing for their help and advice with some of the implementations during his research assistantship at the MCADS lab, which resulted in this work.
1. What are the main contributions of the paper, and how do they relate to the concept of facility location and clustering? 2. How does the paper justify its technical ideas, particularly those related to clustering? 3. What kind of experimental results does the paper provide, and how do they demonstrate the effectiveness of the proposed method? 4. Are there any limitations or potential improvements regarding the number of datasets used in the experiments?
Review
Review The paper is well-written and easy to follow. The main contributions are clear and justified, and they follow quite directly from the concept of using facility location and clustering to do learning. The technical ideas are strongly related to clustering, from the clustering-based loss functions to the EM/K-means flavor of the main algorithm. The experimental results quite thoroughly demonstrate the value of the proposed method on a dataset, though perhaps more datasets could be used. [Update after Author Feedback]. I thank the authors for their detailed responses to the reviewer questions.
NIPS
Title Deep Supervised Summarization: Algorithm and Application to Learning Instructions
Abstract We address the problem of finding representative points of datasets by learning from multiple datasets and their ground-truth summaries. We develop a supervised subset selection framework, based on the facility location utility function, which learns to map datasets to their ground-truth representatives. To do so, we propose to learn representations of data so that the input of the transformed data to the facility location recovers their ground-truth representatives. Given the NP-hardness of the utility function, we consider its convex relaxation based on sparse representation and investigate conditions under which the solution of the convex optimization recovers the ground-truth representatives of each dataset. We design a loss function whose minimization over the parameters of the data representation network leads to satisfying the theoretical conditions, hence guaranteeing recovery of the ground-truth summaries. Given the non-convexity of the loss function, we develop an efficient learning scheme that alternates between representation learning, by minimizing our proposed loss given the current assignments of points to ground-truth representatives, and updating the assignments given the current data representation. Through experiments on the problem of learning key-steps (subactivities) of instructional videos, we show that our proposed framework improves over state-of-the-art supervised subset selection algorithms.

1 Introduction

Subset selection, the task of finding a small subset of the most informative points in a large dataset, is a fundamental machine learning task with many applications, including procedure learning [1, 2, 3], image, video, speech and document summarization [4, 5, 6, 7, 8, 9, 10, 11], data clustering [12, 13, 14, 15], feature and model selection [16, 17, 18, 19], social network marketing [20], product recommendation [21] and sensor placement [22, 23]. Subset selection involves the design and optimization of utility functions that characterize the informativeness of the selected data points, referred to as representatives. Different criteria have been studied in the literature, including (sequential) facility location [24, 2, 1], maximum cut [25, 26], maximum marginal relevance [27], sparse coding [28, 29] and DPPs [11, 30, 31]. Given that almost all subset selection criteria are, in general, non-convex and NP-hard, approximate methods, such as greedy algorithms for optimizing graph cuts and (sequential) facility location [24, 32, 2], sampling from a Determinantal Point Process (DPP) [11, 31] and convex relaxation-based methods [12, 33, 29, 34, 35, 36], have been studied in the literature. Existing work on subset selection can be divided into the two main categories of unsupervised and supervised methods. The majority of existing research on subset selection falls into the unsupervised category, where one finds representatives of a dataset by optimizing the above criteria [5, 6, 7, 8, 9, 10, 11, 15, 22, 12, 28, 29] or others, such as diversity or coverage [37, 38, 39, 40, 41], importance [42, 43, 5, 6, 44, 45, 46] and relevance [47, 39, 42, 48, 49]. The results are subsequently evaluated qualitatively or quantitatively against ground-truth representatives. Supervised Subset Selection.
Humans perform remarkably well in summarizing video and speech data, e.g., describing the content of a long, complex video with a few sentences or by selecting a few frames/segments. This has motivated the development and study of supervised subset selection techniques that learn from humans, with the goal of bringing high-level reasoning and incorporating user preferences into subset selection. More formally, in the supervised setting, given datasets and their ground-truth representatives, one tries to train subset selection to recover the ground-truth summary of each training dataset and to generalize to new datasets. Despite its importance, supervised subset selection has only recently been studied in the literature [8, 50, 51, 52, 53, 54, 55, 56, 30]. One difficulty is that supervised subset selection cannot be naively treated as classification, since whether an item receives the label ‘representative’ or ‘non-representative’ depends on its relationships to the entire dataset. For example, a representative car image among images of cars, once considered in a dataset of face images, becomes non-representative. To address the problem, [50, 52] try to learn a combination of different criteria, i.e., the weights of a mixture of submodular functions. However, deciding which submodular functions to combine, and how many, is a non-trivial problem that affects the performance. On the other hand, [8, 51, 53, 54, 30, 55] learn a DPP kernel or adapt it to test videos by maximizing the likelihood of the ground-truth summary under the DPP kernel. However, maximizing the summary likelihood for the ground truth does not necessarily decrease the likelihood of non-ground-truth subsets.

Deep Supervised Facility Location. In this paper, we address the problem of supervised subset selection based on representation learning for a convex relaxation of the uncapacitated facility location function. Facility location is a clustering-based subset selection criterion that finds a set of representatives for which the sum of dissimilarities from every point to its closest representative is minimized [24, 32]. Given the NP-hardness of the problem, different approaches such as convex relaxation [57, 29, 12, 35, 36] and greedy submodular maximization [24, 32] have been proposed to efficiently optimize this utility function. We use convex relaxation because of an appealing property that we exploit: we show conditions under which the sparse convex relaxation recovers ground-truth representatives. We use these conditions to design a loss function to learn representations of data so that inputting each transformed dataset to the facility location leads to finding the ground-truth representatives. Our loss function consists of three terms: a medoid loss that enforces each ground-truth representative to be the medoid of its associated cluster, an inter-cluster loss that ensures there is a sufficient margin between points in different clusters induced by the ground-truth representatives, and an intra-cluster loss that enforces the distances between points in each cluster to be smaller than a margin. The latter two loss functions are based on a margin that depends on the regularization parameter of the uncapacitated facility location and the number of points in the induced clusters. The conditions and our proposed loss function require knowing the clustering of the data based on assignments to ground-truth representatives. However, computing the assignments requires access to the optimal representation, which is not available.
Thus, we propose an optimization scheme that alternates between updating the representation, by minimizing our proposed loss given the current assignments of points to ground-truth representatives, and updating the assignments given the current representation. We perform experiments on the problem of supervised instructional video summarization, where each video consists of a set of key-steps (subactivities) needed to achieve a given task. In this case, each training video comes with a list of representative segments/frames, without knowing the labels of the representatives and without knowing which representatives across different videos correspond to the same key-step (subactivity), making supervised subset selection considerably more challenging than classification. Our experiments on the two large datasets ProceL [1] and Breakfast [58] show the effectiveness of our framework.

Remark 1 Our setting is different from interactive subset selection [59, 60], which incorporates human supervision interactively, i.e., as we run subset selection, we receive and incorporate human feedback to improve subset selection. In our case, we do not have a human in the loop interactively. Also, our setting is different from weakly supervised video summarization [61, 62], which uses the names of the video categories or additional web data to perform summarization. We assume each dataset has a ground-truth summary and do not use additional web data. Finally, [63] uses facility location for metric learning. However, this requires knowledge about the assignments of points to predefined categories, which is a stronger requirement than only knowing the ground-truth representatives.

Remark 2 To the best of our knowledge, this is the first work on supervised subset selection that derives conditions for the exactness of a subset selection utility function (i.e., conditions under which subset selection recovers ground-truth representatives) and employs these conditions to design a loss function for representation learning, e.g., via DNNs. In fact, this work takes a major step towards a theoretically motivated supervised subset selection framework.

Paper Organization. The paper is organized as follows. In Section 2, we review facility location and the convex relaxation used to solve subset selection efficiently. In Section 3, we show conditions for the equivalence of the two problems, design a new loss function for representation learning whose minimum satisfies the conditions, hence guaranteeing that ground-truth representatives are obtained, and propose an efficient learning algorithm. In Section 4, we show experimental results on the ProceL and Breakfast datasets for instructional video summarization. Finally, Section 5 concludes the paper.

2 Background on Subset Selection

Facility Location. Facility location is a clustering-based subset selection utility function in which each point is assigned to one representative, hence performing both representative selection and clustering [24]. More specifically, assume we have a dataset $Y = \{y_1, \ldots, y_N\}$ consisting of $N$ points, for which we are given dissimilarities between pairs of points. Let $d_{i,j} = d(y_i, y_j)$ denote the dissimilarity between points $y_i$ and $y_j$, with $d(\cdot, \cdot)$ being the dissimilarity function. The smaller $d_{i,j}$ is, the better $y_i$ represents $y_j$. We assume that dissimilarities are non-negative, provide a partial ordering of the data, and satisfy $d_{jj} < d_{ij}$ for every $i \neq j$. In order to find representatives, the facility location selects a subset
$S \subseteq \{1, \ldots, N\}$ of the data points and assigns each point in $Y$ to the representative point in $S$ with minimum dissimilarity. In particular, the uncapacitated facility location [64, 65] tries to find a subset $S$ with a sufficiently small cardinality that gives the best encoding of the dataset, i.e.,
$$\min_{S \subseteq \{1,\ldots,N\}} \; \lambda |S| + \sum_{j=1}^{N} \min_{i \in S} d_{ij}, \qquad (1)$$
where $\lambda \ge 0$ is a regularization parameter that sets a trade-off between the number of representatives, $|S|$, and the encoding quality via $S$. When $\lambda$ is zero, every point will be a representative of itself.

Sparse Convex Relaxation. Optimizing the facility location in (1) is NP-hard, as it requires searching over all possible subsets of the dataset. This has motivated efficient algorithms, including forward-backward greedy submodular maximization with worst-case performance guarantees [66] as well as sparse convex relaxation [12]. To obtain the convex relaxation, which we use in this paper, one first defines assignment variables $z_{ij} \in \{0,1\}$, where $z_{ij}$ is 1 when $y_j$ is represented by $y_i$ and is zero otherwise. We can rewrite (1) as an equivalent optimization over the assignment variables,
$$\min_{\{z_{ij}\}} \; \lambda \sum_{i=1}^{N} \mathbb{I}\big(\|[z_{i1} \cdots z_{iN}]\|_p\big) + \sum_{i,j=1}^{N} d_{ij} z_{ij} \quad \text{s.t.} \quad z_{ij} \in \{0,1\}, \;\; \sum_{i=1}^{N} z_{ij} = 1, \;\; \forall i, j, \qquad (2)$$
where $\mathbb{I}(\cdot)$ is an indicator function, which is one when its argument is nonzero and is zero otherwise. Thus, the first term of the objective function measures the number of representatives, since $[z_{i1} \cdots z_{iN}]$ is nonzero when $y_i$ represents some of the data points and is zero otherwise. The second term measures the encoding cost, while the constraints ensure that each point is represented by only one representative. Notice that (2), which is equivalent to (1), is still an NP-hard problem. Also, (2) is a group-sparse optimization where, ideally, only a few vectors $[z_{i1} \cdots z_{iN}]$ are nonzero, for the few $i$'s that correspond to the representative points. To obtain an efficient convex relaxation based on group sparsity (for $p \ge 1$) [12, 29], we drop the indicator function and relax the binary constraints to $z_{ij} \in [0,1]$, hence solving
$$\min_{\{z_{ij}\}} \; \lambda \sum_{i=1}^{N} \big\|[z_{i1} \cdots z_{iN}]\big\|_p + \sum_{i,j=1}^{N} d_{ij} z_{ij} \quad \text{s.t.} \quad z_{ij} \ge 0, \;\; \sum_{i=1}^{N} z_{ij} = 1, \;\; \forall i, j. \qquad (3)$$
We then obtain the set of representatives $\mathcal{R}$ as the points $y_i$ for which $z_{ij}$ is nonzero for some $j$ (a small numerical sketch of this relaxation is given below). Moreover, we obtain a clustering of the data according to the assignments of points to representatives, where for every representative $i \in \mathcal{R}$ we obtain its cluster $\mathcal{G}_i = \{ j \in \{1,\ldots,N\} \mid z_{ij} = 1 \}$ as the set of all points assigned to $i$.

3 Supervised Facility Location

In this section, we present our proposed approach for supervised subset selection. We discuss conditions under which (3), which is the practical and efficient algorithm for solving the uncapacitated facility location, recovers ground-truth representatives from datasets. We use these conditions to design a loss function for representation learning so that, for the transformed data obtained by minimizing the loss, (3) and equivalently (1) select the ground-truth summaries of the training datasets. We then present an efficient learning framework to optimize our proposed loss function.

3.1 Problem Setting

Assume we have $L$ datasets and their ground-truth representatives, $\{(Y_\ell, \mathcal{R}_\ell)\}_{\ell=1}^{L}$, where $Y_\ell = \{y_{\ell,1}, \ldots, y_{\ell,N_\ell}\}$ denotes the $N_\ell$ data points in the $\ell$-th dataset and $\mathcal{R}_\ell \subseteq \{1, \ldots, N_\ell\}$ denotes the associated set of indices of ground-truth representatives.
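As a hedged, minimal illustration of the relaxation in (3) with p = ∞, the following snippet recovers representatives and cluster assignments from a dissimilarity matrix. It assumes the cvxpy package and a generic solver; it is a toy sketch, not the ADMM solver of [12] used in the experiments.

```python
import numpy as np
import cvxpy as cp

def convex_facility_location(D, lam):
    """Group-sparse relaxation (3) with p = infinity on an (N x N) dissimilarity matrix D."""
    N = D.shape[0]
    Z = cp.Variable((N, N), nonneg=True)              # Z[i, j] ~ z_ij in [0, 1]
    # Since Z >= 0, the infinity-norm of row i is simply its maximum entry.
    group_norm = cp.sum(cp.max(Z, axis=1))
    objective = cp.Minimize(lam * group_norm + cp.sum(cp.multiply(D, Z)))
    constraints = [cp.sum(Z, axis=0) == 1]             # each point j assigned with total weight 1
    cp.Problem(objective, constraints).solve()
    Zv = Z.value
    reps = np.flatnonzero(Zv.max(axis=1) > 1e-3)       # rows with nonzero assignments = representatives
    clusters = Zv.argmax(axis=0)                       # index of the representative chosen for each point
    return reps, clusters

# Toy usage: two well-separated groups of points on a line.
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2]).reshape(-1, 1)
D = np.abs(pts - pts.T)
reps, clusters = convex_facility_location(D, lam=1.0)
```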
1. What is the main contribution of the paper in terms of facility location utility functions? 2. What are the strengths of the proposed approach regarding subset selection and supervised learning? 3. What are the weaknesses of the paper regarding experimentation and interpretation of results? 4. How could the authors improve their analysis and presentation of the results?
Review
Review This paper proposes a sparse convex relaxation of the facility location utility function for subset selection, for the problem of recovering ground-truth representatives for datasets. This relaxation is used to develop a supervised learning approach for this problem, which involves a learning algorithm that alternately updates three loss functions (Eq. 7 and Alg. 1) based on three conditions under which this relaxation recovers ground-truth representatives (Theorem 1). The supervised facility learning approach described in this paper appears to be novel, and is described clearly. The experimental results are reasonably convincing overall. One weakness is that only one dataset is used, the Breakfast dataset. It would be more convincing to include results for at least one other dataset. The epoch 0 (before training) TSNE visualization is unnecessary and can be removed. Also, the interpretation of Fig. 3 provided in the paper is somewhat subjective and not completely convincing. For example, while the authors point out that the SubModMix and dppLSTM baselines can select multiple representatives from the same activity, SupUFL-L (one of the proposed approaches in the paper) can also select multiple representatives from the same activity. Also, while the baselines fail to select representatives from the “take cup” subactivity, SupUFL-L fails to select representatives from the SIL subactivity. Finally, error bars (confidence estimates) should be provided for the scores in Tables 1 and 2.
NIPS
Title Deep Supervised Summarization: Algorithm and Application to Learning Instructions Abstract We address the problem of finding representative points of datasets by learning from multiple datasets and their ground-truth summaries. We develop a supervised subset selection framework, based on the facility location utility function, which learns to map datasets to their ground-truth representatives. To do so, we propose to learn representations of data so that the input of transformed data to the facility location recovers their ground-truth representatives. Given the NP-hardness of the utility function, we consider its convex relaxation based on sparse representation and investigate conditions under which the solution of the convex optimization recovers ground-truth representatives of each dataset. We design a loss function whose minimization over the parameters of the data representation network leads to satisfying the theoretical conditions, hence guaranteeing recovering groundtruth summaries. Given the non-convexity of the loss function, we develop an efficient learning scheme that alternates between representation learning by minimizing our proposed loss given the current assignments of points to ground-truth representatives and updating assignments given the current data representation. By experiments on the problem of learning key-steps (subactivities) of instructional videos, we show that our proposed framework improves the state-of-the-art supervised subset selection algorithms. 1 Introduction Subset selection, which is the task of finding a small subset of most informative points from a large dataset, is a fundamental machine learning task with many applications, including, procedure learning [1, 2, 3], image, video, speech and document summarization [4, 5, 6, 7, 8, 9, 10, 11], data clustering [12, 13, 14, 15], feature and model selection [16, 17, 18, 19], social network marketing [20], product recommendation [21] and sensor placement [22, 23]. Subset selection involves design and optimization of utility functions that characterize the informativeness of selected data points, referred to as representatives. Different criteria have been studied in the literature, including (sequential) facility location [24, 2, 1] maximum cut [25, 26], maximum marginal relevance [27], sparse coding [28, 29] and DPPs [11, 30, 31]. Given that almost all subset selection criteria are, in general, nonconvex and NP-hard, approximate methods, such as greedy algorithms for optimizing graph-cuts and (sequential) facility location [24, 32, 2], sampling from Determinantal Point Process (DPP) [11, 31] and convex relaxation-based methods [12, 33, 29, 34, 35, 36] have been studied in the literature. Existing work on subset selection can be divided into the two main categories of unsupervised and supervised methods. The majority of existing research on subset selection falls into the unsupervised category, where one finds representatives of a dataset by optimizing the above criteria [5, 6, 7, 8, 9, 10, 11, 15, 22, 12, 28, 29] or others, such as diversity or coverage [37, 38, 39, 40, 41], importance 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. [42, 43, 5, 6, 44, 45, 46] and relevance [47, 39, 42, 48, 49]. The results are subsequently evaluated qualitatively or quantitatively against ground-truth representatives. Supervised Subset Selection. 
Humans perform remarkably well in summarization of video and speech data, e.g., describe the content of a long complex video by a few sentences or by selecting a few frames/segments. This has motivated the development and study of supervised subset selection techniques that learn from human, with the goal of bringing high-level reasoning and incorporating user preferences into subset selection. More formally, in the supervised setting, given datasets and their ground-truth representatives, one tries to train subset selection to recover the ground-truth summary of each training dataset and to generalize to new datasets. Despite its importance, supervised subset selection has only been more recently studied in the literature [8, 50, 51, 52, 53, 54, 55, 56, 30]. One difficulty is that supervised subset selection cannot be naively treated as classification, since, whether an item receives the label ‘representative’ or ‘non-representative’ depends on its relationships to the entire data. For example, a representative car image among images of cars, once considered in a dataset of face images will become nonrepresentative. To address the problem, [50, 52] try to learn a combination of different criteria, i.e., weights of a mixture of submodular functions. However, deciding about which submodular functions and how many to combine is a non-trivial problem, which affects the performance. On the other hand, [8, 51, 53, 54, 30, 55] learn a DPP kernel or adapt it to test videos, by maximizing the likelihood of the ground-truth summary under the DPP kernel. However, maximizing the summary likelihood for the ground-truth does not necessarily decrease the likelihood of non-ground-truth subsets. Deep Supervised Facility Location. In this paper, we address the problem of supervised subset selection based on representation learning for a convex relaxation of the uncapacitated facility location function. Facility location is a clustering-based subset selection that finds a set of representatives for which the sum of dissimilarities from every point to the closest representative is minimized [24, 32]. Given the NP-hardness of the problem, different approaches such as convex relaxation [57, 29, 12, 35, 36] and greedy submodular maximization [24, 32] have been proposed to efficiently optimize this utility function. We use convex relaxation because of an appealing property that we exploit: we show conditions under which the sparse convex relaxation recovers ground-truth representatives. We use these conditions to design a loss function to learn representation of data so that inputing each transformed dataset to the facility location leads to finding ground-truth representatives. Our loss function consists of three terms, a medoid loss that enforces each ground-truth representative be the medoid of its associated cluster, an inter-cluster loss that makes sure there is sufficient margin between points in different clusters induced by ground-truth representatives and an intra-cluster loss that enforces the distances between points in each cluster be smaller than a margin. The latter two loss functions are based on a margin that depends on the regularization parameter of the uncapacitated facility location and the number of points in induced clusters. The conditions and our proposed loss function require knowing the clustering of the data based on assignments to ground-truth representatives. However, computing the assignments requires access to the optimal representation, which is not available. 
Thus, we propose an optimization scheme that alternates between updating the representation by minimizing our proposed loss given the current assignments of points to ground-truth representatives and updating the assignments given the current representation. We perform experiments on the problem of supervised instructional video summarization, where each video consists of a set of key-steps (subactivities), needed to achieve a given task. In this case, each training video comes with a list of representative segments/frames, without knowing the labels of representatives and without knowing which representatives across different videos correspond to the same key-step (subactivity), making the supervised subset selection extremely more challenging than classification. Our experiments on two large datasets of ProceL [1] and Breakfast [58] show the effectiveness of our framework. Remark 1 Our setting is different than interactive subset selection [59, 60] that incorporates human supervision interactively, i.e., as we run subset selection, we receive and incorporate human feedback to improve subset selection. In our case, we do not have human in the loop interactively. Also, our setting is different than weakly supervised video summarization [61, 62] that use the name of the video categories or additional web data to perform summarization. We assume each dataset has ground-truth summary and do not use additional web data. Finally, [63] uses facility location for metric learning. However, this requires knowledge about assignments of points to predefined categories, which is a stronger requirement than only knowing the ground-truth representatives. Remark 2 To the best of our knowledge, this is the first work on supervised subset selection that derives conditions for the exactness of a subset selection utility function (i.e., conditions under which subset selection recovers ground-truth representatives) and employs these conditions to design a loss function for representation learning, e.g., via DNNs. In fact, this work takes a major step towards a theoretically motivated supervised subset selection framework. Paper Organization. The paper is organized as follows. In Section 2, we review the facility location and convex relaxation to solve the subset selection efficiently. In Section 3, we show conditions for the equivalence of the two problems, design a new loss function for representation learning whose minimum satisfies the conditions, hence, guaranteeing to obtain ground-truth representatives, and propose an efficient learning algorithm. In Section 4, we show experimental results on the ProceL and Breakfast datasets for instructional video summarization. Finally, Section 5 concludes the paper. 2 Background on Subset Selection Facility Location. Facility location is a clustering-based subset selection utility function, in which each point is assigned to one representative, hence, performing both representative selection and clustering [24]. More specifically, assume we have a dataset Y = {y1, . . . ,yN} consisting of N points, for which we are given dissimilarities between pairs of points. Let di,j = d(yi,yj) denote the dissimilarity between points yi and yj , with d(·, ·) being the dissimilarity function. The smaller the di,j is, the better yi represents yj . We assume that dissimilarities are non-negative, provide a partial ordering of data and we have djj < dij for every i 6= j. In order to find representatives, the facility location selects a subset S ⊆ {1, . . . 
, N} of the data points and assigns each point in Y to the representative point in S with minimum dissimilarity. In particular, the uncapacitated facility location [64, 65] tries to find a subset S with a sufficiently small cardinality that gives the best encoding of the dataset, i.e.,

\min_{S \subseteq \{1,\dots,N\}} \; \lambda |S| + \sum_{j=1}^{N} \min_{i \in S} d_{ij},   (1)

where λ ≥ 0 is a regularization parameter that sets a trade-off between the number of representatives, |S|, and the encoding quality via S. When λ is zero, every point will be a representative of itself. Sparse Convex Relaxation. Optimizing the facility location in (1) is NP-hard, as it requires searching over all possible subsets of the dataset. This has motivated efficient algorithms, including forward-backward greedy submodular maximization with worst-case performance guarantees [66] as well as sparse convex relaxation [12]. To obtain the convex relaxation, which we use in the paper, one first defines assignment variables z_{ij} ∈ {0, 1}, where z_{ij} is 1 when y_j is represented by y_i and is zero otherwise. We can rewrite (1) as an equivalent optimization on the assignment variables as

\min_{\{z_{ij}\}} \; \lambda \sum_{i=1}^{N} \mathbb{I}\big(\|[z_{i1} \cdots z_{iN}]\|_p\big) + \sum_{i,j=1}^{N} d_{ij} z_{ij} \quad \text{s.t.} \quad z_{ij} \in \{0,1\}, \;\; \sum_{i=1}^{N} z_{ij} = 1, \;\; \forall i, j,   (2)

where \mathbb{I}(\cdot) is an indicator function, which is one when its argument is nonzero and is zero otherwise. Thus, the first term of the objective function measures the number of representatives, since [z_{i1} \cdots z_{iN}] is nonzero when y_i represents some of the data points and becomes zero otherwise. The second term measures the encoding cost, while the constraints ensure that each point is represented by only one representative. Notice that (2), which is equivalent to (1), is still an NP-hard problem. Also, (2) is a group-sparse optimization where ideally a few vectors [z_{i1} \cdots z_{iN}] must be nonzero, for a few i's that would correspond to the representative points. To obtain an efficient convex relaxation based on group-sparsity (for p ≥ 1) [12, 29], we drop the indicator function and relax the binary constraints to z_{ij} ∈ [0, 1], hence, solve

\min_{\{z_{ij}\}} \; \lambda \sum_{i=1}^{N} \big\|[z_{i1} \cdots z_{iN}]\big\|_p + \sum_{i,j=1}^{N} d_{ij} z_{ij} \quad \text{s.t.} \quad z_{ij} \geq 0, \;\; \sum_{i=1}^{N} z_{ij} = 1, \;\; \forall i, j.   (3)

We then obtain the set of representatives R as the points y_i for which z_{ij} is nonzero for some j. Moreover, we obtain a clustering of the data according to assignments of points to representatives, where for every representative i ∈ R, we obtain its cluster G_i = \{ j \in \{1,\dots,N\} \,|\, z_{ij} = 1 \} as the set of all points assigned to i. (A brief code sketch of solving (3) is given after the problem setting below.)

3 Supervised Facility Location

In this section, we present our proposed approach for supervised subset selection. We discuss conditions under which (3), which is the practical and efficient algorithm for solving the uncapacitated facility location, recovers ground-truth representatives from datasets. We use these conditions to design a loss function for representation learning so that, for the transformed data obtained by minimizing the loss, (3) and equivalently (1) will select the ground-truth summaries of the training datasets. We then present an efficient learning framework to optimize our proposed loss function.

3.1 Problem Setting

Assume we have L datasets and their ground-truth representatives, \{(Y_\ell, R_\ell)\}_{\ell=1}^{L}, where Y_\ell = \{y_{\ell,1}, \dots, y_{\ell,N_\ell}\} denotes the N_\ell data points in the \ell-th dataset and R_\ell \subseteq \{1, \dots, N_\ell\} denotes the associated set of indices of ground-truth representatives.
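Before continuing with the supervised setting, the relaxation in (3) can be made concrete with a small optimization sketch. The version below uses the off-the-shelf CVXPY modeling library purely for illustration (the paper's own experiments use the ADMM solver of [12]); the toy dissimilarity matrix, the nonzero-row threshold, and the choice p = ∞ are assumptions made for this example.

```python
# Minimal sketch of the group-sparse relaxation in Eq. (3) with p = infinity.
# Illustration only; the paper's experiments use an ADMM implementation [12].
import numpy as np
import cvxpy as cp

def convex_facility_location(D, lam):
    """D: (N, N) pairwise dissimilarity matrix, lam: regularization weight."""
    N = D.shape[0]
    Z = cp.Variable((N, N), nonneg=True)            # relaxed assignments z_ij in [0, 1]
    # infinity-norm of each row [z_i1 ... z_iN] (the group-sparsity term)
    row_norms = cp.hstack([cp.norm(Z[i, :], "inf") for i in range(N)])
    objective = cp.Minimize(lam * cp.sum(row_norms) + cp.sum(cp.multiply(D, Z)))
    constraints = [cp.sum(Z, axis=0) == 1]           # each point j is represented once
    cp.Problem(objective, constraints).solve()
    Zv = Z.value
    reps = np.where(Zv.max(axis=1) > 1e-3)[0]        # rows with nonzero assignments
    clusters = {int(i): np.where(np.argmax(Zv, axis=0) == i)[0] for i in reps}
    return reps, clusters

# toy usage: two well-separated groups of 2-D points
pts = np.vstack([np.random.randn(5, 2), np.random.randn(5, 2) + 8.0])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
reps, clusters = convex_facility_location(D, lam=1.0)
print(reps, {k: v.tolist() for k, v in clusters.items()})
```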
The goal of supervised subset selection is to train a subset selection method so that the input of each dataset Y_\ell to the trained model leads to obtaining the ground-truth representatives, R_\ell. In the paper, we fix the subset selection method to the uncapacitated facility location in (1) and consider p = ∞ in (2) and (3). We cast the supervised subset selection problem as learning a transformation f_\Theta(\cdot) on the input data so that running the convex algorithm (3) on f_\Theta(Y_\ell) leads to obtaining R_\ell. We use a deep neural network, parametrized by \Theta, for representation learning and use the Euclidean distance as the measure of dissimilarity, i.e., we define

d^\ell_{i,j} \triangleq \big\| f_\Theta(y_{\ell,i}) - f_\Theta(y_{\ell,j}) \big\|_2.   (4)

Notice that we can use other dissimilarities as well (the theory and learning algorithm below work for other dissimilarities); however, the Euclidean distance results in an embedding space where points are gathered around ground-truth representatives according to \ell_2 distances. To learn the parameters \Theta, we design a loss function using conditions that guarantee the performance of (3) for obtaining ground-truth representatives across datasets.

3.2 Proposed Learning Framework

We investigate conditions under which the convex algorithm in (3) recovers a given set of points as representatives of the transformed data \{f_\Theta(y_{\ell,1}), \dots, f_\Theta(y_{\ell,N_\ell})\}. We show that under these conditions, the solution of the convex algorithm in (3), which has the constraint z_{i,j} \in [0, 1], will be integer. As a result, the convex relaxation will recover the same solution as the NP-hard non-convex uncapacitated facility location, i.e., the optimality gap between the non-convex and convex formulations vanishes. We then use these conditions to design a loss function for learning the representation parameters \Theta.

Theorem 1 Consider the convex relaxation of the uncapacitated facility location in (3), with a fixed λ and p = ∞. Let R_\ell be the set of ground-truth representatives from the \ell-th dataset \{f_\Theta(y_{\ell,1}), \dots, f_\Theta(y_{\ell,N_\ell})\} and let G^\ell_i denote the cluster associated with the representative i \in R_\ell, i.e.,

G^\ell_i = \big\{ j \,\big|\, i = \arg\min_{i'} d^\ell_{i',j} = \arg\min_{i'} \|f_\Theta(y_{\ell,i'}) - f_\Theta(y_{\ell,j})\|_2 \big\}.   (5)

The optimization (3) recovers R_\ell as the set of representatives, if the following conditions hold:

1. \forall i \in R_\ell, \forall i' \in G^\ell_i, we have \sum_{j \in G^\ell_i} d^\ell_{i,j} \leq \sum_{j \in G^\ell_i} d^\ell_{i',j};
2. \forall i \in R_\ell, \forall j \in G^\ell_i, \forall i' \notin G^\ell_i, we have \frac{\lambda}{|G^\ell_i|} + d^\ell_{i,j} < d^\ell_{i',j};
3. \forall i \in R_\ell, \forall i', j \in G^\ell_i, we have d^\ell_{i',j} \leq \frac{\lambda}{|G^\ell_i|} + d^\ell_{i,j}.

The first condition (medoid condition) states that, for points assigned to the cluster of i \in R_\ell, the representative point i must achieve the minimum encoding cost. The second condition (inter-cluster condition) states that the closest point to each cluster from other groups must be sufficiently far from it. The third condition (intra-cluster condition) states that points in the same cluster must not be far from each other. For both the inter- and intra-cluster conditions, the separation margin is given by \lambda/|G^\ell_i|, depending on the regularization parameter and the number of points in each cluster, i.e., we have a margin adaptive to each cluster. Under the conditions of Theorem 1, we can show that there is no gap between the NP-hard non-convex formulation in (1) and its convex relaxation in (3).

Algorithm 1: Supervised Facility Location Learning
Input: Datasets \{Y_\ell\}_{\ell=1}^{L} and ground-truth representatives \{R_\ell\}_{\ell=1}^{L}.
1: Initialize \Theta by using a pretrained network;
2: while (Not Converged) do
3:   For fixed \Theta, compute G^\ell_1, G^\ell_2, \dots for each dataset \ell via (5);
4:   For fixed \{G^\ell_1, G^\ell_2, \dots\}_{\ell=1}^{L}, update \Theta by minimizing the loss function (7);
5: end while
Output: Optimal parameters \Theta.

Corollary 1 Under the assumptions of Theorem 1, the convex relaxation in (3) is equivalent to the non-convex uncapacitated facility location optimization in (1), both recovering the same integer solution, where for Y_\ell we recover R_\ell as the representative set. We can also show similar results for p = 2 (see the supplementary file).

Next, we use the above result to design a loss function for supervised subset selection using the uncapacitated facility location. In fact, if we find a representation \Theta for which the conditions of Theorem 1 are satisfied, then not only does the combinatorial optimization in (1) recover the ground-truth representatives from each dataset, but we also obtain the same solution using the efficient sparse optimization in (3). To find the desired \Theta, we propose a loss function that penalizes violation of the conditions of Theorem 1. More specifically, we define three loss functions corresponding to the conditions of the theorem, as

L^\ell_{medoid}(\Theta) \triangleq \sum_{i \in R_\ell} \sum_{i' \in G^\ell_i} \Big( \sum_{j \in G^\ell_i} d^\ell_{i,j} - \sum_{j \in G^\ell_i} d^\ell_{i',j} \Big)_+,
L^\ell_{inter}(\Theta) \triangleq \sum_{i \in R_\ell} \sum_{j \in G^\ell_i} \sum_{i' \notin G^\ell_i} \Big( \frac{\lambda}{|G^\ell_i|} + d^\ell_{i,j} - d^\ell_{i',j} \Big)_+,
L^\ell_{intra}(\Theta) \triangleq \sum_{i \in R_\ell} \sum_{i',j \in G^\ell_i} \Big( d^\ell_{i',j} - d^\ell_{i,j} - \frac{\lambda}{|G^\ell_i|} \Big)_+,   (6)

where (x)_+ \triangleq \max\{0, x\} is the non-negative thresholding (or ReLU) operator, and L^\ell_{medoid}, L^\ell_{inter}, L^\ell_{intra} measure and penalize violation of the medoid, inter-cluster and intra-cluster conditions, respectively, for dataset \ell. Putting the three loss functions together, we propose to minimize the following cost function, defined over the L datasets,

\min_{\Theta} \; L(\Theta) \triangleq \sum_{\ell=1}^{L} \Big( L^\ell_{medoid}(\Theta) + \rho_{inter} L^\ell_{inter}(\Theta) + \rho_{intra} L^\ell_{intra}(\Theta) \Big),   (7)

where \rho_{inter}, \rho_{intra} \geq 0 are regularization parameters that set a trade-off between the three terms. To minimize L, we need to use the clustering of points in every dataset Y_\ell according to assignments of points to the ground-truth representative set R_\ell, which requires computing the G^\ell_i's. However, computing such a clustering via (5) requires knowledge of the optimal representation of the data \Theta, which is not available. To address the problem, we propose an efficient learning algorithm that alternates between updating the representation parameters \Theta by minimizing the proposed loss given the current assignments of points to ground-truth representatives, and updating the assignments given the current representation. Algorithm 1 shows the steps of our learning algorithm. Notice that the loss functions naively require considering every representative and every pair of points in the same or different clusters. Given the redundancy of points, this is often not needed and we can sample only a few pairs of points in the same or different clusters to compute each loss. (A small implementation sketch of the losses in (6) and (7) follows the adaptive-margin paragraph below.)

Adaptive Margin. It is important to note that our derived conditions and the loss function make use of a margin \lambda/|G_i| that depends on the facility location hyperparameter and the number of points in each cluster G_i. In other words, the margin will differ across clusters and across iterations of our learning scheme. More specifically, for a representative that has few points assigned to it, the size of the cluster would be small, hence incurring a larger margin than clusters with more points.
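To make the losses in (6) and (7) concrete, the following is a minimal PyTorch sketch of the per-dataset loss that penalizes violations of the three conditions of Theorem 1; the total objective in (7) is the sum of this quantity over the L training datasets. The embedding tensor layout and the cluster bookkeeping are assumptions made for the example, not the authors' released code.

```python
# Minimal sketch of the per-dataset losses in Eq. (6). The interfaces used here
# (embedding matrix, cluster dictionary) are illustrative assumptions.
import torch
import torch.nn.functional as F

def dataset_loss(emb, reps, clusters, lam, rho_inter, rho_intra):
    """emb: (N, d) embeddings of one dataset; reps: list of representative indices;
    clusters: dict mapping each representative i to the points assigned to it (Eq. (5))."""
    d = torch.cdist(emb, emb, p=2)                    # d_{i,j} = ||f(y_i) - f(y_j)||_2
    medoid = emb.new_zeros(())
    inter = emb.new_zeros(())
    intra = emb.new_zeros(())
    for i in reps:
        members = [int(j) for j in clusters[i]]
        G = torch.as_tensor(members, device=emb.device)
        margin = lam / float(len(members))
        outside = [j for j in range(emb.shape[0]) if j not in set(members)]
        # medoid condition: representative i has the smallest encoding cost in its cluster
        cost_i = d[i, G].sum()
        medoid = medoid + F.relu(cost_i - d[G][:, G].sum(dim=1)).sum()
        # inter-cluster condition: points outside the cluster must exceed d_{i,j} + margin
        if outside:
            O = torch.as_tensor(outside, device=emb.device)
            inter = inter + F.relu(margin + d[i, G].unsqueeze(0) - d[O][:, G]).sum()
        # intra-cluster condition: points inside the cluster stay within d_{i,j} + margin
        intra = intra + F.relu(d[G][:, G] - d[i, G].unsqueeze(0) - margin).sum()
    return medoid + rho_inter * inter + rho_intra * intra
```

Note how the margin lam / |G| in the sketch grows as a cluster shrinks; the effect of this adaptive margin is discussed next.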
This has the following effect: when a cluster has a small number of points, it could be considered as under-sampled, hence, to generalize better to test data, we need to have a better separation from other clusters, i.e., larger margin. On the other hand, for a cluster with a large number of points, the margin could be smaller as the chance of changing the distances between and within clusters by adding more samples to it would be low. This is in contrast to contrastive loss functions that use a fixed margin for pairs of dissimilar items, while reducing the distances of similar items as much as possible. Another difference with respect to contrastive loss functions is that in our loss, we compare the encoding quality of each representative point to non-representative points, whereas in contrastive loss, one uses all pairs of similar and dissimilar items. Remark 3 While [35, 36] have shown the integrality of convex relaxation for cardinality-constrained facility location, we showed equivalence conditions for the uncapacitated problem. Moreover, the nature of our conditions, as opposed to asymptotic results, allowed to design the loss in (7). Also, we learn to effectively use a common λ across different datasets, which cannot be done in the cardinality-constrained case, where the number of ground-truth representatives is already given. 4 Experiments In this section, we evaluate the performance of our method, which we refer to as Supervised Facility Location (SupFL), as well as other algorithms for learning key-steps (subactivities) of instructional videos by learning from ground-truth summaries. Notice that each training dataset comes with a list of representative segments/frames, without knowing the labels of representatives and without knowing which representatives across different videos correspond to the same key-step (subactivity). This makes the supervised subset selection different and extremely more challenging than classification. 4.1 Experimental Setting Dataset. We perform experiments on ProceL [1] and Breakfast [58] datasets. The ProceL is a large multimodal dataset of 12 diverse tasks, such as ‘install Chromecast’, ‘assemble Clarinet’, ‘perform CPR’. Each task consists of about 60 videos and has a grammar of key-steps, e.g. ‘perform CPR’ consists of ‘call emergency’, ‘check pulse’, ‘open airway’, ‘give compression’ and ‘give breath’. Each video is annotated with the key-steps. Breakfast is another large dataset of 10 cooking activities by 52 individuals performed in 18 different kitchens. The videos are captured using multiple cameras with different view points. Each activity has approximately 200 videos, corresponding to different views of each person doing the same task, hence a total of 1989 videos in the dataset. Similar to ProceL, each task consists of multiple key-steps (subactivities) required to achieve the task. For example, ‘making cereal’ consists of ‘take a bowl’, ‘pour cereals’, ‘pour milk’, ‘stir cereals’, ‘sil’ (for background frames at the beginning and the end). For the experiments on ProceL, we split the videos of each task into 70% for training, 15% for validation and 15% for testing. For the Breakfast, we split the videos of each activity into 60% for training, 20% for validation, and 20% for testing. We use the middle segment of each subactivity as the ground-truth representative. Feature Extraction and Learning. Given the similarity of consecutive frames, we perform the subset selection at the segment level. 
For ProceL, we use the segments provided in the dataset and for Breakfast, we divide each video into segments of 16-frame length with 8 frames overlap between two consecutive segments. We use the C3D network [67] for feature extraction in each segment and use the 4, 096-dimensional feature obtained by the first dense layer after the convolutional layers. We consider two variants of our method: i) SupFL(L), where we learn a linear transformation on the C3D features; ii) SupFL(N), where we learn the parameters of a neural network applied to C3D features. We use Euclidean distance for pairwise dissimilarities. Algorithms and Baselines. We compare the two variants of our method, SupFL(L) and SupFL(N), discussed above, against SubmodMix [52], which learns the weights of a mixture of submodular functions, and dppLSTM[54], which learns to select representatives using a bidirectional LSTM combined with the DPP kernel, and FCSN [68], which learns the weights of a fully convolutional network by treating subset selection as classification of each segment into representative vs nonrepresentative. To show the effectiveness of learning, we also compare with two unsupervised baselines: Uniform, which selects representatives uniformly at random from all segments, and UFL, which corresponds to running the uncapacitated facility location via the forward-backward greedy method on dissimilarities computed via C3D features. This particularly allows to investigate the effectiveness of our method in taking advantage of ground-truth summaries. Evaluation metric. Following [58], we report the segment-wise precision (P), action-wise recall (R) and F1 score (F). These metrics help to measure the performance of finding a representative for each key-step and the correctness of video segmentation based on assignments of segments to representatives. More specifically, for a video with Ns segments and Na ground-truth key-steps, after running subset selection we assign each segment to each recovered representative. We compute P = N̂s Ns , R = N̂a Na , F = 2PR P +R , (8) where N̂s is the number of the segments that are correctly assigned to representatives, given the ground-truth assignment labels. N̂a is the number of recovered key-steps in the video via representatives. The F1 score is the harmonic mean between the segment-wise precision and action-wise recall, which is between 0 and 1. We report the average of each score over the videos of each task. Implementation details. We implemented our framework in Pytorch and used the ADMM framework in [12] for subset selection via UFL and our SupFL. We train a model for each individual activity. For SupFL(L), we set the dimension of the transformed data to 1000 and 500 for ProceL and Breakfast, respectively, while for SupFL(N) we set the dimension of the network to 4096× 1000 ×1000 and 4096× 1000 ×500 for ProceL and Breakfast, respectively, where we use ReLu activations for the second layer. We use stochastic gradient descent to train our model and use 5 videos in each minibatch. We use the Adam optimizer with the learning rate of 1e-4 and weight decay of 5e-4. We train our model for at most 50 epochs. In order to improve the training time, after we compute assignments of points to each representative in our alternating algorithm, we randomly sample 10 points from each group and use them to form the loss functions in (6). 
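To make the metrics in (8) concrete, here is a small sketch; the per-segment label conventions used for counting N̂_s and N̂_a are assumptions made for the illustration and may differ in detail from the authors' evaluation script.

```python
# Minimal sketch of the segment-wise precision, action-wise recall and F1 in Eq. (8).
# The input conventions (one key-step id per segment) are illustrative assumptions.
def precision_recall_f1(gt_labels, pred_labels, num_keysteps):
    """gt_labels, pred_labels: lists of length N_s with one key-step id per segment."""
    n_seg = len(gt_labels)
    correct = sum(1 for g, p in zip(gt_labels, pred_labels) if g == p)   # N_s_hat
    recovered = len(set(pred_labels) & set(gt_labels))                   # N_a_hat
    precision = correct / n_seg
    recall = recovered / num_keysteps
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. a 6-segment video of a task whose grammar has 3 key-steps
print(precision_recall_f1([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 2], num_keysteps=3))
```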
Our method has three hyperparameters (λ, ρinter, ρintra), where λ is the regularization of the UFL in (3), while ρinter and ρintra are regularization parameters of our loss function in (7). We set the values of hyperparameters using the validation set (we did not perform heavy hyperparameter tuning). In the experiments, we show the effect of the regularization parameters on the performance. To have a fair comparison, we run all methods to select the same number of representatives as the number of ground-truth key-steps in the grammar of the task. 4.2 Experimental Results Table 1 shows the average F1 score (%) of different methods on each task in the ProceL dataset. Notice that our method outperforms other algorithms, obtaining 67.8% and 67.0% F1 score via SupFL(N) and SupFL(L), respectively, over the entire dataset. Compared to the UFL, which is the unsupervised version of our framework, we obtain significant improvement in all tasks, e.g., improving the F1 score by 8.4% and 7.3% for ‘tie a tie’ and ‘change iPhone battery’, respectively. dppLSTM, which is supervised, does not do as well as our method and other two supervised algorithms. This comes from the fact that dppLSTM often selects multiple segments from one key-step and from background, due to their appearance diversity, while missing some of the key-steps to choose segments from (see Figure 3). While SubmodMix and FCSN perform better than other baselines, their overall performance is about 4% lower than our method. This comes from the fact that SubmodMix has limited learning capacity, depending on which functions to add, while FCSN treats supervised subset selection as classification, hence embeds ground-truth representative segments (class 1) close to each other and far from non-representative segments (class 0), which is not desired as a representative and a non-representative segment could be very similar. Table 2 shows the average F1 score (%) in the Breakfast dataset1. While both versions of our method outperform other algorithms, in contrast to the ProceL, SupFL(L) generally does better than SupFL(N). Moreover, the gap between the performance of UFL and SupFL is smaller. This comes from the fact that the C3D features capture discriminative information for separating different key-steps (subactivities), hence, learning a linear transformation generally does better than a nonlinear one and less improvement will be expected by learning from ground-truth summaries. Figure 1 shows the average F1 score improvement over not learning data representation on the test videos of the four tasks of ‘perform CPR’, ‘change iPhone battery’, ‘make coffee’ and ‘change tire’ in ProceL as a function of the number of training epochs. Notice that generally as the training continues the F1 score improves, obtaining between 4% and 10% improvement, depending on the task, over using C3D features. Hyperparameter Effect. We also analyze the performance of our method as a function of the regularization parameters (λ, ρinter, ρintra), where λ corresponds to the regularization parameter of the uncapacitated FL utility function in (3), while ρinter, ρintra correspond to the hyperparameters that set a trade off between the three terms of our loss function in (7). Figure 2 shows the F1 score on the ProceL dataset, where to see the effect of each hyperparameter, we have fixed the values of the other two (these fixed values depend on the task). Notice that the F1 score is relatively stable with respect to the hyperparameter change. 
In particular, changing λ from 0.001 to 0.1 the performance over the dataset changes by at most 1.2% in F1 score, while changing ρinter and ρintra from 0.01 to 10, the performance changes by at most 0.6% and 2.1%, respectively. Ablation Studies. To show the effectiveness of using all three loss functions in our proposed cost function in (7), we perform ablation studies. Table 3 shows the average precision, recall and F1 scores on the ProceL dataset. Notice that when we use only one loss or a combination of two loss functions, we achieve relatively similar low scores, being about 7% lower than using the three loss functions together. This shows that, as expected from the theoretical results, we need to use all 1FCSN on Breakfast produced significantly lower F1 scores compared to all other baselines. loss functions corresponding to the three theoretical conditions in order to effectively learn from ground-truth summaries. Also, notice that the medoid loss alone or its combination with either of the two other losses obtains slightly better performance than using the inter-cluster or intra cluster loss or their combination. This is expected as the medoid loss tries to center points around each ground-truth representative. Finally, the combination of the inter-cluster and intra-cluster loss, which has weak resemblance to the contrastive loss, does not do well in the supervised subset selection problem. Qualitative Results. Figure 3 shows a qualitative result of running different methods for two videos from the two tasks of ‘change iPhone battery’ and ‘make smoke salmon sandwich’ from the ProceL dataset, where all methods choose the same number of representatives (for clarity, we do not show representatives obtained from background). Notice that for ‘smoke salmon sandwich’ our method correctly finds representatives from all key-steps, while other methods miss one of the key-steps. Similarly, for ‘change iPhone screen’, our method is more successful than baselines, which miss 5 or 6 key-steps. Our method in general does better in obtaining diverse representative segments, while other supervised baselines often obtain multiple redundant representatives from the same key-step. 5 Conclusions We addressed the problem of supervised subset selection by generalizing the facility location to learn from ground-truth summaries. We considered an efficient sparse optimization of the uncapacitated facility location and investigated conditions under which it recovers ground-truth representatives and also becomes equivalent to the original NP-hard problem. We designed a loss function and an efficient framework to learn representations of data so that the input of transformed data to the facility location satisfies the theoretical conditions, hence, recovers ground-truth summaries. We showed the effectiveness of our method for recovering key-steps of instructional videos. To the best of our knowledge, this is the first work on supervised subset selection that derives conditions under which subset selection recovers ground-truth representatives and employs them to design a loss function for deep representation learning. We believe that this work took a major step towards a theoretically motivated supervised subset selection framework. Acknowledgements This work is supported by DARPA Young Faculty Award (D18AP00050), NSF (IIS-1657197), ONR (N000141812132) and ARO (W911NF1810300). 
Chengguang Xu would like to thank Dat Huynh and Zwe Naing for their help and advice with some of the implementations during his research assistantship at MCADS lab, which resulted in this work.
1. How does the reviewer perceive the issue with the trivial solution in the optimization problem (7)?
2. What is the reviewer's suggestion for improving the experimental comparison by including a tweaked version of [59] as another baseline?
3. Does the reviewer have any concerns or reservations about the paper's approach to solving the problem?
Review
Review
Minor comments:
- Probably a naive comment: in (7), there may be some trivial \Theta (say, \Theta = 0 in some settings) that enforces all f_\Theta(y) to be equal, so that all the data points will be assigned to one cluster. In practice, a random initialization of \Theta and an iterative gradient algorithm may result in a reasonably good \Theta, but in that case, is the problem in (7) really what is being solved?
- In the experiments, is it possible to include a tweaked version of [59] as another baseline?
NIPS
Title From Canonical Correlation Analysis to Self-supervised Graph Neural Networks Abstract We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data. It follows the previous methods that generate two views of an input graph through data augmentation. However, unlike contrastive methods that focus on instance-level discrimination, we optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis. Compared with other works, our approach requires none of the parameterized mutual information estimator, additional projector, asymmetric structures, and most importantly, negative samples which can be costly. We show that the new objective essentially 1) aims at discarding augmentation-variant information by learning invariant representations, and 2) can prevent degenerated solutions by decorrelating features in different dimensions. Our theoretical analysis further provides an understanding for the new objective which can be equivalently seen as an instantiation of the Information Bottleneck Principle under the self-supervised setting. Despite its simplicity, our method performs competitively on seven public graph datasets. The code is available at: https://github.com/hengruizhang98/CCA-SSG. 1 Introduction Self-supervised learning (SSL) has been a promising paradigm for learning useful representations without costly labels [7, 46, 5]. In general, it learns representations via a proxy objective between inputs and self-defined signals, among which contrastive methods [46, 40, 16, 5, 12] have achieved impressive performance on learning image representations by maximizing the mutual information of two views (or augmentations) of the same input. Such methods can be interpreted as a discrimination of a joint distribution (positive pairs) from the product of two marginal ones (negative pairs) [50]. Inspired by the success of contrastive learning in vision [17, 46, 40, 5, 16, 12, 6], similar methods have been adapted to learning graph neural networks [48, 15, 33, 57, 58]. Although these models have achieved impressive performance, they require complex designs and architectures. For example, DGI [48] and MVGRL [15] rely on a parameterized mutual information estimator to discriminate positive node-graph pairs from negative ones; GRACE [57] and GCA [58] harness an additional MLP-projector to guarantee sufficient capacity. Moreover, negative pairs sampled or constructed from data often play an indispensable role in providing effective contrastive signals and have a large impact on performance. Selecting proper negative samples is often nontrivial for graph-structured data, not to mention the extra storage cost for prohibitively large graphs. BGRL [39] is a recent endeavor on ⇤This work was done during the author’s internship at AWS Shanghai AI Lab. †Corresponding author. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). targeting a negative-sample-free approach for GNN learning through asymmetric architectures [12, 6]. However, it requires additional components, e.g., an exponential moving average (EMA) and StopGradient, to empirically avoid degenerated solutions, leading to a more intricate architecture. Deviating from the large body of previous works on contrastive learning, in this paper we take a new perspective to address SSL on graphs. 
We introduce Canonical Correlation Analysis inspired Self-Supervised Learning on Graphs (CCA-SSG), a simple yet effective approach that opens the way to a new SSL objective and frees the model from intricate designs. It follows the common practice of prior arts, generating two views of an input graph through random augmentation and acquiring node representations through a shared GNN encoder. Differently, we propose to harness a non-contrastive and non-discriminative feature-level objective, which is inspired by the well-studied Canonical Correlation Analysis (CCA) methods [18, 10, 11, 14, 2, 4]. More specifically, the new objective aims at maximizing the correlation between two augmented views of the same input and meanwhile decorrelating different (feature) dimensions of a single view’s representation. We show that the objective 1) essentially pursuits discarding augmentation-variant information and preserving augmentation-invariant information, and 2) can prevent dimensional collapse [19] (i.e., different dimensions capture the same information) in nature. Furthermore, our theoretical analysis sheds more lights that under mild assumptions, our model is an instantiation of Information Bottleneck Principle [43, 44, 37] under SSL settings [53, 9, 45]. To sum up, as shown in Table 1, our new objective induces a simple and light model without reliance on negative pairs [48, 15, 57, 58], a parameterized mutual information estimator [48, 15], an additional projector or predictor [57, 58, 39] or asymmetric architectures [39, 15]. We provide a thorough evaluation for the model on seven node classification benchmarks. The empirical results demonstrate that despite its simplicity, CCA-SSG can achieve very competitive performance in general and even superior test accuracy in five datasets. It is worth noting that our approach is agnostic to the input data format, which means that it can potentially be applied to other scenarios beyond graph-structured data (such as vision, language, etc.). We leave such a technical extension for future works. Our contributions are as follows: 1) We introduce a non-contrastive and non-discriminative objective for self-supervised learning, which is inspired by Canonical Correlation Analysis methods. It does not rely on negative samples, and can naturally remove the complicated components. Based on it we propose CCA-SSG, a simple yet effective framework for learning node representations without supervision (see Section 3). 2) We theoretically prove that the proposed objective aims at keeping augmentation-invariant information while discarding augmentation-variant one, and possesses an inherent relationship to an embodiment of Information Bottleneck Principle under self-supervised settings (see Section 4). 3) Experimental results show that without complex designs, our method outperforms state-of-the-art self-supervised methods MVGRL [15] and GCA [58] on 5 out of 7 benchmarks. We also provide thorough ablation studies on the effectiveness of the key components of CCA-SSG (see Section 5). 2 Related Works and Background Contrastive Learning on Graphs. Contrastive methods [46, 40, 17, 16, 5, 12] have been shown to be effective for unsupervised learning in vision, which have also been adapted to graphs. Inspired by the local-global mutual information maximization viewpoints [17], DGI [48] and InfoGraph [38] put forward unsupervised schemes for node and graph representation learning, respectively. 
MVGRL [15] generalizes CMC [40] to graph-structured data by introducing graph diffusion [23] to create another view for a graph. GCC [33] adopts InfoNCE loss [46] and MoCo-based negative pool [16] for largescale GNN pretraining. GRACE [57], GCA [58] and GraphCL [52] follow the spirit of SimCLR [5] and learn node/graph representations by directly treating other nodes/graphs as negative samples. BGRL [39] targets a negative-sample-free model, inspired by BYOL [12], on node representation learning. But it still requires complex asymmetric architectures. Feature-level Self-supervised Objectives. The above-mentioned methods all focus on instancelevel contrastive learning. To address their drawbacks, some recent works have been turning to feature-level objectives. For example, Contrastive Clustering [25] regards different feature dimensions as different clusters, thus combining the cluster-level discrimination with instance-level discrimination. W-MSE [8] performs a differentiable whitening operation on learned embeddings, which implicitly scatters data points in embedding space. Barlow Twins [53] borrows the idea of redundancy reduction and adopts a soft decorrelation term that makes the cross-correlation matrix of two views’ representations close to an identity matrix. By contrast, our method is based on the classical Canonical Correlation Analysis, working by correlating the representations of two views from data augmentation and meanwhile decorrelating different feature dimensions of each view’s representation. Canonical Correlation Analysis. CCA is a classical multivariate analysis method, which is first introduced in [18]. For two random variables X 2 Rm and Y 2 Rn, their covariance matrix is ⌃XY = Cov(X,Y ). CCA aims at seeking two vectors a 2 Rm and b 2 Rn such that the correlation ⇢ = corr(a>X, b>Y ) = a >⌃XY bp a>⌃XXa p b>⌃Y Y b is maximized. Formally, the objective is max a,b a>⌃XY b, s.t. a>⌃XXa = b>⌃Y Y b = 1. (1) For multi-dimensional cases, CCA seeks two sets of vectors maximizing their correlation and subjected to the constraint that they are uncorrelated with each other [10]. Later studies apply CCA to multi-view learning with deep models [2, 11, 14], by replacing the linear transformation with neural networks. Concretely, assuming X1, X2 as two views of an input data, it optimizes max ✓1,✓2 Tr P>✓1(X1)P✓2(X2) s.t. P>✓1(X1)P✓1(X1) = P > ✓2(X2)P✓2(X2) = I. (2) where P✓1 and P✓2 are two feedforward neural networks and I is an identity matrix. Despite its preciseness, such computation is really expensive [4]. Fortunately, soft CCA [4] removes the hard decorrelation constraint by adopting the following Lagrangian relaxation: min ✓1,✓2 Ldist (P✓1(X1), P✓2(X2)) + (LSDL(P✓1(X1)) + LSDL(P✓2(X2))) , (3) where Ldist measures correlation between two views’ representations and LSDL (called stochastic decorrelation loss) computes an L1 distance between P✓i(Xi) and an identity matrix, for i = 1, 2. 3 Approach 3.1 Model Framework In this paper we focus on self-supervised node representation learning, where we consider a single graph G = (X,A). X 2 RN⇥F and A 2 RN⇥N denote node features and adjacency matrix respectively. Here N is the number of nodes within the graph and F denotes feature dimension. Our model simply consists of three parts: 1) a random graph augmentation generator T . 2) a GNNbased graph encoder f✓ where ✓ denotes its parameters. 3) a novel feature-level objective function based on Canonical Correlation Analysis. Fig. 1 is an illustration of the proposed model. 
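As a concrete illustration of the random augmentation generator T listed above (the edge dropping and node-feature masking detailed in the next part), here is a minimal PyTorch sketch; the (2, E) edge-list convention and the drop/mask probabilities are assumptions made for the example.

```python
# Minimal sketch of the two random graph augmentations used by CCA-SSG:
# edge dropping and node-feature masking. The edge_index layout and the
# probabilities are illustrative assumptions.
import torch

def augment(edge_index, feat, p_edge=0.2, p_feat=0.2):
    # drop each edge independently with probability p_edge
    keep = torch.rand(edge_index.shape[1]) >= p_edge
    edge_index_aug = edge_index[:, keep]
    # mask each feature dimension (for all nodes) independently with probability p_feat
    mask = (torch.rand(feat.shape[1]) >= p_feat).float()
    feat_aug = feat * mask.unsqueeze(0)
    return edge_index_aug, feat_aug

# two random views of the same toy graph
edge_index = torch.randint(0, 100, (2, 400))
feat = torch.randn(100, 32)
view_a = augment(edge_index, feat)
view_b = augment(edge_index, feat)
```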
Algorithm 1: PyTorch-style code for CCA-SSG

```python
# f: encoder network, lam: trade-off (named lam since lambda is reserved in Python)
# D: embedding dimension, g: input graph, feat: node features, N: number of nodes

# generate two views through random augmentation
g1, feat1 = augment(g, feat)
g2, feat2 = augment(g, feat)
z1 = f(g1, feat1)  # embedding of the 1st view
z2 = f(g2, feat2)  # embedding of the 2nd view

# standardize each feature dimension along the node dimension, scaled by 1/sqrt(N)
z1_norm = ((z1 - z1.mean(0)) / z1.std(0)) / sqrt(N)
z2_norm = ((z2 - z2.mean(0)) / z2.std(0)) / sqrt(N)

# covariance matrix of each view
c1 = torch.mm(z1_norm.T, z1_norm)
c2 = torch.mm(z2_norm.T, z2_norm)
iden = torch.eye(D)

loss_inv = (z1_norm - z2_norm).pow(2).sum()
loss_dec_1 = (c1 - iden).pow(2).sum()
loss_dec_2 = (c2 - iden).pow(2).sum()
loss_dec = loss_dec_1 + loss_dec_2
loss = loss_inv + lam * loss_dec
```

Graph augmentations. We consider the standard pipeline for random graph augmentation that has been commonly used in previous works [57, 39]. To be specific, we harness two ways of augmentation: edge dropping and node feature masking. Edge dropping randomly drops a fraction of edges from the original graph, while node feature masking randomly masks a fraction of features for all the nodes. In this way, T is composed of all the possible graph transformation operations and each t ∼ T denotes a specific graph transformation for graph G. Note that we use commonly adopted augmentation methods to keep our focus on the design of the objective function and to conduct a fair comparison with existing approaches. More complicated random augmentations [52, 58] can also be readily plugged into our model. Details of the augmentation functions used are in Appendix E.

Training. In each training iteration, we first randomly sample two graph transformations t_A and t_B from T, and then generate two views G̃_A = (X̃_A, Ã_A) and G̃_B = (X̃_B, Ã_B) according to the transformations. The two views are subsequently fed into a shared GNN encoder to generate the node embeddings of the two views: Z_A = f_θ(X̃_A, Ã_A), Z_B = f_θ(X̃_B, Ã_B), where Z_A, Z_B ∈ R^{N×D} and D denotes the embedding dimension. We further normalize the node embeddings along the instance dimension so that each feature dimension has a 0-mean and 1/√N-standard-deviation distribution:

\tilde{Z} = \frac{Z - \mu(Z)}{\sigma(Z) \cdot \sqrt{N}}   (4)

The normalized Z̃_A, Z̃_B will be used to compute a feature-level objective in Section 3.2. To help better understand the proposed framework, we provide the PyTorch-style pseudocode for training CCA-SSG in Algorithm 1.

Inference. To generate node embeddings for downstream tasks, we put the original graph G = (X, A) into the trained graph neural network f_θ and obtain node embeddings Z = f_θ(X, A).

3.2 Learning Objective

Canonical Correlation Analysis has shown its great power in multi-view learning such as instance recognition [4]. However, it still remains unexplored to leverage CCA for self-supervised learning. Note that in SSL, one generates two sets of data from the same input through transformation or random data augmentation, which can be regarded as two views of the input data. This inspires us to introduce the following objective for self-supervised representation learning:

L = \underbrace{\|\tilde{Z}_A - \tilde{Z}_B\|_F^2}_{\text{invariance term}} \;+\; \lambda \Big( \underbrace{\|\tilde{Z}_A^\top \tilde{Z}_A - I\|_F^2 + \|\tilde{Z}_B^\top \tilde{Z}_B - I\|_F^2}_{\text{decorrelation term}} \Big)   (5)

where λ is a non-negative hyperparameter trading off the two terms. Note that minimizing the invariance term is essentially maximizing the correlation between the two views, as their representations are already normalized. (A self-contained, runnable sketch of this objective is given below.)
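For completeness, here is a self-contained, runnable sketch of the objective in (5), mirroring Algorithm 1; the toy inputs and the small epsilon added for numerical stability are assumptions made for the example.

```python
# Self-contained sketch of the CCA-SSG objective in Eq. (5), following Algorithm 1.
# The epsilon in the standardization is an added assumption for numerical stability.
import torch

def cca_ssg_loss(z1, z2, lam=1e-3, eps=1e-8):
    """z1, z2: (N, D) embeddings of two augmented views from a shared encoder."""
    N, D = z1.shape
    # standardize along the node dimension, then scale by 1/sqrt(N)  (Eq. (4))
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + eps) / N ** 0.5
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + eps) / N ** 0.5
    # invariance term: squared Frobenius distance between the two views
    loss_inv = (z1 - z2).pow(2).sum()
    # decorrelation term: push each view's covariance toward the identity
    iden = torch.eye(D, device=z1.device)
    loss_dec = (z1.T @ z1 - iden).pow(2).sum() + (z2.T @ z2 - iden).pow(2).sum()
    return loss_inv + lam * loss_dec

# toy usage with random embeddings standing in for a GNN encoder's output
z1, z2 = torch.randn(100, 16), torch.randn(100, 16)
print(cca_ssg_loss(z1, z2).item())
```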
In SSL, as the two augmented views come randomly from the same distribution, we can adopt one encoder f✓ that is shared across two branches and seek for a regularization that encourages different feature dimensions to capture distinct semantics via the decorrelation term. We next provide a variance-covariance perspective to the new objective, following similar lines of reasoning in [41, 42]. Assume that input data come from a distribution x ⇠ p(x) and s is a view of x through random augmentation s ⇠ paug(·|x). Denote zs as the representation of s, then minimizing the invariance term, by expectation, is to minimize the variance of the normalized representation z̃s, conditioned on x. Also, minimizing the decorrelation term is to push the off-diagonal elements of the covariance matrix (given by two z̃s’s) close to 0. Formally, we have Linv = Z̃A Z̃B 2 F = NX i=1 DX k=1 (z̃Ai,j z̃Bi,j)2 ⇠= Ex " DX k=1 Vs|x[z̃s,k] # ⇤ 2N, (6) Ldec = Z̃>S Z̃S I 2 F = kCovs[z̃] Ik2F ⇠= X i 6=j ⇢zsi,j 2 , for Z̃S 2 {Z̃A, Z̃B}, (7) where ⇢ is the Pearson correlation coefficient. 3.3 Advantages over Contrastive Methods In this subsection we provide a systematic comparison with previous self-supervised methods for node representation learning, including DGI [48], MVGRL [15], GRACE [57], GCA [58] and BGRL [39], and highlight the merits of CCA-SSG. A quick overview is presented in Table 1. No reliance on negative samples. Most of previous works highly rely on negative pairs to avoid collapse or interchangeable, trivial/degenerated solutions [48, 15, 57, 58]. E.g., DGI and MVGRL generate negative examples by corrupting the graph structure severely, and GRACE/GCA treats all the other nodes within a graph as negative examples. However, for self-supervised learning on graphs, it is non-trivial to construct informative negative examples since nodes are structurally connected, and selecting negative examples in an arbitrary manner may lead to large variance for stochastic gradients and slow training convergence [51]. The recently proposed BGRL model adopts asymmetric encoder architectures for SSL on graphs without the use of negative samples. However, though BGRL could avoid collapse empirically, it still remains as an open problem concerning its theoretical guarantee for preventing trivial solutions [41]. Compared with these methods, our model does not rely on negative pairs and asymmetric encoders. The feature decorrelation term can naturally prevent trivial solutions caused by the invariance term. We discuss the collapse issue detailedly in Appendix B. No MI estimator, projector network nor asymmetric architectures. Most previous works rely on additional components besides the GNN encoder to estimate some score functions in final objectives. DGI and MVGRL require a parameterized estimator to approximate mutual information between two views, and GRACE leverages a MLP projector followed by an InfoNCE estimator. BGRL harnesses asymmetric encoder architecture which consists of EMA (Exponential Moving Average), Stop-Gradient and an additional projector. MVGRL also induces asymmetric architectures as it adopts two different GNNs for the input graph and the diffusion graph respectively. In contrast, our approach requires no additional components except a single GNN encoder. Better efficiency and scalability to large graphs. Consider a graph with N nodes. DGI and MVGRL contrast node embeddings with graph embedding, which would require O(N) space cost. 
GRACE treats two views of the same node as positive pairs and treat views of different nodes as negative pairs, which would take O(N2) space. BGRL focuses only on positive pairs, which will also take O(N) space. By contrast, our method works on feature dimension. If we embed each node into a D-dimensional vector, the computation of the loss function would require O(D2) space. This indicates that the memory cost does not grow consistently as the size of graph increases. As a result, our method is promising for handling large-scale graphs without prohibitively large space costs. 4 Theoretical Insights with Connection to Information Theory In this section we provide some analysis of the proposed objective function: 1) Interpretation of the loss function with entropy and mutual information. 2) The connection between the proposed objective and the Information Bottleneck principle. 3) Why the learned representations would be informative to downstream tasks. The proofs of propositions, theorems and corollaries are in Appendix D. Notations. Denote the random variable of input data as X and the downstream task as T (it could be the label Y if the downstream task is classification). Note that in SSL, we have no access to T in training and here we introduce the notation for our analysis. Define S as the self-supervised signal (i.e., an augmented view of X), and S shares the same space as X . Our model learns a representation for the input, denoted by ZX and its views, denoted by ZS . ZX = f✓(X), ZS = f✓(S), f✓(·) is a encoder shared by the original data and its views, which is parameterized by ✓. The target of representation learning is to learn a optimal encoder parameter ✓. Furthermore, for random variable A,B,C, we use I(A,B) to denote the mutual information between A and B, I(A,B|C) to denote conditional mutual information of A and B on a given C, H(A) for the entropy, and H(A|B) for conditional entropy. The proofs of propositions, theorems and corollaries are in Appendix D. 4.1 An Entropy and Mutual Information Interpretation of the Objective We first introduce an assumption about the distributions of P (ZS) and P (ZS |X). Assumption 1. (Gaussian assumption of P (ZS |X) and P (ZS)): P (ZS |X) = N (µX ,⌃X), P (ZS) = N (µ,⌃). (8) With Assumption 1, we can arrive at the following propositions: Proposition 1. In expectation, minimizing Eq. (6) is equivalent to minimizing the entropy of ZS conditioned on input X , i.e., min ✓ Linv ⇠= min ✓ H(ZS |X). (9) Proposition 2. Minimizing Eq. (7) is equivalent to maximizing the entropy of ZS , i.e., min ✓ Ldec ⇠= max ✓ H(ZS). (10) The two propositions unveil the effects of two terms in our objective. Combining two propositions, we can further interpret Eq. (5) from an information-theoretic perspective. Theorem 1. By optimizing Eq (5), we maximize the mutual information between the augmented view’s embedding ZS and the input data X , and minimize the mutual information between ZS and the view itself S, conditioned on the input data X . Formally we have min ✓ L ) max ✓ I(ZS , X) and min ✓ I(ZS , S|X). (11) The proof is based on the facts I(ZS , X) = H(ZS) H(ZS |X) and I(ZS , S|X) = H(ZS |X) + H(ZS |S) = H(ZS |X). Theorem 1 indicates that our objective Eq. (5) learns representations that maximize the information of the input data, i.e., I(ZS , X), and meanwhile minimize the lost information during augmentation, i.e., I(ZS , S|X). 
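To make the link between Propositions 1 and 2 and Theorem 1 explicit, the two identities used above can be written out as below; the step H(Z_S | S, X) = 0, which holds when Z_S = f_θ(S) is a deterministic function of the view, is stated here as an explicit assumption of this sketch.

```latex
% Sketch of the identities behind Theorem 1. The last step uses H(Z_S \mid S, X) = 0,
% i.e., it assumes Z_S = f_\theta(S) is a deterministic function of the view S.
\begin{align*}
I(Z_S, X) &= H(Z_S) - H(Z_S \mid X),
  && \text{so Prop.~2 (max $H(Z_S)$) and Prop.~1 (min $H(Z_S \mid X)$) give } \max_\theta I(Z_S, X);\\
I(Z_S, S \mid X) &= H(Z_S \mid X) - H(Z_S \mid S, X) = H(Z_S \mid X),
  && \text{so Prop.~1 alone gives } \min_\theta I(Z_S, S \mid X).
\end{align*}
```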
4.2 Connection with the Information Bottleneck Principle The analysis in Section 4.1 enables us to further build a connection between our objective Eq. (5) and the well-studied Information Bottleneck Principle [43, 44, 37, 1] under SSL settings. Recall that the supervised Information Bottleneck (IB) is defined as follows: Definition 1. The supervised IB aims at maximizing an Information Bottleneck Lagrangian: IBsup = I(Y, ZX) I(X,ZX), where > 0. (12) As we can see, IBsup attempts to maximize the information between the data representation ZX and its corresponding label Y , and concurrently minimize the information between ZX and the input data X (i.e., exploiting compression of ZX from X). The intuition of IB principle is that ZX is expected to contain only the information that is useful for predicting Y . Several recent works [9, 45, 53] propose various forms of IB under self-supervised settings. The most relevant one names Self-supervised Information Bottleneck: Definition 2. (Self-supervised Information Bottleneck [53]). The Self-supervised IB aims at maximizing the following Lagrangian: IBssl = I(X,ZS) I(S,ZS), where > 0. (13) Intuitively, IBssl posits that a desirable representation is expected to be informative to augmentation invariant features, and to be a maximally compressed representation of the input. Our objective Eq. (5) is essentially an embodiment of IBssl: Theorem 2. Assume 0 < 1, then by minimizing Eq. (5), the self-supervised Information Bottleneck objective is maximized, formally: min ✓ L ) max ✓ IBssl (14) Theorem 2 also shows that Eq. (5) implicitly follows the same spirit of IB principle under selfsupervised settings. As further enlightenment, we can relate Eq. (5) with the multi-view Information Bottleneck [9] and the minimal and sufficient representations for self-supervision [45]: Corollary 1. Let X1 = S, X2 = X and assume 0 < 1, then minimizing Eq. (5) is equivalent to minimizing the Multi-view Information Bottleneck Loss in [9]: LMIB = I(Z1, X1|X2) I(X2, Z1), where 0 < 1. (15) Corollary 2. When the data augmentation process is reversible, minimizing Eq. (5) is equivalent to learning the Minimal and Sufficient Representations for Self-supervision in [45]: ZsslX = argmax ZX I(ZX , S), Z sslmin X = argmin ZX H(ZX |S) s.t. I(ZX , S) is maximized. (16) 4.3 Influence on Downstream Tasks We have provided a principled understanding for our new objective. Next, we discuss its effect on downstream tasks T . The rationality of data augmentations in SSL is rooted in a conjecture that an ideal data augmentation approach would not change the information related to its label. We formulate this hypothesis as a building block for analysis on downstream tasks [36, 9]. Assumption 2. (Task-relevant information and data augmentation). All the task-relevant information is shared across the input data X and its augmentations S, i.e., I(X,T ) = I(S, T ) = I(X,S, T ), or equivalently, I(X,T |S) = I(S, T |X) = 0. This indicates that all the task-relevant information is contained in augmentation invariant features. We proceed to derive the following theorem which reveals the efficacy of the learned representations by our objective with respect to downstream tasks. Theorem 3. (Task-relevant/irrelevant information). By optimizing Eq. (5), the task-relevant information I(ZS , T ) is maximized, and the task-irrelevant information H(ZS |T ) is minimized. Formally, min ✓ L ) max ✓ I(ZS , T ) and min ✓ H(ZS |T ). 
(17) Therefore, the learned representation ZS is expected to contain minimal and sufficient information about downstream tasks [45, 9], which further illuminates the reason why the embeddings given by SSL approaches have superior performance on various downstream tasks. 5 Experiments We assess the quality of representations after self-supervised pretraining on seven node classification benchmarks: Cora, Citeseer, Pubmed, Coauthor CS, Coauthor Physics and Amazon Computer, Amazon-Photo. We adopt the public splits for Cora, Citeseer, Pubmed, and a 1:1:9 training/validation/testing splits for the other 4 datasets. Details of the datasets are in Appendix E. Evaluation protocol. We follow the linear evaluation scheme as introduced in [48]: i) We first train the model on all the nodes in a graph without supervision, by optimizing the objective in Eq. (5). ii) After that, we freeze the parameters of the encoder and obtain all the nodes’ embeddings, which are subsequently fed into a linear classifier (i.e., a logistic regression model) to generate a predicted label for each node. In the second stage, only nodes in training set are used for training the classifier, and we report the classification accuracy on testing nodes. We implement the model with PyTorch. All experiments are conducted on a NVIDIA V100 GPU with 16 GB memory. We use the Adam optimizer [20] for both stages. The graph encoder f✓ is specified as a standard two-layer GCN model [22] for all the datasets except citeseer (where we empirically find that a one-layer GCN is better). We report the mean accuracy with a standard deviation through 20 random initialization (on Coauthor CS, Coauthor Physics and Amazon Computer, Amazon-Photo, the split is also randomly generated). Detailed hyperparameter settings are in Appendix E. 5.1 Comparison with Peer Methods We compare CCA-SSG with classical unsupervised models, Deepwalk [32] and GAE [21], and self-supervised models, DGI [48], MVGRL [15], GRACE [57] and GCA [58]. We also compare with supervised learning models, including MLP, Label Propagation (LP) [56], and supervised baselines GCN [22] and GAT [47]3. The results of baselines are quoted from [15, 57, 58] if not specified. We report the node classification results of citation networks and other datasets in Table 2 and Table 3 respectively. As we can see, CCA-SSG outperforms both the unsupervised competitors and the fully supervised baselines on Cora and Pubmed, despite its simple architecture. On Citeseer, CCA-SSG achieves competitive results as of the most powerful baseline MVGRL. On four larger benchmarks, CCA-SSG also achieves the best performance in four datasets except Coauther-Physics. It is worth mentioning that we empirically find that on Coauthor-CS a pure 2-layer-MLP encoder is better than GNN models. This might because the graph-structured information is much less informative than the node features, presumably providing harmful signals for classification (in fact, on Coauthor-CS, linear models using merely node features can greatly outperform DeepWalk/DeepWalk+features). 3The BGRL [39] is not compared as its source code has not been released. 5.2 Ablation Study and Scalability Comparison Effectiveness of invariance/decorrelation terms. We alter our loss by removing the invariance/decorrelation term respectively to study the effects of each component, with results reported in Table 4. We find that only using the invariance term will lead to merely performance drop instead of completely collapsed solutions. 
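As an illustration of the linear evaluation protocol described above (frozen encoder, logistic-regression probe trained only on the training nodes), here is a minimal sketch; the use of scikit-learn and the toy data are assumptions made for the example.

```python
# Minimal sketch of linear evaluation: embeddings from the frozen, self-supervised
# encoder are fed to a logistic-regression classifier trained only on training nodes.
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_evaluation(embeddings, labels, train_idx, test_idx):
    """embeddings: (N, D) array from the frozen encoder; labels: (N,) node labels."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embeddings[train_idx], labels[train_idx])
    return clf.score(embeddings[test_idx], labels[test_idx])

# toy usage with random data standing in for real node embeddings
emb = np.random.randn(200, 32)
y = np.random.randint(0, 7, size=200)
acc = linear_evaluation(emb, y, train_idx=np.arange(140), test_idx=np.arange(140, 200))
print(f"test accuracy: {acc:.3f}")
```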
This is because node embeddings are normalized along the instance dimension to have a zero-mean and fixed-standard deviation, and the worst solution is no worse than dimensional collapse (i.e., all the embeddings lie in an line, and our decorrelation term can help to prevent it) instead of complete collapse (i.e., all the embeddings degenerate into a single point). As expected, only optimizing the decorrelation term will lead to poor result, as the model learns nothing meaningful but disentangled representation. In Appendix B we discuss the relationship between complete/dimensional collapse, when the two cases happen and how to avoid them. Effect of decorrelation intensity. We study how the intensity of feature decorrelation improves/degrades the performance by increasing the trade-off hyper-parameter . Fig. 2 shows test accuracy w.r.t. different ’s on Cora, Citeseer and Pubmed. The performance benefits from a proper selection of (from 0.0005 to 0.001 in our experiments). When is too small, the decorrelation term does not work; if it is too large, the invariance term would be neglected, leading to serious performance degrade. An interesting finding is that even when is very small or even equals to 0 (w/o Ldec in Table 4), the test accuracy on Citeseer does not degrade as much as that on Cora and Citeseer. The reason is that node embeddings of Citeseer is already highly uncorrelated even without the decorrelation term. Appendix F visualizes the correlation matrices without/with decorrelations. Effect of embedding dimension. Fig. 3 shows the effect of the embedding dimension. Similar to contrastive methods [48, 15, 57, 58], CCA-SSG benefits from a large embedding dimension (compared with supervised learning), while the optimal embedding dimension of CCA-SSG (512 on most benchmarks) is a bit larger than other methods (usually 128 or 256). Yet, we notice a performance drop as the embedding dimension increases. We conjecture that the CCA is essentially a dimension-reduction method, the ideal embedding dimension ought to be smaller than the dimension of input. Hence we do not apply it on well-compressed datasets (e.g. ogbn-arXiv and ogbn-product). Scalability Comparison. Table 5 compares model size, training time (till the epoch that gives the highest evaluation accuracy) and memory cost of CCA-SSG with other methods, on Cora, Pubmed and Amazon-Computers. Overall, our method has fewer parameters, shorter training time, and fewer memory cost than MVGRL, GRACE and GCA in most cases. DGI is another simple and efficient model, but it yields much poorer performance. The results show that despite its simplicity and efficiency, our method achieves even better (or competitive) performance. Table 4: Ablation study of node classification accuracy (%) on the key components of CCA-SSG. Variants Cora Citeseer Pubmed Baseline 84.2 73.1 81.6 w/o Ldec 79.1 72.2 75.3 w/o Linv 40.1 28.9 46.5 Figure 2: Effect of . Figure 3: Effect of D. 6 Conclusion and Discussions In this paper, we have introduced CCA-SSG, a conceptually simple, efficient yet effective method for self-supervised representation learning on graphs, based on the idea of Canonical Correlation Analysis. Compared with contrastive methods, our model does not require additional components except random augmentations and a GNN encoder, whose effectiveness is justified in experiments. Limitations of the work. Despite the theoretical grounds and the promising experimental justifications, our method would suffer from several limitations. 1) The objective Eq. 
(5) essentially performs dimension reduction, while SSL approaches usually require a large embedding dimension. As a result, our method might not work well on datasets whose input data do not have a large feature dimension. 2) Like other augmentation-based methods, CCA-SSG relies heavily on high-quality, informative and, especially, label-invariant augmentations. However, the augmentations used in our model might not perfectly meet these requirements, and it remains an open problem how to generate informative graph augmentations that have no negative impact on the downstream tasks. Potential negative societal impacts. This work explores a simple pipeline for representation learning without a large amount of labeled data. However, in industry there are many workers whose job is to label or annotate data. The proposed method might reduce the need for manual labeling and thus leave some individuals unemployed (especially in developing countries and remote areas). Furthermore, our model might be biased, as it tends to pay more attention to majority and dominant features (information shared across most of the data). Minority groups whose features are scarce are likely to be downplayed by the algorithm. Acknowledgments and Disclosure of Funding This work was supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941. Qitian Wu and Junchi Yan were partly supported by Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102). We thank Amazon Web Services for sponsoring computation resources for this work.
1. What is the focus and contribution of the paper regarding self-supervised learning? 2. What are the strengths of the proposed approach, particularly in its connection to mutual information and information bottleneck principles? 3. Do you have any concerns or questions regarding the theoretical proof and its assumptions? 4. What are the minor issues or typos in the paper that need attention?
Summary Of The Paper Review
Summary Of The Paper This work proposes an alternative self-supervised objective inspired by canonical correlation analysis. Theoretical proofs show its connection to mutual information (MI) estimation and the information bottleneck principle. Empirical results demonstrate its effectiveness on node classification. Review The proposed self-supervised objective shares the same idea of making the representations of augmented views from the same data as close as possible; the difference lies in the usage of negative samples. The proposed loss function does not need negative samples, but uses a regularization term instead. The key idea of this work is very easy to follow, and the experimental results on the node classification task demonstrate its superiority to baselines. Concerns: I have a question about the correctness of the theoretical proof showing the connection to previous approaches; it needs more explanation, since it seems very important for dissecting the proposed loss function. The authors state an assumption that S denotes the augmented view of the original input data X, and that the two share the same space (Section 4, line 185). Then two different distributions (i.e., P(Z_S) and P(Z_S | X)) for the variable Z_S are defined in line 191, and the theoretical analysis is conducted based on this assumption. However, I am confused about the difference between P(Z_S) and P(Z_S | X). It is easy to understand that the variable Z_S depends on X, because the augmented view S is generated from X. But it is difficult to understand why we can define a marginal distribution P(Z_S); this does not seem reasonable because S still depends on X. This makes it hard to follow the conclusion in Proposition 2: why can we discuss the entropy of the variable Z_S without conditioning on the input data X? Minor Issue: In Appendix D, please double-check Remark 1. How can we get the equality H(Z_S|X) - H(Z_S | S, X) = H(Z_S | X) according to property 6?
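To make the marginal-versus-conditional distinction raised above concrete, here is a small numerical sketch in Python. It uses a toy one-dimensional setup with a linear stand-in for the encoder and Gaussian augmentation noise; all of these choices are illustrative assumptions, not taken from the paper. The conditional P(Z_S | X = x0) is obtained by fixing one input and sampling only the augmentation, while the marginal P(Z_S) additionally averages over x ~ p(x), which is what allows one to speak of H(Z_S) without conditioning on X.

import torch

torch.manual_seed(0)

def encoder(s):
    # stand-in for f_theta: a fixed linear map (illustrative assumption)
    return 2.0 * s + 1.0

def augment(x):
    # stand-in for s ~ p_aug(. | x): additive Gaussian noise
    return x + 0.1 * torch.randn_like(x)

# conditional P(Z_S | X = x0): fix one input, sample only the augmentation
x0 = torch.tensor([0.5])
z_cond = torch.stack([encoder(augment(x0)) for _ in range(10000)])

# marginal P(Z_S): additionally sample the input x ~ p(x) (here a standard normal)
x_samples = torch.randn(10000, 1)
z_marg = torch.stack([encoder(augment(x)) for x in x_samples])

print("conditional std:", z_cond.std().item())  # reflects only augmentation noise
print("marginal std:   ", z_marg.std().item())  # reflects augmentation noise plus variation in x

The two printed standard deviations differ because the marginal distribution also absorbs the variability of x itself.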
NIPS
Title From Canonical Correlation Analysis to Self-supervised Graph Neural Networks Abstract We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data. It follows the previous methods that generate two views of an input graph through data augmentation. However, unlike contrastive methods that focus on instance-level discrimination, we optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis. Compared with other works, our approach requires none of the parameterized mutual information estimator, additional projector, asymmetric structures, and most importantly, negative samples which can be costly. We show that the new objective essentially 1) aims at discarding augmentation-variant information by learning invariant representations, and 2) can prevent degenerated solutions by decorrelating features in different dimensions. Our theoretical analysis further provides an understanding for the new objective which can be equivalently seen as an instantiation of the Information Bottleneck Principle under the self-supervised setting. Despite its simplicity, our method performs competitively on seven public graph datasets. The code is available at: https://github.com/hengruizhang98/CCA-SSG. 1 Introduction Self-supervised learning (SSL) has been a promising paradigm for learning useful representations without costly labels [7, 46, 5]. In general, it learns representations via a proxy objective between inputs and self-defined signals, among which contrastive methods [46, 40, 16, 5, 12] have achieved impressive performance on learning image representations by maximizing the mutual information of two views (or augmentations) of the same input. Such methods can be interpreted as a discrimination of a joint distribution (positive pairs) from the product of two marginal ones (negative pairs) [50]. Inspired by the success of contrastive learning in vision [17, 46, 40, 5, 16, 12, 6], similar methods have been adapted to learning graph neural networks [48, 15, 33, 57, 58]. Although these models have achieved impressive performance, they require complex designs and architectures. For example, DGI [48] and MVGRL [15] rely on a parameterized mutual information estimator to discriminate positive node-graph pairs from negative ones; GRACE [57] and GCA [58] harness an additional MLP-projector to guarantee sufficient capacity. Moreover, negative pairs sampled or constructed from data often play an indispensable role in providing effective contrastive signals and have a large impact on performance. Selecting proper negative samples is often nontrivial for graph-structured data, not to mention the extra storage cost for prohibitively large graphs. BGRL [39] is a recent endeavor on ⇤This work was done during the author’s internship at AWS Shanghai AI Lab. †Corresponding author. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). targeting a negative-sample-free approach for GNN learning through asymmetric architectures [12, 6]. However, it requires additional components, e.g., an exponential moving average (EMA) and StopGradient, to empirically avoid degenerated solutions, leading to a more intricate architecture. Deviating from the large body of previous works on contrastive learning, in this paper we take a new perspective to address SSL on graphs. 
We introduce Canonical Correlation Analysis inspired Self-Supervised Learning on Graphs (CCA-SSG), a simple yet effective approach that opens the way to a new SSL objective and frees the model from intricate designs. It follows the common practice of prior arts, generating two views of an input graph through random augmentation and acquiring node representations through a shared GNN encoder. Differently, we propose to harness a non-contrastive and non-discriminative feature-level objective, which is inspired by the well-studied Canonical Correlation Analysis (CCA) methods [18, 10, 11, 14, 2, 4]. More specifically, the new objective aims at maximizing the correlation between two augmented views of the same input and meanwhile decorrelating different (feature) dimensions of a single view’s representation. We show that the objective 1) essentially pursuits discarding augmentation-variant information and preserving augmentation-invariant information, and 2) can prevent dimensional collapse [19] (i.e., different dimensions capture the same information) in nature. Furthermore, our theoretical analysis sheds more lights that under mild assumptions, our model is an instantiation of Information Bottleneck Principle [43, 44, 37] under SSL settings [53, 9, 45]. To sum up, as shown in Table 1, our new objective induces a simple and light model without reliance on negative pairs [48, 15, 57, 58], a parameterized mutual information estimator [48, 15], an additional projector or predictor [57, 58, 39] or asymmetric architectures [39, 15]. We provide a thorough evaluation for the model on seven node classification benchmarks. The empirical results demonstrate that despite its simplicity, CCA-SSG can achieve very competitive performance in general and even superior test accuracy in five datasets. It is worth noting that our approach is agnostic to the input data format, which means that it can potentially be applied to other scenarios beyond graph-structured data (such as vision, language, etc.). We leave such a technical extension for future works. Our contributions are as follows: 1) We introduce a non-contrastive and non-discriminative objective for self-supervised learning, which is inspired by Canonical Correlation Analysis methods. It does not rely on negative samples, and can naturally remove the complicated components. Based on it we propose CCA-SSG, a simple yet effective framework for learning node representations without supervision (see Section 3). 2) We theoretically prove that the proposed objective aims at keeping augmentation-invariant information while discarding augmentation-variant one, and possesses an inherent relationship to an embodiment of Information Bottleneck Principle under self-supervised settings (see Section 4). 3) Experimental results show that without complex designs, our method outperforms state-of-the-art self-supervised methods MVGRL [15] and GCA [58] on 5 out of 7 benchmarks. We also provide thorough ablation studies on the effectiveness of the key components of CCA-SSG (see Section 5). 2 Related Works and Background Contrastive Learning on Graphs. Contrastive methods [46, 40, 17, 16, 5, 12] have been shown to be effective for unsupervised learning in vision, which have also been adapted to graphs. Inspired by the local-global mutual information maximization viewpoints [17], DGI [48] and InfoGraph [38] put forward unsupervised schemes for node and graph representation learning, respectively. 
MVGRL [15] generalizes CMC [40] to graph-structured data by introducing graph diffusion [23] to create another view for a graph. GCC [33] adopts the InfoNCE loss [46] and a MoCo-based negative pool [16] for large-scale GNN pretraining. GRACE [57], GCA [58] and GraphCL [52] follow the spirit of SimCLR [5] and learn node/graph representations by directly treating other nodes/graphs as negative samples. BGRL [39] targets a negative-sample-free model, inspired by BYOL [12], for node representation learning, but it still requires complex asymmetric architectures. Feature-level Self-supervised Objectives. The above-mentioned methods all focus on instance-level contrastive learning. To address their drawbacks, some recent works have turned to feature-level objectives. For example, Contrastive Clustering [25] regards different feature dimensions as different clusters, thus combining cluster-level discrimination with instance-level discrimination. W-MSE [8] performs a differentiable whitening operation on learned embeddings, which implicitly scatters data points in embedding space. Barlow Twins [53] borrows the idea of redundancy reduction and adopts a soft decorrelation term that makes the cross-correlation matrix of two views' representations close to an identity matrix. By contrast, our method is based on classical Canonical Correlation Analysis, working by correlating the representations of two views from data augmentation and meanwhile decorrelating different feature dimensions of each view's representation. Canonical Correlation Analysis. CCA is a classical multivariate analysis method, first introduced in [18]. For two random variables $X \in \mathbb{R}^m$ and $Y \in \mathbb{R}^n$, their covariance matrix is $\Sigma_{XY} = \mathrm{Cov}(X, Y)$. CCA aims at seeking two vectors $a \in \mathbb{R}^m$ and $b \in \mathbb{R}^n$ such that the correlation $\rho = \mathrm{corr}(a^\top X, b^\top Y) = \frac{a^\top \Sigma_{XY} b}{\sqrt{a^\top \Sigma_{XX} a}\,\sqrt{b^\top \Sigma_{YY} b}}$ is maximized. Formally, the objective is
$\max_{a,b} \; a^\top \Sigma_{XY} b, \quad \text{s.t.} \; a^\top \Sigma_{XX} a = b^\top \Sigma_{YY} b = 1.$ (1)
For multi-dimensional cases, CCA seeks two sets of vectors maximizing their correlation, subject to the constraint that they are uncorrelated with each other [10]. Later studies apply CCA to multi-view learning with deep models [2, 11, 14] by replacing the linear transformations with neural networks. Concretely, taking $X_1, X_2$ as two views of an input, it optimizes
$\max_{\theta_1, \theta_2} \; \mathrm{Tr}\big(P_{\theta_1}(X_1)^\top P_{\theta_2}(X_2)\big) \quad \text{s.t.} \; P_{\theta_1}(X_1)^\top P_{\theta_1}(X_1) = P_{\theta_2}(X_2)^\top P_{\theta_2}(X_2) = I,$ (2)
where $P_{\theta_1}$ and $P_{\theta_2}$ are two feedforward neural networks and $I$ is an identity matrix. Despite its preciseness, such computation is really expensive [4]. Fortunately, soft CCA [4] removes the hard decorrelation constraint by adopting the following Lagrangian relaxation:
$\min_{\theta_1, \theta_2} \; \mathcal{L}_{\mathrm{dist}}\big(P_{\theta_1}(X_1), P_{\theta_2}(X_2)\big) + \lambda \big(\mathcal{L}_{\mathrm{SDL}}(P_{\theta_1}(X_1)) + \mathcal{L}_{\mathrm{SDL}}(P_{\theta_2}(X_2))\big),$ (3)
where $\mathcal{L}_{\mathrm{dist}}$ measures the correlation between two views' representations and $\mathcal{L}_{\mathrm{SDL}}$ (called stochastic decorrelation loss) computes an L1 distance between $P_{\theta_i}(X_i)$ and an identity matrix, for $i = 1, 2$. 3 Approach 3.1 Model Framework In this paper we focus on self-supervised node representation learning, where we consider a single graph $G = (X, A)$. $X \in \mathbb{R}^{N \times F}$ and $A \in \mathbb{R}^{N \times N}$ denote the node features and adjacency matrix respectively. Here $N$ is the number of nodes within the graph and $F$ denotes the feature dimension. Our model simply consists of three parts: 1) a random graph augmentation generator $\mathcal{T}$; 2) a GNN-based graph encoder $f_\theta$, where $\theta$ denotes its parameters; 3) a novel feature-level objective function based on Canonical Correlation Analysis. Fig. 1 is an illustration of the proposed model.
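For readers who want to connect Eq. (1) to something executable, the following NumPy sketch computes canonical correlations for the classical linear case via whitening and an SVD. It is only an illustration of the background material; the function name, the regularization constant and the synthetic data are our own assumptions and not part of the paper's code.

import numpy as np

def canonical_correlations(X1, X2, eps=1e-8):
    # classical linear CCA: the canonical correlations are the singular values of
    # Sigma_11^{-1/2} Sigma_12 Sigma_22^{-1/2}, cf. Eq. (1)
    X1 = X1 - X1.mean(0)
    X2 = X2 - X2.mean(0)
    n = X1.shape[0]
    S11 = X1.T @ X1 / n + eps * np.eye(X1.shape[1])
    S22 = X2.T @ X2 / n + eps * np.eye(X2.shape[1])
    S12 = X1.T @ X2 / n

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(np.clip(w, eps, None))) @ V.T

    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return np.linalg.svd(T, compute_uv=False)

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 3))                          # shared latent signal
X1 = Z @ rng.normal(size=(3, 5)) + 0.1 * rng.normal(size=(500, 5))
X2 = Z @ rng.normal(size=(3, 4)) + 0.1 * rng.normal(size=(500, 4))
print(canonical_correlations(X1, X2))                  # top three values close to 1

On this synthetic pair of views, which share a three-dimensional latent signal, the top three canonical correlations come out close to 1 and the remaining ones are small.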
Algorithm 1: PyTorch-style code for CCA-SSG

import torch

# f: encoder network
# lam: trade-off hyper-parameter (the lambda in Eq. (5); renamed because `lambda` is a Python keyword)
# D: embedding dimension
# g: input graph
# feat: node features

# generate two views through random augmentation
g1, feat1 = augment(g, feat)
g2, feat2 = augment(g, feat)
z1 = f(g1, feat1)  # embedding of the 1st view, shape (N, D)
z2 = f(g2, feat2)  # embedding of the 2nd view, shape (N, D)
N = z1.size(0)

# standardize along the instance (node) dimension, Eq. (4)
z1_norm = (z1 - z1.mean(0)) / (z1.std(0) * N ** 0.5)
z2_norm = (z2 - z2.mean(0)) / (z2.std(0) * N ** 0.5)

# covariance matrix of each view, shape (D, D)
c1 = torch.mm(z1_norm.T, z1_norm)
c2 = torch.mm(z2_norm.T, z2_norm)

iden = torch.eye(D, device=z1.device)
loss_inv = (z1_norm - z2_norm).pow(2).sum()   # invariance term
loss_dec_1 = (c1 - iden).pow(2).sum()         # decorrelation term, view 1
loss_dec_2 = (c2 - iden).pow(2).sum()         # decorrelation term, view 2
loss_dec = loss_dec_1 + loss_dec_2
loss = loss_inv + lam * loss_dec              # Eq. (5)

Graph augmentations. We consider the standard pipeline for random graph augmentation that has been commonly used in previous works [57, 39]. Specifically, we harness two kinds of augmentation: edge dropping and node feature masking. Edge dropping randomly drops a fraction of edges from the original graph, while node feature masking randomly masks a fraction of features for all the nodes. In this way, $\mathcal{T}$ is composed of all the possible graph transformation operations and each $t \sim \mathcal{T}$ denotes a specific graph transformation for graph $G$. Note that we use commonly adopted augmentation methods to keep our focus on the design of the objective function and to conduct a fair comparison with existing approaches. More complicated random augmentations [52, 58] can also be readily plugged into our model. Details of the augmentation functions used are in Appendix E. Training. In each training iteration, we first randomly sample two graph transformations $t_A$ and $t_B$ from $\mathcal{T}$, and then generate two views $\tilde{G}_A = (\tilde{X}_A, \tilde{A}_A)$ and $\tilde{G}_B = (\tilde{X}_B, \tilde{A}_B)$ according to the transformations. The two views are subsequently fed into a shared GNN encoder to generate the node embeddings of the two views: $Z_A = f_\theta(\tilde{X}_A, \tilde{A}_A)$, $Z_B = f_\theta(\tilde{X}_B, \tilde{A}_B)$, where $Z_A, Z_B \in \mathbb{R}^{N \times D}$ and $D$ denotes the embedding dimension. We further normalize the node embeddings along the instance dimension so that each feature dimension has a 0-mean and $1/\sqrt{N}$-standard-deviation distribution:
$\tilde{Z} = \frac{Z - \mu(Z)}{\sigma(Z) \cdot \sqrt{N}}$ (4)
The normalized $\tilde{Z}_A, \tilde{Z}_B$ will be used to compute the feature-level objective in Section 3.2. To help better understand the proposed framework, we provide PyTorch-style pseudocode for training CCA-SSG in Algorithm 1. Inference. To generate node embeddings for downstream tasks, we feed the original graph $G = (X, A)$ into the trained graph neural network $f_\theta$ and obtain node embeddings $Z = f_\theta(X, A)$. 3.2 Learning Objective Canonical Correlation Analysis has shown its great power in multi-view learning tasks such as instance recognition [4]. However, leveraging CCA for self-supervised learning still remains unexplored. Note that in SSL, one generates two sets of data from the same input through transformation or random data augmentation, which can be regarded as two views of the input data. This inspires us to introduce the following objective for self-supervised representation learning:
$\mathcal{L} = \underbrace{\|\tilde{Z}_A - \tilde{Z}_B\|_F^2}_{\text{invariance term}} + \lambda \underbrace{\big(\|\tilde{Z}_A^\top \tilde{Z}_A - I\|_F^2 + \|\tilde{Z}_B^\top \tilde{Z}_B - I\|_F^2\big)}_{\text{decorrelation term}}$ (5)
where $\lambda$ is a non-negative hyperparameter trading off the two terms. Note that minimizing the invariance term is essentially maximizing the correlation between the two views, as their representations are already normalized.
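For concreteness, one possible plain-PyTorch rendering of the two augmentations described above (edge dropping and node feature masking) is sketched below, operating on an edge-index tensor and a node-feature matrix. The function names, the edge-index representation and the default drop/mask rates are illustrative assumptions rather than the paper's actual implementation, which is detailed in Appendix E.

import torch

def drop_edges(edge_index, p_drop=0.2):
    # edge_index: (2, E) tensor of source/target node ids;
    # keep each edge independently with probability 1 - p_drop
    keep = torch.rand(edge_index.size(1)) >= p_drop
    return edge_index[:, keep]

def mask_features(feat, p_mask=0.2):
    # feat: (N, F) node feature matrix; zero out the same randomly chosen
    # feature dimensions for every node
    mask = (torch.rand(feat.size(1)) >= p_mask).float()
    return feat * mask.unsqueeze(0)

def augment(edge_index, feat, p_drop=0.2, p_mask=0.2):
    return drop_edges(edge_index, p_drop), mask_features(feat, p_mask)

# toy usage: a 4-node cycle with 8-dimensional features, augmented twice
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
feat = torch.randn(4, 8)
(ei_a, feat_a), (ei_b, feat_b) = augment(edge_index, feat), augment(edge_index, feat)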
In SSL, as the two augmented views come randomly from the same distribution, we can adopt one encoder f✓ that is shared across two branches and seek for a regularization that encourages different feature dimensions to capture distinct semantics via the decorrelation term. We next provide a variance-covariance perspective to the new objective, following similar lines of reasoning in [41, 42]. Assume that input data come from a distribution x ⇠ p(x) and s is a view of x through random augmentation s ⇠ paug(·|x). Denote zs as the representation of s, then minimizing the invariance term, by expectation, is to minimize the variance of the normalized representation z̃s, conditioned on x. Also, minimizing the decorrelation term is to push the off-diagonal elements of the covariance matrix (given by two z̃s’s) close to 0. Formally, we have Linv = Z̃A Z̃B 2 F = NX i=1 DX k=1 (z̃Ai,j z̃Bi,j)2 ⇠= Ex " DX k=1 Vs|x[z̃s,k] # ⇤ 2N, (6) Ldec = Z̃>S Z̃S I 2 F = kCovs[z̃] Ik2F ⇠= X i 6=j ⇢zsi,j 2 , for Z̃S 2 {Z̃A, Z̃B}, (7) where ⇢ is the Pearson correlation coefficient. 3.3 Advantages over Contrastive Methods In this subsection we provide a systematic comparison with previous self-supervised methods for node representation learning, including DGI [48], MVGRL [15], GRACE [57], GCA [58] and BGRL [39], and highlight the merits of CCA-SSG. A quick overview is presented in Table 1. No reliance on negative samples. Most of previous works highly rely on negative pairs to avoid collapse or interchangeable, trivial/degenerated solutions [48, 15, 57, 58]. E.g., DGI and MVGRL generate negative examples by corrupting the graph structure severely, and GRACE/GCA treats all the other nodes within a graph as negative examples. However, for self-supervised learning on graphs, it is non-trivial to construct informative negative examples since nodes are structurally connected, and selecting negative examples in an arbitrary manner may lead to large variance for stochastic gradients and slow training convergence [51]. The recently proposed BGRL model adopts asymmetric encoder architectures for SSL on graphs without the use of negative samples. However, though BGRL could avoid collapse empirically, it still remains as an open problem concerning its theoretical guarantee for preventing trivial solutions [41]. Compared with these methods, our model does not rely on negative pairs and asymmetric encoders. The feature decorrelation term can naturally prevent trivial solutions caused by the invariance term. We discuss the collapse issue detailedly in Appendix B. No MI estimator, projector network nor asymmetric architectures. Most previous works rely on additional components besides the GNN encoder to estimate some score functions in final objectives. DGI and MVGRL require a parameterized estimator to approximate mutual information between two views, and GRACE leverages a MLP projector followed by an InfoNCE estimator. BGRL harnesses asymmetric encoder architecture which consists of EMA (Exponential Moving Average), Stop-Gradient and an additional projector. MVGRL also induces asymmetric architectures as it adopts two different GNNs for the input graph and the diffusion graph respectively. In contrast, our approach requires no additional components except a single GNN encoder. Better efficiency and scalability to large graphs. Consider a graph with N nodes. DGI and MVGRL contrast node embeddings with graph embedding, which would require O(N) space cost. 
GRACE treats two views of the same node as positive pairs and treat views of different nodes as negative pairs, which would take O(N2) space. BGRL focuses only on positive pairs, which will also take O(N) space. By contrast, our method works on feature dimension. If we embed each node into a D-dimensional vector, the computation of the loss function would require O(D2) space. This indicates that the memory cost does not grow consistently as the size of graph increases. As a result, our method is promising for handling large-scale graphs without prohibitively large space costs. 4 Theoretical Insights with Connection to Information Theory In this section we provide some analysis of the proposed objective function: 1) Interpretation of the loss function with entropy and mutual information. 2) The connection between the proposed objective and the Information Bottleneck principle. 3) Why the learned representations would be informative to downstream tasks. The proofs of propositions, theorems and corollaries are in Appendix D. Notations. Denote the random variable of input data as X and the downstream task as T (it could be the label Y if the downstream task is classification). Note that in SSL, we have no access to T in training and here we introduce the notation for our analysis. Define S as the self-supervised signal (i.e., an augmented view of X), and S shares the same space as X . Our model learns a representation for the input, denoted by ZX and its views, denoted by ZS . ZX = f✓(X), ZS = f✓(S), f✓(·) is a encoder shared by the original data and its views, which is parameterized by ✓. The target of representation learning is to learn a optimal encoder parameter ✓. Furthermore, for random variable A,B,C, we use I(A,B) to denote the mutual information between A and B, I(A,B|C) to denote conditional mutual information of A and B on a given C, H(A) for the entropy, and H(A|B) for conditional entropy. The proofs of propositions, theorems and corollaries are in Appendix D. 4.1 An Entropy and Mutual Information Interpretation of the Objective We first introduce an assumption about the distributions of P (ZS) and P (ZS |X). Assumption 1. (Gaussian assumption of P (ZS |X) and P (ZS)): P (ZS |X) = N (µX ,⌃X), P (ZS) = N (µ,⌃). (8) With Assumption 1, we can arrive at the following propositions: Proposition 1. In expectation, minimizing Eq. (6) is equivalent to minimizing the entropy of ZS conditioned on input X , i.e., min ✓ Linv ⇠= min ✓ H(ZS |X). (9) Proposition 2. Minimizing Eq. (7) is equivalent to maximizing the entropy of ZS , i.e., min ✓ Ldec ⇠= max ✓ H(ZS). (10) The two propositions unveil the effects of two terms in our objective. Combining two propositions, we can further interpret Eq. (5) from an information-theoretic perspective. Theorem 1. By optimizing Eq (5), we maximize the mutual information between the augmented view’s embedding ZS and the input data X , and minimize the mutual information between ZS and the view itself S, conditioned on the input data X . Formally we have min ✓ L ) max ✓ I(ZS , X) and min ✓ I(ZS , S|X). (11) The proof is based on the facts I(ZS , X) = H(ZS) H(ZS |X) and I(ZS , S|X) = H(ZS |X) + H(ZS |S) = H(ZS |X). Theorem 1 indicates that our objective Eq. (5) learns representations that maximize the information of the input data, i.e., I(ZS , X), and meanwhile minimize the lost information during augmentation, i.e., I(ZS , S|X). 
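To spell out the identities used in the discussion of Theorem 1, the following LaTeX sketch records the standard decompositions involved. The last step relies on reading $Z_S = f_\theta(S)$ as a deterministic function of $S$, so that $H(Z_S \mid S, X) = 0$; this is our reconstruction of the argument under Assumption 1, not a verbatim statement from the paper.

\begin{align*}
I(Z_S, X)        &= H(Z_S) - H(Z_S \mid X), \\
I(Z_S, S \mid X) &= H(Z_S \mid X) - H(Z_S \mid S, X) = H(Z_S \mid X).
\end{align*}
% Since Z_S = f_\theta(S) is deterministic given S, H(Z_S | S, X) = 0.
% Hence minimizing H(Z_S | X) (Proposition 1) together with maximizing H(Z_S)
% (Proposition 2) maximizes I(Z_S, X) and minimizes I(Z_S, S | X), i.e., Theorem 1.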
4.2 Connection with the Information Bottleneck Principle The analysis in Section 4.1 enables us to further build a connection between our objective Eq. (5) and the well-studied Information Bottleneck Principle [43, 44, 37, 1] under SSL settings. Recall that the supervised Information Bottleneck (IB) is defined as follows: Definition 1. The supervised IB aims at maximizing an Information Bottleneck Lagrangian: IBsup = I(Y, ZX) I(X,ZX), where > 0. (12) As we can see, IBsup attempts to maximize the information between the data representation ZX and its corresponding label Y , and concurrently minimize the information between ZX and the input data X (i.e., exploiting compression of ZX from X). The intuition of IB principle is that ZX is expected to contain only the information that is useful for predicting Y . Several recent works [9, 45, 53] propose various forms of IB under self-supervised settings. The most relevant one names Self-supervised Information Bottleneck: Definition 2. (Self-supervised Information Bottleneck [53]). The Self-supervised IB aims at maximizing the following Lagrangian: IBssl = I(X,ZS) I(S,ZS), where > 0. (13) Intuitively, IBssl posits that a desirable representation is expected to be informative to augmentation invariant features, and to be a maximally compressed representation of the input. Our objective Eq. (5) is essentially an embodiment of IBssl: Theorem 2. Assume 0 < 1, then by minimizing Eq. (5), the self-supervised Information Bottleneck objective is maximized, formally: min ✓ L ) max ✓ IBssl (14) Theorem 2 also shows that Eq. (5) implicitly follows the same spirit of IB principle under selfsupervised settings. As further enlightenment, we can relate Eq. (5) with the multi-view Information Bottleneck [9] and the minimal and sufficient representations for self-supervision [45]: Corollary 1. Let X1 = S, X2 = X and assume 0 < 1, then minimizing Eq. (5) is equivalent to minimizing the Multi-view Information Bottleneck Loss in [9]: LMIB = I(Z1, X1|X2) I(X2, Z1), where 0 < 1. (15) Corollary 2. When the data augmentation process is reversible, minimizing Eq. (5) is equivalent to learning the Minimal and Sufficient Representations for Self-supervision in [45]: ZsslX = argmax ZX I(ZX , S), Z sslmin X = argmin ZX H(ZX |S) s.t. I(ZX , S) is maximized. (16) 4.3 Influence on Downstream Tasks We have provided a principled understanding for our new objective. Next, we discuss its effect on downstream tasks T . The rationality of data augmentations in SSL is rooted in a conjecture that an ideal data augmentation approach would not change the information related to its label. We formulate this hypothesis as a building block for analysis on downstream tasks [36, 9]. Assumption 2. (Task-relevant information and data augmentation). All the task-relevant information is shared across the input data X and its augmentations S, i.e., I(X,T ) = I(S, T ) = I(X,S, T ), or equivalently, I(X,T |S) = I(S, T |X) = 0. This indicates that all the task-relevant information is contained in augmentation invariant features. We proceed to derive the following theorem which reveals the efficacy of the learned representations by our objective with respect to downstream tasks. Theorem 3. (Task-relevant/irrelevant information). By optimizing Eq. (5), the task-relevant information I(ZS , T ) is maximized, and the task-irrelevant information H(ZS |T ) is minimized. Formally, min ✓ L ) max ✓ I(ZS , T ) and min ✓ H(ZS |T ). 
(17) Therefore, the learned representation ZS is expected to contain minimal and sufficient information about downstream tasks [45, 9], which further illuminates the reason why the embeddings given by SSL approaches have superior performance on various downstream tasks. 5 Experiments We assess the quality of representations after self-supervised pretraining on seven node classification benchmarks: Cora, Citeseer, Pubmed, Coauthor CS, Coauthor Physics and Amazon Computer, Amazon-Photo. We adopt the public splits for Cora, Citeseer, Pubmed, and a 1:1:9 training/validation/testing splits for the other 4 datasets. Details of the datasets are in Appendix E. Evaluation protocol. We follow the linear evaluation scheme as introduced in [48]: i) We first train the model on all the nodes in a graph without supervision, by optimizing the objective in Eq. (5). ii) After that, we freeze the parameters of the encoder and obtain all the nodes’ embeddings, which are subsequently fed into a linear classifier (i.e., a logistic regression model) to generate a predicted label for each node. In the second stage, only nodes in training set are used for training the classifier, and we report the classification accuracy on testing nodes. We implement the model with PyTorch. All experiments are conducted on a NVIDIA V100 GPU with 16 GB memory. We use the Adam optimizer [20] for both stages. The graph encoder f✓ is specified as a standard two-layer GCN model [22] for all the datasets except citeseer (where we empirically find that a one-layer GCN is better). We report the mean accuracy with a standard deviation through 20 random initialization (on Coauthor CS, Coauthor Physics and Amazon Computer, Amazon-Photo, the split is also randomly generated). Detailed hyperparameter settings are in Appendix E. 5.1 Comparison with Peer Methods We compare CCA-SSG with classical unsupervised models, Deepwalk [32] and GAE [21], and self-supervised models, DGI [48], MVGRL [15], GRACE [57] and GCA [58]. We also compare with supervised learning models, including MLP, Label Propagation (LP) [56], and supervised baselines GCN [22] and GAT [47]3. The results of baselines are quoted from [15, 57, 58] if not specified. We report the node classification results of citation networks and other datasets in Table 2 and Table 3 respectively. As we can see, CCA-SSG outperforms both the unsupervised competitors and the fully supervised baselines on Cora and Pubmed, despite its simple architecture. On Citeseer, CCA-SSG achieves competitive results as of the most powerful baseline MVGRL. On four larger benchmarks, CCA-SSG also achieves the best performance in four datasets except Coauther-Physics. It is worth mentioning that we empirically find that on Coauthor-CS a pure 2-layer-MLP encoder is better than GNN models. This might because the graph-structured information is much less informative than the node features, presumably providing harmful signals for classification (in fact, on Coauthor-CS, linear models using merely node features can greatly outperform DeepWalk/DeepWalk+features). 3The BGRL [39] is not compared as its source code has not been released. 5.2 Ablation Study and Scalability Comparison Effectiveness of invariance/decorrelation terms. We alter our loss by removing the invariance/decorrelation term respectively to study the effects of each component, with results reported in Table 4. We find that only using the invariance term will lead to merely performance drop instead of completely collapsed solutions. 
This is because node embeddings are normalized along the instance dimension to have a zero-mean and fixed-standard deviation, and the worst solution is no worse than dimensional collapse (i.e., all the embeddings lie in an line, and our decorrelation term can help to prevent it) instead of complete collapse (i.e., all the embeddings degenerate into a single point). As expected, only optimizing the decorrelation term will lead to poor result, as the model learns nothing meaningful but disentangled representation. In Appendix B we discuss the relationship between complete/dimensional collapse, when the two cases happen and how to avoid them. Effect of decorrelation intensity. We study how the intensity of feature decorrelation improves/degrades the performance by increasing the trade-off hyper-parameter . Fig. 2 shows test accuracy w.r.t. different ’s on Cora, Citeseer and Pubmed. The performance benefits from a proper selection of (from 0.0005 to 0.001 in our experiments). When is too small, the decorrelation term does not work; if it is too large, the invariance term would be neglected, leading to serious performance degrade. An interesting finding is that even when is very small or even equals to 0 (w/o Ldec in Table 4), the test accuracy on Citeseer does not degrade as much as that on Cora and Citeseer. The reason is that node embeddings of Citeseer is already highly uncorrelated even without the decorrelation term. Appendix F visualizes the correlation matrices without/with decorrelations. Effect of embedding dimension. Fig. 3 shows the effect of the embedding dimension. Similar to contrastive methods [48, 15, 57, 58], CCA-SSG benefits from a large embedding dimension (compared with supervised learning), while the optimal embedding dimension of CCA-SSG (512 on most benchmarks) is a bit larger than other methods (usually 128 or 256). Yet, we notice a performance drop as the embedding dimension increases. We conjecture that the CCA is essentially a dimension-reduction method, the ideal embedding dimension ought to be smaller than the dimension of input. Hence we do not apply it on well-compressed datasets (e.g. ogbn-arXiv and ogbn-product). Scalability Comparison. Table 5 compares model size, training time (till the epoch that gives the highest evaluation accuracy) and memory cost of CCA-SSG with other methods, on Cora, Pubmed and Amazon-Computers. Overall, our method has fewer parameters, shorter training time, and fewer memory cost than MVGRL, GRACE and GCA in most cases. DGI is another simple and efficient model, but it yields much poorer performance. The results show that despite its simplicity and efficiency, our method achieves even better (or competitive) performance. Table 4: Ablation study of node classification accuracy (%) on the key components of CCA-SSG. Variants Cora Citeseer Pubmed Baseline 84.2 73.1 81.6 w/o Ldec 79.1 72.2 75.3 w/o Linv 40.1 28.9 46.5 Figure 2: Effect of . Figure 3: Effect of D. 6 Conclusion and Discussions In this paper, we have introduced CCA-SSG, a conceptually simple, efficient yet effective method for self-supervised representation learning on graphs, based on the idea of Canonical Correlation Analysis. Compared with contrastive methods, our model does not require additional components except random augmentations and a GNN encoder, whose effectiveness is justified in experiments. Limitations of the work. Despite the theoretical grounds and the promising experimental justifications, our method would suffer from several limitations. 1) The objective Eq. 
(5) is essentially performing dimension reduction, while SSL approach usually requires a large embedding dimension. As a result, our method might not work well on datasets where input data does not have a large feature dimension. 2) Like other augmentation based methods, CCA-SSG highly relies on a high-quality, informative and especially, label-invariant augmentations. However, the augmentations used in our model might not perfectly meet these requirements, and it remains an open problem how to generate informative graph augmentations that have non-negative impacts on the downstream tasks. Potential negative societal impacts. This work explores a simple pipeline for representation learning without large amount of labeled data. However, in industry there are many career workers whose responsibility is to label or annotate data. The proposed method might reduce the need for labeling data manually, and thus makes a few individuals unemployed (especially for developing countries and remote areas). Furthermore, our model might be biased, as it tends to pay more attention to the majority and dominant features (shared information across most of the data). The minority group whose features are scare are likely to be downplayed by the algorithm. Acknowledgments and Disclosure of Funding This work was supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941. Qitian Wu and Junchi Yan were partly supported by Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102). We thank Amazon Web Services for sponsoring computation resources for this work.
1. What is the focus and contribution of the paper on self-supervised representation learning? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its simplicity and efficiency? 3. Do you have any concerns about the problem setting or the link between CCA and node representation? 4. How does the reviewer assess the comparisons with other works and the omission of traditional CCA in the analysis? 5. What are the questions regarding the choice of logistic regression for label prediction and the dependence of performance on hyperparameters?
Summary Of The Paper Review
Summary Of The Paper The authors propose CCA-SSG for self-supervised representation learning on graphs. The method is simple and efficient. Experimental results show the high performance of the method on the node classification task. Review The problem setting is not clear. The basic objective of CCA is to extract two sets of correlated features from two different observed data sets, and the link between CCA and node representation is not clear. There is almost no explanation of the details of the datasets in the experimental section. Does each dataset have two different observations? The authors compare the performance of many methods, but traditional CCA is missing. Can traditional CCA not be applied to the node representation learning problem? The title of this paper is "From Canonical Correlation Analysis to Self-supervised Graph Neural Networks"; it would be better to show the performance of CCA as well. For the label prediction, why did the authors use logistic regression? From Table 2, unsupervised methods tend to work better than supervised methods. Is that not strange? The performance is heavily dependent on lambda and the embedding dimension. How can appropriate parameters be determined in practice? Thank you for the answers. I have raised my score.
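One way the "traditional CCA" comparison requested above could be set up is sketched below: scikit-learn's linear CCA is fit on two randomly augmented copies of the raw node features, ignoring graph structure, and the resulting canonical variates are used as the representation. This is purely a hypothetical baseline construction for illustration; the feature matrix, masking rate and component count are all assumed, and no such experiment appears in the paper.

import numpy as np
from sklearn.cross_decomposition import CCA

def mask_features(X, p_mask=0.2, rng=None):
    # the same kind of feature-masking augmentation as in the paper's pipeline,
    # applied directly to the raw feature matrix
    if rng is None:
        rng = np.random.default_rng()
    mask = (rng.random(X.shape[1]) >= p_mask).astype(X.dtype)
    return X * mask

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))      # stand-in for a node feature matrix (assumed)

X1 = mask_features(X, rng=rng)       # two augmented "views" of the same nodes
X2 = mask_features(X, rng=rng)

cca = CCA(n_components=16)           # classical linear CCA, structure-agnostic
cca.fit(X1, X2)
Z1, Z2 = cca.transform(X1, X2)       # canonical variates; Z1 could feed a logistic-regression probe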
NIPS
Title From Canonical Correlation Analysis to Self-supervised Graph Neural Networks Abstract We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data. It follows the previous methods that generate two views of an input graph through data augmentation. However, unlike contrastive methods that focus on instance-level discrimination, we optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis. Compared with other works, our approach requires none of the parameterized mutual information estimator, additional projector, asymmetric structures, and most importantly, negative samples which can be costly. We show that the new objective essentially 1) aims at discarding augmentation-variant information by learning invariant representations, and 2) can prevent degenerated solutions by decorrelating features in different dimensions. Our theoretical analysis further provides an understanding for the new objective which can be equivalently seen as an instantiation of the Information Bottleneck Principle under the self-supervised setting. Despite its simplicity, our method performs competitively on seven public graph datasets. The code is available at: https://github.com/hengruizhang98/CCA-SSG. 1 Introduction Self-supervised learning (SSL) has been a promising paradigm for learning useful representations without costly labels [7, 46, 5]. In general, it learns representations via a proxy objective between inputs and self-defined signals, among which contrastive methods [46, 40, 16, 5, 12] have achieved impressive performance on learning image representations by maximizing the mutual information of two views (or augmentations) of the same input. Such methods can be interpreted as a discrimination of a joint distribution (positive pairs) from the product of two marginal ones (negative pairs) [50]. Inspired by the success of contrastive learning in vision [17, 46, 40, 5, 16, 12, 6], similar methods have been adapted to learning graph neural networks [48, 15, 33, 57, 58]. Although these models have achieved impressive performance, they require complex designs and architectures. For example, DGI [48] and MVGRL [15] rely on a parameterized mutual information estimator to discriminate positive node-graph pairs from negative ones; GRACE [57] and GCA [58] harness an additional MLP-projector to guarantee sufficient capacity. Moreover, negative pairs sampled or constructed from data often play an indispensable role in providing effective contrastive signals and have a large impact on performance. Selecting proper negative samples is often nontrivial for graph-structured data, not to mention the extra storage cost for prohibitively large graphs. BGRL [39] is a recent endeavor on ⇤This work was done during the author’s internship at AWS Shanghai AI Lab. †Corresponding author. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). targeting a negative-sample-free approach for GNN learning through asymmetric architectures [12, 6]. However, it requires additional components, e.g., an exponential moving average (EMA) and StopGradient, to empirically avoid degenerated solutions, leading to a more intricate architecture. Deviating from the large body of previous works on contrastive learning, in this paper we take a new perspective to address SSL on graphs. 
We introduce Canonical Correlation Analysis inspired Self-Supervised Learning on Graphs (CCA-SSG), a simple yet effective approach that opens the way to a new SSL objective and frees the model from intricate designs. It follows the common practice of prior arts, generating two views of an input graph through random augmentation and acquiring node representations through a shared GNN encoder. Differently, we propose to harness a non-contrastive and non-discriminative feature-level objective, which is inspired by the well-studied Canonical Correlation Analysis (CCA) methods [18, 10, 11, 14, 2, 4]. More specifically, the new objective aims at maximizing the correlation between two augmented views of the same input and meanwhile decorrelating different (feature) dimensions of a single view’s representation. We show that the objective 1) essentially pursuits discarding augmentation-variant information and preserving augmentation-invariant information, and 2) can prevent dimensional collapse [19] (i.e., different dimensions capture the same information) in nature. Furthermore, our theoretical analysis sheds more lights that under mild assumptions, our model is an instantiation of Information Bottleneck Principle [43, 44, 37] under SSL settings [53, 9, 45]. To sum up, as shown in Table 1, our new objective induces a simple and light model without reliance on negative pairs [48, 15, 57, 58], a parameterized mutual information estimator [48, 15], an additional projector or predictor [57, 58, 39] or asymmetric architectures [39, 15]. We provide a thorough evaluation for the model on seven node classification benchmarks. The empirical results demonstrate that despite its simplicity, CCA-SSG can achieve very competitive performance in general and even superior test accuracy in five datasets. It is worth noting that our approach is agnostic to the input data format, which means that it can potentially be applied to other scenarios beyond graph-structured data (such as vision, language, etc.). We leave such a technical extension for future works. Our contributions are as follows: 1) We introduce a non-contrastive and non-discriminative objective for self-supervised learning, which is inspired by Canonical Correlation Analysis methods. It does not rely on negative samples, and can naturally remove the complicated components. Based on it we propose CCA-SSG, a simple yet effective framework for learning node representations without supervision (see Section 3). 2) We theoretically prove that the proposed objective aims at keeping augmentation-invariant information while discarding augmentation-variant one, and possesses an inherent relationship to an embodiment of Information Bottleneck Principle under self-supervised settings (see Section 4). 3) Experimental results show that without complex designs, our method outperforms state-of-the-art self-supervised methods MVGRL [15] and GCA [58] on 5 out of 7 benchmarks. We also provide thorough ablation studies on the effectiveness of the key components of CCA-SSG (see Section 5). 2 Related Works and Background Contrastive Learning on Graphs. Contrastive methods [46, 40, 17, 16, 5, 12] have been shown to be effective for unsupervised learning in vision, which have also been adapted to graphs. Inspired by the local-global mutual information maximization viewpoints [17], DGI [48] and InfoGraph [38] put forward unsupervised schemes for node and graph representation learning, respectively. 
MVGRL [15] generalizes CMC [40] to graph-structured data by introducing graph diffusion [23] to create another view for a graph. GCC [33] adopts InfoNCE loss [46] and MoCo-based negative pool [16] for largescale GNN pretraining. GRACE [57], GCA [58] and GraphCL [52] follow the spirit of SimCLR [5] and learn node/graph representations by directly treating other nodes/graphs as negative samples. BGRL [39] targets a negative-sample-free model, inspired by BYOL [12], on node representation learning. But it still requires complex asymmetric architectures. Feature-level Self-supervised Objectives. The above-mentioned methods all focus on instancelevel contrastive learning. To address their drawbacks, some recent works have been turning to feature-level objectives. For example, Contrastive Clustering [25] regards different feature dimensions as different clusters, thus combining the cluster-level discrimination with instance-level discrimination. W-MSE [8] performs a differentiable whitening operation on learned embeddings, which implicitly scatters data points in embedding space. Barlow Twins [53] borrows the idea of redundancy reduction and adopts a soft decorrelation term that makes the cross-correlation matrix of two views’ representations close to an identity matrix. By contrast, our method is based on the classical Canonical Correlation Analysis, working by correlating the representations of two views from data augmentation and meanwhile decorrelating different feature dimensions of each view’s representation. Canonical Correlation Analysis. CCA is a classical multivariate analysis method, which is first introduced in [18]. For two random variables X 2 Rm and Y 2 Rn, their covariance matrix is ⌃XY = Cov(X,Y ). CCA aims at seeking two vectors a 2 Rm and b 2 Rn such that the correlation ⇢ = corr(a>X, b>Y ) = a >⌃XY bp a>⌃XXa p b>⌃Y Y b is maximized. Formally, the objective is max a,b a>⌃XY b, s.t. a>⌃XXa = b>⌃Y Y b = 1. (1) For multi-dimensional cases, CCA seeks two sets of vectors maximizing their correlation and subjected to the constraint that they are uncorrelated with each other [10]. Later studies apply CCA to multi-view learning with deep models [2, 11, 14], by replacing the linear transformation with neural networks. Concretely, assuming X1, X2 as two views of an input data, it optimizes max ✓1,✓2 Tr P>✓1(X1)P✓2(X2) s.t. P>✓1(X1)P✓1(X1) = P > ✓2(X2)P✓2(X2) = I. (2) where P✓1 and P✓2 are two feedforward neural networks and I is an identity matrix. Despite its preciseness, such computation is really expensive [4]. Fortunately, soft CCA [4] removes the hard decorrelation constraint by adopting the following Lagrangian relaxation: min ✓1,✓2 Ldist (P✓1(X1), P✓2(X2)) + (LSDL(P✓1(X1)) + LSDL(P✓2(X2))) , (3) where Ldist measures correlation between two views’ representations and LSDL (called stochastic decorrelation loss) computes an L1 distance between P✓i(Xi) and an identity matrix, for i = 1, 2. 3 Approach 3.1 Model Framework In this paper we focus on self-supervised node representation learning, where we consider a single graph G = (X,A). X 2 RN⇥F and A 2 RN⇥N denote node features and adjacency matrix respectively. Here N is the number of nodes within the graph and F denotes feature dimension. Our model simply consists of three parts: 1) a random graph augmentation generator T . 2) a GNNbased graph encoder f✓ where ✓ denotes its parameters. 3) a novel feature-level objective function based on Canonical Correlation Analysis. Fig. 1 is an illustration of the proposed model. 
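The encoder f_theta is described above only abstractly as a GNN (the experiments later use a standard two-layer GCN). As a reference point, here is a minimal dense-adjacency two-layer GCN encoder in plain PyTorch, following the usual symmetric-normalization propagation rule; the layer sizes and class name are our own choices and make no claim of matching the paper's exact implementation.

import torch
import torch.nn as nn

class GCNEncoder(nn.Module):
    # two-layer GCN on a dense adjacency matrix: H' = ReLU(A_hat @ H @ W)
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, out_dim, bias=False)

    @staticmethod
    def normalize_adj(A):
        # symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
        A_hat = A + torch.eye(A.size(0), device=A.device)
        d_inv_sqrt = A_hat.sum(1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)

    def forward(self, X, A):
        A_hat = self.normalize_adj(A)
        H = torch.relu(A_hat @ self.w1(X))
        return A_hat @ self.w2(H)

# example: N = 4 nodes, F = 8 input features, D = 16 output dimensions
X = torch.randn(4, 8)
A = torch.tensor([[0., 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
Z = GCNEncoder(8, 32, 16)(X, A)   # node embeddings, shape (4, 16)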
Algorithm 1: PyTorch-style code for CCA-SSG # f: encoder network # lambda: trade-off # D: embedding dimension # g: input graph # feat: node features # generate two views through random augmentation g1, feat1 = augment(g, feat) g2, feat2 = augment(g, feat) z1 = f(g1, feat1) # embedding of the 1st view z2 = f(g2, feat2) # embedding of the 2st view # batch normalization z1_norm = ((z1 - z1.mean(0)) / z1.std(0))/ sqrt(N) z2_norm = ((z2 - z2.mean(0)) / z2.std(0))/ sqrt(N) # covariance matrix of each view c1 = torch.mm(z1_norm.T(), z1_norm) c2 = torch.mm(z2_norm.T(), z2_norm) iden = torch.eye(D) loss_inv = (z1_norm - z2_norm).pow(2).sum() loss_dec_1 = (c1 - iden).pow(2).sum() loss_dec_2 = (c2 - iden).pow(2).sum() loss_dec = loss_dec_1 + loss_dec_2 loss = loss_inv + lambda * loss_dec Graph augmentations. We consider the standard pipeline for random graph augmentation that has been commonly used in previous works [57, 39]. To be specific, we harness two ways for augmentation: edge dropping and node feature masking. Edge dropping randomly drops a fraction of edges from the original graph, while node feature masking randomly masks a fraction of features for all the nodes. In this way, T is composed of all the possible graph transformation operations and each t ⇠ T denotes a specific graph transformation for graph G. Note that we use commonly adopted augmentation methods to stay our focus on the design of objective function and conduct fair comparison with existing approaches. More complicated random augmentations [52, 58] can also be readily plugged into our model. Details for the used augmentation functions are in Appendix E. Training. In each training iteration, we first randomly sample two graph transformations tA and tB from T , and then generate two views G̃A = (X̃A, ÃA) and G̃B = (X̃B , ÃB) according to the transformations. The two views are subsequently fed into a shared GNN encoder to generate the node embeddings of the two views: ZA = f✓(X̃A, ÃA), ZB = f✓(X̃B , ÃB), where ZA,ZB 2 RN⇥D and D denotes embedding dimension. We further normalize the node embeddings along instance dimension so that each feature dimension has a 0-mean and 1/ p N -standard deviation distribution: Z̃ = Z µ(Z) (Z) ⇤ p N (4) The normalized Z̃A, Z̃B will be used to compute a feature-level objective in Section 3.2. To help better understand the proposed framework, we provide the PyTorch-style pseudocode for training CCA-SSG in Algorithm 1. Inference. To generate node embeddings for downstream tasks, we put the original graph G = (X,A) into the trained graph neural network f✓ and obtain node embeddings Z = f✓(X,A). 3.2 Learning Objective Canonical Correlation Analysis has shown its great power in multi-view learning like instance recognition [4]. However, it still remains unexplored to leverage CCA for self-supervised learning. Note that in SSL, one generates two sets of data from the same input through transformation or random data augmentation, which could be regraded as two views of the input data. This inspires us to introduce the following objective for self-supervised representation learning: L = Z̃A Z̃B 2 F| {z } invariance term + ✓ Z̃>AZ̃A I 2 F + Z̃>BZ̃B I 2 F ◆ | {z } decorrelation term (5) where is a non-negative hyperparameter trading off two terms. Note that minimizing the invariance term is essentially maximizing the correlation between two views as their representations are already normalized. 
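Putting the pieces of this section together, a minimal end-to-end training loop might look as follows. To keep the snippet self-contained and runnable, a linear map and a feature-masking function stand in for the GNN encoder and the graph augmentations, and the optimizer settings are placeholders rather than the paper's tuned hyperparameters (those are listed in Appendix E).

import torch
import torch.nn as nn

def cca_ssg_loss(z1, z2, lam=5e-4):
    # Eq. (5): invariance term + lam * decorrelation term
    N, D = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) * N ** 0.5)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) * N ** 0.5)
    iden = torch.eye(D)
    dec = (z1.T @ z1 - iden).pow(2).sum() + (z2.T @ z2 - iden).pow(2).sum()
    return (z1 - z2).pow(2).sum() + lam * dec

# stand-ins so the loop runs end to end: a linear "encoder" on raw features and a
# feature-masking "augmentation"; in the real model these are a GNN and graph augmentations
encoder = nn.Linear(8, 16)
feat = torch.randn(100, 8)
augment = lambda x: x * (torch.rand(x.size(1)) >= 0.2).float()

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for epoch in range(100):
    z1, z2 = encoder(augment(feat)), encoder(augment(feat))
    loss = cca_ssg_loss(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()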
In SSL, as the two augmented views come randomly from the same distribution, we can adopt one encoder f✓ that is shared across two branches and seek for a regularization that encourages different feature dimensions to capture distinct semantics via the decorrelation term. We next provide a variance-covariance perspective to the new objective, following similar lines of reasoning in [41, 42]. Assume that input data come from a distribution x ⇠ p(x) and s is a view of x through random augmentation s ⇠ paug(·|x). Denote zs as the representation of s, then minimizing the invariance term, by expectation, is to minimize the variance of the normalized representation z̃s, conditioned on x. Also, minimizing the decorrelation term is to push the off-diagonal elements of the covariance matrix (given by two z̃s’s) close to 0. Formally, we have Linv = Z̃A Z̃B 2 F = NX i=1 DX k=1 (z̃Ai,j z̃Bi,j)2 ⇠= Ex " DX k=1 Vs|x[z̃s,k] # ⇤ 2N, (6) Ldec = Z̃>S Z̃S I 2 F = kCovs[z̃] Ik2F ⇠= X i 6=j ⇢zsi,j 2 , for Z̃S 2 {Z̃A, Z̃B}, (7) where ⇢ is the Pearson correlation coefficient. 3.3 Advantages over Contrastive Methods In this subsection we provide a systematic comparison with previous self-supervised methods for node representation learning, including DGI [48], MVGRL [15], GRACE [57], GCA [58] and BGRL [39], and highlight the merits of CCA-SSG. A quick overview is presented in Table 1. No reliance on negative samples. Most of previous works highly rely on negative pairs to avoid collapse or interchangeable, trivial/degenerated solutions [48, 15, 57, 58]. E.g., DGI and MVGRL generate negative examples by corrupting the graph structure severely, and GRACE/GCA treats all the other nodes within a graph as negative examples. However, for self-supervised learning on graphs, it is non-trivial to construct informative negative examples since nodes are structurally connected, and selecting negative examples in an arbitrary manner may lead to large variance for stochastic gradients and slow training convergence [51]. The recently proposed BGRL model adopts asymmetric encoder architectures for SSL on graphs without the use of negative samples. However, though BGRL could avoid collapse empirically, it still remains as an open problem concerning its theoretical guarantee for preventing trivial solutions [41]. Compared with these methods, our model does not rely on negative pairs and asymmetric encoders. The feature decorrelation term can naturally prevent trivial solutions caused by the invariance term. We discuss the collapse issue detailedly in Appendix B. No MI estimator, projector network nor asymmetric architectures. Most previous works rely on additional components besides the GNN encoder to estimate some score functions in final objectives. DGI and MVGRL require a parameterized estimator to approximate mutual information between two views, and GRACE leverages a MLP projector followed by an InfoNCE estimator. BGRL harnesses asymmetric encoder architecture which consists of EMA (Exponential Moving Average), Stop-Gradient and an additional projector. MVGRL also induces asymmetric architectures as it adopts two different GNNs for the input graph and the diffusion graph respectively. In contrast, our approach requires no additional components except a single GNN encoder. Better efficiency and scalability to large graphs. Consider a graph with N nodes. DGI and MVGRL contrast node embeddings with graph embedding, which would require O(N) space cost. 
GRACE treats two views of the same node as positive pairs and treat views of different nodes as negative pairs, which would take O(N2) space. BGRL focuses only on positive pairs, which will also take O(N) space. By contrast, our method works on feature dimension. If we embed each node into a D-dimensional vector, the computation of the loss function would require O(D2) space. This indicates that the memory cost does not grow consistently as the size of graph increases. As a result, our method is promising for handling large-scale graphs without prohibitively large space costs. 4 Theoretical Insights with Connection to Information Theory In this section we provide some analysis of the proposed objective function: 1) Interpretation of the loss function with entropy and mutual information. 2) The connection between the proposed objective and the Information Bottleneck principle. 3) Why the learned representations would be informative to downstream tasks. The proofs of propositions, theorems and corollaries are in Appendix D. Notations. Denote the random variable of input data as X and the downstream task as T (it could be the label Y if the downstream task is classification). Note that in SSL, we have no access to T in training and here we introduce the notation for our analysis. Define S as the self-supervised signal (i.e., an augmented view of X), and S shares the same space as X . Our model learns a representation for the input, denoted by ZX and its views, denoted by ZS . ZX = f✓(X), ZS = f✓(S), f✓(·) is a encoder shared by the original data and its views, which is parameterized by ✓. The target of representation learning is to learn a optimal encoder parameter ✓. Furthermore, for random variable A,B,C, we use I(A,B) to denote the mutual information between A and B, I(A,B|C) to denote conditional mutual information of A and B on a given C, H(A) for the entropy, and H(A|B) for conditional entropy. The proofs of propositions, theorems and corollaries are in Appendix D. 4.1 An Entropy and Mutual Information Interpretation of the Objective We first introduce an assumption about the distributions of P (ZS) and P (ZS |X). Assumption 1. (Gaussian assumption of P (ZS |X) and P (ZS)): P (ZS |X) = N (µX ,⌃X), P (ZS) = N (µ,⌃). (8) With Assumption 1, we can arrive at the following propositions: Proposition 1. In expectation, minimizing Eq. (6) is equivalent to minimizing the entropy of ZS conditioned on input X , i.e., min ✓ Linv ⇠= min ✓ H(ZS |X). (9) Proposition 2. Minimizing Eq. (7) is equivalent to maximizing the entropy of ZS , i.e., min ✓ Ldec ⇠= max ✓ H(ZS). (10) The two propositions unveil the effects of two terms in our objective. Combining two propositions, we can further interpret Eq. (5) from an information-theoretic perspective. Theorem 1. By optimizing Eq (5), we maximize the mutual information between the augmented view’s embedding ZS and the input data X , and minimize the mutual information between ZS and the view itself S, conditioned on the input data X . Formally we have min ✓ L ) max ✓ I(ZS , X) and min ✓ I(ZS , S|X). (11) The proof is based on the facts I(ZS , X) = H(ZS) H(ZS |X) and I(ZS , S|X) = H(ZS |X) + H(ZS |S) = H(ZS |X). Theorem 1 indicates that our objective Eq. (5) learns representations that maximize the information of the input data, i.e., I(ZS , X), and meanwhile minimize the lost information during augmentation, i.e., I(ZS , S|X). 
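Returning to the space-complexity comparison at the start of this passage, a back-of-the-envelope calculation makes the O(N^2)-versus-O(D^2) gap tangible; the particular values of N and D below are arbitrary examples, not measurements from the paper.

N, D = 100000, 512

# instance-level contrastive objectives materialize an N x N similarity matrix
bytes_instance = N * N * 4          # float32
# the feature-level objective only needs a D x D covariance matrix per view
bytes_feature = 2 * D * D * 4       # float32

print(f"instance-level: {bytes_instance / 1e9:.1f} GB")
print(f"feature-level:  {bytes_feature / 1e6:.2f} MB")

Even for a modest graph, the pairwise similarity matrix dwarfs the D x D covariance matrices that the feature-level objective needs.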
4.2 Connection with the Information Bottleneck Principle

The analysis in Section 4.1 enables us to further build a connection between our objective Eq. (5) and the well-studied Information Bottleneck principle [43, 44, 37, 1] under SSL settings. Recall that the supervised Information Bottleneck (IB) is defined as follows:

Definition 1. The supervised IB aims at maximizing an Information Bottleneck Lagrangian:
IB_sup = I(Y, Z_X) − β I(X, Z_X), where β > 0. (12)

As we can see, IB_sup attempts to maximize the information between the data representation Z_X and its corresponding label Y, and concurrently minimize the information between Z_X and the input data X (i.e., exploiting compression of Z_X from X). The intuition of the IB principle is that Z_X is expected to contain only the information that is useful for predicting Y. Several recent works [9, 45, 53] propose various forms of IB under self-supervised settings. The most relevant one is the Self-supervised Information Bottleneck:

Definition 2 (Self-supervised Information Bottleneck [53]). The self-supervised IB aims at maximizing the following Lagrangian:
IB_ssl = I(X, Z_S) − β I(S, Z_S), where β > 0. (13)

Intuitively, IB_ssl posits that a desirable representation is expected to be informative about augmentation-invariant features, and to be a maximally compressed representation of the input. Our objective Eq. (5) is essentially an embodiment of IB_ssl:

Theorem 2. Assume 0 < λ ≤ 1; then by minimizing Eq. (5), the self-supervised Information Bottleneck objective is maximized. Formally:
min_θ L ⟹ max_θ IB_ssl. (14)

Theorem 2 shows that Eq. (5) implicitly follows the same spirit as the IB principle under self-supervised settings. As further enlightenment, we can relate Eq. (5) to the multi-view Information Bottleneck [9] and the minimal and sufficient representations for self-supervision [45]:

Corollary 1. Let X_1 = S, X_2 = X and assume 0 < λ ≤ 1; then minimizing Eq. (5) is equivalent to minimizing the Multi-view Information Bottleneck loss in [9]:
L_MIB = I(Z_1, X_1|X_2) − β I(X_2, Z_1), where 0 < β ≤ 1. (15)

Corollary 2. When the data augmentation process is reversible, minimizing Eq. (5) is equivalent to learning the Minimal and Sufficient Representations for Self-supervision in [45]:
Z_X^ssl = argmax_{Z_X} I(Z_X, S), Z_X^ssl_min = argmin_{Z_X} H(Z_X|S) s.t. I(Z_X, S) is maximized. (16)

4.3 Influence on Downstream Tasks

We have provided a principled understanding of our new objective. Next, we discuss its effect on downstream tasks T. The rationale of data augmentations in SSL is rooted in a conjecture that an ideal data augmentation approach would not change the information related to the label. We formulate this hypothesis as a building block for the analysis of downstream tasks [36, 9].

Assumption 2 (Task-relevant information and data augmentation). All the task-relevant information is shared across the input data X and its augmentations S, i.e., I(X,T) = I(S,T) = I(X,S,T), or equivalently, I(X,T|S) = I(S,T|X) = 0.

This indicates that all the task-relevant information is contained in augmentation-invariant features. We proceed to derive the following theorem, which reveals the efficacy of the representations learned by our objective with respect to downstream tasks.

Theorem 3 (Task-relevant/irrelevant information). By optimizing Eq. (5), the task-relevant information I(Z_S, T) is maximized, and the task-irrelevant information H(Z_S|T) is minimized. Formally,
min_θ L ⟹ max_θ I(Z_S, T) and min_θ H(Z_S|T). (17)
Therefore, the learned representation Z_S is expected to contain minimal and sufficient information about downstream tasks [45, 9], which further illuminates why the embeddings given by SSL approaches achieve superior performance on various downstream tasks.

5 Experiments

We assess the quality of representations after self-supervised pretraining on seven node classification benchmarks: Cora, Citeseer, Pubmed, Coauthor-CS, Coauthor-Physics, Amazon-Computers and Amazon-Photo. We adopt the public splits for Cora, Citeseer and Pubmed, and a 1:1:9 training/validation/testing split for the other four datasets. Details of the datasets are in Appendix E.

Evaluation protocol. We follow the linear evaluation scheme introduced in [48]: i) we first train the model on all the nodes in a graph without supervision, by optimizing the objective in Eq. (5); ii) after that, we freeze the parameters of the encoder and obtain all the nodes' embeddings, which are subsequently fed into a linear classifier (i.e., a logistic regression model) to generate a predicted label for each node. In the second stage, only nodes in the training set are used for training the classifier, and we report the classification accuracy on testing nodes. We implement the model with PyTorch. All experiments are conducted on an NVIDIA V100 GPU with 16 GB memory. We use the Adam optimizer [20] for both stages. The graph encoder f_θ is specified as a standard two-layer GCN model [22] for all the datasets except Citeseer (where we empirically find that a one-layer GCN is better). We report the mean accuracy with standard deviation over 20 random initializations (on Coauthor-CS, Coauthor-Physics, Amazon-Computers and Amazon-Photo, the split is also randomly generated). Detailed hyperparameter settings are in Appendix E.

5.1 Comparison with Peer Methods

We compare CCA-SSG with classical unsupervised models, DeepWalk [32] and GAE [21], and self-supervised models, DGI [48], MVGRL [15], GRACE [57] and GCA [58]. We also compare with supervised learning models, including MLP, Label Propagation (LP) [56], and the supervised baselines GCN [22] and GAT [47].³ The results of the baselines are quoted from [15, 57, 58] if not specified otherwise. We report the node classification results on the citation networks and on the other datasets in Table 2 and Table 3, respectively. As we can see, CCA-SSG outperforms both the unsupervised competitors and the fully supervised baselines on Cora and Pubmed, despite its simple architecture. On Citeseer, CCA-SSG achieves results competitive with the most powerful baseline, MVGRL. On the four larger benchmarks, CCA-SSG also achieves the best performance on all datasets except Coauthor-Physics. It is worth mentioning that we empirically find that on Coauthor-CS a pure 2-layer MLP encoder is better than GNN models. This might be because the graph-structured information is much less informative than the node features, presumably providing harmful signals for classification (in fact, on Coauthor-CS, linear models using merely node features can greatly outperform DeepWalk/DeepWalk+features).

³BGRL [39] is not compared as its source code has not been released.

5.2 Ablation Study and Scalability Comparison

Effectiveness of invariance/decorrelation terms. We alter our loss by removing the invariance/decorrelation term respectively to study the effects of each component, with results reported in Table 4. We find that only using the invariance term leads to merely a performance drop instead of completely collapsed solutions.
This is because node embeddings are normalized along the instance dimension to have a zero mean and a fixed standard deviation, so the worst solution is no worse than dimensional collapse (i.e., all the embeddings lie on a line; our decorrelation term helps to prevent it) instead of complete collapse (i.e., all the embeddings degenerate into a single point). As expected, only optimizing the decorrelation term leads to poor results, as the model learns nothing meaningful but a disentangled representation. In Appendix B we discuss the relationship between complete and dimensional collapse, when the two cases happen and how to avoid them.

Effect of decorrelation intensity. We study how the intensity of feature decorrelation improves/degrades the performance by increasing the trade-off hyperparameter λ. Fig. 2 shows test accuracy w.r.t. different λ's on Cora, Citeseer and Pubmed. The performance benefits from a proper selection of λ (from 0.0005 to 0.001 in our experiments). When λ is too small, the decorrelation term does not work; if it is too large, the invariance term would be neglected, leading to serious performance degradation. An interesting finding is that even when λ is very small or even equal to 0 (w/o L_dec in Table 4), the test accuracy on Citeseer does not degrade as much as that on Cora and Pubmed. The reason is that the node embeddings of Citeseer are already highly uncorrelated even without the decorrelation term. Appendix F visualizes the correlation matrices with and without decorrelation.

Effect of embedding dimension. Fig. 3 shows the effect of the embedding dimension. Similar to contrastive methods [48, 15, 57, 58], CCA-SSG benefits from a large embedding dimension (compared with supervised learning), while the optimal embedding dimension of CCA-SSG (512 on most benchmarks) is a bit larger than that of other methods (usually 128 or 256). Yet, we notice a performance drop as the embedding dimension increases further. We conjecture that, since CCA is essentially a dimension-reduction method, the ideal embedding dimension ought to be smaller than the dimension of the input. Hence we do not apply it on well-compressed datasets (e.g., ogbn-arXiv and ogbn-products).

Scalability comparison. Table 5 compares model size, training time (till the epoch that gives the highest evaluation accuracy) and memory cost of CCA-SSG with other methods, on Cora, Pubmed and Amazon-Computers. Overall, our method has fewer parameters, shorter training time, and lower memory cost than MVGRL, GRACE and GCA in most cases. DGI is another simple and efficient model, but it yields much poorer performance. The results show that despite its simplicity and efficiency, our method achieves even better (or competitive) performance.

Table 4: Ablation study of node classification accuracy (%) on the key components of CCA-SSG.
Variants     Cora   Citeseer   Pubmed
Baseline     84.2   73.1       81.6
w/o L_dec    79.1   72.2       75.3
w/o L_inv    40.1   28.9       46.5

Figure 2: Effect of λ. Figure 3: Effect of D.

6 Conclusion and Discussions

In this paper, we have introduced CCA-SSG, a conceptually simple, efficient yet effective method for self-supervised representation learning on graphs, based on the idea of Canonical Correlation Analysis. Compared with contrastive methods, our model does not require additional components except random augmentations and a GNN encoder, whose effectiveness is justified in experiments.

Limitations of the work. Despite the theoretical grounds and the promising experimental justifications, our method suffers from several limitations.
1) The objective Eq. (5) is essentially performing dimension reduction, while SSL approaches usually require a large embedding dimension. As a result, our method might not work well on datasets where the input data does not have a large feature dimension. 2) Like other augmentation-based methods, CCA-SSG relies heavily on high-quality, informative and, especially, label-invariant augmentations. However, the augmentations used in our model might not perfectly meet these requirements, and it remains an open problem how to generate informative graph augmentations that have non-negative impacts on the downstream tasks.

Potential negative societal impacts. This work explores a simple pipeline for representation learning without a large amount of labeled data. However, in industry there are many career workers whose responsibility is to label or annotate data. The proposed method might reduce the need for labeling data manually, and thus leave a few individuals unemployed (especially in developing countries and remote areas). Furthermore, our model might be biased, as it tends to pay more attention to the majority and to dominant features (shared information across most of the data). Minority groups whose features are scarce are likely to be downplayed by the algorithm.

Acknowledgments and Disclosure of Funding

This work was supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941. Qitian Wu and Junchi Yan were partly supported by Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102). We thank Amazon Web Services for sponsoring computation resources for this work.
1. What is the focus of the paper regarding graph neural networks? 2. What are the strengths of the proposed approach, particularly in its novelty and theoretical connections? 3. Do you have any concerns or questions about the method's implementation, such as the distribution of s_1 and s_2? 4. How do the hyperparameters of edge dropping and feature masking affect the performance of CCA-SSG?
Summary Of The Paper Review
Summary Of The Paper This paper focuses on self-supervised graph neural networks, aiming at training graph neural networks without labeled data, and an approach called the CCA-SSG is proposed. CCA-SSG constructs two views of the given graph through edge dropping and node feature masking, and further uses CCA for learning node representations from the two views. The idea of formalizing unsupervised node representation learning as a multi-view learning task looks novel to me. Also, CCA-SSG has intuitive connections with the information bottleneck principle, which is attractive. The authors do experiment on multiple datasets, and CCA-SSG outperforms many competitive baseline methods. Strengths: Important problem. Principled method with good theoretical connections. Strong results. Review Below are some detailed comments and questions: At line 553 of the appendix, it is said that s_1 and s_2 come from the same distribution p_{aug}(x). But from my understanding of the proposed method, s_1 and s_2 are graphs from two different views (i.e., s_1 from edge dropping and s_2 from node feature masking respectively). If this is the case, s_1 and s_2 should come from different distributions. Is it correct? CCA-SSG relies on edge dropping and node feature masking to augment the original graph, but the hyperparameters of edge dropping rate and feature masking rate are not studied in the paper. How will the results of CCA-SSG vary if the hyperparameters are changed?
NIPS
Title Policy Gradient With Serial Markov Chain Reasoning

Abstract We introduce a new framework that performs decision-making in reinforcement learning (RL) as an iterative reasoning process. We model agent behavior as the steady-state distribution of a parameterized reasoning Markov chain (RMC), optimized with a new tractable estimate of the policy gradient. We perform action selection by simulating the RMC for enough reasoning steps to approach its steady-state distribution. We show our framework has several useful properties that are inherently missing from traditional RL. For instance, it allows agent behavior to approximate any continuous distribution over actions by parameterizing the RMC with a simple Gaussian transition function. Moreover, the number of reasoning steps to reach convergence can scale adaptively with the difficulty of each action selection decision and can be accelerated by re-using past solutions. Our resulting algorithm achieves state-of-the-art performance in popular Mujoco and DeepMind Control benchmarks, both for proprioceptive and pixel-based tasks.

1 Introduction

Reinforcement learning (RL) has the potential to provide a general and effective solution to many modern challenges. Recently, this class of methods achieved numerous impressive milestones in different problem domains, such as games [1–3], robotics [4–6], and other meaningful real-world applications [7–9]. However, all these achievements relied on massive amounts of data, controlled environments, and domain-specific tuning. These commonalities highlight some of the current practical limitations that prevent RL from being widely applicable [10]. In the deep RL framework, practitioners train agents with the end goal of obtaining optimal behavior. Traditionally, agent behavior is modeled with feed-forward policies regressing from any state to a corresponding distribution over actions. Such a formulation yields practical training objectives in both off-policy [11–13] and on-policy settings [14–16]. However, we identify three inherent properties of this rigid representation of behavior that could considerably impact expressivity and efficiency in continuous control tasks. First, agent behavior is restricted to a class of tractable distributions, which might fail to capture the necessary complexity and multi-modality of a task. Second, the policy performs a fixed reasoning process with a feed-forward computation, whose potency cannot adapt to the varying complexity of individual action selection problems. Third, decision-making is performed every time from scratch, without re-using any past information that might still inform and facilitate the current action selection problem. Unlike RL policies, human reasoning does not appear to follow a rigid feed-forward structure. In fact, a range of popular psychological models characterize human decision-making as a sequential process with adaptive temporal dynamics [17–20]. Many of these models have found empirical groundings in neuroscience [21–24] and have been shown to effectively complement RL for capturing human behavior in experimental settings [25, 26]. Partly inspired by these works, we attempt to reframe the deep RL framework by making use of a similarly flexible model of agent behavior, in order to counteract its aforementioned limitations. We introduce serial Markov chain reasoning, a new powerful framework for representing agent behavior.
Our framework treats decision-making as an adaptive reasoning process, where the agent sequentially updates its beliefs regarding which action to execute over a series of reasoning steps. We model this process by replacing the traditional policy with a parameterized transition function, which defines a reasoning Markov chain (RMC). The steady-state distribution of the RMC represents the distribution of agent behavior after performing enough reasoning for decision-making. Our framework naturally overcomes the aforementioned limitations of traditional RL. In particular, we show that our agent's behavior can approximate any arbitrary distribution even with simple parameterized transition functions. Moreover, the required number of reasoning steps adaptively scales with the difficulty of individual action selection problems and can be accelerated by re-using samples from similar RMCs. To optimize behavior modeled by the steady-state distribution of the RMC, we derive a new tractable method to estimate the policy gradient. Hence, we implement a new effective off-policy algorithm for maximum entropy reinforcement learning (MaxEnt RL) [27, 28], named Steady-State Policy Gradient (SSPG). Using SSPG, we empirically validate the conceptual properties of our framework over traditional MaxEnt RL. Moreover, we obtain state-of-the-art results for popular benchmarks from the OpenAI Gym Mujoco suite [29] and the DeepMind Control suite from pixels [30]. In summary, this work makes the following key contributions: 1. We propose serial Markov chain reasoning, a framework to represent agent behavior that can overcome expressivity and efficiency limitations inherent to traditional reinforcement learning. 2. Based on our framework, we derive SSPG, a new tractable off-policy algorithm for MaxEnt RL. 3. We provide experimental results validating theorized properties of serial Markov chain reasoning and displaying state-of-the-art performance on the Mujoco and DeepMind Control suites.

2 Background

2.1 Reinforcement learning problem

We consider the classical formulation of the reinforcement learning (RL) problem setting as a Markov Decision Process (MDP) [31], defined by the tuple (S, A, P, p_0, r, γ). In particular, at each discrete time step t the agent experiences a state from the environment's state space, s_t ∈ S, based on which it selects an action from its own action space, a_t ∈ A. In continuous control problems (considered in this work), the action space is typically a compact subset of a Euclidean space R^dim(A). The evolution of the environment's state through time is determined by the transition dynamics and initial state distribution, P and p_0. Lastly, the reward function r represents the immediate level of progress for any state-action tuple towards solving a target task. The agent's behavior is represented by a state-conditioned parameterized policy distribution π_θ. Hence, its interaction with the environment produces trajectories, τ = (s_0, a_0, s_1, ..., s_T, a_T), according to the factored joint distribution p_{π_θ}(τ) = p_0(s_0) Π_{t=0}^T π_θ(a_t|s_t) P(s_{t+1}|s_t, a_t). The RL objective is to optimize agent behavior so as to maximize the discounted sum of expected future rewards: argmax_θ E_{p_{π_θ}(τ)}[ Σ_{t=0}^T γ^t r(s_t, a_t) ].

2.2 Maximum entropy reinforcement learning and inference

Maximum entropy reinforcement learning (MaxEnt RL) [32] considers optimizing agent behavior for a different objective that naturally arises when formulating action selection as an inference problem [33–36].
Following Levine [28], we consider modeling a set of binary optimality random variables with realization probability proportional to the exponentiated rewards scaled by the temperature α, p(O_t|s_t, a_t) ∝ exp((1/α) r(s_t, a_t)). The goal of MaxEnt RL is to minimize the KL-divergence between trajectories stemming from agent behavior, p_{π_θ}(τ), and the inferred optimal behavior, p(τ|O_{0:T}):

D_KL(p_{π_θ}(τ) ‖ p(τ|O_{0:T})) = E_{p_{π_θ}(τ)}[ log( p_0(s_0) Π_{t=0}^T π_θ(a_t|s_t) P(s_{t+1}|s_t, a_t) ) − log( p_0(s_0) Π_{t=0}^T exp((1/α) r(s_t, a_t)) P(s_{t+1}|s_t, a_t) ) ] = −(1/α) E_{p_{π_θ}(τ)}[ Σ_{t=0}^T r(s_t, a_t) + α H(π_θ(·|s_t)) ]. (1)

The resulting entropy-regularized objective introduces an explicit trade-off between exploitation and exploration, regulated by the temperature parameter α scaling the policy's entropy. An effective choice to optimize this objective is to learn an auxiliary parameterized soft Q-function [37]:

Q^π(s_t, a_t) = E_{p_{π_θ}(τ|s_t, a_t)}[ r(s_t, a_t) + Σ_{t'=t+1}^T γ^{t'−t} ( r(s_{t'}, a_{t'}) + α H(π(·|s_{t'})) ) ]. (2)

Given some state, Q^π(s, ·) represents an energy function based on the expected immediate reward and the agent's future likelihood of optimality from performing any action. Thus, we can locally optimize the MaxEnt objective by reducing the KL-divergence between π and the canonical distribution of its current soft Q-function. This is equivalent to maximizing the expected soft Q-function's value corrected by the policy's entropy, resembling a regularized policy gradient objective [11, 12]:

argmax_θ E_{s, a∼π_θ(·|s)}[ Q^π(s, a) + α H(π_θ(·|s)) ]. (3)

The policy is usually modeled with a neural network outputting the parameters of some tractable distribution, such as a factorized Gaussian, π_θ(·|s) = N(μ_θ(s); Σ_θ(s)). This practice makes it possible to efficiently approximate the gradients from Eq. 3 via the reparameterization trick [38]. We consider the off-policy RL setting, where the agent alternates learning with storing new experience in a data buffer, D. We refer the reader to Haarnoja et al. [13, 39] for further derivations and practical details.

3 Policy Gradient with serial reasoning

3.1 Reasoning as a Markov chain

We introduce Serial Markov Chain Reasoning, a new framework to model agent behavior, based on conceptualizing action selection as an adaptive, sequential process which we refer to as reasoning. Instead of using a traditional policy, the agent selects which action to execute by maintaining an internal action-belief and a belief transition (BT-) policy, π^b(a'|a, s). During the reasoning process, the agent updates its action-belief over a series of reasoning steps by sampling a new action with the BT-policy π^b, taking both the environment state and the previous action-belief as input. We naturally represent this process with a reasoning Markov chain (RMC), a discrete-time Markov chain over different action-beliefs, with transition dynamics given by the BT-policy. Hence, for any input environment state s and initial action-belief a_0, the n-step transition probabilities of the RMC for future reasoning steps n = 1, 2, 3, ... are defined as:

π^b_n(a|a_0, s) = ∫_A π^b(a|a', s) π^b_{n−1}(a'|a_0, s) da', for n > 1, and π^b_1 = π^b. (4)

Given a compact action space and a BT-policy with a non-zero infimum density, we can ensure that as the number of reasoning steps grows, the probability of any action-belief in the RMC converges to some steady-state probability which is independent of the initial action-belief.¹ We denote this implicit probability distribution as the steady-state (SS-) policy, symbolized by π^s(a|s):

Lemma 3.1 (Steady-state convergence).
For any environment state s, consider a reasoning Markov chain (RMC) defined on a compact action space A with transition probabilities given by π^b(a'|a, s). Suppose that inf{π^b(a'|a, s) : a', a ∈ A} > 0. Then there exists a steady-state probability distribution function π^s(·|s) such that:

lim_{n→∞} π^b_n(a|a_0, s) → π^s(a|s) for all a ∈ A. (5)

Proof. See Appendix A.

The RMC's steady-state probabilities can be interpreted as representing the distribution of the agent's behavior after an appropriate number of reasoning steps are performed. In this work, we strive to optimize the agent's behavior following the MaxEnt RL framework described in Section 2. In particular, we consider learning a parameterized BT-policy, π^b_θ, to produce appropriate transition probabilities for each environment state such that the SS-policy, π^s_θ, from the resulting RMC optimizes:

argmax_θ J(θ) = E_{s, a∼π^s_θ(·|s)}[ Q^s_φ(s, a) + α H(π^s_θ(a|s)) ]. (6)

Here, Q^s_φ is a parameterized soft Q-function for the agent's behavior from π^s, which we learn by minimizing a squared soft Bellman loss utilizing delayed parameters φ' and samples from π^s_θ:

argmin_φ J(φ) = E_{s,a,s'}[ ( Q^s_φ(s, a) − ( r(s, a) + γ E_{a'∼π^s_θ(·|s')}[ Q^s_{φ'}(s', a') + α H(π^s_θ(a'|s')) ] ) )^2 ]. (7)

In Fig. 2, we illustrate the relationship between a learned BT-policy, the corresponding SS-policy, and the soft Q-function in a 1-dimensional toy task (see App. C for details). In this example, the BT-policy is parameterized as a simple squashed Gaussian distribution, with unimodal transitions between consecutive action-beliefs (Fig. 2, Left). We obtain samples of agent behavior (the SS-policy) by performing a series of reasoning steps, using the BT-policy to simulate the RMC until we approach steady-state convergence. By plotting the resulting empirical distribution of agent behavior, we see it closely matches the multi-modal, non-Gaussian canonical distribution from its soft Q-function (Fig. 2, Right). This example shows how the expressive power of agent behavior in our framework can go far beyond the BT-policy's simple parameterization, enabling the effective maximization of complex and multi-modal MaxEnt objectives.

3.2 Learning the belief transition policy

We propose a new method to estimate the policy gradient of the BT-policy, π^b_θ, for optimizing the steady-state MaxEnt objective described in Section 3.1. We note that the gradient from Eq. 6 involves differentiating through an expectation over the steady-state policy, π^s_θ. However, π^s_θ is only implicitly defined, and its connection with the actual BT-policy or its parameters does not have a tractable closed-form expression. To approach this problem, we introduce a family of n-step extensions to the soft Q-function, Q^s_n : S × A → R for n = 0, 1, 2, ..., defined as:

Q^s_n(s, a) = ∫_A π^b_n(a'|a, s) Q^s_φ(s, a') da', with ∇_θ Q^s_n(s, a) = 0. (8)

Intuitively, each n-step soft Q-function Q^s_n(s, a) outputs the expected soft Q-value after performing n reasoning steps in the RMC from the initial action-belief a. However, we treat the output of each n-step soft Q-function as being independent of the actual parameters of the BT-policy, θ. Hence, we can interpret computing Q^s_n(s, a) as simulating the RMC with a fixed and immutable copy of the current π^b_θ. We use this definition to provide a convenient notation in the following new theorem, which expresses the policy gradient without differentiating through π^s_θ.

¹This is unrelated to the steady-state distribution for infinite-horizon MDPs considered in prior work [40].
Theorem 3.2 (Steady-state policy gradient). Let π^b_θ(·|a, s) be a parameterized belief transition policy which defines a reasoning Markov chain with a stationary distribution given by the steady-state policy π^s_θ(·|s). Let Q^s be a real function defined on S × A, with a family of n-step extensions {Q^s_n} as defined in Eq. 8. Suppose π^b, Q^s and their gradients with respect to θ (denoted ∇_θ) are continuous and bounded functions. Then

∇_θ E_{a∼π^s_θ(·|s)}[Q^s(s, a)] = E_{a∼π^s_θ(·|s)}[ lim_{N→∞} Σ_{n=0}^N ∇_θ E_{a'∼π^b_θ(·|a,s)}[Q^s_n(s, a')] ]. (9)

Proof. See Appendix A.

Using Lemma 3.1 (steady-state convergence), we can approximate the policy gradient expression in Eq. 9 with an arbitrarily small expected error using a finite number of n-step soft Q-functions, i.e., a finite N (see App. A). An intuition for this property follows from the fact that for large enough n, Lemma 3.1 implies that π^b_n(a|a', s) ≈ π^s_θ(a|s) and, thus, Q^s_n(s, a') ≈ ∫_A π^s_θ(a|s) Q^s_φ(s, a) da. Therefore, the value of each Q^s_n(s, a') will be independent of the BT-policy's action a', such that ∇_θ E_{a'∼π^b_θ(·|a,s)}[Q^s_n(s, a')] ≈ 0. In other words, each subsequent step in the RMC introduces additional randomness that is independent of a', causing a warranted vanishing gradient phenomenon [41] which culminates with converging to π^s_θ.

Using a similar notation as Haarnoja et al. [39], we apply the reparameterization trick [38] to express the BT-policy in terms of a deterministic function f^b_θ(a, s, ε), taking as input a Gaussian noise vector ε. This allows us to rewrite the gradient in each inner expectation in the sum from Eq. 9 as:

∇_θ E_{a'∼π^b_θ(·|a,s)}[Q^s_n(s, a')] = E_{ε_0∼N(0,1)}[ ∇_{a_0} Q^s_n(s, a_0) ∇_θ f^b_θ(a, s, ε_0) ], (10)

where a_0 = f^b_θ(a, s, ε_0). We can apply the same reparameterization for all n-step soft Q-functions, to establish a new relationship between the gradient terms ∇_{a_0} Q^s_n(s, a_0):

∇_{a_0} Q^s_n(s, a_0) = ∇_{a_0} ∫_A π^b_n(a_n|a_0, s) Q^s_φ(s, a_n) da_n = ∇_{a_0} ∫_A π^b(a_1|a_0, s) Q^s_{n−1}(s, a_1) da_1 = E_{ε_1}[ ∇_{a_1} Q^s_{n−1}(s, a_1) ∇_{a_0} f^b(a_0, s, ε_1) ], where a_1 = f^b(a_0, s, ε_1). (11)

In Eq. 11, we purposefully omit the dependence of f^b and π^b on θ since each Q^s_n term is a local approximation of the RMC that does not depend on θ (as defined in Eq. 8). By recursively applying this relationship (Eq. 11) to ∇_{a_1} Q^s_{n−1}(s, a_1) and all subsequent gradient terms, we obtain:

∇_{a_0} Q^s_n(s, a_0) = E_{ε_1,...,ε_n}[ ∇_{a_n} Q^s_φ(s, a_n) Π_{i=0}^{n−1} ∇_{a_i} f^b(a_i, s, ε_{i+1}) ], (12)

where a_i = f^b(a_{i−1}, s, ε_i) for i = 1, ..., n. By combining Eq. 10 and Eq. 12, we can thus reparameterize and express the whole sum in Eq. 9 as:

∇_θ E_{a∼π^s_θ(·|s)}[Q^s(s, a)] ≈ Σ_{n=0}^N ∇_θ E_{a'∼π^b_θ(·|a,s)}[Q^s_n(s, a')] = E_{ε_0,...,ε_N}[ ( Σ_{n=0}^N ∇_{a_n} Q^s_φ(s, a_n) Π_{i=0}^{n−1} ∇_{a_i} f^b(a_i, s, ε_{i+1}) ) ∇_θ f^b_θ(a, s, ε_0) ]. (13)

Eq. 13 intuitively corresponds to differentiating through each Q^s_n(s, a') term by reparameterizing the RMC. Hence, to get a sample estimate of the policy gradient we can simulate the reparameterized RMC for N reasoning steps to obtain a_1, ..., a_N, compute each Q^s_φ(s, a_n) term, and backpropagate (e.g., with autodifferentiation). Following Haarnoja et al. [13, 39], we can apply Theorem 3.2 and easily extend the same methodology to estimate the MaxEnt policy gradient from Eq. 6 that also involves an extra entropy term. We include this alternative derivation in App. A for completeness.

Algorithm 1 Agent Acting
input: s, current state
a_0 ∼ Â
N ← 0, R^p ← +∞
while R^p > 1.1 do
  a_{N+1} ∼ π^b_θ(·|a_N, s)
  N ← N + 1
  Update R^p with a_{1:N}   ▷ Eq. 16
N̂ ← ρ N̂ + (1 − ρ) N   ▷ ρ ∈ [0, 1)
Â ← Â ∪ a_{1:N}
output: a ∼ a_{1:N}

Algorithm 2 Agent Learning
input: D, data buffer
(s, a, s', r) ∼ D
a_0 ∼ π^b_θ(·|a, s')
for n ← 0, ..., ⌈N̂⌉ do
  Q̄^s_n ← Q^s_φ(s', a_n)   ▷ Eq. 8
  ε_{n+1} ∼ N(0, 1), a_{n+1} = f^b(a_n, s', ε_{n+1})
∇_θ Q^s ← ∇_θ( Σ_{n=0}^{⌈N̂⌉} Q̄^s_n )   ▷ Thm. 3.2
argmin_θ J(θ)   ▷ Eq. 6
a' ∼ a_{1:⌈N̂⌉}
argmin_φ J(φ)   ▷ Eq. 7

3.3 Action selection and temporal consistency

To collect experience in the environment, we propose to perform reasoning with the BT-policy starting from a set of different initial action-beliefs {a^0_0, ..., a^M_0}. We batch this set as a single input matrix, a_0, to make effective use of parallel computation. To reduce the required number of reasoning steps and facilitate detecting convergence to π^s_θ, we identify two desirable properties for the distribution of action-beliefs in a_0. In particular, initial action-beliefs should 1) be likely under π^s_θ, and 2) cover diverse modes of π^s_θ. Property (1) should logically accelerate reasoning by providing the BT-policy with already-useful information about optimal behavior. Property (2) serves to provide the BT-policy with initial information about diverse behavior, which facilitates convergence detection (Sec. 3.4) and expedites reasoning even if the RMC has slow mixing times between multiple modes. To satisfy these properties, we use a simple and effective heuristic based on common temporal-consistency properties of MDPs [42, 43]. Especially in continuous environments, actions tend to have small individual effects, making them likely relevant also for environment states experienced in the near future. Thus, we propose storing past action-beliefs in a fixed-size buffer, called the short-term action memory, Â, and using them to construct a_0. We find this strategy allows us to effectively regulate the quality and diversity of the initial action-beliefs through the size of Â, accelerating convergence at negligible cost.

3.4 Detecting convergence to the steady-state policy

A key requirement for learning and acting with BT-policies, as described in Sections 3.2 and 3.3, is the ability to determine a sufficient number of reasoning steps (N) for the action-belief distribution to converge. Given the properties of the RMC, there exist different analytical methods that provide a priori bounds on the rate of convergence [44–46]. However, using any fixed N would be extremely limiting, as we expect the BT-policy and the properties of its resulting RMCs to continuously evolve during training. Moreover, different tasks, states, and initial action-beliefs might affect the number of reasoning steps required for convergence due to different levels of complexity of the relative decision-making problems. To account for similar conditions, in the Markov Chain Monte Carlo literature the predominant approach is to perform a statistical analysis of the properties of the simulated chain, choosing from several established convergence diagnostic tools [47–49]. Hence, we propose to employ a similar adaptive strategy by analyzing the history of the simulated RMC to determine the appropriate number of reasoning steps. Since we apply π^b_θ from a diverse set of initial action-beliefs (see Section 3.3), we base our convergence-detection strategy on the seminal Gelman-Rubin (GR) diagnostic [50] and its multivariate extension [51]. In particular, the multivariate GR diagnostic computes the pseudo scale reduction factor (PSRF), a score representing whether the statistics of a multivariate variable of interest have converged to the steady-state distribution.
The intuition behind this diagnostic is to compare two different estimators of the covariance of the unknown steady-state distribution, making use of the samples either within each individual chain or between all the different chains. Thus, as the individual chains approach the true steady-state distribution, the two estimates should expectedly get closer to each other. The PSRF measures this precise similarity based on the largest eigenvalue of their matrix product. For our use-case, we employ the PSRF to determine the convergence of the set of action-beliefs a_{1:N}, as we perform consecutive reasoning steps with π^b_θ. Following [51], we calculate the average sample covariance of the action-beliefs within each of the parallel chains (W), computed from a batched set of initial action-beliefs a_0 = [a^1_0, a^2_0, ..., a^M_0]:

ā^m = (1/N) Σ_{n=1}^N a^m_n,  W^m = (1/(N−1)) Σ_{n=1}^N (a^m_n − ā^m)(a^m_n − ā^m)^⊤,  W = (1/M) Σ_{m=1}^M W^m. (14)

We compare W with an estimate of the target covariance, constructed from the sample covariance between the different parallel chains (B):

ā = (1/(N·M)) Σ_{n=1}^N Σ_{m=1}^M a^m_n,  B = (1/(M−1)) Σ_{m=1}^M (ā^m − ā)(ā^m − ā)^⊤. (15)

The PSRF for a_{1:N} is then computed from the largest eigenvalue (λ_max) of the product W^{−1}B, as:

R^p = sqrt( (N−1)/N + λ_max(W^{−1}B) ). (16)

Thus, as the individual chains approach the distribution of π^s_θ, the PSRF (R^p) will approach 1. Following Brooks and Gelman [51], we use R^p < 1.1 as an effective criterion for determining the convergence of a_{1:N}. In practice, we also keep a running mean of the current number of reasoning steps needed for convergence, N̂. We use ⌈N̂⌉ as the number of reasoning steps to simulate the RMC with π^b_θ when computing gradients from Eqs. 6-7. ⌈N̂⌉ is a safe choice to ensure near-unbiased optimization, since R^p < 1.1 is considered a very conservative criterion [52] and we can learn by simulating the RMC from recent actions stored in the data buffer, which are already likely close to optimal. We provide further details regarding our implementation and its rationale in App. B. We provide a simplified summary of our adaptive reasoning process for acting and learning in Algs. 1-2.

3.5 Advantages of serial Markov chain reasoning

Based on the above specification, we identify three main conceptual advantages of our serial Markov chain reasoning framework. 1. Unlimited expressiveness. The distribution of agent behavior given by the SS-policy π^s_θ is a mixture model with potentially infinitely many components. Thus, even a simple Gaussian parameterization of the BT-policy π^b_θ would make π^s_θ a universal approximator of densities, providing unlimited expressive power to the agent [53, 54]. 2. Adaptive computation. The number of reasoning steps performed to reach approximate convergence is determined by the properties of each environment state's RMC. Hence, the agent can flexibly spend different amounts of computation time based on the complexity of each action-selection problem, with potential gains in both precision and efficiency. 3. Information reuse. By storing past solutions to similar RMCs, we can initialize the reasoning process with initial action-beliefs that are already close to π^s_θ. This allows using the temporal-consistency properties of the MDP to exploit traditionally discarded information and accelerate agent reasoning. We provide empirical validation for these properties in Section 4.2.
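To make the convergence test and the acting loop above concrete, here is a minimal PyTorch sketch of the multivariate PSRF of Eqs. (14)-(16) and of an adaptive reasoning loop in the spirit of Algorithm 1. The function names, the bt_policy(state, beliefs) call signature, and the max_steps safeguard are assumptions made for illustration; the actual implementation (discussed in App. B of the paper) may differ, for instance in how initial beliefs are drawn from the short-term action memory Â and how the running mean N̂ is tracked.

```python
import torch

def psrf(chains):
    """Multivariate pseudo scale reduction factor (cf. Eqs. 14-16).

    chains: (M, N, D) tensor with N consecutive action-beliefs for each of M parallel chains.
    """
    m, n, _ = chains.shape
    chain_means = chains.mean(dim=1)                      # per-chain means, the a-bar^m terms
    # Within-chain covariance W, averaged over the M chains (Eq. 14).
    centered = chains - chain_means[:, None, :]
    w = torch.einsum('mnd,mne->de', centered, centered) / (m * (n - 1))
    # Between-chain covariance B of the chain means (Eq. 15).
    diff = chain_means - chains.mean(dim=(0, 1))
    b = diff.T @ diff / (m - 1)
    # R^p from the largest eigenvalue of W^{-1} B (Eq. 16).
    lam_max = torch.linalg.eigvals(torch.linalg.solve(w, b)).real.max()
    return torch.sqrt((n - 1) / n + lam_max)

def reason(bt_policy, state, init_beliefs, threshold=1.1, max_steps=100):
    """Adaptive reasoning loop in the spirit of Algorithm 1.

    bt_policy(state, beliefs) is assumed to return one new action-belief per chain.
    init_beliefs: (M, D) initial action-beliefs, e.g. drawn from the short-term action memory.
    """
    beliefs, history = init_beliefs, []
    for _ in range(max_steps):
        beliefs = bt_policy(state, beliefs)               # one reasoning step per parallel chain
        history.append(beliefs)
        if len(history) > 1 and psrf(torch.stack(history, dim=1)) < threshold:
            break
    chain = torch.stack(history, dim=1)                   # (M, N, D) simulated RMC
    flat = chain.reshape(-1, chain.shape[-1])
    # Execute an action-belief sampled uniformly from the converged chains.
    return flat[torch.randint(len(flat), (1,))].squeeze(0), chain
```

A buffer of recently executed action-beliefs can then play the role of Â, so that later calls to reason start from beliefs that are already close to π^s_θ, which is what makes the information-reuse property of Section 3.5 cheap in practice.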
4 Experimentation 4.1 Performance evaluation We evaluate the serial Markov chain reasoning framework by comparing its performance with current state-of-the-art baselines based on traditional RL. We consider 6 challenging Mujoco tasks from Gym [29, 56] and 12 tasks pixel-based tasks from the DeepMind Control Suite (DMC) [30]. In both settings, we base our implementation on MaxEnt RL, replacing the traditional policy with a Gaussian BT-policy optimized with the training procedures specified in Sec. 3. Other orthogonal design choices (e.g., network architectures) follow contemporary RL practices, we refer to App. C or the code for full details. We call the resulting algorithm Steady-State Policy Gradient (SSPG). We report the mean performance curves and aggregate metrics using the statistical tools from Rliable [55]. In particular, we compare normalized performance profiles [57], interquantile mean (IQM), and probability of improvements over baselines with the Mann-Whitney U statistic [58]. The reported ranges/shaded regions represent 95% stratified bootstrap confidence intervals (CIs) [59]. In App. D, we provide per-task results and further statistical analysis. For each experiment, we collect the returns of SSPG over five seeds, by performing 100 evaluation rollouts during the last 5% of steps. Mujoco suite. We evaluate on a challenging set of Mujoco tasks popular in recent literature. We compare SSPG with recent RL algorithms achieving state-of-the-art sample-efficiency performance on these tasks, which utilize large critic ensembles and high update-to-data (UTD) ratios. We consider REDQ [60] and MBPO [61] for state-of-the-art algorithms based on the traditional model-free and model-based RL frameworks. We also compare with iterative amortized policy optimization (IAPO) [62], in which the agent performs iterative amortization to optimize its policy distribution [63]. This procedure for action selection is more computationally involved than our agent’s reasoning process, as it requires both evaluating the policy and computing gradients at several iterations. Yet, as IAPO is still based on the traditional policy gradient framework, its benefits are solely due to reducing the amortization gap with an alternative action inference procedure. To ground different results, we also show the performance of the seminal Soft Actor-Critic (SAC) algorithm [39], upon which all considered policy gradient baselines are based on. To account for the additional computational cost of training an agent with serial Markov chain reasoning, we use a UTD ratio that is half the other algorithms. On our hardware, this makes SSPG faster than all other modern baselines (see App. D). Figure 3 (Top) shows the performance results after 100K environment steps. Individual scores are normalized using the performance of SAC after 3M steps, enough to reach convergence in most tasks. SSPG considerably outperforms all prior algorithms with statistically meaningful gains, as per the conservative Neyman-Pearson statistical testing criterion [64]. Furthermore, SSPG even stochastically dominates all considered state-of-the-art baselines [65]. We obtain similar results evaluating at 50K and 200K steps (App. D). In comparison, IAPO obtains lower performance than other non-iterative baselines while being the most compute-intensive algorithm. This indicates that, for sample-efficiency, only reducing the amortization gap beyond direct estimation might not provide significant benefits. 
Instead, serial Markov chain reasoning’s improved expressivity and flexibility appear to considerably accelerate learning, yielding state-of-the-art performance in complex tasks. DeepMind Control suite. To validate the generality of our framework, we also evaluate on a considerably different set of problems: 12 pixel-based DMC tasks. We follow the recent task specifications and evaluation protocols introduced by Yarats et al. [66]. We compare SSPG with DrQv2 [66], the current state-of-the-art policy gradient algorithm on this benchmark, which employs a deterministic actor and hand-tuned exploration. We also compare with additional baselines that, like SSPG, are based on MaxEnt RL: DrQ [67], CURL [68], and a convolutional version of SAC [39]. Figure 3 (Bottom) shows the performance results after 1.5M environment steps. DMC tasks yield returns scaled within a set range, [0, 1000], which we use for normalization. Remarkably, also in this domain, SSPG attains state-of-the-art performance with statistically significant improvements over all baselines. Unlike for the Mujoco tasks, the other considered algorithms based on MaxEnt RL underperform as compared to the deterministic DrQv2, a result Yarats et al. [66] attributed to ineffective exploration. In contrast, SSPG yields performance gains especially on sparser reward tasks where the other baselines struggle (see App. D). These results validate the scalability of our framework to high-dimensional inputs and its ability to successfully complement MaxEnt RL. 4.2 Properties of serial Markov chain reasoning We test if theorized benefits of our framework (Sec. 3.5) hold in practical settings with deep networks and stochastic optimization. We provide further ablation studies and analysis of SSPG in App. E. 1. Policy expressiveness. First, we test the expressiveness of the behavior learned with SSPG using a Gaussian BT-policy. We design a series of single-step toy RL problems where the agent needs to position itself on a small 2D environment with a reward function based on unknown goal locations, which we name positional bandits (see App. C for details). The objective of these experiments is to isolate how our framework compares with traditional policies for MaxEnt RL to explore the environments and learn to match the true canonical distributions of returns. As displayed in Fig. 4 A, even in highly multi-modal positional bandits, the SS-policy successfully learns to visit all relevant goals with similar frequencies. Furthermore, quantizing the state space around the goals reveals that the relative RMC intuitively learns to transition between action-beliefs that visit the different goals as reasoning progresses, with a transition matrix matching a cyclic permutation (App. F). In comparison, a squashed Gaussian policy expectedly fails to capture the complexity of the canonical distribution, with samples either collapsing to a single mode or covering large suboptimal parts of the action space. We also show results for a policy based on normalizing flows [69, 70], modeled with a deep expressive network (App. C). After several attempts, we find these models require orders of magnitude more training iterations and data to learn any behavior that is more complex than a uni-modal distribution. Yet, even after increasing training by a factor of 1000, we still observe the flow policy distribution collapsing in the more complex positional bandits. 
We attribute our findings to training inefficiencies from a lack of proper inductive biases for flow models in the non i.i.d. RL problem setting [71]. In particular, as flows can assign arbitrarily low probability mass to some regions of the action space, initial local optima can greatly hinder future exploration, exacerbating coverage of the data buffer distribution in a vicious circle. 2. Policy adaptivity. Second, we examine the adaptivity of our framework for tackling decisionmaking problems with different complexities. We compare the average number of reasoning steps (N̄ ) performed by SSPG for each task from Sec. 4.1 (Fig. 4 B). We identify a general correlation between task difficulty and reasoning computation, with complex robotic manipulation and humanoid locomotion problems requiring the most steps. By concentrating on two representative tasks, we validate the effectiveness of the reasoning process and our adaptive convergence detection strategy with an ablation study where we train SSPG using a fixed number of reasoning steps Nfix 2 {1, dN̄e, d3N̄e}. For the case Nfix = 1, which closely resembles traditional RL, we use double the UTD ratio to improve performance and offset any training-time gains from multi-step reasoning. As shown in Fig. 4 C, increasing Nfix yields clear performance improvements, validating that agents can greatly benefit from performing longer reasoning processes. Furthermore, our adaptive SSPG attains the same performance as Nfix = d3N̄e and visibly outperforms Nfix = dN̄e. These results show how different action selection problems require different amounts of reasoning computation and validate the practical effectiveness of our adaptive strategy to detect steady-state convergence. We obtain analogous findings for additional tasks and values of Nfix in App. F. 3. Solution reuse. Last, we examine the effects of the short-term action memory buffer (Â) to sample initial action beliefs (a0) in two tasks. We evaluate ablating Â, randomly re-initializing a0 from a uniform distribution. While there are only minor differences performance-wise between the two approaches (App. F), sampling a0 from the short-term action memory considerably decreases the number of reasoning steps for convergence (Fig. 4 D). Moreover, we observe the gap in reasoning efficiency expands throughout training as the agent’s steady-state behavior further improves for the target task. This result validates that a simple temporal heuristic can provide considerable efficiency benefits, amortizing the additional computational cost of our powerful new framework. 5 Related work There have been several prior attempts to extend ubiquitous Gaussian policies [13, 39, 72, 73] with simple normalizing flows [69, 70], both to improve expressiveness [74, 75] and to instantiate behavior hierarchies [76]. Yet, the expressiveness of normalizing flows is coupled with some training challenges [71], which we show can lead to premature convergence to suboptimal solutions in RL (Sec. 4.2). Other works also considered entirely replacing policy models with gradient-free [4] or gradient-based optimization over the predicted values [77]. Marino et al. [62] similarly considered learning an optimizer to infer Gaussian behavior [28] with iterative amortization [63]. However, while all these works consider alternative modeling of agent behavior, they are still based on the traditional RL framework of representing decision-making as the output of a fixed process. 
Instead, our work entails a conceptually different approach and enables implicit modeling of agent behavior as the result of an adaptive reasoning process, orthogonally providing agents also with additional flexibility to scale computation based on the properties of each individual input state. Outside RL, there have been efforts to model generation processes with parameterized Markov chains learned to revert fixed noise injection processes acting on data [78–82]. Based on this framework, diffusion models [83–85] recently achieved remarkable results for image generation [85, 86]. While applied to inherently different problem settings, these works share some conceptual resemblances with our framework and highlight the vast scaling potential of implicit modeling. 6 Conclusion We introduced serial Markov chain reasoning, a novel framework for modeling agent behavior in RL with several benefits. We showed our framework allows an agent to 1) learn arbitrary continuous action distributions, 2) flexibly scale computation based on the complexity of individual actionselection decisions, and 3) re-use prior solutions to accelerate future reasoning. Hence, we derived SSPG an off-policy maximum entropy RL algorithm for serial Markov chain reasoning, achieving state-of-the-art performance on two separate continuous control benchmarks. While for problems with discrete action spaces simple multinomial policy distributions already provide unlimited expressivity, we note that the inherent computational adaptivity of our framework could still yield benefits over traditional fixed policies in these settings. Furthermore, we believe our motivation and early results provide a strong argument for the future potential of serial Markov chain reasoning, even beyond off-policy RL and simulation tasks. We provide our implementation for transparency and to facilitate future extensions at sites.google.com/view/serial-mcr/. Acknowledgments We thank Johannes Lutzeyer for providing valuable feedback on an earlier draft of this work. Edoardo Cetin would like to acknowledge the support from the Engineering and Physical Sciences Research Council [EP/R513064/1]. Oya Celiktutan would also like to acknowledge the support from the LISI Project, funded by the Engineering and Physical Sciences Research Council [EP/V010875/1]. Furthermore, we thank Toyota Motor Europe and Toyota Motor Corporation for providing support towards funding the utilized computational resources.
1. What is the focus and contribution of the paper regarding policies expressed as reasoning Markov chains? 2. What are the strengths and weaknesses of the proposed approach, particularly in its adaptive iterative reasoning process and comparison to baselines? 3. Do you have any concerns or questions regarding the method's direction and its relation to normalizing flows? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations to the approach, especially regarding its applicability to domains with discrete action spaces?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper considers policies expressed as a reasoning Markov chain over actions in the current state that refines its choice of actions until arriving at a steady-state distribution over actions. In this framework the authors learn the number of steps to take before outputting an action and outline a policy gradient theorem that can be leveraged to differentiate through this process. The reason for learning this kind of policy seems to be in order to increase the expressiveness of distributions with multiple modes in the case of continuous action spaces. The authors present experiments demonstrating superior performance to baselines in 6 MuJoCo and 12 DMLab continuous control domains. Strengths And Weaknesses Strengths: The authors propose an adaptive iterative reasoning process with the goal of allowing for more complex multi-modal action space distributions for problems with continuous action spaces. The authors compare to a solid set of baselines across 6 mujoco domains and 12 DMLab domains while also providing nice qualitative experiments and relevant ablation experiments. The approach is a bit outside the box from recent research with the fact that it works well striking me as somewhat surprising on the surface. The derivation of Theorem 3.2 seems involved beyond the standard policy gradient derivation. Weaknesses: The authors only provide an intuitive rationale for their proposed strategy, on the surface it seems a bit weird to iteratively update actions in this way. I understand that it makes the distribution more expressive, but it is not clear to me why this direction is preferable to normalizing flows or some adaptive version of normalizing flows. "Steady-state policy gradient" is a confusing term in light of the RL literature. I get what you mean upon reading, but it is confusing with the policy gradient for continuing environments which is defined over a different steady-state distribution (i.e. Sutton and Barto, 2018 Section 13.6). It would be nice to provide readers with more information regarding how convergence to the steady-state distribution was determined for those not familiar with pseudo scale reduction factors. Questions Can you compare what is achieved by your approach more directly to normalizing flows? The main advantage I understood from the related work section is allowing for a dynamic amount of computation. Am I missing something? Can there be an empirical comparison with normalizing flows? What are the “mild assumptions” used in the proof of Lemma 3.1? Even looking through the proof in the appendix, it is not 100% clear to me what these assumptions are. Based on my experience with the Markov chain literature I was expecting something related to stochasticity of transitions or irreducibility and aperiodicity of the Markov chain etc. Can you elaborate more about how information reuse is a benefit of this framework? It seems more like a necessity to make it practical? Is this meant in comparison with normalizing flows? Limitations It seems that the authors mostly highlight value in their work with respect to continuous action space domains. Could this approach also provide value in domains with discrete action spaces?
NIPS
Title Policy Gradient With Serial Markov Chain Reasoning Abstract We introduce a new framework that performs decision-making in reinforcement learning (RL) as an iterative reasoning process. We model agent behavior as the steady-state distribution of a parameterized reasoning Markov chain (RMC), optimized with a new tractable estimate of the policy gradient. We perform action selection by simulating the RMC for enough reasoning steps to approach its steadystate distribution. We show our framework has several useful properties that are inherently missing from traditional RL. For instance, it allows agent behavior to approximate any continuous distribution over actions by parameterizing the RMC with a simple Gaussian transition function. Moreover, the number of reasoning steps to reach convergence can scale adaptively with the difficulty of each action selection decision and can be accelerated by re-using past solutions. Our resulting algorithm achieves state-of-the-art performance in popular Mujoco and DeepMind Control benchmarks, both for proprioceptive and pixel-based tasks. 1 Introduction Reinforcement learning (RL) has the potential to provide a general and effective solution to many modern challenges. Recently, this class of methods achieved numerous impressive milestones in different problem domains, such as games [1–3], robotics [4–6], and other meaningful real-world applications [7–9]. However, all these achievements relied on massive amounts of data, controlled environments, and domain-specific tuning. These commonalities highlight some of the current practical limitations that prevent RL to be widely applicable [10]. In the deep RL framework, practitioners train agents with the end goal of obtaining optimal behavior. Traditionally, agent behavior is modeled with feed-forward policies regressing from any state to a corresponding distribution over actions. Such formulation yields practical training objectives in both off-policy [11–13] and on-policy settings [14–16]. However, we identify three inherent properties of this rigid representation of behavior that could considerably impact expressivity and efficiency in continuous control tasks. First, agent behavior is restricted to a class of tractable distributions, which might fail to capture the necessary complexity and multi-modality of a task. Second, the policy performs a fixed reasoning process with a feed-forward computation, which potency cannot adapt to the varying complexity of individual action selection problems. Third, decision-making is performed every time from scratch, without re-using any past information that might still inform and facilitate the current action selection problem. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Unlike RL policies, human reasoning does not appear to follow a rigid feed-forward structure. In fact, a range of popular psychological models characterize human decision-making as a sequential process with adaptive temporal dynamics [17–20]. Many of these models have found empirical groundings in neuroscience [21–24] and have shown to effectively complement RL for capturing human behavior in experimental settings [25, 26]. Partly inspired by these works, we attempt to reframe the deep RL framework by making use of a similar flexible model of agent behavior, in order to counteract its aforementioned limitations. We introduce serial Markov chain reasoning - a new powerful framework for representing agent behavior. 
Our framework treats decision-making as an adaptive reasoning process, where the agent sequentially updates its beliefs regarding which action to execute in a series of reasoning steps. We model this process by replacing the traditional policy with a parameterized transition function, which defines a reasoning Markov chain (RMC). The steady-state distribution of the RMC represents the distribution of agent behavior after performing enough reasoning for decision-making. Our framework naturally overcomes the aforementioned limitations of traditional RL. In particular, we show that our agent's behavior can approximate any arbitrary distribution even with simple parameterized transition functions. Moreover, the required number of reasoning steps adaptively scales with the difficulty of individual action selection problems and can be accelerated by re-using samples from similar RMCs. To optimize behavior modeled by the steady-state distribution of the RMC, we derive a new tractable method to estimate the policy gradient. Hence, we implement a new effective off-policy algorithm for maximum entropy reinforcement learning (MaxEnt RL) [27, 28], named Steady-State Policy Gradient (SSPG). Using SSPG, we empirically validate the conceptual properties of our framework over traditional MaxEnt RL. Moreover, we obtain state-of-the-art results for popular benchmarks from the OpenAI Gym Mujoco suite [29] and the DeepMind Control suite from pixels [30]. In summary, this work makes the following key contributions:
1. We propose serial Markov chain reasoning, a framework to represent agent behavior that can overcome expressivity and efficiency limitations inherent to traditional reinforcement learning.
2. Based on our framework, we derive SSPG, a new tractable off-policy algorithm for MaxEnt RL.
3. We provide experimental results validating the theorized properties of serial Markov chain reasoning and displaying state-of-the-art performance on the Mujoco and DeepMind Control suites.

2 Background
2.1 Reinforcement learning problem
We consider the classical formulation of the reinforcement learning (RL) problem setting as a Markov Decision Process (MDP) [31], defined by the tuple $(\mathcal{S}, \mathcal{A}, P, p_0, r, \gamma)$. In particular, at each discrete time step $t$ the agent experiences a state from the environment's state space, $s_t \in \mathcal{S}$, based on which it selects an action from its own action space, $a_t \in \mathcal{A}$. In continuous control problems (considered in this work), the action space is typically a compact subset of a Euclidean space $\mathbb{R}^{\dim(\mathcal{A})}$. The evolution of the environment's state through time is determined by the transition dynamics and initial state distribution, $P$ and $p_0$. Lastly, the reward function $r$ represents the immediate level of progress for any state-action tuple towards solving a target task. The agent's behavior is represented by a state-conditioned parameterized policy distribution $\pi_\theta$. Hence, its interaction with the environment produces trajectories, $\tau = (s_0, a_0, s_1, \ldots, s_T, a_T)$, according to a factored joint distribution $p_{\pi_\theta}(\tau) = p_0(s_0)\prod_{t=0}^{T}\pi_\theta(a_t|s_t)P(s_{t+1}|s_t, a_t)$. The RL objective is to optimize agent behavior so as to maximize the discounted sum of expected future rewards: $\arg\max_\theta \, \mathbb{E}_{p_{\pi_\theta}(\tau)}\big[\sum_{t=0}^{T}\gamma^t r(s_t, a_t)\big]$.
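To make the objective above concrete, the following minimal sketch estimates the discounted return of a given policy by Monte Carlo rollouts. It assumes a Gymnasium-style environment interface and a `policy` callable; both are illustrative stand-ins rather than the paper's actual training code.

```python
import numpy as np

def estimate_return(env, policy, gamma=0.99, horizon=1000, episodes=10):
    """Monte Carlo estimate of E[sum_t gamma^t r(s_t, a_t)] under `policy`.

    `env` is assumed to follow the Gymnasium reset()/step() interface and
    `policy(s)` to return a sampled action; both are hypothetical stand-ins.
    """
    returns = []
    for _ in range(episodes):
        s, _ = env.reset()
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)
            s, r, terminated, truncated, _ = env.step(a)
            total += discount * r
            discount *= gamma
            if terminated or truncated:
                break
        returns.append(total)
    return float(np.mean(returns))
```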
2.2 Maximum entropy reinforcement learning and inference
Maximum entropy reinforcement learning (MaxEnt RL) [32] considers optimizing agent behavior for a different objective that naturally arises when formulating action selection as an inference problem [33–36]. Following Levine [28], we consider modeling a set of binary optimality random variables with realization probability proportional to the exponentiated rewards scaled by the temperature $\alpha$, $p(\mathcal{O}_t|s_t, a_t) \propto \exp\big(\tfrac{1}{\alpha} r(s_t, a_t)\big)$. The goal of MaxEnt RL is to minimize the KL-divergence between trajectories stemming from agent behavior, $p_{\pi_\theta}(\tau)$, and the inferred optimal behavior, $p(\tau|\mathcal{O}_{0:T})$:

$$D_{\mathrm{KL}}\big(p_{\pi_\theta}(\tau)\,\|\,p(\tau|\mathcal{O}_{0:T})\big) = \mathbb{E}_{p_{\pi_\theta}(\tau)}\left[\log \frac{p_0(s_0)\prod_{t=0}^{T}\pi_\theta(a_t|s_t)P(s_{t+1}|s_t,a_t)}{p_0(s_0)\prod_{t=0}^{T}\exp\big(\tfrac{1}{\alpha}r(s_t,a_t)\big)P(s_{t+1}|s_t,a_t)}\right] = -\frac{1}{\alpha}\,\mathbb{E}_{p_{\pi_\theta}(\tau)}\left[\sum_{t=0}^{T} r(s_t,a_t) + \alpha\,\mathcal{H}\big(\pi_\theta(\cdot|s_t)\big)\right]. \quad (1)$$

The resulting entropy-regularized objective introduces an explicit trade-off between exploitation and exploration, regulated by the temperature parameter $\alpha$ scaling the policy's entropy. An effective choice to optimize this objective is to learn an auxiliary parameterized soft Q-function [37]:

$$Q^\pi_\phi(s_t, a_t) = \mathbb{E}_{p_{\pi_\theta}(\tau|s_t,a_t)}\left[r(s_t,a_t) + \sum_{t'=t+1}^{T}\gamma^{\,t'-t}\big(r(s_{t'},a_{t'}) + \alpha\,\mathcal{H}(\pi(\cdot|s_{t'}))\big)\right]. \quad (2)$$

Given some state, $Q^\pi_\phi(s,\cdot)$ represents an energy function based on the expected immediate reward and the agent's future likelihood of optimality from performing any action. Thus, we can locally optimize the MaxEnt objective by reducing the KL-divergence between $\pi$ and the canonical distribution of its current soft Q-function. This is equivalent to maximizing the expected soft Q-function's value corrected by the policy's entropy, resembling a regularized policy gradient objective [11, 12]:

$$\arg\max_\theta\; \mathbb{E}_{s,\,a\sim\pi_\theta(\cdot|s)}\big[Q^\pi_\phi(s,a) + \alpha\,\mathcal{H}(\pi_\theta(\cdot|s))\big]. \quad (3)$$

The policy is usually modeled with a neural network outputting the parameters of some tractable distribution, such as a factorized Gaussian, $\pi_\theta(\cdot|s) = \mathcal{N}(\mu_\theta(s), \Sigma_\theta(s))$. This practice makes it possible to efficiently approximate the gradients from Eq. 3 via the reparameterization trick [38]. We consider the off-policy RL setting, where the agent alternates learning with storing new experience in a data buffer, $\mathcal{D}$. We refer the reader to Haarnoja et al. [13, 39] for further derivation and practical details.

3 Policy Gradient with serial reasoning
3.1 Reasoning as a Markov chain
We introduce serial Markov chain reasoning, a new framework to model agent behavior, based on conceptualizing action selection as an adaptive, sequential process which we refer to as reasoning. Instead of using a traditional policy, the agent selects which action to execute by maintaining an internal action-belief and a belief transition (BT-) policy, $\pi^b(a'|a, s)$. During the reasoning process, the agent updates its action-belief for a series of reasoning steps by sampling a new action with the BT-policy $\pi^b$, taking both the environment state and the previous action-belief as input. We naturally represent this process with a reasoning Markov chain (RMC), a discrete-time Markov chain over different action-beliefs, with transition dynamics given by the BT-policy. Hence, for any input environment state $s$ and initial action-belief $a_0$, the n-step transition probabilities of the RMC for future reasoning steps $n = 1, 2, 3, \ldots$ are defined as:

$$\pi^b_n(a|a_0, s) = \int_{\mathcal{A}} \pi^b(a|a', s)\,\pi^b_{n-1}(a'|a_0, s)\,da' \quad \text{for } n > 1, \qquad \pi^b_1 = \pi^b. \quad (4)$$

Given a compact action space and a BT-policy with a non-zero infimum density, we can ensure that, as the number of reasoning steps grows, the probability of any action-belief in the RMC converges to some steady-state probability which is independent of the initial action-belief (this is unrelated to the steady-state distribution for infinite-horizon MDPs considered in prior work [40]). We denote this implicit probability distribution as the steady-state (SS-) policy, symbolized by $\pi^s(a|s)$, and formalize its existence in Lemma 3.1 below.
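In code, one reasoning step is simply a sample from the BT-policy conditioned on the previous action-belief, and iterating it approximates a draw from the SS-policy. A minimal sketch, assuming a hypothetical `bt_policy(a, s)` that returns a torch distribution over the next action-belief:

```python
import torch

def simulate_rmc(bt_policy, s, a0, n_steps):
    """Roll the reasoning Markov chain forward for `n_steps` reasoning steps.

    `bt_policy(a, s)` is assumed to return a torch.distributions object over
    the next action-belief (a hypothetical interface, not the paper's code).
    `a0` may be a batch of initial action-beliefs, giving M parallel chains.
    """
    a = a0
    for _ in range(n_steps):
        a = bt_policy(a, s).sample()  # one reasoning step: a_n -> a_{n+1}
    return a  # approximately distributed as pi^s(.|s) for large n_steps
```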
Lemma 3.1 (Steady-state convergence). For any environment state $s$, consider a reasoning Markov chain (RMC) defined on a compact action space $\mathcal{A}$ with transition probabilities given by $\pi^b(a'|a, s)$. Suppose that $\inf\{\pi^b(a'|a, s) : a', a \in \mathcal{A}\} > 0$. Then there exists a steady-state probability distribution function $\pi^s(\cdot|s)$ such that:

$$\lim_{n\to\infty} \pi^b_n(a|a_0, s) = \pi^s(a|s) \quad \text{for all } a \in \mathcal{A}. \quad (5)$$

Proof. See Appendix A.

The RMC's steady-state probabilities can be interpreted as representing the distribution of the agent's behavior after an appropriate number of reasoning steps are performed. In this work, we strive to optimize the agent's behavior following the MaxEnt RL framework described in Section 2. In particular, we consider learning a parameterized BT-policy, $\pi^b_\theta$, to produce appropriate transition probabilities for each environment state such that the SS-policy, $\pi^s_\theta$, from the resulting RMC optimizes:

$$\arg\max_\theta\; J(\theta) = \mathbb{E}_{s,\,a\sim\pi^s_\theta(\cdot|s)}\big[Q^s_\phi(s,a) + \alpha\,\mathcal{H}(\pi^s_\theta(\cdot|s))\big]. \quad (6)$$

Here, $Q^s_\phi$ is a parameterized soft Q-function for the agent's behavior from $\pi^s$, which we learn by minimizing a squared soft Bellman loss utilizing delayed parameters $\phi'$ and samples from $\pi^s_\theta$:

$$\arg\min_\phi\; J(\phi) = \mathbb{E}_{s,a,s'}\Big[\Big(Q^s_\phi(s,a) - \big(r(s,a) + \gamma\,\mathbb{E}_{a'\sim\pi^s_\theta(\cdot|s')}\big[Q^s_{\phi'}(s',a') + \alpha\,\mathcal{H}(\pi^s_\theta(\cdot|s'))\big]\big)\Big)^2\Big]. \quad (7)$$

In Fig. 2, we illustrate the relationship between a learned BT-policy, the corresponding SS-policy, and the soft Q-function in a 1-dimensional toy task (see App. C for details). In this example, the BT-policy is parameterized as a simple squashed Gaussian distribution, with unimodal transitions between consecutive action-beliefs (Fig. 2, Left). We obtain samples of agent behavior (the SS-policy) by performing a series of reasoning steps, using the BT-policy to simulate the RMC until we approach steady-state convergence. By plotting the resulting empirical distribution of agent behavior, we see it closely matches the multi-modal, non-Gaussian canonical distribution from its soft Q-function (Fig. 2, Right). This example shows how the expressive power of agent behavior in our framework can go far beyond the BT-policy's simple parameterization, enabling effective maximization of complex and multi-modal MaxEnt objectives.

3.2 Learning the belief transition policy
We propose a new method to estimate the policy gradient of the BT-policy, $\pi^b_\theta$, for optimizing the steady-state MaxEnt objective described in Section 3.1. We note that the gradient from Eq. 6 involves differentiating through an expectation of the steady-state policy, $\pi^s_\theta$. However, $\pi^s_\theta$ is only implicitly defined, and its connection with the actual BT-policy or its parameters does not have a tractable closed-form expression. To approach this problem, we introduce a family of n-step extensions to the soft Q-function, $Q^s_n : \mathcal{S}\times\mathcal{A} \mapsto \mathbb{R}$ for $n = 0, 1, 2, \ldots$, defined as:

$$Q^s_n(s,a) = \int_{\mathcal{A}} \pi^b_n(a'|a,s)\,Q^s_\phi(s,a')\,da', \qquad \text{with } \nabla_\theta Q^s_n(s,a) = 0. \quad (8)$$

Intuitively, each n-step soft Q-function $Q^s_n(s,a)$ outputs the expected soft Q-value after performing $n$ reasoning steps in the RMC from the initial action-belief $a$. However, we treat the output of each n-step soft Q-function as being independent of the actual parameters of the BT-policy, $\theta$. Hence, we can interpret computing $Q^s_n(s,a)$ as simulating the RMC with a fixed and immutable copy of the current $\pi^b_\theta$. We use this definition to provide a convenient notation for the following new theorem, which expresses the policy gradient without differentiating through $\pi^s_\theta$.
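Before stating the theorem, a small Monte Carlo sketch makes the definition in Eq. 8 concrete: simulate a frozen copy of the BT-policy for n reasoning steps from the initial belief and average the soft Q-values at the resulting beliefs. The `bt_policy` and `q_net` callables are hypothetical stand-ins for the learned networks.

```python
import torch

@torch.no_grad()  # Eq. 8 treats the n-step values as independent of theta
def n_step_soft_q(bt_policy, q_net, s, a, n, samples=32):
    """Monte Carlo evaluation of Q^s_n(s, a) as defined in Eq. 8.

    Simulates `samples` independent reasoning chains of length `n` from the
    initial action-belief `a` using a frozen copy of the BT-policy, then
    averages the soft Q-values at the final beliefs.
    """
    a_rep = a.unsqueeze(0).expand(samples, *a.shape)
    s_rep = s.unsqueeze(0).expand(samples, *s.shape)
    for _ in range(n):
        a_rep = bt_policy(a_rep, s_rep).sample()
    return q_net(s_rep, a_rep).mean(0)
```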
Theorem 3.2 (Steady-state policy gradient). Let $\pi^b_\theta(\cdot|a,s)$ be a parameterized belief transition policy which defines a reasoning Markov chain with a stationary distribution given by the steady-state policy $\pi^s_\theta(\cdot|s)$. Let $Q^s$ be a real function defined on $\mathcal{S}\times\mathcal{A}$, with a family of n-step extensions $\{Q^s_n\}$ as defined in Eq. 8. Suppose $\pi^b$, $Q^s$ and their gradients with respect to $\theta$ (denoted $\nabla_\theta$) are continuous and bounded functions. Then

$$\nabla_\theta\, \mathbb{E}_{a\sim\pi^s_\theta(\cdot|s)}\big[Q^s(s,a)\big] = \mathbb{E}_{a\sim\pi^s_\theta(\cdot|s)}\left[\lim_{N\to\infty} \sum_{n=0}^{N} \nabla_\theta\, \mathbb{E}_{a_0\sim\pi^b_\theta(\cdot|a,s)}\big[Q^s_n(s,a_0)\big]\right]. \quad (9)$$

Proof. See Appendix A.

Using Lemma 3.1 (steady-state convergence), we can approximate the policy gradient expression in Eq. 9 with an arbitrarily small expected error using a finite number of n-step soft Q-functions, i.e., a finite $N$ (see App. A). An intuition for this property follows from the fact that for large enough $n$, Lemma 3.1 implies that $\pi^b_n(a|a_0,s) \approx \pi^s_\theta(a|s)$ and, thus, $Q^s_n(s,a_0) \approx \int_{\mathcal{A}} \pi^s_\theta(a|s)\,Q^s_\phi(s,a)\,da$. Therefore, the value of each $Q^s_n(s,a_0)$ will be independent of the BT-policy's action $a_0$, such that $\nabla_\theta\,\mathbb{E}_{a_0\sim\pi^b_\theta(\cdot|a,s)}[Q^s_n(s,a_0)] \approx 0$. In other words, each subsequent step in the RMC introduces additional randomness that is independent of $a_0$, causing a warranted vanishing gradient phenomenon [41] which culminates with convergence to $\pi^s_\theta$.

Using a similar notation to Haarnoja et al. [39], we apply the reparameterization trick [38] to express the BT-policy in terms of a deterministic function $f^b_\theta(a,s,\epsilon)$, taking as input a Gaussian noise vector $\epsilon$. This allows us to rewrite the gradient in each inner expectation in the sum from Eq. 9 as:

$$\nabla_\theta\,\mathbb{E}_{a_0\sim\pi^b_\theta(\cdot|a,s)}\big[Q^s_n(s,a_0)\big] = \mathbb{E}_{\epsilon_0\sim\mathcal{N}(0,I)}\big[\nabla_{a_0} Q^s_n(s,a_0)\,\nabla_\theta f^b_\theta(a,s,\epsilon_0)\big], \quad (10)$$

where $a_0 = f^b_\theta(a,s,\epsilon_0)$. We can apply the same reparameterization for all n-step soft Q-functions, to establish a new relationship between the gradient terms $\nabla_{a_0} Q^s_n(s,a_0)$:

$$\nabla_{a_0} Q^s_n(s,a_0) = \nabla_{a_0}\int_{\mathcal{A}} \pi^b_n(a_n|a_0,s)\,Q^s_\phi(s,a_n)\,da_n = \nabla_{a_0}\int_{\mathcal{A}} \pi^b(a_1|a_0,s)\,Q^s_{n-1}(s,a_1)\,da_1 = \mathbb{E}_{\epsilon_1}\big[\nabla_{a_1} Q^s_{n-1}(s,a_1)\,\nabla_{a_0} f^b(a_0,s,\epsilon_1)\big], \quad (11)$$

where $a_1 = f^b(a_0,s,\epsilon_1)$. In Eq. 11, we purposefully omit the dependence of $f^b$ and $\pi^b$ on $\theta$, since each $Q^s_n$ term is a local approximation of the RMC that does not depend on $\theta$ (as defined in Eq. 8). By recursively applying this relationship (Eq. 11) to $\nabla_{a_1} Q^s_{n-1}(s,a_1)$ and all subsequent gradient terms we obtain:

$$\nabla_{a_0} Q^s_n(s,a_0) = \mathbb{E}_{\epsilon_1,\ldots,\epsilon_n}\left[\nabla_{a_n} Q^s_\phi(s,a_n)\prod_{i=0}^{n-1}\nabla_{a_i} f^b(a_i,s,\epsilon_{i+1})\right], \quad (12)$$

where $a_i = f^b(a_{i-1},s,\epsilon_i)$ for $i = 1,\ldots,n$. By combining Eq. 10 and Eq. 12, we can thus reparameterize and express the whole sum in Eq. 9 as:

$$\nabla_\theta\,\mathbb{E}_{a\sim\pi^s_\theta(\cdot|s)}\big[Q^s(s,a)\big] \approx \sum_{n=0}^{N}\nabla_\theta\,\mathbb{E}_{a_0\sim\pi^b_\theta(\cdot|a,s)}\big[Q^s_n(s,a_0)\big] = \mathbb{E}_{\epsilon_0,\ldots,\epsilon_N}\left[\left(\sum_{n=0}^{N}\nabla_{a_n} Q^s_\phi(s,a_n)\prod_{i=0}^{n-1}\nabla_{a_i} f^b(a_i,s,\epsilon_{i+1})\right)\nabla_\theta f^b_\theta(a,s,\epsilon_0)\right]. \quad (13)$$

Eq. 13 intuitively corresponds to differentiating through each $Q^s_n(s,a_0)$ term by reparameterizing the RMC. Hence, to get a sample estimate of the policy gradient we can simulate the reparameterized RMC for $N$ reasoning steps to obtain $a_1,\ldots,a_N$, compute each $Q^s_\phi(s,a_n)$ term, and backpropagate (e.g., with autodifferentiation). Following Haarnoja et al. [13, 39], we can apply Theorem 3.2 and easily extend the same methodology to estimate the MaxEnt policy gradient from Eq. 6, which also involves an extra entropy term. We include this alternative derivation in App. A for completeness.
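A minimal PyTorch-style sketch of the resulting estimator: the first reasoning step is taken with the differentiable BT-policy, while subsequent steps use a parameter-frozen copy so that gradients flow only through the chain of action-beliefs, as in Eq. 13. All callables are hypothetical stand-ins, and the entropy term of Eq. 6 is omitted for brevity.

```python
import torch

def sspg_policy_loss(f_theta, f_frozen, q_net, s, a_init, n_steps):
    """Sketch of the reparameterized steady-state policy gradient (Eq. 13).

    `f_theta(a, s, eps)` is the reparameterized BT-policy; `f_frozen` is a copy
    whose parameters are detached from the graph, so only the first reasoning
    step carries gradients w.r.t. theta while later steps contribute through
    the chain rule over action-beliefs.
    """
    eps = torch.randn(n_steps + 1, *a_init.shape)
    a = f_theta(a_init, s, eps[0])       # step 0: differentiated w.r.t. theta
    q_sum = q_net(s, a)                  # n = 0 term of the sum in Eq. 13
    for i in range(1, n_steps + 1):
        a = f_frozen(a, s, eps[i])       # frozen parameters, grads flow via a
        q_sum = q_sum + q_net(s, a)      # n-th term of the sum in Eq. 13
    return -q_sum.mean()                 # minimize the negative objective
```

Calling `sspg_policy_loss(...).backward()` would then accumulate the sample estimate of Eq. 13 into the BT-policy's parameters via autodifferentiation.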
Algorithm 1: Agent Acting
  input: current state $s$
  $a_0 \sim \hat{A}$;  $N \leftarrow 0$;  $R^p \leftarrow +\infty$
  while $R^p > 1.1$ do
    $a_{N+1} \sim \pi^b_\theta(\cdot\,|\,a_N, s)$
    $N \leftarrow N + 1$
    update $R^p$ with $a_{1:N}$   (Eq. 16)
  $\hat{N} \leftarrow \rho\hat{N} + (1-\rho)N$, with $\rho \in [0, 1)$
  $\hat{A} \leftarrow \hat{A} \cup a_{1:N}$
  output: $a \sim a_{1:N}$

Algorithm 2: Agent Learning
  input: data buffer $\mathcal{D}$
  $(s, a, s', r) \sim \mathcal{D}$;  $a_0 \sim \pi^b_\theta(\cdot\,|\,a, s')$
  for $n = 0, \ldots, \lceil\hat{N}\rceil$ do
    $Q^s_n \leftarrow Q^s_\phi(s', a_n)$   (Eq. 8)
    $\epsilon_{n+1} \sim \mathcal{N}(0, I)$,  $a_{n+1} = f^b(a_n, s', \epsilon_{n+1})$
  $\nabla_\theta Q^s \leftarrow \nabla_\theta\big(\sum_{n=0}^{\lceil\hat{N}\rceil} Q^s_n\big)$   (Thm. 3.2)
  $\arg\min_\theta J(\theta)$   (Eq. 6)
  $a' \sim a_{1:\lceil\hat{N}\rceil}$
  $\arg\min_\phi J(\phi)$   (Eq. 7)

3.3 Action selection and temporal consistency
To collect experience in the environment, we propose to perform reasoning with the BT-policy starting from a set of different initial action-beliefs $\{a^1_0, \ldots, a^M_0\}$. We batch this set as a single input matrix, $\mathbf{a}_0$, to make effective use of parallel computation. To reduce the required number of reasoning steps and facilitate detecting convergence to $\pi^s_\theta$, we identify two desirable properties for the distribution of action-beliefs in $\mathbf{a}_0$. In particular, initial action-beliefs should 1) be likely under $\pi^s_\theta$, and 2) cover diverse modes of $\pi^s_\theta$. Property (1) should logically accelerate reasoning by providing the BT-policy with already-useful information about optimal behavior. Property (2) serves to provide the BT-policy with initial information about diverse behavior, which facilitates convergence detection (Sec. 3.4) and expedites reasoning even if the RMC has slow mixing times between multiple modes. To satisfy these properties, we use a simple and effective heuristic based on common temporal-consistency properties of MDPs [42, 43]. Especially in continuous environments, actions tend to have small individual effects, making them likely relevant also for environment states experienced in the near future. Thus, we propose storing past action-beliefs in a fixed-size buffer, called the short-term action memory, $\hat{A}$, and using them to construct $\mathbf{a}_0$. We find this strategy allows us to effectively regulate the quality and diversity of the initial action-beliefs through the size of $\hat{A}$, accelerating convergence at negligible cost.

3.4 Detecting convergence to the steady-state policy
A key requirement for learning and acting with BT-policies, as described in Sections 3.2 and 3.3, is the ability to determine a sufficient number of reasoning steps ($N$) for the action-belief distribution to converge. Given the properties of the RMC, there exist different analytical methods that provide a priori bounds on the rate of convergence [44–46]. However, using any fixed $N$ would be extremely limiting, as we expect the BT-policy and the properties of its resulting RMCs to continuously evolve during training. Moreover, different tasks, states, and initial action-beliefs might affect the number of reasoning steps required for convergence due to the different levels of complexity of the relative decision-making problems. To account for such conditions, the predominant approach in the Markov Chain Monte Carlo literature is to perform a statistical analysis of the properties of the simulated chain, choosing from several established convergence diagnostic tools [47–49]. Hence, we propose to employ a similar adaptive strategy by analyzing the history of the simulated RMC to determine the appropriate number of reasoning steps. Since we apply $\pi^b_\theta$ from a diverse set of initial action-beliefs (see Section 3.3), we base our convergence-detection strategy on the seminal Gelman-Rubin (GR) diagnostic [50] and its multivariate extension [51]. In particular, the multivariate GR diagnostic computes the pseudo scale reduction factor (PSRF), a score representing whether the statistics of a multivariate variable of interest have converged to the steady-state distribution.
The intuition behind this diagnostic is to compare two different estimators of the covariance of the unknown steady-state distribution, making use of the samples both within each individual chain and between all the different chains. Thus, as the individual chains approach the true steady-state distribution, the two estimates are expected to get closer to each other. The PSRF measures precisely this similarity, based on the largest eigenvalue of their matrix product. For our use-case, we employ the PSRF to determine the convergence of the set of action-beliefs $\mathbf{a}_{1:N}$ as we perform consecutive reasoning steps with $\pi^b_\theta$. Following [51], we calculate the average sample covariance of the action-beliefs within each of the parallel chains ($W$), computed from a batched set of initial action-beliefs $\mathbf{a}_0 = [a^1_0, a^2_0, \ldots, a^M_0]$:

$$\bar{a}^m = \frac{1}{N}\sum_{n=1}^{N} a^m_n, \qquad W^m = \frac{1}{N-1}\sum_{n=1}^{N}\big(a^m_n - \bar{a}^m\big)\big(a^m_n - \bar{a}^m\big)^{T}, \qquad W = \frac{1}{M}\sum_{m=1}^{M} W^m. \quad (14)$$

We compare $W$ with an unbiased estimate of the target covariance, constructed from the sample covariance between the different parallel chains ($B$):

$$\bar{a} = \frac{1}{N \times M}\sum_{n=1}^{N}\sum_{m=1}^{M} a^m_n, \qquad B = \frac{1}{M-1}\sum_{m=1}^{M}\big(\bar{a}^m - \bar{a}\big)\big(\bar{a}^m - \bar{a}\big)^{T}. \quad (15)$$

The PSRF for $\mathbf{a}_{1:N}$ is then computed from the largest eigenvalue ($\lambda_{\max}$) of the product $W^{-1}B$, as:

$$R^p = \sqrt{\frac{N-1}{N} + \lambda_{\max}\big(W^{-1}B\big)}. \quad (16)$$

Thus, as the individual chains approach the distribution of $\pi^s_\theta$, the PSRF ($R^p$) will approach 1. Following Brooks and Gelman [51], we use $R^p < 1.1$ as an effective criterion for determining the convergence of $\mathbf{a}_{1:N}$. In practice, we also keep a running mean of the current number of reasoning steps needed for convergence, $\hat{N}$. We use $\lceil\hat{N}\rceil$ as the number of reasoning steps to simulate the RMC with $\pi^b_\theta$ when computing gradients from Eqs. 6–7. $\lceil\hat{N}\rceil$ is a safe choice to ensure near-unbiased optimization, since $R^p < 1.1$ is considered a very conservative criterion [52] and we can learn by simulating the RMC from recent actions stored in the data buffer, which are already likely close to optimal. We provide further details regarding our implementation and its rationale in App. B. We provide a simplified summary of our adaptive reasoning process for acting and learning in Algs. 1–2.

3.5 Advantages of serial Markov chain reasoning
Based on the above specification, we identify three main conceptual advantages of our serial Markov chain reasoning framework.
1. Unlimited expressiveness. The distribution of agent behavior given by the SS-policy $\pi^s_\theta$ is a mixture model with potentially infinitely many components. Thus, even a simple Gaussian parameterization of the BT-policy $\pi^b_\theta$ would make $\pi^s_\theta$ a universal approximator of densities, providing unlimited expressive power to the agent [53, 54].
2. Adaptive computation. The number of reasoning steps performed to reach approximate convergence is determined by the properties of each environment state's RMC. Hence, the agent can flexibly spend different amounts of computation time based on the complexity of each action-selection problem, with potential gains in both precision and efficiency.
3. Information reuse. By storing past solutions to similar RMCs, we can initialize the reasoning process with initial action-beliefs that are already close to $\pi^s_\theta$. This allows using the temporal-consistency properties of the MDP to exploit traditionally discarded information and accelerate agent reasoning.
We provide empirical validation for these properties in Section 4.2.
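A NumPy sketch of the convergence check from Eqs. 14-16, assuming the M parallel reasoning chains are stacked into an array of shape (M, N, d); it mirrors the reconstruction above and may differ in small implementation details from the authors' code.

```python
import numpy as np

def psrf(chains):
    """Pseudo scale reduction factor (Eqs. 14-16) for `chains` of shape (M, N, d).

    M parallel reasoning chains, N reasoning steps, d action dimensions.
    Returns R^p; values close to 1 indicate the chains have mixed, and the
    stopping criterion in the text is R^p < 1.1. Assumes N >= 2.
    """
    M, N, d = chains.shape
    chain_means = chains.mean(axis=1)                                  # a-bar^m, (M, d)
    centred = chains - chain_means[:, None, :]
    W = np.einsum('mnd,mne->de', centred, centred) / (M * (N - 1))     # within-chain cov, Eq. 14
    W = W + 1e-8 * np.eye(d)                                           # small ridge for stability
    grand_mean = chain_means.mean(axis=0)                              # a-bar
    dev = chain_means - grand_mean
    B = dev.T @ dev / (M - 1)                                          # between-chain cov, Eq. 15
    lam_max = np.max(np.real(np.linalg.eigvals(np.linalg.solve(W, B))))
    return float(np.sqrt((N - 1) / N + lam_max))                       # Eq. 16
```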
4 Experimentation
4.1 Performance evaluation
We evaluate the serial Markov chain reasoning framework by comparing its performance with current state-of-the-art baselines based on traditional RL. We consider 6 challenging Mujoco tasks from Gym [29, 56] and 12 pixel-based tasks from the DeepMind Control Suite (DMC) [30]. In both settings, we base our implementation on MaxEnt RL, replacing the traditional policy with a Gaussian BT-policy optimized with the training procedures specified in Sec. 3. Other orthogonal design choices (e.g., network architectures) follow contemporary RL practices; we refer to App. C or the code for full details. We call the resulting algorithm Steady-State Policy Gradient (SSPG). We report the mean performance curves and aggregate metrics using the statistical tools from Rliable [55]. In particular, we compare normalized performance profiles [57], the interquartile mean (IQM), and the probability of improvement over baselines with the Mann-Whitney U statistic [58]. The reported ranges/shaded regions represent 95% stratified bootstrap confidence intervals (CIs) [59]. In App. D, we provide per-task results and further statistical analysis. For each experiment, we collect the returns of SSPG over five seeds by performing 100 evaluation rollouts during the last 5% of steps.
Mujoco suite. We evaluate on a challenging set of Mujoco tasks popular in recent literature. We compare SSPG with recent RL algorithms achieving state-of-the-art sample-efficiency performance on these tasks, which utilize large critic ensembles and high update-to-data (UTD) ratios. We consider REDQ [60] and MBPO [61] as state-of-the-art algorithms based on the traditional model-free and model-based RL frameworks. We also compare with iterative amortized policy optimization (IAPO) [62], in which the agent performs iterative amortization to optimize its policy distribution [63]. This procedure for action selection is more computationally involved than our agent's reasoning process, as it requires both evaluating the policy and computing gradients at several iterations. Yet, as IAPO is still based on the traditional policy gradient framework, its benefits are solely due to reducing the amortization gap with an alternative action inference procedure. To ground the different results, we also show the performance of the seminal Soft Actor-Critic (SAC) algorithm [39], on which all the considered policy gradient baselines are based. To account for the additional computational cost of training an agent with serial Markov chain reasoning, we use a UTD ratio that is half that of the other algorithms. On our hardware, this makes SSPG faster than all other modern baselines (see App. D). Figure 3 (Top) shows the performance results after 100K environment steps. Individual scores are normalized using the performance of SAC after 3M steps, enough to reach convergence in most tasks. SSPG considerably outperforms all prior algorithms with statistically meaningful gains, as per the conservative Neyman-Pearson statistical testing criterion [64]. Furthermore, SSPG even stochastically dominates all considered state-of-the-art baselines [65]. We obtain similar results evaluating at 50K and 200K steps (App. D). In comparison, IAPO obtains lower performance than other non-iterative baselines while being the most compute-intensive algorithm. This indicates that, for sample efficiency, only reducing the amortization gap beyond direct estimation might not provide significant benefits.
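As a brief aside on the evaluation protocol above, the aggregate IQM-with-bootstrap-CI metric can be illustrated with a small re-implementation. This uses a plain bootstrap over runs for illustration only; the paper reports metrics computed with the Rliable library's stratified bootstrap, which may differ in detail.

```python
import numpy as np
from scipy import stats

def iqm(scores):
    """Interquartile mean: the mean of the middle 50% of all scores."""
    return stats.trim_mean(scores, proportiontocut=0.25, axis=None)

def iqm_with_ci(scores, n_boot=2000, seed=0):
    """IQM of normalized scores with a 95% bootstrap confidence interval.

    `scores` is an array of shape (runs, tasks); rows are resampled with
    replacement to form each bootstrap replicate.
    """
    rng = np.random.default_rng(seed)
    point = iqm(scores)
    boot = []
    for _ in range(n_boot):
        resampled = scores[rng.integers(0, scores.shape[0], scores.shape[0])]
        boot.append(iqm(resampled))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return point, (lo, hi)
```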
Instead, serial Markov chain reasoning’s improved expressivity and flexibility appear to considerably accelerate learning, yielding state-of-the-art performance in complex tasks. DeepMind Control suite. To validate the generality of our framework, we also evaluate on a considerably different set of problems: 12 pixel-based DMC tasks. We follow the recent task specifications and evaluation protocols introduced by Yarats et al. [66]. We compare SSPG with DrQv2 [66], the current state-of-the-art policy gradient algorithm on this benchmark, which employs a deterministic actor and hand-tuned exploration. We also compare with additional baselines that, like SSPG, are based on MaxEnt RL: DrQ [67], CURL [68], and a convolutional version of SAC [39]. Figure 3 (Bottom) shows the performance results after 1.5M environment steps. DMC tasks yield returns scaled within a set range, [0, 1000], which we use for normalization. Remarkably, also in this domain, SSPG attains state-of-the-art performance with statistically significant improvements over all baselines. Unlike for the Mujoco tasks, the other considered algorithms based on MaxEnt RL underperform as compared to the deterministic DrQv2, a result Yarats et al. [66] attributed to ineffective exploration. In contrast, SSPG yields performance gains especially on sparser reward tasks where the other baselines struggle (see App. D). These results validate the scalability of our framework to high-dimensional inputs and its ability to successfully complement MaxEnt RL. 4.2 Properties of serial Markov chain reasoning We test if theorized benefits of our framework (Sec. 3.5) hold in practical settings with deep networks and stochastic optimization. We provide further ablation studies and analysis of SSPG in App. E. 1. Policy expressiveness. First, we test the expressiveness of the behavior learned with SSPG using a Gaussian BT-policy. We design a series of single-step toy RL problems where the agent needs to position itself on a small 2D environment with a reward function based on unknown goal locations, which we name positional bandits (see App. C for details). The objective of these experiments is to isolate how our framework compares with traditional policies for MaxEnt RL to explore the environments and learn to match the true canonical distributions of returns. As displayed in Fig. 4 A, even in highly multi-modal positional bandits, the SS-policy successfully learns to visit all relevant goals with similar frequencies. Furthermore, quantizing the state space around the goals reveals that the relative RMC intuitively learns to transition between action-beliefs that visit the different goals as reasoning progresses, with a transition matrix matching a cyclic permutation (App. F). In comparison, a squashed Gaussian policy expectedly fails to capture the complexity of the canonical distribution, with samples either collapsing to a single mode or covering large suboptimal parts of the action space. We also show results for a policy based on normalizing flows [69, 70], modeled with a deep expressive network (App. C). After several attempts, we find these models require orders of magnitude more training iterations and data to learn any behavior that is more complex than a uni-modal distribution. Yet, even after increasing training by a factor of 1000, we still observe the flow policy distribution collapsing in the more complex positional bandits. 
We attribute our findings to training inefficiencies arising from a lack of proper inductive biases for flow models in the non-i.i.d. RL problem setting [71]. In particular, as flows can assign arbitrarily low probability mass to some regions of the action space, initial local optima can greatly hinder future exploration, degrading the coverage of the data buffer distribution in a vicious circle.
2. Policy adaptivity. Second, we examine the adaptivity of our framework for tackling decision-making problems with different complexities. We compare the average number of reasoning steps ($\bar{N}$) performed by SSPG for each task from Sec. 4.1 (Fig. 4 B). We identify a general correlation between task difficulty and reasoning computation, with complex robotic manipulation and humanoid locomotion problems requiring the most steps. By concentrating on two representative tasks, we validate the effectiveness of the reasoning process and our adaptive convergence-detection strategy with an ablation study where we train SSPG using a fixed number of reasoning steps $N_{\mathrm{fix}} \in \{1, \lceil\bar{N}\rceil, \lceil 3\bar{N}\rceil\}$. For the case $N_{\mathrm{fix}} = 1$, which closely resembles traditional RL, we use double the UTD ratio to improve performance and offset any training-time gains from multi-step reasoning. As shown in Fig. 4 C, increasing $N_{\mathrm{fix}}$ yields clear performance improvements, validating that agents can greatly benefit from performing longer reasoning processes. Furthermore, our adaptive SSPG attains the same performance as $N_{\mathrm{fix}} = \lceil 3\bar{N}\rceil$ and visibly outperforms $N_{\mathrm{fix}} = \lceil\bar{N}\rceil$. These results show how different action selection problems require different amounts of reasoning computation and validate the practical effectiveness of our adaptive strategy to detect steady-state convergence. We obtain analogous findings for additional tasks and values of $N_{\mathrm{fix}}$ in App. F.
3. Solution reuse. Last, we examine the effect of using the short-term action memory buffer ($\hat{A}$) to sample initial action-beliefs ($\mathbf{a}_0$) in two tasks. We evaluate ablating $\hat{A}$ and randomly re-initializing $\mathbf{a}_0$ from a uniform distribution. While there are only minor differences performance-wise between the two approaches (App. F), sampling $\mathbf{a}_0$ from the short-term action memory considerably decreases the number of reasoning steps needed for convergence (Fig. 4 D). Moreover, we observe that the gap in reasoning efficiency expands throughout training as the agent's steady-state behavior further improves on the target task. This result validates that a simple temporal heuristic can provide considerable efficiency benefits, amortizing the additional computational cost of our powerful new framework.

5 Related work
There have been several prior attempts to extend ubiquitous Gaussian policies [13, 39, 72, 73] with simple normalizing flows [69, 70], both to improve expressiveness [74, 75] and to instantiate behavior hierarchies [76]. Yet, the expressiveness of normalizing flows is coupled with some training challenges [71], which we show can lead to premature convergence to suboptimal solutions in RL (Sec. 4.2). Other works have also considered entirely replacing policy models with gradient-free [4] or gradient-based optimization over the predicted values [77]. Marino et al. [62] similarly considered learning an optimizer to infer Gaussian behavior [28] with iterative amortization [63]. However, while all these works consider alternative modeling of agent behavior, they are still based on the traditional RL framework of representing decision-making as the output of a fixed process.
Instead, our work entails a conceptually different approach and enables implicit modeling of agent behavior as the result of an adaptive reasoning process, while also orthogonally providing agents with additional flexibility to scale computation based on the properties of each individual input state. Outside RL, there have been efforts to model generation processes with parameterized Markov chains learned to revert fixed noise-injection processes acting on data [78–82]. Based on this framework, diffusion models [83–85] recently achieved remarkable results for image generation [85, 86]. While applied to inherently different problem settings, these works share some conceptual resemblances with our framework and highlight the vast scaling potential of implicit modeling.

6 Conclusion
We introduced serial Markov chain reasoning, a novel framework for modeling agent behavior in RL with several benefits. We showed our framework allows an agent to 1) learn arbitrary continuous action distributions, 2) flexibly scale computation based on the complexity of individual action-selection decisions, and 3) re-use prior solutions to accelerate future reasoning. Hence, we derived SSPG, an off-policy maximum entropy RL algorithm for serial Markov chain reasoning, achieving state-of-the-art performance on two separate continuous control benchmarks. While for problems with discrete action spaces simple multinomial policy distributions already provide unlimited expressivity, we note that the inherent computational adaptivity of our framework could still yield benefits over traditional fixed policies in these settings. Furthermore, we believe our motivation and early results provide a strong argument for the future potential of serial Markov chain reasoning, even beyond off-policy RL and simulation tasks. We provide our implementation for transparency and to facilitate future extensions at sites.google.com/view/serial-mcr/.

Acknowledgments
We thank Johannes Lutzeyer for providing valuable feedback on an earlier draft of this work. Edoardo Cetin would like to acknowledge the support from the Engineering and Physical Sciences Research Council [EP/R513064/1]. Oya Celiktutan would also like to acknowledge the support from the LISI Project, funded by the Engineering and Physical Sciences Research Council [EP/V010875/1]. Furthermore, we thank Toyota Motor Europe and Toyota Motor Corporation for providing support towards funding the utilized computational resources.
1. What is the focus and contribution of the paper on serial Markov chain reasoning for representing agent behavior?
2. What are the strengths of the proposed approach, particularly in its ability to solve RL problems in a more expressive way?
3. What are the weaknesses of the paper regarding its clarity and background knowledge?
4. Do you have any concerns or questions about the theoretical analysis, such as approximating arbitrary distributions with parameterized transition functions?
5. Have the authors adequately addressed the limitations of their approach?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper introduces a new serial Markov chain reasoning framework for representing agent behavior. Instead of using a policy, the agent selects an action by maintaining an action-belief and a belief transition policy. The authors derive a steady-state policy gradient algorithm for estimating the belief transition policy in a MaxEnt RL setting. In addition, this paper also provides a method to determine the number of reasoning steps dynamically. Finally, some empirical experiments are presented to verify the performance of the proposed method in real applications.
Strengths And Weaknesses
Strengths: The method presented in this paper is novel. It has the potential to solve some RL problems in a more expressive way. The idea of learning the action belief instead of the policy makes sense to me. The proofs of the theorems are rigorous and look right to me. The experiments presented in this paper are sufficient and well-designed. The results look promising, as the SSPG algorithm outperforms others with significant gains.
Weaknesses: This paper is a bit hard to follow since some background knowledge is not clearly presented. Before you introduce how to apply the techniques, it is better to have a comprehensive description of them.
Questions
I have several questions. The authors state that the proposed reasoning Markov chain is more general compared to the traditional policy. And in the first section, you mentioned that the agent's behavior can approximate any arbitrary distribution with simple parameterized transition functions. I don't fully get this point from the theoretical analysis. Is it true that a parameterized belief transition function can approximate any policy? Can you elaborate on your statement a bit by referring to the theorems in the paper? And since you are using the stationary distribution of a Markov chain to define the action, you assumed that the Markov chain is ergodic and every action is reachable in every state. I suggest the authors explain the assumptions first before presenting the method and theorems.
Limitations
Yes. The authors adequately addressed the limitations.
NIPS
Title Policy Gradient With Serial Markov Chain Reasoning Abstract We introduce a new framework that performs decision-making in reinforcement learning (RL) as an iterative reasoning process. We model agent behavior as the steady-state distribution of a parameterized reasoning Markov chain (RMC), optimized with a new tractable estimate of the policy gradient. We perform action selection by simulating the RMC for enough reasoning steps to approach its steadystate distribution. We show our framework has several useful properties that are inherently missing from traditional RL. For instance, it allows agent behavior to approximate any continuous distribution over actions by parameterizing the RMC with a simple Gaussian transition function. Moreover, the number of reasoning steps to reach convergence can scale adaptively with the difficulty of each action selection decision and can be accelerated by re-using past solutions. Our resulting algorithm achieves state-of-the-art performance in popular Mujoco and DeepMind Control benchmarks, both for proprioceptive and pixel-based tasks. 1 Introduction Reinforcement learning (RL) has the potential to provide a general and effective solution to many modern challenges. Recently, this class of methods achieved numerous impressive milestones in different problem domains, such as games [1–3], robotics [4–6], and other meaningful real-world applications [7–9]. However, all these achievements relied on massive amounts of data, controlled environments, and domain-specific tuning. These commonalities highlight some of the current practical limitations that prevent RL to be widely applicable [10]. In the deep RL framework, practitioners train agents with the end goal of obtaining optimal behavior. Traditionally, agent behavior is modeled with feed-forward policies regressing from any state to a corresponding distribution over actions. Such formulation yields practical training objectives in both off-policy [11–13] and on-policy settings [14–16]. However, we identify three inherent properties of this rigid representation of behavior that could considerably impact expressivity and efficiency in continuous control tasks. First, agent behavior is restricted to a class of tractable distributions, which might fail to capture the necessary complexity and multi-modality of a task. Second, the policy performs a fixed reasoning process with a feed-forward computation, which potency cannot adapt to the varying complexity of individual action selection problems. Third, decision-making is performed every time from scratch, without re-using any past information that might still inform and facilitate the current action selection problem. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Unlike RL policies, human reasoning does not appear to follow a rigid feed-forward structure. In fact, a range of popular psychological models characterize human decision-making as a sequential process with adaptive temporal dynamics [17–20]. Many of these models have found empirical groundings in neuroscience [21–24] and have shown to effectively complement RL for capturing human behavior in experimental settings [25, 26]. Partly inspired by these works, we attempt to reframe the deep RL framework by making use of a similar flexible model of agent behavior, in order to counteract its aforementioned limitations. We introduce serial Markov chain reasoning - a new powerful framework for representing agent behavior. 
Our framework treats decision-making as an adaptive reasoning process, where the agent sequentially updates its beliefs regarding which action to execute in a series of reasoning steps. We model this process by replacing the traditional policy with a parameterized transition function, which defines a reasoning Markov chain (RMC). The steady-state distribution of the RMC represents the distribution of agent behavior after performing enough reasoning for decision-making. Our framework naturally overcomes the aforementioned limitations of traditional RL. In particular, we show that our agent’s behavior can approximate any arbitrary distribution even with simple parameterized transition functions. Moreover, the required number of reasoning steps adaptively scales with the difficulty of individual action selection problems and can be accelerated by re-using samples from similar RMCs. To optimize behavior modeled by the steady-state distribution of the RMC, we derive a new tractable method to estimate the policy gradient. Hence, we implement a new effective off-policy algorithm for maximum entropy reinforcement learning (MaxEnt RL) [27, 28], named Steady-State Policy Gradient (SSPG). Using SSPG, we empirically validate the conceptual properties of our framework over traditional MaxEnt RL. Moreover, we obtain state-of-the-art results for popular benchmarks from the OpenAI Gym Mujoco suite [29] and the DeepMind Control suite from pixels [30]. In summary, this work makes the following key contributions: 1. We propose serial Markov Chain reasoning a framework to represent agent behavior that can overcome expressivity and efficiency limitations inherent to traditional reinforcement learning. 2. Based on our framework, we derive SSPG, a new tractable off-policy algorithm for MaxEnt RL. 3. We provide experimental results validating theorized properties of serial Markov Chain reasoning and displaying state-of-the-art performance on the Mujoco and DeepMind Control suites. 2 Background 2.1 Reinforcement learning problem We consider the classical formulation of the reinforcement learning (RL) problem setting as a Markov Decision Process (MDP) [31], defined by the tuple (S,A, P, p0, r, ). In particular, at each discrete time step t the agent experiences a state from the environment’s state-space, st 2 S, based on which it selects an action from its own action space, at 2 A. In continuous control problems (considered in this work), the action space is typically a compact subset of an Euclidean space Rdim(A). The evolution of the environment’s state through time is determined by the transition dynamics and initial state distribution, P and p0. Lastly, the reward function r represents the immediate level of progress for any state-action tuple towards solving a target task. The agent’s behavior is represented by a state-conditioned parameterized policy distribution ⇡✓. Hence, its interaction with the environment produces trajectories, ⌧ = (s0, a0, s1, ..., sT , aT ), according to a factored joint distribution p⇡✓ (⌧) = p0(s0) QT t=0 ⇡✓(at|st)P (st+1|st, at). The RL objective is to optimize agent behavior as to maximize the discounted sum of expected future rewards: argmax✓ Ep⇡✓ (⌧) hPT t=0 tr(st, at) i . 2.2 Maximum entropy reinforcement learning and inference Maximum entropy reinforcement learning (MaxEnt RL) [32] considers optimizing agent behavior for a different objective that naturally arises when formulating action selection as an inference problem [33–36]. 
Following Levine [28], we consider modeling a set of binary optimality random variables with realization probability proportional to the exponentiated rewards scaled by the temperature ↵, p(Ot|st, at) / exp( 1↵r(st, at)). The goal of MaxEnt RL is to minimize the KL-divergence between trajectories stemming from agent behavior, p⇡✓ (⌧), and the inferred optimal behavior, p(⌧ |O0:T ): DKL (p⇡✓ (⌧)||p(⌧ |O0:T )) = Ep⇡✓(⌧) " log p0(s0) QT t=0 ⇡✓(at|st)P (st+1|st, at) p0(s0) QT t=0 exp( 1 ↵r(st, at))P (st+1|st, at) # = Ep⇡✓ (⌧) " TX t=0 r(st, at) + ↵H(⇡(·|st)) # . (1) The resulting entropy-regularized objective introduces an explicit trade-off between exploitation and exploration, regulated by the temperature parameter ↵ scaling the policy’s entropy. An effective choice to optimize this objective is to learn an auxiliary parameterized soft Q-function [37]: Q⇡ (st, at) = Ep⇡✓ (⌧ |st,at) " r(st, at) + TX t0=t+1 r(st0 , at0) + ↵H(⇡(at0 |st0) # . (2) Given some state, Q⇡ (s, ·) represents an energy-function based on the expected immediate reward and the agent’s future likelihood of optimality from performing any action. Thus, we can locally optimize the MaxEnt objective by reducing the KL-divergence between ⇡ and the canonical distribution of its current soft Q-function. This is equivalent to maximizing the expected soft Q-function’s value corrected by the policy’s entropy, resembling a regularized policy gradient objective [11, 12]: argmax ✓ Es,a⇠⇡✓(·|s) ⇥ Q⇡ (s, a) + ↵H(⇡✓(a|s)) ⇤ . (3) The policy is usually modeled with a neural network outputting the parameters of some tractable distribution, such as a factorized Gaussian, ⇡✓(·|s) = N(µ✓(s);⌃✓(s)). This practice allows to efficiently approximate the gradients from Eqn. 3 via the reparameterization trick [38]. We consider the off-policy RL setting, where the agent alternates learning with storing new experience in a data buffer, D. We refer the reader to Haarnoja et al. [13, 39] for further derivation and practical details. 3 Policy Gradient with serial reasoning 3.1 Reasoning as a Markov chain We introduce Serial Markov Chain Reasoning, a new framework to model agent behavior, based on conceptualizing action selection as an adaptive, sequential process which we refer to as reasoning. Instead of using a traditional policy, the agent selects which action to execute by maintaining an internal action-belief and a belief transition (BT-) policy, ⇡b(a0|a, s). During the reasoning process, the agent updates its action-belief for a series of reasoning steps by sampling a new action with the BT-policy ⇡b taking both environment state and previous action-belief as input. We naturally represent this process with a reasoning Markov chain (RMC), a discrete-time Markov chain over different action-beliefs, with transition dynamics given by the BT-policy. Hence, for any input environment state s and initial action-belief a0, the n-step transition probabilities of the RMC for future reasoning steps n = 1, 2, 3, ... are defined as: ⇡bn(a|a0, s) = Z A ⇡b(a|a0, s)⇡bn 1(a0|a0, s)da0, for n > 1, and ⇡b1 = ⇡b. (4) Given a compact action space and a BT-policy with a non-zero infimum density, we can ensure that as the number of reasoning steps grows, the probability of any action-belief in the RMC converges to some steady-state probability which is independent of the initial action-belief.1 We denote this implicit probability distribution as the steady-state (SS-) policy, symbolized by ⇡s(a|s): Lemma 3.1. Steady-state convergence. 
For any environment state s, consider a reasoning Markov chain (RMC) defined on a compact action space A with transition probabilities given by ⇡b(a0|a, s). Suppose that inf{⇡b(a0|a, s) : a0, a 2 A} > 0. Then there exists a steady-state probability distribution function ⇡s(·|s) such that: lim n!1 ⇡bn(a|a0, s) ! ⇡s(a|s) for all a 2 A. (5) Proof. See Appendix A. The RMC’s steady-state probabilities can be interpreted as representing the distribution of agent’s behavior after an appropriate number of reasoning steps are performed. In this work, we strive to optimize the agent’s behavior following the MaxEnt RL framework described in Section 2. In particular, we consider learning a parameterized BT-policy, ⇡b✓, to produce appropriate transition probabilities for each environment state such that the SS-policy, ⇡s✓ , from the resulting RMC optimizes: argmax ✓ J(✓) = Es,a⇠⇡s✓(·|s) ⇥ Qs (s, a) + ↵H(⇡ s ✓(a|s)) ⇤ . (6) Here, Qs is a parameterized soft Q-function for the agent’s behavior from ⇡ s, which we learn by minimizing a squared soft Bellman loss utilizing delayed parameters 0 and samples from ⇡s✓: argmin J( ) = Es,a,s0 h (Qs (s, a) ⇣ r(s, a) + Ea0⇠⇡s✓(·|s) ⇥ Q s 0(s 0 , a 0) + ↵H(⇡s✓(a 0|s0) ⇤⌘i2 . (7) In Fig. 2, we illustrate the relationship between a learned BT-policy, the corresponding SS-policy, and the soft Q-function in a 1-dimensional toy task (see App. C for details). In this example, the BT-policy is parameterized as a simple squashed Gaussian distribution, with unimodal transitions between consecutive action beliefs (Fig. 2, Left). We obtain samples of agent behavior (the SS-policy) by performing a series of reasoning steps, using the BT-policy to simulate the RMC until we approach steady-state convergence. By plotting the resulting empirical distribution of agent behavior, we see it closely matches the multi-modal, non-Gaussian canonical distribution from its soft Q-function (Fig. 2, Right). This example shows how the expressive power of agent behavior in our framework can go far beyond the BT-policy’s simple parameterization, enabling for the effective maximization of complex and multi-modal MaxEnt objectives. 3.2 Learning the belief transition policy We propose a new method to estimate the policy gradient of the BT-policy, ⇡b✓, for optimizing the steady-state MaxEnt objective described in Section 3.1. We note that the gradient from Eq. 6 involves differentiating through an expectation of the steady-state policy, ⇡s✓ . However, ⇡ s ✓ is only implicitly defined, and its connection with the actual BT-policy or its parameters does not have a tractable closed-form expression. To approach this problem, we introduce a family of n-step extensions to the soft Q-function, Qsn : S ⇥A 7! R for n = 0, 1, 2, . . . , defined as: Qsn(s, a) = Z A ⇡bn(a 0|a, s)Qs (s, a0)da0, with r✓Qsn(s, a) = 0. (8) Intuitively, each n-step soft Q-function Qsn(s, a) outputs the expected soft Q-value after performing n reasoning steps in the RMC from the initial action-belief a. However, we treat the output of each n-step soft Q-function as being independent of the actual parameters of the BT-policy, ✓. Hence, we can interpret computing Qsn(s, a) as simulating the RMC with a fixed and immutable copy of the current ⇡b✓. We use this definition to provide a convenient notation in the following new Theorem that expresses the policy gradient without differentiating through ⇡s✓: 1This is unrelated to the steady-state distribution for infinite-horizon MDPs considered in prior work [40]. Theorem 3.2. 
Steady-state policy gradient. Let ⇡b✓(·|a, s) be a parameterized belief transition policy which defines a reasoning Markov chain with a stationary distribution given by the steady-state policy ⇡s✓(·|s). Let Qs be a real function defined on S ⇥A, with a family of n-step extensions {Qsn} as defined in Eq. 8. Suppose ⇡b, Qs and their gradient with respect to ✓ (denoted r✓) are continuous and bounded functions. Then r✓ Ea⇠⇡s✓(·|s) [Q s(s, a)] = Ea⇠⇡s✓(·|s) " lim N!1 NX n=0 r✓ Ea0⇠⇡b✓(·|a,s) [Q s n(s, a 0)] # . (9) Proof. See Appendix A. Using Lemma 3.1 (steady-state convergence), we can approximate the policy gradient expression in Eq. 9 with an arbitrarily small expected error using a finite number of n-step soft Q-functions, i.e., N (see App. A). An intuition for this property follows from the fact that for large enough n, Lemma 3.1 implies that ⇡bn(a|a0, s) ⇡ ⇡s✓(a|s) and, thus, Qsn(s, a0) ⇡ R A ⇡ b ✓s(a|s)Qs (s, a)da. Therefore, the value of each Qsn(s, a0) will be independent of the BT-policy’s action a0, such that r✓ Ea0⇠⇡b✓(·|a,s) [Q s n(s, a 0)] ⇡ 0. In other words, each subsequent step in the RMC introduces additional randomness that is independent of a0, causing a warranted vanishing gradient phenomenon [41] which culminates with converging to ⇡s✓ . Using a similar notation as Haarnoja et al. [39], we apply the reparameterization trick [38] to express the BT-policy in terms of a deterministic function f b✓ (a, s, ✏), taking as input a Gaussian noise vector ✏. This allows to rewrite the gradient in each inner expectation in the sum from Eq. 9 as: r✓ Ea0⇠⇡b✓(·|a,s) [Q s n(s, a 0)] = E✏0⇠N(0,1) ⇥ ra0Qsn(s, a0)r✓f b✓ (a, s, ✏0) ⇤ , (10) where a0 = f b✓ (a, s, ✏0). We can apply the same reparameterization for all n-step soft Q-functions, to establish a new relationship between the gradient terms ra0Qsn(s, a0): ra0Qsn(s, a0) = ra0 Z A ⇡bn(an|a0, s)Qs (s, an)dan = ra0 Z A ⇡b(a1|a0, s)Qsn 1(s, a1)da1 = E✏1 ⇥ ra1Qsn 1(s, a1)ra0f b(a0, s, ✏1) ⇤ , where, a1 = f(a0, s, ✏1). (11) In Eq. 11, we purposefully omit the dependence of f b and ⇡b from ✓ since each Qsn term is a local approximation of the RMC that does not depend on ✓ (as defined in Eq. 8). By recursively applying this relationship (Eq. 11) to ra1Qsn 1(s, a1) and all subsequent gradient terms we obtain: ra0Qsn(s, a0) = E✏1,...,✏n " ranQs (s, an) n 1Y i=0 raif b(ai, s, ✏i+1) # , (12) where ai = f b(ai 1, s, ✏i) for i = 1, . . . , n. By combining Eq. 10 and Eq. 12, we can thus reparameterize and express the whole sum in Eq. 9 as: r✓ Ea⇠⇡s✓(·|s) [Q s(s, a)] ⇡ NX n=0 r✓ Ea0⇠⇡b✓(·|a,s) [Q s n(s, a 0)] = E✏0,...,✏N " NX n=0 ranQs (s, an) n 1Y i=0 raif b(ai, s, ✏i+1) ! r✓f b✓ (a, s, ✏0) # . (13) Eq. 13 intuitively corresponds to differentiating through each Qsn(s, a0) term by reparameterizing the RMC. Hence, to get a sample estimate of the policy gradient we can simulate the reparameterized RMC for N reasoning steps to obtain a1, ..., aN , compute each Qs (s, an) term, and backpropagate (e.g., with autodifferentiation). Following Haarnoja et al. [13, 39], we can apply Theorem 3.2 and easily extend the same methodology to estimate the MaxEnt policy gradient from Eq. 6 that also involves an extra entropy term. We include this alternative derivation in App. A for completeness. Algorithm 1 Agent Acting input: s, current state a0 ⇠ Â N 0 R p +1 while R p > 1.1 do aN+1 ⇠ ⇡b✓(·|aN) N N + 1 Update Rp with a1:N . Eq.16 N̂ ⇢N̂ + (1 ⇢)N . 
⇢ 2 [0, 1) Â Â [ a1:N output: a ⇠ a1:N Algorithm 2 Agent Learning input: D, data buffer (s, a, s0, r) ⇠ D a0 ⇠ ⇡b✓(·|a, s0) for n 0, dN̂e do Q s n Qs (s0, an) . Eq. 8 ✏n+1 ⇠ N(0, 1), an+1 = f b(an, s, ✏n+1) r✓Qs r✓( PdNe n=0 Q s n) . Thm. 3.2 argmin✓ J(✓) . Eq. 6 a 0 ⇠ a1:dN̂e argmin J( ) . Eq. 7 3.3 Action selection and temporal consistency To collect experience in the environment, we propose to perform reasoning with the BT-policy starting from a set of different initial action-beliefs {a00, ..., aM0 }. We batch this set as a single input matrix, a0, to make effective use of parallel computation. To reduce the required number of reasoning steps and facilitate detecting convergence to ⇡s✓ , we identify two desirable properties for the distribution of action-beliefs in a0. In particular, initial action-beliefs should 1) be likely under ⇡s✓ , and 2) cover diverse modes of ⇡s✓ . Property (1) should logically accelerate reasoning by providing the BT-policy with already-useful information about optimal behavior. Property (2) serves to provide the BT-policy with initial information of diverse behavior, which facilitates convergence detection (Sec. 3.4) and expedites reasoning even if the RMC has slow mixing times between multiple modes. To satisfy these properties, we use a simple effective heuristic based on common temporal-consistency properties of MDPs [42, 43]. Especially in continuous environments, actions tend to have small individual effects, making them likely relevant also for environment states experienced in the near future. Thus, we propose storing past action-beliefs in a fixed sized buffer, called the short-term action memory, Â, and use them to construct a0. We find this strategy allows to effectively regulate the initial action-beliefs quality and diversity through the size of Â, accelerating convergence at negligible cost. 3.4 Detecting convergence to the steady-state policy A key requirement for learning and acting with BT-policies, as described in Sections 3.2 and 3.3, is the ability to determine a sufficient number of reasoning steps (N ) for the action-belief distribution to converge. Given the properties of the RMC, there exist different analytical methods that provide a priori bounds on the rate of convergence [44–46]. However, using any fixed N would be extremely limiting as we expect the BT-policy and the properties of its resulting RMCs to continuously evolve during training. Moreover, different tasks, states, and initial action-beliefs might affect the number of reasoning steps required for convergence due to different levels of complexity for the relative decisionmaking problems. To account for similar conditions, in the Markov Chain Monte Carlo literature, the predominant approach is to perform a statistical analysis of the properties of the simulated chain, choosing from several established convergence diagnostic tools [47–49]. Hence, we propose to employ a similar adaptive strategy by analyzing the history of the simulated RMC to determine the appropriate number of reasoning steps. Since we apply ⇡b✓ from a diverse set of initial action beliefs (see Section 3.3), we base our convergence-detection strategy on the seminal Gelman-Rubin (GR) diagnostic [50] and its multivariate extension [51]. In particular, the multivariate GR diagnostic computes the pseudo scale reduction factor (PSRF), a score representing whether the statistics of a multivariate variable of interest have converged to the steady-state distribution. 
The intuition behind this diagnostic is to compare two different estimators of the covariance of the unknown steady-state distribution, constructed either from the samples within each individual chain or from the samples across the different chains. Thus, as the individual chains approach the true steady-state distribution, the two estimates should be expected to get closer to each other. The PSRF measures precisely this similarity, based on the largest eigenvalue of their matrix product. For our use-case, we employ the PSRF to determine the convergence of the set of action-beliefs a_{1:N}, as we perform consecutive reasoning steps with π^b_θ. Following [51], we calculate the average sample covariance of the action-beliefs within each of the parallel chains (W), computed from a batched set of initial action-beliefs a_0 = [a_0^1, a_0^2, …, a_0^M]:

\bar{a}_m = \frac{1}{N}\sum_{n=1}^{N} a^m_n, \qquad W_m = \frac{1}{N-1}\sum_{n=1}^{N} (a^m_n - \bar{a}_m)(a^m_n - \bar{a}_m)^{T}, \qquad W = \frac{1}{M}\sum_{m=1}^{M} W_m.   (14)

We compare W with an unbiased estimate of the target covariance, constructed from the sample covariance between the different parallel chains (B):

\bar{a} = \frac{1}{N \times M}\sum_{n=1}^{N}\sum_{m=1}^{M} a^m_n, \qquad B = \frac{1}{M-1}\sum_{m=1}^{M} (\bar{a}_m - \bar{a})(\bar{a}_m - \bar{a})^{T}.   (15)

The PSRF for a_{1:N} is then computed from the largest eigenvalue (λ_max) of the product W^{-1}B, as:

R^p = \sqrt{\frac{N-1}{N} + \lambda_{\max}\big(W^{-1}B\big)}.   (16)

Thus, as the individual chains approach the distribution of π^s_θ, the PSRF (R^p) will approach 1. Following Brooks and Gelman [51], we use R^p < 1.1 as an effective criterion for determining the convergence of a_{1:N}. In practice, we also keep a running mean of the current number of reasoning steps needed for convergence, N̂. We use ⌈N̂⌉ as the number of reasoning steps to simulate the RMC with π^b_θ when computing gradients from Eqs. 6-7. ⌈N̂⌉ is a safe choice to ensure near-unbiased optimization, since R^p < 1.1 is considered a very conservative criterion [52] and we can learn by simulating the RMC from recent actions stored in the data buffer, which are already likely close to optimal. We provide further details regarding our implementation and its rationale in App. B. We provide a simplified summary of our adaptive reasoning process for acting and learning in Algs. 1-2.

3.5 Advantages of serial Markov chain reasoning

Based on the above specification, we identify three main conceptual advantages of our serial Markov chain reasoning framework.

1. Unlimited expressiveness. The distribution of agent behavior given by the SS-policy π^s_θ is a mixture model with potentially infinitely many components. Thus, even a simple Gaussian parameterization of the BT-policy π^b_θ would make π^s_θ a universal approximator of densities, providing unlimited expressive power to the agent [53, 54].

2. Adaptive computation. The number of reasoning steps performed to reach approximate convergence is determined by the properties of each environment state's RMC. Hence, the agent can flexibly spend different amounts of computation time based on the complexity of each action-selection problem, with potential gains in both precision and efficiency.

3. Information reuse. By storing past solutions to similar RMCs, we can initialize the reasoning process with initial action-beliefs that are already close to π^s_θ. This allows using the temporal-consistency properties of the MDP to exploit traditionally discarded information and accelerate agent reasoning.

We provide empirical validation for these properties in Section 4.2.
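To make the estimator of Eq. 13 and the gradient step of Algorithm 2 more concrete, below is a minimal PyTorch-style sketch of the actor surrogate. The interfaces `bt_policy(a, s)` (a module returning a reparameterizable distribution) and `soft_q(s, a)` are hypothetical placeholders rather than the interfaces of the released code, and the entropy term of the full MaxEnt objective (Eq. 6) is omitted for brevity.

```python
import copy
import torch

def sspg_actor_surrogate(bt_policy, soft_q, s, a_init, n_steps):
    """Sketch of a surrogate whose autograd gradient matches Eq. 13 / Algorithm 2.
    Only the first reasoning step keeps its dependence on the policy parameters;
    later steps use a parameter-frozen copy, so theta-gradients flow exclusively
    through grad_theta f_theta(a, s, eps_0), while action-gradients still chain
    through the frozen steps (the product terms of Eq. 12)."""
    frozen = copy.deepcopy(bt_policy)          # local approximation of the RMC (Eq. 8)
    for p in frozen.parameters():
        p.requires_grad_(False)                # block theta-gradients, keep input-gradients

    a = bt_policy(a_init, s).rsample()         # first reparameterized step, depends on theta
    q_sum = soft_q(s, a)                       # Q^s_0 term
    for _ in range(n_steps):
        a = frozen(a, s).rsample()             # subsequent reasoning steps
        q_sum = q_sum + soft_q(s, a)           # accumulate the Q^s_n terms
    return -q_sum.mean()                       # minimize negative summed soft Q-values
```

In practice, `n_steps` would be set to ⌈N̂⌉ as in Algorithm 2, and a frozen copy would be maintained across updates instead of being rebuilt for every call.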
4 Experimentation

4.1 Performance evaluation

We evaluate the serial Markov chain reasoning framework by comparing its performance with current state-of-the-art baselines based on traditional RL. We consider 6 challenging Mujoco tasks from Gym [29, 56] and 12 pixel-based tasks from the DeepMind Control Suite (DMC) [30]. In both settings, we base our implementation on MaxEnt RL, replacing the traditional policy with a Gaussian BT-policy optimized with the training procedures specified in Sec. 3. Other orthogonal design choices (e.g., network architectures) follow contemporary RL practices; we refer to App. C and the code for full details. We call the resulting algorithm Steady-State Policy Gradient (SSPG). We report the mean performance curves and aggregate metrics using the statistical tools from Rliable [55]. In particular, we compare normalized performance profiles [57], the interquartile mean (IQM), and the probability of improvement over baselines with the Mann-Whitney U statistic [58]. The reported ranges/shaded regions represent 95% stratified bootstrap confidence intervals (CIs) [59]. In App. D, we provide per-task results and further statistical analysis. For each experiment, we collect the returns of SSPG over five seeds by performing 100 evaluation rollouts during the last 5% of steps.

Mujoco suite. We evaluate on a challenging set of Mujoco tasks popular in recent literature. We compare SSPG with recent RL algorithms achieving state-of-the-art sample-efficiency performance on these tasks, which utilize large critic ensembles and high update-to-data (UTD) ratios. We consider REDQ [60] and MBPO [61] as state-of-the-art algorithms based on the traditional model-free and model-based RL frameworks, respectively. We also compare with iterative amortized policy optimization (IAPO) [62], in which the agent performs iterative amortization to optimize its policy distribution [63]. This procedure for action selection is more computationally involved than our agent's reasoning process, as it requires both evaluating the policy and computing gradients at several iterations. Yet, as IAPO is still based on the traditional policy gradient framework, its benefits are solely due to reducing the amortization gap with an alternative action inference procedure. To ground the different results, we also show the performance of the seminal Soft Actor-Critic (SAC) algorithm [39], on which all considered policy gradient baselines are based. To account for the additional computational cost of training an agent with serial Markov chain reasoning, we use a UTD ratio that is half that of the other algorithms. On our hardware, this makes SSPG faster than all other modern baselines (see App. D). Figure 3 (Top) shows the performance results after 100K environment steps. Individual scores are normalized using the performance of SAC after 3M steps, enough to reach convergence in most tasks. SSPG considerably outperforms all prior algorithms with statistically meaningful gains, as per the conservative Neyman-Pearson statistical testing criterion [64]. Furthermore, SSPG even stochastically dominates all considered state-of-the-art baselines [65]. We obtain similar results evaluating at 50K and 200K steps (App. D). In comparison, IAPO obtains lower performance than the other, non-iterative baselines while being the most compute-intensive algorithm. This indicates that, for sample-efficiency, only reducing the amortization gap beyond direct estimation might not provide significant benefits.
Instead, serial Markov chain reasoning’s improved expressivity and flexibility appear to considerably accelerate learning, yielding state-of-the-art performance in complex tasks. DeepMind Control suite. To validate the generality of our framework, we also evaluate on a considerably different set of problems: 12 pixel-based DMC tasks. We follow the recent task specifications and evaluation protocols introduced by Yarats et al. [66]. We compare SSPG with DrQv2 [66], the current state-of-the-art policy gradient algorithm on this benchmark, which employs a deterministic actor and hand-tuned exploration. We also compare with additional baselines that, like SSPG, are based on MaxEnt RL: DrQ [67], CURL [68], and a convolutional version of SAC [39]. Figure 3 (Bottom) shows the performance results after 1.5M environment steps. DMC tasks yield returns scaled within a set range, [0, 1000], which we use for normalization. Remarkably, also in this domain, SSPG attains state-of-the-art performance with statistically significant improvements over all baselines. Unlike for the Mujoco tasks, the other considered algorithms based on MaxEnt RL underperform as compared to the deterministic DrQv2, a result Yarats et al. [66] attributed to ineffective exploration. In contrast, SSPG yields performance gains especially on sparser reward tasks where the other baselines struggle (see App. D). These results validate the scalability of our framework to high-dimensional inputs and its ability to successfully complement MaxEnt RL. 4.2 Properties of serial Markov chain reasoning We test if theorized benefits of our framework (Sec. 3.5) hold in practical settings with deep networks and stochastic optimization. We provide further ablation studies and analysis of SSPG in App. E. 1. Policy expressiveness. First, we test the expressiveness of the behavior learned with SSPG using a Gaussian BT-policy. We design a series of single-step toy RL problems where the agent needs to position itself on a small 2D environment with a reward function based on unknown goal locations, which we name positional bandits (see App. C for details). The objective of these experiments is to isolate how our framework compares with traditional policies for MaxEnt RL to explore the environments and learn to match the true canonical distributions of returns. As displayed in Fig. 4 A, even in highly multi-modal positional bandits, the SS-policy successfully learns to visit all relevant goals with similar frequencies. Furthermore, quantizing the state space around the goals reveals that the relative RMC intuitively learns to transition between action-beliefs that visit the different goals as reasoning progresses, with a transition matrix matching a cyclic permutation (App. F). In comparison, a squashed Gaussian policy expectedly fails to capture the complexity of the canonical distribution, with samples either collapsing to a single mode or covering large suboptimal parts of the action space. We also show results for a policy based on normalizing flows [69, 70], modeled with a deep expressive network (App. C). After several attempts, we find these models require orders of magnitude more training iterations and data to learn any behavior that is more complex than a uni-modal distribution. Yet, even after increasing training by a factor of 1000, we still observe the flow policy distribution collapsing in the more complex positional bandits. 
We attribute our findings to training inefficiencies stemming from a lack of proper inductive biases for flow models in the non-i.i.d. RL problem setting [71]. In particular, as flows can assign arbitrarily low probability mass to some regions of the action space, initial local optima can greatly hinder future exploration, further limiting the coverage of the data buffer distribution in a vicious circle.

2. Policy adaptivity. Second, we examine the adaptivity of our framework for tackling decision-making problems with different complexities. We compare the average number of reasoning steps (N̄) performed by SSPG for each task from Sec. 4.1 (Fig. 4 B). We identify a general correlation between task difficulty and reasoning computation, with complex robotic manipulation and humanoid locomotion problems requiring the most steps. By concentrating on two representative tasks, we validate the effectiveness of the reasoning process and our adaptive convergence-detection strategy with an ablation study in which we train SSPG using a fixed number of reasoning steps N_fix ∈ {1, ⌈N̄⌉, ⌈3N̄⌉}. For the case N_fix = 1, which closely resembles traditional RL, we use double the UTD ratio to improve performance and offset any training-time gains from multi-step reasoning. As shown in Fig. 4 C, increasing N_fix yields clear performance improvements, validating that agents can greatly benefit from performing longer reasoning processes. Furthermore, our adaptive SSPG attains the same performance as N_fix = ⌈3N̄⌉ and visibly outperforms N_fix = ⌈N̄⌉. These results show how different action-selection problems require different amounts of reasoning computation and validate the practical effectiveness of our adaptive strategy to detect steady-state convergence. We obtain analogous findings for additional tasks and values of N_fix in App. F.

3. Solution reuse. Last, we examine the effect of using the short-term action memory buffer (Â) to sample the initial action-beliefs (a_0) in two tasks. We evaluate ablating Â, randomly re-initializing a_0 from a uniform distribution. While there are only minor differences performance-wise between the two approaches (App. F), sampling a_0 from the short-term action memory considerably decreases the number of reasoning steps needed for convergence (Fig. 4 D). Moreover, we observe that the gap in reasoning efficiency expands throughout training, as the agent's steady-state behavior further improves for the target task. This result validates that a simple temporal heuristic can provide considerable efficiency benefits, amortizing the additional computational cost of our powerful new framework.
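As an illustration of the short-term action memory described in Sec. 3.3 and in the solution-reuse result above, the following is a minimal sketch; the class name, interface, and the uniform fallback are assumptions made for exposition, not the paper's exact implementation.

```python
from collections import deque

import numpy as np

class ShortTermActionMemory:
    """Fixed-size buffer of recently visited action-beliefs (Sec. 3.3). Sampling a
    batch of M initial beliefs from it seeds the reasoning Markov chain close to,
    and across several modes of, the current steady-state policy."""

    def __init__(self, capacity, action_dim):
        self.buffer = deque(maxlen=capacity)
        self.action_dim = action_dim

    def add(self, action_beliefs):
        # Store every action-belief visited in the latest reasoning chain.
        for a in np.atleast_2d(action_beliefs):
            self.buffer.append(np.asarray(a, dtype=np.float32))

    def sample_initial_beliefs(self, m):
        # Fall back to uniform beliefs early in training, while the memory is small.
        if len(self.buffer) < m:
            return np.random.uniform(-1.0, 1.0, size=(m, self.action_dim))
        idx = np.random.choice(len(self.buffer), size=m, replace=False)
        return np.stack([self.buffer[i] for i in idx])
```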
Instead, our work entails a conceptually different approach and enables implicit modeling of agent behavior as the result of an adaptive reasoning process, orthogonally also providing agents with additional flexibility to scale computation based on the properties of each individual input state. Outside RL, there have been efforts to model generation processes with parameterized Markov chains learned to revert fixed noise-injection processes acting on data [78–82]. Based on this framework, diffusion models [83–85] recently achieved remarkable results for image generation [85, 86]. While applied to inherently different problem settings, these works share some conceptual resemblances with our framework and highlight the vast scaling potential of implicit modeling.

6 Conclusion

We introduced serial Markov chain reasoning, a novel framework for modeling agent behavior in RL with several benefits. We showed our framework allows an agent to 1) learn arbitrary continuous action distributions, 2) flexibly scale computation based on the complexity of individual action-selection decisions, and 3) re-use prior solutions to accelerate future reasoning. Hence, we derived SSPG, an off-policy maximum entropy RL algorithm for serial Markov chain reasoning, achieving state-of-the-art performance on two separate continuous control benchmarks. While simple multinomial policy distributions already provide unlimited expressivity for problems with discrete action spaces, we note that the inherent computational adaptivity of our framework could still yield benefits over traditional fixed policies in these settings. Furthermore, we believe our motivation and early results provide a strong argument for the future potential of serial Markov chain reasoning, even beyond off-policy RL and simulation tasks. We provide our implementation for transparency and to facilitate future extensions at sites.google.com/view/serial-mcr/.

Acknowledgments

We thank Johannes Lutzeyer for providing valuable feedback on an earlier draft of this work. Edoardo Cetin would like to acknowledge the support from the Engineering and Physical Sciences Research Council [EP/R513064/1]. Oya Celiktutan would also like to acknowledge the support from the LISI Project, funded by the Engineering and Physical Sciences Research Council [EP/V010875/1]. Furthermore, we thank Toyota Motor Europe and Toyota Motor Corporation for providing support towards funding the utilized computational resources.
1. What is the focus and contribution of the paper on representing decision making in an RL problem? 2. What are the strengths of the proposed framework, particularly in its novelty and empirical evaluation? 3. What are the weaknesses of the paper, especially regarding the theoretical soundness of one of the key equations? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor comments or suggestions regarding the presentation of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes a novel framework for representing decision-making in an RL problem as an iterative reasoning process based on serial Markov chains. The authors derive a policy gradient theorem for their framework in the standard RL setting, as well as the MaxEnt RL setting. The empirical evaluation shows improved performance on both state and pixel observations and demonstrates the expressivity of the proposed policy representation. Strengths And Weaknesses This is a solid paper with a clear and interesting idea, a novel contribution, and a series of thoroughly conducted evaluations. I, however, have certain concerns regarding one of the key equations/definitions used by the authors, which is the basis for all of the main derivations. Strengths 1. Novelty The proposed method for obtaining policy gradients using serial Markov chain reasoning is novel, to the best of my knowledge. Furthermore, I believe the contributions are significant and relevant, as the proposed framework is a core development of RL algorithms and can be potentially impactful. 2. Empirical Soundness The empirical evaluation is thorough and sound. In particular, I appreciate the use of Rliable metrics and the comparison against several strong baselines for both state and pixel observations. Finally, several ablation studies are conducted in Appendix E to shed more light on the method and its hyperparameters. 3. Demonstration of useful properties of the proposed method I greatly appreciate Section 4.2 and, correspondingly, Appendix F. I have found the empirical results on policy expressiveness and adaptivity to be very convincing and a nice divergence from the standard comparison in the (deep) RL community, which is usually concerned only with pure performance. Nevertheless, I do appreciate that the proposed method outperforms SOTA algorithms. Weaknesses 1. Theoretical soundness I have carefully checked the proofs of Lemma 3.1 and Theorem 3.2 and I acknowledge that the math is correct and sound (based on my understanding) if one agrees with the fact that $Q^s_n(s,a)$ is not a function of the policy parameters $\theta$. However, I am not convinced this is the case. The concerning equation is equation (8), in which the authors define: $Q^s_n(s,a) = \int_{\mathcal{A}} \pi^b_n(a'|a,s)\, Q^s_\phi(s,a')\, da'$, with $\nabla_\theta Q^s_n(s,a) = 0$. First, I am confused by the "with" statement; is that an assumption, a condition, or a result of the definition of $Q^s_n(s,a)$? Second, in L416 of the Appendix, the authors write "[since] $Q^s_n(s,a)$ is a local approximation of the RMC with no dependence from $\theta$..." and continue the proof without again stating why that is the case. My understanding is that $\pi^b_\theta$ is parametrized by $\theta$, hence $\pi^b_n$ is also a function of $\theta$ based on equation (4). In that case, I am interested to know why differentiating equation (8) does not yield the following result, which is clearly non-zero: $\nabla_\theta Q^s_n(s,a) = \int_{\mathcal{A}} \big[\nabla_\theta \pi^b_n(a'|a,s)\, Q^s_\phi(s,a') + \pi^b_n(a'|a,s)\, \nabla_\theta Q^s_\phi(s,a')\big]\, da'$. Given that equation (8) is the key to all the consequent derivations, including the main theoretical result of the paper, I am concerned with the theoretical soundness of the paper. Note: With that said, I am hopeful that this is just a misunderstanding on my side and I am looking forward to hearing the authors' response to elaborate and clear up the confusion. Unfortunately, for this reason my rating is relatively low despite my acknowledgment of the strengths of the work.
But I am eagerly looking forward to increasing it if the authors can explain the reasoning behind $\nabla_\theta Q^s_n(s,a) = 0$. Update after the rebuttal: I acknowledge that I have been convinced by the argument of the authors and have increased my score from 6 (weak accept) to 7 (accept). Questions My major question is regarding the point discussed in the weaknesses. I have noted it here for easier reference. The rest of the questions are mostly minor comments/suggestions. Can you clarify equation (8) and explain why $\nabla_\theta Q^s_n(s,a) = 0$? I am not convinced by the argument of the local approximation, as detailed above. I suggest the authors give titles to the theorems for better clarification and better guidance of the reader. An inexperienced reader may be lost within the heavy math. The parameterization of the policy is sometimes written (denoted as $\pi^b_\theta$) and sometimes dropped (denoted as $\pi^b$ or $\pi^s$). This can confuse and mislead the reader. This is for my curiosity and I was not expecting to find this answer in the paper: can recurrent neural networks be used to parameterize the RMC? I believe prior work has made the connection between RNNs and Markov chain reasoning. Figure 1 is a very generic figure without giving much information about the method. I suggest the authors reconsider that and replace it with something more specific to their work. The colors and font size in Figure 2 make the text hard to read. Limitations Yes, the authors have addressed the limitations and potential negative societal impact of their work.
NIPS
Title Policy Gradient With Serial Markov Chain Reasoning Abstract We introduce a new framework that performs decision-making in reinforcement learning (RL) as an iterative reasoning process. We model agent behavior as the steady-state distribution of a parameterized reasoning Markov chain (RMC), optimized with a new tractable estimate of the policy gradient. We perform action selection by simulating the RMC for enough reasoning steps to approach its steadystate distribution. We show our framework has several useful properties that are inherently missing from traditional RL. For instance, it allows agent behavior to approximate any continuous distribution over actions by parameterizing the RMC with a simple Gaussian transition function. Moreover, the number of reasoning steps to reach convergence can scale adaptively with the difficulty of each action selection decision and can be accelerated by re-using past solutions. Our resulting algorithm achieves state-of-the-art performance in popular Mujoco and DeepMind Control benchmarks, both for proprioceptive and pixel-based tasks. 1 Introduction Reinforcement learning (RL) has the potential to provide a general and effective solution to many modern challenges. Recently, this class of methods achieved numerous impressive milestones in different problem domains, such as games [1–3], robotics [4–6], and other meaningful real-world applications [7–9]. However, all these achievements relied on massive amounts of data, controlled environments, and domain-specific tuning. These commonalities highlight some of the current practical limitations that prevent RL to be widely applicable [10]. In the deep RL framework, practitioners train agents with the end goal of obtaining optimal behavior. Traditionally, agent behavior is modeled with feed-forward policies regressing from any state to a corresponding distribution over actions. Such formulation yields practical training objectives in both off-policy [11–13] and on-policy settings [14–16]. However, we identify three inherent properties of this rigid representation of behavior that could considerably impact expressivity and efficiency in continuous control tasks. First, agent behavior is restricted to a class of tractable distributions, which might fail to capture the necessary complexity and multi-modality of a task. Second, the policy performs a fixed reasoning process with a feed-forward computation, which potency cannot adapt to the varying complexity of individual action selection problems. Third, decision-making is performed every time from scratch, without re-using any past information that might still inform and facilitate the current action selection problem. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Unlike RL policies, human reasoning does not appear to follow a rigid feed-forward structure. In fact, a range of popular psychological models characterize human decision-making as a sequential process with adaptive temporal dynamics [17–20]. Many of these models have found empirical groundings in neuroscience [21–24] and have shown to effectively complement RL for capturing human behavior in experimental settings [25, 26]. Partly inspired by these works, we attempt to reframe the deep RL framework by making use of a similar flexible model of agent behavior, in order to counteract its aforementioned limitations. We introduce serial Markov chain reasoning - a new powerful framework for representing agent behavior. 
Our framework treats decision-making as an adaptive reasoning process, where the agent sequentially updates its beliefs regarding which action to execute in a series of reasoning steps. We model this process by replacing the traditional policy with a parameterized transition function, which defines a reasoning Markov chain (RMC). The steady-state distribution of the RMC represents the distribution of agent behavior after performing enough reasoning for decision-making. Our framework naturally overcomes the aforementioned limitations of traditional RL. In particular, we show that our agent’s behavior can approximate any arbitrary distribution even with simple parameterized transition functions. Moreover, the required number of reasoning steps adaptively scales with the difficulty of individual action selection problems and can be accelerated by re-using samples from similar RMCs. To optimize behavior modeled by the steady-state distribution of the RMC, we derive a new tractable method to estimate the policy gradient. Hence, we implement a new effective off-policy algorithm for maximum entropy reinforcement learning (MaxEnt RL) [27, 28], named Steady-State Policy Gradient (SSPG). Using SSPG, we empirically validate the conceptual properties of our framework over traditional MaxEnt RL. Moreover, we obtain state-of-the-art results for popular benchmarks from the OpenAI Gym Mujoco suite [29] and the DeepMind Control suite from pixels [30]. In summary, this work makes the following key contributions: 1. We propose serial Markov Chain reasoning a framework to represent agent behavior that can overcome expressivity and efficiency limitations inherent to traditional reinforcement learning. 2. Based on our framework, we derive SSPG, a new tractable off-policy algorithm for MaxEnt RL. 3. We provide experimental results validating theorized properties of serial Markov Chain reasoning and displaying state-of-the-art performance on the Mujoco and DeepMind Control suites. 2 Background 2.1 Reinforcement learning problem We consider the classical formulation of the reinforcement learning (RL) problem setting as a Markov Decision Process (MDP) [31], defined by the tuple (S,A, P, p0, r, ). In particular, at each discrete time step t the agent experiences a state from the environment’s state-space, st 2 S, based on which it selects an action from its own action space, at 2 A. In continuous control problems (considered in this work), the action space is typically a compact subset of an Euclidean space Rdim(A). The evolution of the environment’s state through time is determined by the transition dynamics and initial state distribution, P and p0. Lastly, the reward function r represents the immediate level of progress for any state-action tuple towards solving a target task. The agent’s behavior is represented by a state-conditioned parameterized policy distribution ⇡✓. Hence, its interaction with the environment produces trajectories, ⌧ = (s0, a0, s1, ..., sT , aT ), according to a factored joint distribution p⇡✓ (⌧) = p0(s0) QT t=0 ⇡✓(at|st)P (st+1|st, at). The RL objective is to optimize agent behavior as to maximize the discounted sum of expected future rewards: argmax✓ Ep⇡✓ (⌧) hPT t=0 tr(st, at) i . 2.2 Maximum entropy reinforcement learning and inference Maximum entropy reinforcement learning (MaxEnt RL) [32] considers optimizing agent behavior for a different objective that naturally arises when formulating action selection as an inference problem [33–36]. 
Following Levine [28], we consider modeling a set of binary optimality random variables with realization probability proportional to the exponentiated rewards scaled by the temperature ↵, p(Ot|st, at) / exp( 1↵r(st, at)). The goal of MaxEnt RL is to minimize the KL-divergence between trajectories stemming from agent behavior, p⇡✓ (⌧), and the inferred optimal behavior, p(⌧ |O0:T ): DKL (p⇡✓ (⌧)||p(⌧ |O0:T )) = Ep⇡✓(⌧) " log p0(s0) QT t=0 ⇡✓(at|st)P (st+1|st, at) p0(s0) QT t=0 exp( 1 ↵r(st, at))P (st+1|st, at) # = Ep⇡✓ (⌧) " TX t=0 r(st, at) + ↵H(⇡(·|st)) # . (1) The resulting entropy-regularized objective introduces an explicit trade-off between exploitation and exploration, regulated by the temperature parameter ↵ scaling the policy’s entropy. An effective choice to optimize this objective is to learn an auxiliary parameterized soft Q-function [37]: Q⇡ (st, at) = Ep⇡✓ (⌧ |st,at) " r(st, at) + TX t0=t+1 r(st0 , at0) + ↵H(⇡(at0 |st0) # . (2) Given some state, Q⇡ (s, ·) represents an energy-function based on the expected immediate reward and the agent’s future likelihood of optimality from performing any action. Thus, we can locally optimize the MaxEnt objective by reducing the KL-divergence between ⇡ and the canonical distribution of its current soft Q-function. This is equivalent to maximizing the expected soft Q-function’s value corrected by the policy’s entropy, resembling a regularized policy gradient objective [11, 12]: argmax ✓ Es,a⇠⇡✓(·|s) ⇥ Q⇡ (s, a) + ↵H(⇡✓(a|s)) ⇤ . (3) The policy is usually modeled with a neural network outputting the parameters of some tractable distribution, such as a factorized Gaussian, ⇡✓(·|s) = N(µ✓(s);⌃✓(s)). This practice allows to efficiently approximate the gradients from Eqn. 3 via the reparameterization trick [38]. We consider the off-policy RL setting, where the agent alternates learning with storing new experience in a data buffer, D. We refer the reader to Haarnoja et al. [13, 39] for further derivation and practical details. 3 Policy Gradient with serial reasoning 3.1 Reasoning as a Markov chain We introduce Serial Markov Chain Reasoning, a new framework to model agent behavior, based on conceptualizing action selection as an adaptive, sequential process which we refer to as reasoning. Instead of using a traditional policy, the agent selects which action to execute by maintaining an internal action-belief and a belief transition (BT-) policy, ⇡b(a0|a, s). During the reasoning process, the agent updates its action-belief for a series of reasoning steps by sampling a new action with the BT-policy ⇡b taking both environment state and previous action-belief as input. We naturally represent this process with a reasoning Markov chain (RMC), a discrete-time Markov chain over different action-beliefs, with transition dynamics given by the BT-policy. Hence, for any input environment state s and initial action-belief a0, the n-step transition probabilities of the RMC for future reasoning steps n = 1, 2, 3, ... are defined as: ⇡bn(a|a0, s) = Z A ⇡b(a|a0, s)⇡bn 1(a0|a0, s)da0, for n > 1, and ⇡b1 = ⇡b. (4) Given a compact action space and a BT-policy with a non-zero infimum density, we can ensure that as the number of reasoning steps grows, the probability of any action-belief in the RMC converges to some steady-state probability which is independent of the initial action-belief.1 We denote this implicit probability distribution as the steady-state (SS-) policy, symbolized by ⇡s(a|s): Lemma 3.1. Steady-state convergence. 
For any environment state s, consider a reasoning Markov chain (RMC) defined on a compact action space A with transition probabilities given by ⇡b(a0|a, s). Suppose that inf{⇡b(a0|a, s) : a0, a 2 A} > 0. Then there exists a steady-state probability distribution function ⇡s(·|s) such that: lim n!1 ⇡bn(a|a0, s) ! ⇡s(a|s) for all a 2 A. (5) Proof. See Appendix A. The RMC’s steady-state probabilities can be interpreted as representing the distribution of agent’s behavior after an appropriate number of reasoning steps are performed. In this work, we strive to optimize the agent’s behavior following the MaxEnt RL framework described in Section 2. In particular, we consider learning a parameterized BT-policy, ⇡b✓, to produce appropriate transition probabilities for each environment state such that the SS-policy, ⇡s✓ , from the resulting RMC optimizes: argmax ✓ J(✓) = Es,a⇠⇡s✓(·|s) ⇥ Qs (s, a) + ↵H(⇡ s ✓(a|s)) ⇤ . (6) Here, Qs is a parameterized soft Q-function for the agent’s behavior from ⇡ s, which we learn by minimizing a squared soft Bellman loss utilizing delayed parameters 0 and samples from ⇡s✓: argmin J( ) = Es,a,s0 h (Qs (s, a) ⇣ r(s, a) + Ea0⇠⇡s✓(·|s) ⇥ Q s 0(s 0 , a 0) + ↵H(⇡s✓(a 0|s0) ⇤⌘i2 . (7) In Fig. 2, we illustrate the relationship between a learned BT-policy, the corresponding SS-policy, and the soft Q-function in a 1-dimensional toy task (see App. C for details). In this example, the BT-policy is parameterized as a simple squashed Gaussian distribution, with unimodal transitions between consecutive action beliefs (Fig. 2, Left). We obtain samples of agent behavior (the SS-policy) by performing a series of reasoning steps, using the BT-policy to simulate the RMC until we approach steady-state convergence. By plotting the resulting empirical distribution of agent behavior, we see it closely matches the multi-modal, non-Gaussian canonical distribution from its soft Q-function (Fig. 2, Right). This example shows how the expressive power of agent behavior in our framework can go far beyond the BT-policy’s simple parameterization, enabling for the effective maximization of complex and multi-modal MaxEnt objectives. 3.2 Learning the belief transition policy We propose a new method to estimate the policy gradient of the BT-policy, ⇡b✓, for optimizing the steady-state MaxEnt objective described in Section 3.1. We note that the gradient from Eq. 6 involves differentiating through an expectation of the steady-state policy, ⇡s✓ . However, ⇡ s ✓ is only implicitly defined, and its connection with the actual BT-policy or its parameters does not have a tractable closed-form expression. To approach this problem, we introduce a family of n-step extensions to the soft Q-function, Qsn : S ⇥A 7! R for n = 0, 1, 2, . . . , defined as: Qsn(s, a) = Z A ⇡bn(a 0|a, s)Qs (s, a0)da0, with r✓Qsn(s, a) = 0. (8) Intuitively, each n-step soft Q-function Qsn(s, a) outputs the expected soft Q-value after performing n reasoning steps in the RMC from the initial action-belief a. However, we treat the output of each n-step soft Q-function as being independent of the actual parameters of the BT-policy, ✓. Hence, we can interpret computing Qsn(s, a) as simulating the RMC with a fixed and immutable copy of the current ⇡b✓. We use this definition to provide a convenient notation in the following new Theorem that expresses the policy gradient without differentiating through ⇡s✓: 1This is unrelated to the steady-state distribution for infinite-horizon MDPs considered in prior work [40]. Theorem 3.2. 
Steady-state policy gradient. Let π^b_θ(·|a, s) be a parameterized belief transition policy which defines a reasoning Markov chain with a stationary distribution given by the steady-state policy π^s_θ(·|s). Let Q^s be a real function defined on S × A, with a family of n-step extensions {Q^s_n} as defined in Eq. 8. Suppose π^b_θ, Q^s and their gradients with respect to θ (denoted ∇_θ) are continuous and bounded functions. Then

\nabla_\theta \, \mathbb{E}_{a \sim \pi^s_\theta(\cdot|s)}\big[Q^s(s, a)\big] = \mathbb{E}_{a \sim \pi^s_\theta(\cdot|s)}\Big[\lim_{N \to \infty} \sum_{n=0}^{N} \nabla_\theta \, \mathbb{E}_{a' \sim \pi^b_\theta(\cdot|a, s)}\big[Q^s_n(s, a')\big]\Big].   (9)

Proof. See Appendix A.

Using Lemma 3.1 (steady-state convergence), we can approximate the policy gradient expression in Eq. 9 with an arbitrarily small expected error using a finite number N of n-step soft Q-functions (see App. A). An intuition for this property follows from the fact that, for large enough n, Lemma 3.1 implies π^b_n(a|a', s) ≈ π^s_θ(a|s) and, thus, Q^s_n(s, a') ≈ ∫_A π^s_θ(a|s) Q^s_φ(s, a) da. Therefore, the value of each Q^s_n(s, a') will be independent of the BT-policy's action a', such that ∇_θ E_{a'∼π^b_θ(·|a,s)}[Q^s_n(s, a')] ≈ 0. In other words, each subsequent step in the RMC introduces additional randomness that is independent of a', causing a warranted vanishing-gradient phenomenon [41] which culminates with convergence to π^s_θ.

Using a notation similar to Haarnoja et al. [39], we apply the reparameterization trick [38] to express the BT-policy in terms of a deterministic function f^b_θ(a, s, ε), taking as input a Gaussian noise vector ε. This allows us to rewrite the gradient in each inner expectation of the sum from Eq. 9 as:

\nabla_\theta \, \mathbb{E}_{a' \sim \pi^b_\theta(\cdot|a, s)}\big[Q^s_n(s, a')\big] = \mathbb{E}_{\epsilon_0 \sim N(0, I)}\big[\nabla_{a'} Q^s_n(s, a') \, \nabla_\theta f^b_\theta(a, s, \epsilon_0)\big],   (10)

where a' = f^b_θ(a, s, ε_0). We can apply the same reparameterization to all n-step soft Q-functions, to establish a new relationship between the gradient terms ∇_{a'} Q^s_n(s, a'):

\nabla_{a_0} Q^s_n(s, a_0) = \nabla_{a_0} \int_{\mathcal{A}} \pi^b_n(a_n|a_0, s) \, Q^s_\phi(s, a_n) \, da_n = \nabla_{a_0} \int_{\mathcal{A}} \pi^b(a_1|a_0, s) \, Q^s_{n-1}(s, a_1) \, da_1 = \mathbb{E}_{\epsilon_1}\big[\nabla_{a_1} Q^s_{n-1}(s, a_1) \, \nabla_{a_0} f^b(a_0, s, \epsilon_1)\big], \quad \text{where } a_1 = f^b(a_0, s, \epsilon_1).   (11)

In Eq. 11, we purposefully omit the dependence of f^b and π^b on θ, since each Q^s_n term is a local approximation of the RMC that does not depend on θ (as defined in Eq. 8). By recursively applying this relationship (Eq. 11) to ∇_{a_1} Q^s_{n-1}(s, a_1) and all subsequent gradient terms, we obtain:

\nabla_{a_0} Q^s_n(s, a_0) = \mathbb{E}_{\epsilon_1, \dots, \epsilon_n}\Big[\nabla_{a_n} Q^s_\phi(s, a_n) \prod_{i=0}^{n-1} \nabla_{a_i} f^b(a_i, s, \epsilon_{i+1})\Big],   (12)

where a_i = f^b(a_{i-1}, s, ε_i) for i = 1, …, n. By combining Eq. 10 and Eq. 12, we can thus reparameterize and express the whole sum in Eq. 9 as:

\nabla_\theta \, \mathbb{E}_{a \sim \pi^s_\theta(\cdot|s)}\big[Q^s_\phi(s, a)\big] \approx \sum_{n=0}^{N} \nabla_\theta \, \mathbb{E}_{a' \sim \pi^b_\theta(\cdot|a, s)}\big[Q^s_n(s, a')\big] = \mathbb{E}_{\epsilon_0, \dots, \epsilon_N}\bigg[\sum_{n=0}^{N} \Big(\nabla_{a_n} Q^s_\phi(s, a_n) \prod_{i=0}^{n-1} \nabla_{a_i} f^b(a_i, s, \epsilon_{i+1})\Big) \nabla_\theta f^b_\theta(a, s, \epsilon_0)\bigg].   (13)

Eq. 13 intuitively corresponds to differentiating through each Q^s_n(s, a') term by reparameterizing the RMC. Hence, to get a sample estimate of the policy gradient we can simulate the reparameterized RMC for N reasoning steps to obtain a_1, …, a_N, compute each Q^s_φ(s, a_n) term, and backpropagate (e.g., with autodifferentiation). Following Haarnoja et al. [13, 39], we can apply Theorem 3.2 and easily extend the same methodology to estimate the MaxEnt policy gradient from Eq. 6, which also involves an extra entropy term. We include this alternative derivation in App. A for completeness.

Algorithm 1 Agent Acting
input: s, current state
  a_0 ∼ Â
  N ← 0; R^p ← +∞
  while R^p > 1.1 do
    a_{N+1} ∼ π^b_θ(·|a_N, s)
    N ← N + 1
    update R^p with a_{1:N}   ▷ Eq. 16
  N̂ ← ρN̂ + (1 − ρ)N
  ▷ ρ ∈ [0, 1)
  Â ← Â ∪ a_{1:N}
output: a ∼ a_{1:N}

Algorithm 2 Agent Learning
input: D, data buffer
  (s, a, s′, r) ∼ D
  a_0 ∼ π^b_θ(·|a, s′)
  for n ← 0, …, ⌈N̂⌉ do
    Q^s_n ← Q^s_φ(s′, a_n)   ▷ Eq. 8
    ε_{n+1} ∼ N(0, 1), a_{n+1} = f^b(a_n, s, ε_{n+1})
  ∇_θ Q^s ← ∇_θ( Σ_{n=0}^{⌈N̂⌉} Q^s_n )   ▷ Thm. 3.2
  argmin_θ J(θ)   ▷ Eq. 6
  a′ ∼ a_{1:⌈N̂⌉}
  argmin_φ J(φ)   ▷ Eq. 7

3.3 Action selection and temporal consistency

To collect experience in the environment, we propose to perform reasoning with the BT-policy starting from a set of different initial action-beliefs {a_0^1, …, a_0^M}. We batch this set as a single input matrix, a_0, to make effective use of parallel computation. To reduce the required number of reasoning steps and facilitate detecting convergence to π^s_θ, we identify two desirable properties for the distribution of action-beliefs in a_0. In particular, initial action-beliefs should 1) be likely under π^s_θ, and 2) cover diverse modes of π^s_θ. Property (1) should logically accelerate reasoning by providing the BT-policy with already-useful information about optimal behavior. Property (2) serves to provide the BT-policy with initial information about diverse behavior, which facilitates convergence detection (Sec. 3.4) and expedites reasoning even if the RMC has slow mixing times between multiple modes. To satisfy these properties, we use a simple, effective heuristic based on common temporal-consistency properties of MDPs [42, 43]. Especially in continuous environments, actions tend to have small individual effects, making them likely relevant also for environment states experienced in the near future. Thus, we propose storing past action-beliefs in a fixed-size buffer, called the short-term action memory, Â, and using them to construct a_0. We find this strategy allows us to effectively regulate the quality and diversity of the initial action-beliefs through the size of Â, accelerating convergence at negligible cost.

3.4 Detecting convergence to the steady-state policy

A key requirement for learning and acting with BT-policies, as described in Sections 3.2 and 3.3, is the ability to determine a sufficient number of reasoning steps (N) for the action-belief distribution to converge. Given the properties of the RMC, there exist different analytical methods that provide a priori bounds on the rate of convergence [44–46]. However, using any fixed N would be extremely limiting, as we expect the BT-policy and the properties of its resulting RMCs to continuously evolve during training. Moreover, different tasks, states, and initial action-beliefs might affect the number of reasoning steps required for convergence, due to the different levels of complexity of the corresponding decision-making problems. To account for similar conditions, in the Markov Chain Monte Carlo literature the predominant approach is to perform a statistical analysis of the properties of the simulated chain, choosing from several established convergence diagnostic tools [47–49]. Hence, we propose to employ a similar adaptive strategy, analyzing the history of the simulated RMC to determine the appropriate number of reasoning steps. Since we apply π^b_θ from a diverse set of initial action-beliefs (see Section 3.3), we base our convergence-detection strategy on the seminal Gelman-Rubin (GR) diagnostic [50] and its multivariate extension [51]. In particular, the multivariate GR diagnostic computes the pseudo scale reduction factor (PSRF), a score representing whether the statistics of a multivariate variable of interest have converged to the steady-state distribution.
The intuition behind this diagnostic is to compare two different estimators of the covariance of the unknown steady-state distribution, constructed either from the samples within each individual chain or from the samples across the different chains. Thus, as the individual chains approach the true steady-state distribution, the two estimates should be expected to get closer to each other. The PSRF measures precisely this similarity, based on the largest eigenvalue of their matrix product. For our use-case, we employ the PSRF to determine the convergence of the set of action-beliefs a_{1:N}, as we perform consecutive reasoning steps with π^b_θ. Following [51], we calculate the average sample covariance of the action-beliefs within each of the parallel chains (W), computed from a batched set of initial action-beliefs a_0 = [a_0^1, a_0^2, …, a_0^M]:

\bar{a}_m = \frac{1}{N}\sum_{n=1}^{N} a^m_n, \qquad W_m = \frac{1}{N-1}\sum_{n=1}^{N} (a^m_n - \bar{a}_m)(a^m_n - \bar{a}_m)^{T}, \qquad W = \frac{1}{M}\sum_{m=1}^{M} W_m.   (14)

We compare W with an unbiased estimate of the target covariance, constructed from the sample covariance between the different parallel chains (B):

\bar{a} = \frac{1}{N \times M}\sum_{n=1}^{N}\sum_{m=1}^{M} a^m_n, \qquad B = \frac{1}{M-1}\sum_{m=1}^{M} (\bar{a}_m - \bar{a})(\bar{a}_m - \bar{a})^{T}.   (15)

The PSRF for a_{1:N} is then computed from the largest eigenvalue (λ_max) of the product W^{-1}B, as:

R^p = \sqrt{\frac{N-1}{N} + \lambda_{\max}\big(W^{-1}B\big)}.   (16)

Thus, as the individual chains approach the distribution of π^s_θ, the PSRF (R^p) will approach 1. Following Brooks and Gelman [51], we use R^p < 1.1 as an effective criterion for determining the convergence of a_{1:N}. In practice, we also keep a running mean of the current number of reasoning steps needed for convergence, N̂. We use ⌈N̂⌉ as the number of reasoning steps to simulate the RMC with π^b_θ when computing gradients from Eqs. 6-7. ⌈N̂⌉ is a safe choice to ensure near-unbiased optimization, since R^p < 1.1 is considered a very conservative criterion [52] and we can learn by simulating the RMC from recent actions stored in the data buffer, which are already likely close to optimal. We provide further details regarding our implementation and its rationale in App. B. We provide a simplified summary of our adaptive reasoning process for acting and learning in Algs. 1-2.

3.5 Advantages of serial Markov chain reasoning

Based on the above specification, we identify three main conceptual advantages of our serial Markov chain reasoning framework.

1. Unlimited expressiveness. The distribution of agent behavior given by the SS-policy π^s_θ is a mixture model with potentially infinitely many components. Thus, even a simple Gaussian parameterization of the BT-policy π^b_θ would make π^s_θ a universal approximator of densities, providing unlimited expressive power to the agent [53, 54].

2. Adaptive computation. The number of reasoning steps performed to reach approximate convergence is determined by the properties of each environment state's RMC. Hence, the agent can flexibly spend different amounts of computation time based on the complexity of each action-selection problem, with potential gains in both precision and efficiency.

3. Information reuse. By storing past solutions to similar RMCs, we can initialize the reasoning process with initial action-beliefs that are already close to π^s_θ. This allows using the temporal-consistency properties of the MDP to exploit traditionally discarded information and accelerate agent reasoning.

We provide empirical validation for these properties in Section 4.2.
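For concreteness, the convergence diagnostic of Sec. 3.4 (Eqs. 14-16) can be sketched in a few lines of NumPy; the array layout and function name below are illustrative assumptions rather than the released implementation.

```python
import numpy as np

def multivariate_psrf(chains):
    """chains: array of shape (M, N, d) -- M parallel reasoning chains, N reasoning
    steps each, d-dimensional action-beliefs. Returns the PSRF R^p of Eq. 16."""
    M, N, d = chains.shape
    chain_means = chains.mean(axis=1)                          # (M, d), Eq. 14
    devs = chains - chain_means[:, None, :]
    W = np.einsum('mni,mnj->ij', devs, devs) / (M * (N - 1))   # within-chain covariance
    grand_mean = chains.reshape(M * N, d).mean(axis=0)
    mean_devs = chain_means - grand_mean
    B = mean_devs.T @ mean_devs / (M - 1)                      # between-chain covariance, Eq. 15
    lam_max = np.max(np.real(np.linalg.eigvals(np.linalg.solve(W, B))))
    return np.sqrt((N - 1) / N + lam_max)                      # Eq. 16

# Reasoning is stopped once multivariate_psrf(chains) < 1.1 (Sec. 3.4).
```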
4 Experimentation 4.1 Performance evaluation We evaluate the serial Markov chain reasoning framework by comparing its performance with current state-of-the-art baselines based on traditional RL. We consider 6 challenging Mujoco tasks from Gym [29, 56] and 12 tasks pixel-based tasks from the DeepMind Control Suite (DMC) [30]. In both settings, we base our implementation on MaxEnt RL, replacing the traditional policy with a Gaussian BT-policy optimized with the training procedures specified in Sec. 3. Other orthogonal design choices (e.g., network architectures) follow contemporary RL practices, we refer to App. C or the code for full details. We call the resulting algorithm Steady-State Policy Gradient (SSPG). We report the mean performance curves and aggregate metrics using the statistical tools from Rliable [55]. In particular, we compare normalized performance profiles [57], interquantile mean (IQM), and probability of improvements over baselines with the Mann-Whitney U statistic [58]. The reported ranges/shaded regions represent 95% stratified bootstrap confidence intervals (CIs) [59]. In App. D, we provide per-task results and further statistical analysis. For each experiment, we collect the returns of SSPG over five seeds, by performing 100 evaluation rollouts during the last 5% of steps. Mujoco suite. We evaluate on a challenging set of Mujoco tasks popular in recent literature. We compare SSPG with recent RL algorithms achieving state-of-the-art sample-efficiency performance on these tasks, which utilize large critic ensembles and high update-to-data (UTD) ratios. We consider REDQ [60] and MBPO [61] for state-of-the-art algorithms based on the traditional model-free and model-based RL frameworks. We also compare with iterative amortized policy optimization (IAPO) [62], in which the agent performs iterative amortization to optimize its policy distribution [63]. This procedure for action selection is more computationally involved than our agent’s reasoning process, as it requires both evaluating the policy and computing gradients at several iterations. Yet, as IAPO is still based on the traditional policy gradient framework, its benefits are solely due to reducing the amortization gap with an alternative action inference procedure. To ground different results, we also show the performance of the seminal Soft Actor-Critic (SAC) algorithm [39], upon which all considered policy gradient baselines are based on. To account for the additional computational cost of training an agent with serial Markov chain reasoning, we use a UTD ratio that is half the other algorithms. On our hardware, this makes SSPG faster than all other modern baselines (see App. D). Figure 3 (Top) shows the performance results after 100K environment steps. Individual scores are normalized using the performance of SAC after 3M steps, enough to reach convergence in most tasks. SSPG considerably outperforms all prior algorithms with statistically meaningful gains, as per the conservative Neyman-Pearson statistical testing criterion [64]. Furthermore, SSPG even stochastically dominates all considered state-of-the-art baselines [65]. We obtain similar results evaluating at 50K and 200K steps (App. D). In comparison, IAPO obtains lower performance than other non-iterative baselines while being the most compute-intensive algorithm. This indicates that, for sample-efficiency, only reducing the amortization gap beyond direct estimation might not provide significant benefits. 
Instead, serial Markov chain reasoning’s improved expressivity and flexibility appear to considerably accelerate learning, yielding state-of-the-art performance in complex tasks. DeepMind Control suite. To validate the generality of our framework, we also evaluate on a considerably different set of problems: 12 pixel-based DMC tasks. We follow the recent task specifications and evaluation protocols introduced by Yarats et al. [66]. We compare SSPG with DrQv2 [66], the current state-of-the-art policy gradient algorithm on this benchmark, which employs a deterministic actor and hand-tuned exploration. We also compare with additional baselines that, like SSPG, are based on MaxEnt RL: DrQ [67], CURL [68], and a convolutional version of SAC [39]. Figure 3 (Bottom) shows the performance results after 1.5M environment steps. DMC tasks yield returns scaled within a set range, [0, 1000], which we use for normalization. Remarkably, also in this domain, SSPG attains state-of-the-art performance with statistically significant improvements over all baselines. Unlike for the Mujoco tasks, the other considered algorithms based on MaxEnt RL underperform as compared to the deterministic DrQv2, a result Yarats et al. [66] attributed to ineffective exploration. In contrast, SSPG yields performance gains especially on sparser reward tasks where the other baselines struggle (see App. D). These results validate the scalability of our framework to high-dimensional inputs and its ability to successfully complement MaxEnt RL. 4.2 Properties of serial Markov chain reasoning We test if theorized benefits of our framework (Sec. 3.5) hold in practical settings with deep networks and stochastic optimization. We provide further ablation studies and analysis of SSPG in App. E. 1. Policy expressiveness. First, we test the expressiveness of the behavior learned with SSPG using a Gaussian BT-policy. We design a series of single-step toy RL problems where the agent needs to position itself on a small 2D environment with a reward function based on unknown goal locations, which we name positional bandits (see App. C for details). The objective of these experiments is to isolate how our framework compares with traditional policies for MaxEnt RL to explore the environments and learn to match the true canonical distributions of returns. As displayed in Fig. 4 A, even in highly multi-modal positional bandits, the SS-policy successfully learns to visit all relevant goals with similar frequencies. Furthermore, quantizing the state space around the goals reveals that the relative RMC intuitively learns to transition between action-beliefs that visit the different goals as reasoning progresses, with a transition matrix matching a cyclic permutation (App. F). In comparison, a squashed Gaussian policy expectedly fails to capture the complexity of the canonical distribution, with samples either collapsing to a single mode or covering large suboptimal parts of the action space. We also show results for a policy based on normalizing flows [69, 70], modeled with a deep expressive network (App. C). After several attempts, we find these models require orders of magnitude more training iterations and data to learn any behavior that is more complex than a uni-modal distribution. Yet, even after increasing training by a factor of 1000, we still observe the flow policy distribution collapsing in the more complex positional bandits. 
We attribute our findings to training inefficiencies from a lack of proper inductive biases for flow models in the non i.i.d. RL problem setting [71]. In particular, as flows can assign arbitrarily low probability mass to some regions of the action space, initial local optima can greatly hinder future exploration, exacerbating coverage of the data buffer distribution in a vicious circle. 2. Policy adaptivity. Second, we examine the adaptivity of our framework for tackling decisionmaking problems with different complexities. We compare the average number of reasoning steps (N̄ ) performed by SSPG for each task from Sec. 4.1 (Fig. 4 B). We identify a general correlation between task difficulty and reasoning computation, with complex robotic manipulation and humanoid locomotion problems requiring the most steps. By concentrating on two representative tasks, we validate the effectiveness of the reasoning process and our adaptive convergence detection strategy with an ablation study where we train SSPG using a fixed number of reasoning steps Nfix 2 {1, dN̄e, d3N̄e}. For the case Nfix = 1, which closely resembles traditional RL, we use double the UTD ratio to improve performance and offset any training-time gains from multi-step reasoning. As shown in Fig. 4 C, increasing Nfix yields clear performance improvements, validating that agents can greatly benefit from performing longer reasoning processes. Furthermore, our adaptive SSPG attains the same performance as Nfix = d3N̄e and visibly outperforms Nfix = dN̄e. These results show how different action selection problems require different amounts of reasoning computation and validate the practical effectiveness of our adaptive strategy to detect steady-state convergence. We obtain analogous findings for additional tasks and values of Nfix in App. F. 3. Solution reuse. Last, we examine the effects of the short-term action memory buffer (Â) to sample initial action beliefs (a0) in two tasks. We evaluate ablating Â, randomly re-initializing a0 from a uniform distribution. While there are only minor differences performance-wise between the two approaches (App. F), sampling a0 from the short-term action memory considerably decreases the number of reasoning steps for convergence (Fig. 4 D). Moreover, we observe the gap in reasoning efficiency expands throughout training as the agent’s steady-state behavior further improves for the target task. This result validates that a simple temporal heuristic can provide considerable efficiency benefits, amortizing the additional computational cost of our powerful new framework. 5 Related work There have been several prior attempts to extend ubiquitous Gaussian policies [13, 39, 72, 73] with simple normalizing flows [69, 70], both to improve expressiveness [74, 75] and to instantiate behavior hierarchies [76]. Yet, the expressiveness of normalizing flows is coupled with some training challenges [71], which we show can lead to premature convergence to suboptimal solutions in RL (Sec. 4.2). Other works also considered entirely replacing policy models with gradient-free [4] or gradient-based optimization over the predicted values [77]. Marino et al. [62] similarly considered learning an optimizer to infer Gaussian behavior [28] with iterative amortization [63]. However, while all these works consider alternative modeling of agent behavior, they are still based on the traditional RL framework of representing decision-making as the output of a fixed process. 
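To connect the adaptivity and solution-reuse results above back to the acting procedure of Algorithm 1, here is a minimal sketch of the adaptive reasoning loop; `bt_policy_sample` (one reasoning step over a batch of beliefs for the current state) and `psrf` (a routine implementing Eq. 16) are assumed helpers, and the step cap is an illustrative safeguard not taken from the paper.

```python
import numpy as np

def reasoning_act(bt_policy_sample, psrf, initial_beliefs, threshold=1.1, max_steps=200):
    """Sketch of Algorithm 1: simulate the reasoning Markov chain from a batch of M
    initial action-beliefs until the PSRF drops below the threshold, then return one
    of the visited beliefs as the executed action, together with the step count N."""
    chains = [initial_beliefs]                        # list of (M, d) arrays
    for _ in range(max_steps):
        chains.append(bt_policy_sample(chains[-1]))   # one reasoning step per chain
        if len(chains) > 2 and psrf(np.stack(chains[1:], axis=1)) < threshold:
            break
    visited = np.concatenate(chains[1:], axis=0)      # all post-initial beliefs a_{1:N}
    action = visited[np.random.randint(len(visited))]
    return action, len(chains) - 1
```

The returned step count would then update the running mean N̂ used to set the number of reasoning steps in the learning updates, as in Algorithm 1.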
Instead, our work entails a conceptually different approach and enables implicit modeling of agent behavior as the result of an adaptive reasoning process, orthogonally providing agents also with additional flexibility to scale computation based on the properties of each individual input state. Outside RL, there have been efforts to model generation processes with parameterized Markov chains learned to revert fixed noise injection processes acting on data [78–82]. Based on this framework, diffusion models [83–85] recently achieved remarkable results for image generation [85, 86]. While applied to inherently different problem settings, these works share some conceptual resemblances with our framework and highlight the vast scaling potential of implicit modeling. 6 Conclusion We introduced serial Markov chain reasoning, a novel framework for modeling agent behavior in RL with several benefits. We showed our framework allows an agent to 1) learn arbitrary continuous action distributions, 2) flexibly scale computation based on the complexity of individual actionselection decisions, and 3) re-use prior solutions to accelerate future reasoning. Hence, we derived SSPG an off-policy maximum entropy RL algorithm for serial Markov chain reasoning, achieving state-of-the-art performance on two separate continuous control benchmarks. While for problems with discrete action spaces simple multinomial policy distributions already provide unlimited expressivity, we note that the inherent computational adaptivity of our framework could still yield benefits over traditional fixed policies in these settings. Furthermore, we believe our motivation and early results provide a strong argument for the future potential of serial Markov chain reasoning, even beyond off-policy RL and simulation tasks. We provide our implementation for transparency and to facilitate future extensions at sites.google.com/view/serial-mcr/. Acknowledgments We thank Johannes Lutzeyer for providing valuable feedback on an earlier draft of this work. Edoardo Cetin would like to acknowledge the support from the Engineering and Physical Sciences Research Council [EP/R513064/1]. Oya Celiktutan would also like to acknowledge the support from the LISI Project, funded by the Engineering and Physical Sciences Research Council [EP/V010875/1]. Furthermore, we thank Toyota Motor Europe and Toyota Motor Corporation for providing support towards funding the utilized computational resources.
1. What is the focus and contribution of the paper on implicitly defined stochastic policies? 2. What are the strengths of the proposed approach, particularly in its ability to adapt to multi-step reasoning? 3. What are the weaknesses of the paper regarding computational costs and increased complexity? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the paper's experimental results or comparisons with other works?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes a family of implicitly defined stochastic policies that arise as the steady-state distribution of a Markov chain whose transition function is a neural net which takes an action, a system state and a noise vector and produces a new action. The authors motivate this family of policies in three ways: a) they can approach any distribution over the action space b) the computation required to plan an action can vary depending on the state c) planning can be shortened by initializing the Markov chain with recently used actions, which can be seen as a form of solution re-use. The method improves over all considered baselines in typical benchmarks.
Strengths And Weaknesses
Strengths
The idea reminds me of things like deep equilibrium models, but I believe the specific form here might have an advantage over those (stochasticity). I certainly haven't seen this type of model applied to RL, and I found it quite interesting. The paper has a good flow. Ideas, derivations and results are presented in a way that can be followed easily. Steps that would be complicated to parse at first reading are relegated to the appendix, which is good. The experimental results are both thorough and unequivocally in favor of the model. The paper presents ablations that support the motivation for using a policy that refines itself over multiple steps (beyond that of improved performance). The experiments on simple positional bandits show that the model has an easier time accommodating multi-modal distributions compared to normalizing flows, which would be the go-to model for something like this (aside from mixture density networks perhaps). Figures 4 b) and d) support the intuitions about multi-step reasoning. In OpenAI Gym, Humanoid requires the most reasoning steps, while Pendulum requires the least, for instance. Re-using recent actions seems to bring an overall reduction in the number of reasoning steps, which is another thing that checks out.
Weaknesses
I believe the core weakness of the paper is the increase in computation. The authors analyze this point in terms of training time. I believe it should also be analyzed in terms of the cost of acting during deployment. How many times per second can we query such a policy? This remains somewhat unaddressed. I think it would also be interesting to compare SSPG to the other baselines in terms of the number of reasoning steps that are used (at the end of training, if we allow SSPG to be trained the usual way). At how many steps does SSPG outperform the others? There is a comparison of SSPG to itself in this fashion in the two hardest environments, but that doesn't exactly cover the same ground. The increase in complexity over a standard policy is another weakness. SSPG adds some hyper-parameters and non-trivial implementation cost into the mix. There are also concerns about the assumptions that we have to make, such as the accuracy of the convergence criteria.
Questions
In eq. 13, the gradient term contains a product of gradients over the reasoning steps. Do the usual concerns about vanishing/exploding gradients apply here? Did you observe instability in practice? What is the real-time capability of these policies (in terms of deployment after training)?
Limitations
These points are addressed well, aside from my comments above about computation time at deployment.
NIPS
Title Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors Abstract Deep learning is increasingly moving towards a transfer learning paradigm whereby large foundation models are fine-tuned on downstream tasks, starting from an initialization learned on the source task. But an initialization contains relatively little information about the source task, and does not reflect the belief that our knowledge of the source task should affect the locations and shape of optima on the downstream task. Instead, we show that we can learn highly informative posteriors from the source task, through supervised or self-supervised approaches, which then serve as the basis for priors that modify the whole loss surface on the downstream task. This simple modular approach enables significant performance gains and more data-efficient learning on a variety of downstream classification and segmentation tasks, serving as a drop-in replacement for standard pre-training strategies. These highly informative priors also can be saved for future use, similar to pre-trained weights, and stand in contrast to the zero-mean isotropic uninformative priors that are typically used in Bayesian deep learning. ∗Authors contributed equally. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 1 Introduction The ability to transfer what is learned from one task to another — learning to ride a bicycle to then ride a unicycle — has historically set apart biological intelligence from machine learning approaches. However, transfer learning is quickly becoming mainstream practice in deep learning. Typically, large “foundation models” are pre-trained on massive volumes of source data, and then the learned parameter vector is used as an initialization for training in a downstream task. While this approach has had great empirical success, reliance on an initialization is a very limited way to perform transfer learning. If we are doing a good job of optimization, then our final solution should be independent of initialization, barring local minima with identical training loss. Moreover, our knowledge of the source task should affect the locations and shapes of optima on the downstream task. We propose to instead use a re-scaled Bayesian parameter posterior from the first task as a pre-trained prior for the downstream task. Since the negative log posterior on the downstream task is our loss function, this procedure has the effect of reshaping our training objective on the downstream task, to reflect more nuanced information that we have learned from the source task, as in Figure 1. The posterior on the source task is re-scaled before use as a prior on the downstream task to reflect the belief that the source and downstream tasks are drawn from different distributions. With our now highly informative prior for the downstream task, we can then proceed with optimization, or perform full Bayesian model averaging with the posterior on the downstream task. We find both procedures profoundly improve upon standard deep transfer learning, making use of a simple pipeline that involves easy-to-use pre-existing components. Indeed, the simplicity of the approach, combined with its promising empirical performance, is one of its greatest features — enabling its use as a drop-in replacement for standard approaches for deep transfer learning. The pre-training posterior can be saved as a prior for a wide array of downstream tasks, similar to pre-trained weights.
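As a rough sketch, using notation that is made precise in Sections 2 and 3, the reshaped downstream objective takes the form

\[
\mathcal{L}_{\text{down}}(w) \;=\; -\log p(\mathcal{D}_{\text{down}} \mid w, f)\;-\;\log \mathcal{N}\!\left(w;\ \mu,\ \lambda\Sigma\right),
\]

where µ and Σ are the mean and covariance of the Gaussian approximation to the source-task posterior, and λ ≥ 1 is the re-scaling coefficient that accounts for the mismatch between source and downstream tasks.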
Despite the simplicity and effectiveness of this approach, Bayesian neural networks are almost always used with uninformative isotropic zero-mean priors [51, 25, 15], and have not harnessed recent advances in self-supervised learning. Furthermore, as we will show, the details of how we perform Bayesian transfer learning with modern neural networks are crucial to practical success. For example, it is important that the prior on the source task be represented with an informative covariance matrix, indicating which directions in parameter space we expect to provide good solutions on the source task, which significantly outperforms isotropic or diagonal counterparts. We also provide several key conceptual findings — for example, that the informative priors provide an inductive bias that enables more data-efficient fine-tuning than a pre-trained initialization, and that pre-trained priors based on self-supervised learning are more transferable than supervised pre-trained priors. Finally, we observe through extensive experiments across both image classification and semantic segmentation on multiple neural architectures, and on both supervised and self-supervised (SimCLR [6]) pre-training loss functions, that Bayesian inference is particularly advantageous in the transfer learning setting. We emphasize that the proposed approach outperforms standard transfer learning, without requiring any expert intervention or knowledge of Bayesian methods. One only needs to tune a single additional hyperparameter using standard cross-validation, which incurs minimal computational overhead. We release PyTorch pre-trained priors and code for learning and using priors for downstream inference: https://github.com/hsouri/BayesianTransferLearning. 2 Background Bayesian Neural Networks. Consider a neural network f with weights w and a training dataset D = {(x(i), y(i))} of independently drawn samples. In standard non-Bayesian neural network training, we minimize the negative log posterior, − log p(w|D, f) ∝ − log p(D|w, f) − log p(w|f), where the likelihood p(D|w, f) denotes the probability that a model with weights w would generate the observed labels {y(i)} (e.g., cross-entropy). The log prior log p(w|f) often takes the form of weight decay, corresponding to a zero-mean isotropic Gaussian prior. In Bayesian modeling, we instead make predictions with a Bayesian Model Average (BMA) of all models weighted by their posterior probabilities: p(y|x,D, f) = ∫ p(y|x,w, f) p(w|D, f) dw. There are many ways to approximate this integral, such as MCMC, or variational approaches, which take a finite number of approximate samples wj from the parameter posterior to form the simple Monte Carlo estimate: p(y|x,D, f) ≈ (1/J) ∑j p(y|x,wj, f). Almost always, one uses zero-mean isotropic priors [51]. There are also specialized priors, including heavy-tailed priors [15, 25], noise-contrastive priors for high uncertainty under distribution shift [17], and input-dependent priors for domain generalization [24]. Transfer Learning. In transfer learning, we wish to recycle the representation learned on one task to improve performance on another. Transfer learning is now widely applied in deep learning, and forms the basis for foundation models [1, 12, 8, 43, 4, 20, 10], which are exceptionally large neural networks pre-trained on massive volumes of data, and then fine-tuned on a downstream task.
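As an illustration of the simple Monte Carlo estimate above, the following is a minimal PyTorch-style sketch (not the released code; model, inputs, and posterior_samples are placeholder names) of forming a Bayesian model average from J posterior weight samples:

```python
import torch

def bayesian_model_average(model, inputs, posterior_samples):
    """Average predictive distributions over J posterior weight samples.

    posterior_samples is assumed to be a list of state_dicts drawn from
    p(w | D, f), e.g. by SGHMC/SGLD; the estimate is (1/J) * sum_j p(y | x, w_j, f).
    """
    probs = []
    with torch.no_grad():
        for weights in posterior_samples:
            model.load_state_dict(weights)   # plug in one posterior sample
            logits = model(inputs)
            probs.append(torch.softmax(logits, dim=-1))
    return torch.stack(probs).mean(dim=0)    # simple Monte Carlo estimate
```

Test-time predictions are then taken from the averaged class probabilities.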
Recent work has found self-supervised pre-training can transfer better than supervised pre-training [19], in line with our finding in Section 4 that self-supervised pre-trained priors transfer better. In continual learning, we wish to learn tasks sequentially without forgetting what has been learned previously. Elastic Weight Consolidation (EWC) prevents forgetting by imposing a penalty, the diagonal of a Fisher information matrix computed on previous tasks, for adapting to a new task [29]. We can contextualize the EWC penalty within Bayesian transfer learning as a negative log Gaussian prior with diagonal covariance used for maximum a posteriori (MAP) estimation on sequential tasks, such as digit classification or RL [29]. In our experiments, we find that MAP estimation, diagonal covariance, and supervised pre-training are all suboptimal in the transfer learning setting. Leveraging Auxiliary Data and Knowledge Transfer in Bayesian Modeling. A number of works outside of deep learning have considered knowledge transfer in Bayesian modeling, especially in settings such as domain adaptation or homogeneous transfer learning in which source and target tasks contain similar feature and label spaces but differ in their marginal distributions. For example, Xuan et al. [52] consider Bayesian transfer learning methods for probabilistic graphical models, and Karbalayghareh et al. [28] develop a theoretical framework for understanding optimal Bayes classifiers given prior knowledge from other domains. Other works on Bayesian transfer learning learn a Dirichlet prior over naive Bayes classifiers [44]. Bayesian optimization methods can also find transferable hyperparameter settings [40] or select data which will yield transferable models on shifted domains [45]. Schnaus [47] uses the Laplace approximation to learn posteriors with Kronecker-factored covariance and then adjusts the posteriors by optimizing PAC-Bayes generalization bounds to create priors for downstream applications. Bayesian tools have additionally been used for leveraging auxiliary data or multiple domains in deep learning. Chandra and Kapoor [2] learn from multiple domains simultaneously using a round-robin task sampling procedure and a single-layer neural network. Bayesian methods for continual learning update the posterior to accommodate new tasks without forgetting how to perform previous ones [35, 49, 13, 27, 46], or develop kernels based on neural networks trained on previous tasks for Gaussian process inference [33, 37]. Semi-supervised algorithms can incorporate unlabeled data into the training pipelines of BNNs using biologically plausible Bayesian Confidence Propagation Neural Networks (BCPNN) which model the cortex [41], by perturbing weights and using consistency regularization [11], or via semi-supervised deep kernel learning [26]. Gao et al. [16] also show how to harness unlabeled data to learn reference priors, uninformative priors which depend on the amount of training data and allow labels to most efficiently inform inference. Deep kernel learning has also been used for Bayesian meta-learning in few-shot classification and regression [39]. In contrast to these works, we do not perform multi-task learning, nor is our goal to harness auxiliary unlabeled downstream data; we instead focus on transfer learning, leveraging pre-training data for expressive priors that maximize performance on a single downstream task.
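Returning to the EWC connection above, the following is a minimal sketch (not code from [29] or from this paper; fisher_diag and pretrained_params are placeholder names) of the quadratic penalty that a diagonal-covariance Gaussian prior induces for MAP estimation:

```python
import torch

def ewc_style_penalty(params, pretrained_params, fisher_diag, strength=1.0):
    # Negative log-density (up to constants) of a Gaussian prior with mean
    # `pretrained_params` and diagonal precision `strength * fisher_diag`:
    # each coordinate is pulled toward its pre-trained value, weighted by
    # the diagonal Fisher information computed on the previous task.
    penalty = torch.zeros(())
    for w, w_pre, f in zip(params, pretrained_params, fisher_diag):
        penalty = penalty + (f * (w - w_pre) ** 2).sum()
    return 0.5 * strength * penalty
```

The learned priors studied here generalize this penalty by replacing the purely diagonal covariance with a diagonal-plus-low-rank one (Section 3.1).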
3 Bayesian Transfer Learning with Pre-Trained Priors In order to transfer knowledge acquired through pre-training to downstream tasks, we adopt a pipeline with three simple components, composed of easy-to-use existing tools: 1. First, we fit a probability distribution with closed-form density to the posterior over feature extractor parameters using a pre-trained checkpoint and SWAG [34] (Section 3.1). 2. Second, we re-scale this distribution, viewed as a prior for new tasks, to reflect the mismatch between the pre-training and downstream tasks and to add coverage to parameter settings which might be consistent with the downstream, but not pre-training, task. To this end, we tune a single scalar coefficient on held-out validation data (Section 3.2). 3. Finally, we plug the re-scaled prior into a Bayesian inference algorithm, along with a zero-mean isotropic Gaussian prior over added parameters (e.g. classification head), to form a posterior on the downstream task. We then either optimize the posterior, or use it to perform full Bayesian inference with SGLD and SGHMC samplers [50, 5] (Section 3.3). We illustrate this pipeline in Figure 5 in the Appendix. The simplicity and modularity of this framework are key strengths: by carefully combining easy-to-use existing components, we will see in Section 4 that we can straightforwardly improve the default approaches to deep transfer learning. The potential for impact is significant: we can use this pipeline as a drop-in replacement for standard procedures used in deploying foundation models. At the same time, there is significant novelty: while Bayesian neural networks are typically used with simple zero-mean isotropic priors, we leverage the significant developments in self-supervised learning to produce highly informative priors. Moreover, in the following subsections (3.1, 3.2, and 3.3) we will gain a nuanced understanding of each of these three components and the key considerations for practical success. We discuss computational considerations in Section 3.4. For experiments in this section, we use a prior learned over the parameters of an ImageNet pre-trained SimCLR ResNet-50 feature extractor [9, 6, 18], and we choose CIFAR-10 and CIFAR-100 for downstream tasks [30]. We perform Bayesian inference with stochastic gradient Hamiltonian Monte Carlo (SGHMC) [5]. The experiments in this section are primarily intended to gain conceptual insights into each step of our approach. In Section 4, we present our main empirical evaluations. 3.1 Learning Transferable Priors We begin by building a probability distribution over the parameters of a feature extractor which represents knowledge we acquire from pre-training. To this end, we fit the distribution to the Bayesian posterior, or regularized loss function, on the pre-training task. This pre-training posterior will become the prior for downstream tasks and can be saved, or publicly released, for future use, similar to pre-trained weights. This stage requires two major design choices: a pre-training task and an algorithm for constructing the probability distribution. Our downstream inference procedures require that our prior is represented as a closed-form density function.
Thus, we must use a method that can provide a closed-form posterior approximation for the source task, which can then be re-scaled and used as a prior in the downstream task. We opt for SWA-Gaussian (SWAG) [34] due to its simplicity, scalability, popularity, and non-diagonal covariance. Other procedures that provide closed-form posterior approximations, such as the Laplace approximation or variational methods, could also be applied, though as we will see, a non-diagonal covariance is particularly important to the success of this approach. SWAG starts from a pre-trained model, and runs a small number M of fine-tuning epochs with a modified learning rate schedule [34]. The SWAG approximate posterior distribution is given by N(w̄, (1/2)Σdiag + (1/2)Σlow-rank), where w̄ = (1/M) ∑_{t=1}^{M} w_t is the SWA [23] solution, Σdiag = (1/(L−1)) ∑_{t=M−L+1}^{M} diag(w_t − w̄), and Σlow-rank = (1/(L−1)) ∑_{t=M−L+1}^{M} (w_t − w̄)(w_t − w̄)⊤, where L is a hyperparameter controlling the rank of the low-rank component of the covariance matrix. After obtaining a closed-form distribution using SWAG, we remove the head on top of the feature extractor, for example, a linear classification module, and we consider only the distribution's restriction to the parameters of the feature extractor. New layers that are added for downstream tasks receive a non-learned prior over their parameters. To highlight the versatility of Bayesian transfer learning, we will focus on both supervised image classification and self-supervised pre-training tasks, and leverage existing torchvision and SimCLR [6] checkpoints. For both torchvision and SimCLR models, we learn the prior using the associated loss function — cross-entropy and InfoNCE, respectively, regularized by weight decay. In both settings, the regularized loss function can be represented as the sum of a negative log-likelihood and the negative log Gaussian density (weight decay). While standard SGD-based transfer learning uses a learned parameter vector, our Bayesian transfer learning approach uses a covariance matrix over feature extractor parameters which contains information about the pre-training loss surface geometry. Can we really “learn” a prior? A data-dependent prior may sound odd, because a prior reflects our beliefs “before we see the data”. However, it is entirely principled to learn a prior, as long as it is not learned using exactly the same labeled data we use in our downstream likelihood for the downstream task. Indeed, any informative prior is based on data that has shaped our beliefs. Alignment of Pre-Training and Downstream Loss Geometry. If the covariance matrix of our learned prior is to be more effective than an isotropic one, then it must not favour poorly generalizing directions for the downstream task. Thus, we compare the alignment of the learned covariance matrix with a downstream task’s test loss. We begin by computing the top 5 leading singular vectors of a SWAG learned covariance matrix over parameters of the SimCLR ImageNet-trained ResNet-50 feature extractor. We then train a linear classifier head on CIFAR-10 training data on top of the fixed pre-trained feature extractor. Starting at the learned parameter vector, we perturb the feature extractor parameters in the direction of the singular vectors (fixing fully-connected classifier parameters), and measure the increase in test loss. We compare to the test loss when instead perturbing by each of 10 random vectors.
All perturbation distances are filter normalized (as in [31, 21]), to account for invariance with respect to filter-wise parameter rescaling. In Figure 2a, we see the CIFAR-10 test loss is far flatter in the directions of leading eigenvectors of the pre-trained covariance than in a random direction, indicating the learned prior indeed promotes directions consistent with the downstream task. Learned Covariance Outperforms Only a Learned Mean. After verifying that our pre-trained priors do in fact identify flat directions of the pre-training loss which are aligned with the downstream loss, we directly compare the performance benefits of our learned covariance over an isotropic covariance with a learned mean. To this end, we swap out our learned prior’s covariance with an isotropic version αI, where α is tuned on a held-out validation set. Figure 2c shows that a learned covariance consistently outperforms its isotropic counterpart, indicating that the shape of the pre-training loss surface’s basin is informative for downstream tasks. The x-axis here denotes the number of training samples used for fine-tuning on the downstream task. We also see Bayesian model averaging provides further performance gains, which we discuss further in Section 3.3. We now dissect just how important the low-rank component is. How is the Prior Best Structured? In the context of continual learning, elastic weight consolidation (EWC) [29] uses purely diagonal covariance priors to help prevent forgetting. In this paper, we are interested in transfer learning, and wish to understand the benefits of a low-rank component for capturing particularly important directions in the pre-training loss surface for transferring to downstream tasks. Omitting the low-rank component slightly reduces the memory footprint, but loses potentially important information about the shape of the pre-training posterior mode. In practice, the rank of the matrix is determined by the number of samples collected when running SWAG. In Figure 2b, we present the performance of a prior learned via SWAG on a SimCLR ResNet-50 feature extractor for transfer learning to CIFAR-10 and CIFAR-100. As the rank of the low-rank component increases from zero (diagonal covariance), we see the performance improves dramatically until it saturates, indicating that only a small number of dominant directions are important for the transferability of the prior. Note that performance saturation occurs later for CIFAR-100 than CIFAR-10 — likely due to the higher complexity of CIFAR-100 compared to CIFAR-10, which has 10× fewer classes. 3.2 Rescaling the Prior In incremental learning, it is common to acquire some data, form a posterior, and then use this posterior as our new prior in acquiring future data. This procedure is equivalent to forming the posterior from all of the data at once, assuming all data are drawn from the same distribution with the same likelihood. However, in transfer learning, we assume the data from the source and downstream task are drawn from different but related distributions. Thus we do not want to directly re-use a posterior from the source as a prior for the downstream task, without modification. For example, as we acquire more data from the source task, our posterior will become increasingly concentrated. This concentration is an issue, as certainty with respect to the ideal parameters for the pre-training task does not imply certainty on the ideal parameters for the downstream task.
To address this consideration, we rescale the learned Gaussian prior by multiplying its covariance matrix by a scalar value. We select the highest performing scalar value across a grid on a holdout set from the downstream task. If we do not scale the covariance enough, our prior will remain concentrated around parameters which are inconsistent with the downstream task, and if we scale the covariance too much, our prior will be diffuse and assign too much mass to regions of parameter space which are again inconsistent with the downstream task. We now put this intuition to the test. We fit a Gaussian with mean µ and covariance matrix Σ to the SimCLR pre-training loss. Given our uncertainty regarding the strength of the relationship between the pre-training and downstream tasks, it is unclear if our learned prior would lead to over-confidence on the downstream task. To rectify this problem, we instead assign prior covariance matrix λΣ with λ ≥ 1. Figure 2d shows the accuracy of our method on CIFAR-10 as a function of λ. As expected, we see that we need to make the prior more diffuse if we are to optimize performance. In fact, prior rescaling can be the difference between strong and poor performance. We also see there is an optimal scaling factor where the prior is neither overly diffuse nor concentrated around a solution which is inconsistent with the downstream task. Additionally, small scaling factors constrain us to poor parameters, hurting performance much more than large scaling factors, which simply induce a near-uniform prior, negating the benefits of transfer learning. 3.3 Bayesian Inference After learning a prior and re-scaling it, we must finally draw samples from the downstream task’s posterior over the parameters of the entire model, including additional modules, such as classification heads, which were added after pre-training specifically for a particular downstream task. We use a zero-mean isotropic Gaussian prior over these additional parameters, with a scaling that is again tuned on held-out training data from the downstream task. Since we obtain a closed-form prior, the re-scaled distributions we learn are compatible with a wide variety of Bayesian inference algorithms. In our experiments, we choose stochastic gradient Hamiltonian Monte Carlo (SGHMC) [5] and Stochastic Gradient Langevin Dynamics (SGLD) [50] since these samplers simultaneously provide strong performance and tractable computation costs. Once we have obtained samples from the downstream posterior, we use these samples to form a Bayesian model average for test-time predictions. In Figure 2c, we also observe the advantage of Bayesian inference (SGHMC) over MAP estimation (SGD) on the negative log-posterior yielded by our learned prior. Bayesian inference outperforms MAP estimation across all dataset sizes we consider. In general, Bayesian inference can benefit most from an expressive prior. Indeed, priors reshape the whole posterior landscape — and the distinctive feature of a Bayesian approach is that it marginalizes over the whole posterior, rather than simply using an optimum as with standard training. 3.4 Practical Considerations While our method requires a learned prior and a re-scaling coefficient, it is very easy to use and has minimal computational costs over standard fine-tuning routines. In particular, no expert intervention or knowledge of Bayesian methods is required, and only a single hyperparameter needs to be tuned.
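As a minimal illustration of this single-hyperparameter tuning (a sketch only; fine_tune_with_prior, evaluate_accuracy, and the data handles are placeholder names rather than the released code), the re-scaling coefficient can be selected with a simple validation grid search:

```python
# Select the prior re-scaling coefficient lambda (Section 3.2) by grid search.
# fine_tune_with_prior is assumed to train on downstream data under the learned
# Gaussian prior N(mu, lam * Sigma); evaluate_accuracy scores on held-out data.
candidate_lambdas = [1e0, 1e1, 1e2, 1e3, 1e4]  # illustrative grid of values >= 1

best_lam, best_acc = None, float("-inf")
for lam in candidate_lambdas:
    model = fine_tune_with_prior(train_data, prior_mean, prior_cov, lam=lam)
    acc = evaluate_accuracy(model, holdout_data)
    if acc > best_acc:
        best_lam, best_acc = lam, acc

# best_lam is then fixed and reused for MAP training or for full Bayesian
# inference with SGHMC/SGLD on the downstream task (Section 3.3).
```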
In Appendix Section A, we evaluate the costs of each of the three stages of our pipeline in detail: (i) inferring the posterior on the source task; (ii) re-scaling the posterior to become an informative prior for the downstream task; (iii) using the informative prior in the downstream task. In short, (i) has a minor cost (e.g., 1/4 of an epoch on ImageNet) that is a one-time cost when a pre-trained posterior is publicly released, as we have done; (ii) involves a single hyperparameter that can be tuned in the same way as other hyperparameters (simple validation grid search), does not require specialized expertise, and adds 1/7 to the runtime of fine-tuning; (iii) has no additional cost if we do MAP optimization, and has a cost comparable to deep ensembles if we run our implementation of Bayesian inference, both in terms of fine-tuning and test time, and we show that Bayesian inference significantly outperforms deep ensembles. 4 Experiments We now conduct a thorough empirical evaluation of our transfer learning pipeline on image classification and semantic segmentation. We generally consider five approaches, which we find have the following performance ordering: (1) Bayesian inference with learned priors, (2) SGD with learned priors, (3) SGD with standard pre-training, (4) Bayesian inference with non-learned zero-mean priors, (5) SGD with non-learned zero-mean priors. We note that both (1) and (2) are part of our framework, and that SGD with learned priors (2) significantly outperforms standard transfer learning (3). 4.1 Experimental Setting We adopt the ResNet-50 and ResNet-101 architectures [18], and we scale the input image to 224×224 pixels to accommodate these feature extractors designed for ImageNet data. We use a SimCLR (SSL) ResNet-50 checkpoint [6] pre-trained on the ImageNet 1k dataset [9] and fit our prior to the SimCLR loss function. For the supervised setting, we use pre-trained torchvision ResNet-50 and ResNet-101 backbones. We perform image classification experiments on four downstream tasks: CIFAR-10, CIFAR-100 [30], Oxford Flowers-102 [36], and Oxford-IIIT Pets [38]. On semantic segmentation, we use a DeepLabv3+ system [4] with ResNet-50 and ResNet-101 backbone architectures, and we evaluate performance on the Pascal VOC 2012 [14] and Cityscapes [7] datasets. All error bars represent one standard error over 5 runs. We evaluate over a variety of downstream training set sizes, as transfer learning often involves limited downstream data. We provide a detailed description of hyperparameters in Appendix C.1. 4.2 Performance Comparison In Figure 3, we compare the five approaches described above across various dataset sizes. We observe the following: (i) learned priors consistently outperform SGD transfer learning, which in turn outperforms non-learned priors; (ii) conditioned on using the same prior, Bayesian inference often outperforms SGD training; (iii) Bayesian inference adds more value when used with informative priors; (iv) learned priors are relatively most valuable on intermediate data sizes for the downstream task. Point (iv) is particularly interesting: unlike standard SGD transfer learning, which involves an initialization from the pre-training task, the learned priors provide an explicit inductive bias, enabling the resulting model to learn more efficiently from downstream data. Once there is a sufficient amount of downstream data, the source pre-training becomes less important, and the methods become more similar in performance, although there is still a significant gap.
Computation and comparison to deep ensembles. Our Bayesian model average (BMA) contains 10 samples from the posterior, and thus incurs a higher test-time cost than a single SGD-trained model. We therefore additionally compare to an equally sized ensemble of SGD-trained transfer learning models in Appendix B.3. We see that our BMA strongly outperforms the deep ensemble. In transfer learning, deep ensemble members are initialized with the same pre-trained checkpoint, and fine-tuning tends to stay in the same basin, such that the ensemble components are relatively homogeneous in their predictions. Recall that SGD with our learned priors incurs essentially the same computational costs as standard transfer learning, but has much better performance. Comparison between self-supervised and supervised pre-training. In Appendix B.2 we provide additional evaluations with priors generated using torchvision ResNet-50 and ResNet-101 checkpoints. These priors differ from the SimCLR prior in that they are learned with labeled data rather than a self-supervision task. We see here also that learned supervised priors paired with Bayesian inference consistently outperform all baselines. We also see, notably, that the SSL priors are more transferable and outperform their supervised counterparts. Evaluating uncertainty. We also measure predictive uncertainty via negative log test-likelihood (NLL) in Figure 4b, with additional results in Appendix B.4. We find that Bayesian transfer learning outperforms all other methods. However, we note that even though Bayesian inference with a non-learned prior has inferior accuracy to SGD with pre-training, it has superior test-likelihood — indicating that the confidence of SGD-based transfer learners is significantly miscalibrated, as likelihood accounts for both accuracy and uncertainty. We also evaluate the calibration of uncertainty using reliability diagrams (Figure 9 in the Appendix). These plots demonstrate that Bayesian inference with learned priors is the best calibrated among the methods we consider. 4.3 Out-of-Distribution Generalization CIFAR-10.1 [42] is a test set of 2000 natural images modeled after CIFAR-10 with the same classes and image dimensions, following a similar dataset creation process to that described in the original CIFAR-10 paper. Models trained on CIFAR-10 consistently perform much worse on CIFAR-10.1 despite the images being similarly easy for humans to classify. Figure 4a indicates that our method achieves superior performance to SGD-based transfer learning and Bayesian inference with non-learned priors on CIFAR-10.1 across training set sizes, where training sets are sampled from CIFAR-10 training data. 4.4 Semantic Segmentation with ImageNet Priors Popular segmentation methods use ImageNet pre-trained weights as an initialization for backbone parameters [4]. We observe that semantic segmentation models, which contain numerous parameters aside from those of the backbone, still benefit immensely from a learned prior over backbone parameters. Placing a strong prior, even if over only the backbone component of the segmentation system, enables us to more fully harness pre-training and boost performance across the benchmark datasets PASCAL VOC 2012 and Cityscapes. When constructing a prior, we apply SWAG to torchvision (“Supervised”) and SimCLR (“SSL”) ResNet-50 models and a torchvision ResNet-101 model (the authors of SimCLR did not release a ResNet-101 model).
We assign a zero-mean isotropic Gaussian prior to the decoder and atrous convolution parameters of DeepLabv3+ which are not pre-trained. In Table 1, we see that Bayesian inference with learned priors achieves superior Mean-IoU to both baselines (SGD and SGLD) without learned priors. Priors learned on the SimCLR objective again outperform those learned on labeled data. We also present experiments with MAP estimates (SGD) for the loss functions induced by both learned priors in Appendix C.2. We again find that our learned priors boost the performance of MAP estimates as well, and thus are preferable to standard transfer learning, even for practitioners who do not intend to perform Bayesian inference. 5 Discussion Our work reveals several new key insights about deep transfer learning:
• Modifying the loss surface on the downstream task through informative priors leads to significant performance gains, with and without Bayesian inference.
• Bayesian inference provides a particular performance boost with the informative priors.
• Informative priors lead to more data-efficient performance on the downstream task.
• The success of this approach depends on capturing key directions in the loss surface of the source task, which we represent through a low-rank plus diagonal covariance matrix.
• Standard transfer learning can be significantly miscalibrated, even providing worse likelihood than Bayesian methods from scratch on the downstream task.
• Priors learned via self-supervised pre-training transfer better than those learned via supervised learning.
In short, pre-training your loss with care provides an easy drop-in replacement for conventional transfer learning that relies on initialization.
1. What is the focus and contribution of the paper on transfer learning? 2. What are the strengths of the proposed approach, particularly in terms of its motivation, presentation, and performance gains? 3. What are the weaknesses of the paper, especially regarding the additional computation and hyperparameter tuning costs? 4. Do you have any questions about the method's applicability or comparisons with other works? 5. What are the limitations of the paper regarding its scope and applications?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This submission shows how an approximate posterior distribution on DNN weights learned on a source task can be beneficial in transfer learning to downstream tasks. A 3-step method is proposed: 1) Fit a posterior over weights on the source task using SWAG. 2) Re-scale the learned posterior using a single scalar coefficient. 3) Plug the re-scaled prior into a Bayesian inference algorithm like SGLD or SGHMC. Experimental results are provided to support the following claims: In terms of performance, learned priors > SGD transfer learning > non-learned priors. SGD with learned priors significantly outperforms standard transfer learning (i.e., learned priors lead to performance gains with or w/o Bayesian inference). Learned priors lead to more data-efficient performance on the downstream task. When using the same prior, Bayesian inference often outperforms SGD training -- though Bayesian inference with a non-learned prior has inferior accuracy to SGD with pre-training.
Strengths And Weaknesses
Strengths
S1. The method is well-motivated, presented with clarity and shown to lead to performance gains. S2. The insights provided prompt further exploration of loss-surface alignment in transfer learning and other settings (e.g., few-shot learning). S3. The method builds on prior art and adds a single hyperparameter. For this reason it should be easy for others to apply. S4. The experimental validation includes classification tasks of different complexities and semantic segmentation. Thus, the method is shown to be beneficial in interesting problems.
Weaknesses
W1. The proposed method claims to be a drop-in replacement for standard transfer learning. Because the additional computation is not expected to be trivial, it would have been beneficial to provide a proper study of the expected increase in computation w.r.t. standard transfer learning. The submission includes some discussion about the additional training and inference cost but no discussion about the cost (in practice) of tuning hyperparameters. For example, there are a number of hyperparameters for the methods involved in the different steps of the proposed method. Setting or tuning these hyperparameters for a given transfer would be part of the cost of the method, in particular for steps 2 (scaling λ of the prior) and 3 (parameters of SGLD and SGHMC). W2. The main technical novelty in the method is the scaling of the prior (steps 1 and 3 are applications of prior art) but this was not given much attention in the experiments (only Fig. 2c). What is the recommended procedure and expected cost for tuning the scaling parameter λ? W3. Plots, legends and captions are not synchronized. For example, in Fig. 2c there is a purple plot but the legend does not include purple; in Fig. 7 the caption speaks of a red plot but there is no red in the plots.
Questions
Q1. What would be the expected increase in computation (tuning included) w.r.t. conventional transfer learning for applying the proposed method to common tasks? Q2. The method is compared to deep ensembles with regards to accuracy. How does the method compare with regard to overconfidence?
Limitations
The two limitations discussed are the additional computation and the inclusion of only vision applications.
NIPS
Title Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors Abstract Deep learning is increasingly moving towards a transfer learning paradigm whereby large foundation models are fine-tuned on downstream tasks, starting from an initialization learned on the source task. But an initialization contains relatively little information about the source task, and does not reflect the belief that our knowledge of the source task should affect the locations and shape of optima on the downstream task. Instead, we show that we can learn highly informative posteriors from the source task, through supervised or self-supervised approaches, which then serve as the basis for priors that modify the whole loss surface on the downstream task. This simple modular approach enables significant performance gains and more data-efficient learning on a variety of downstream classification and segmentation tasks, serving as a drop-in replacement for standard pre-training strategies. These highly informative priors also can be saved for future use, similar to pre-trained weights, and stand in contrast to the zero-mean isotropic uninformative priors that are typically used in Bayesian deep learning. ∗Authors contributed equally. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 1 Introduction The ability to transfer what is learned from one task to another — learning to ride a bicycle to then ride a unicycle — has historically set apart biological intelligence from machine learning approaches. However, transfer learning is quickly becoming mainstream practice in deep learning. Typically, large “foundation models” are pre-trained on massive volumes of source data, and then the learned parameter vector is used as an initialization for training in a downstream task. While this approach has had great empirical success, reliance on an initialization is a very limited way to perform transfer learning. If we are doing a good job of optimization, then our final solution should be independent of initialization, barring local minima with identical training loss. Moreover, our knowledge of the source task should affect the locations and shapes of optima on the downstream task. We propose to instead use a re-scaled Bayesian parameter posterior from the first task as a pre-trained prior for the downstream task. Since the negative log posterior on the downstream task is our loss function, this procedure has the effect of reshaping our training objective on the downstream task, to reflect more nuanced information that we have learned from the source task, as in Figure 1. The posterior on the source task is re-scaled before use as a prior on the downstream task to reflect the belief that the source and downstream tasks are drawn from different distributions. With our now highly informative prior for the downstream task, we can then proceed with optimization, or perform full Bayesian model averaging with the posterior on the downstream task. We find both procedures profoundly improve upon standard deep transfer learning, making use of a simple pipeline that involves easy-to-use pre-existing components. Indeed, the simplicity of the approach, combined with its promising empirical performance, is one of its greatest features — enabling its use as a drop-in replacement for standard approaches for deep transfer learning. The pre-training posterior can be saved as a prior for a wide array of downstream tasks, similar to pre-trained weights.
Despite the simplicity and effectiveness of this approach, Bayesian neural networks are almost always used with uninformative isotropic zero-mean priors [51, 25, 15], and have not harnessed recent advances in self-supervised learning. Furthermore, as we will show, the details of how we perform Bayesian transfer learning with modern neural networks are crucial to practical success. For example, it is important that the prior on the source task be represented with an informative covariance matrix, indicating which directions in parameter space we expect to provide good solutions on the source task, which significantly outperforms isotropic or diagonal counterparts. We also provide several key conceptual findings — for example, that the informative priors provide an inductive bias that enables more data-efficient fine-tuning than a pre-trained initialization, and that pre-trained priors based on self-supervised learning are more transferable than supervised pre-trained priors. Finally, we observe through extensive experiments across both image classification and semantic segmentation on multiple neural architectures, and on both supervised and self-supervised (SimCLR [6]) pre-training loss functions, that Bayesian inference is particularly advantageous in the transfer learning setting. We emphasize that the proposed approach outperforms standard transfer learning, without requiring any expert intervention or knowledge of Bayesian methods. One only needs to tune a single additional hyperparameter using standard cross-validation, which incurs minimal computational overhead. We release PyTorch pre-trained priors and code for learning and using priors for downstream inference: https://github.com/hsouri/BayesianTransferLearning. 2 Background Bayesian Neural Networks. Consider a neural network f with weights w and a training dataset D = {(x(i), y(i))} of independently drawn samples. In standard non-Bayesian neural network training, we minimize the negative log posterior, − log p(w|D, f) ∝ − log p(D|w, f) − log p(w|f), where the likelihood p(D|w, f) denotes the probability that a model with weights w would generate the observed labels {y(i)} (e.g., cross-entropy). The log prior log p(w|f) often takes the form of weight decay, corresponding to a zero-mean isotropic Gaussian prior. In Bayesian modeling, we instead make predictions with a Bayesian Model Average (BMA) of all models weighted by their posterior probabilities: p(y|x,D, f) = ∫ p(y|x,w, f) p(w|D, f) dw. There are many ways to approximate this integral, such as MCMC, or variational approaches, which take a finite number of approximate samples wj from the parameter posterior to form the simple Monte Carlo estimate: p(y|x,D, f) ≈ (1/J) ∑j p(y|x,wj, f). Almost always, one uses zero-mean isotropic priors [51]. There are also specialized priors, including heavy-tailed priors [15, 25], noise-contrastive priors for high uncertainty under distribution shift [17], and input-dependent priors for domain generalization [24]. Transfer Learning. In transfer learning, we wish to recycle the representation learned on one task to improve performance on another. Transfer learning is now widely applied in deep learning, and forms the basis for foundation models [1, 12, 8, 43, 4, 20, 10], which are exceptionally large neural networks pre-trained on massive volumes of data, and then fine-tuned on a downstream task.
Recent work has found self-supervised pre-training can transfer better than supervised pre-training [19], in line with our finding in Section 4 that self-supervised pre-trained priors transfer better. In continual learning, we wish to learn tasks sequentially without forgetting what has been learned previously. Elastic Weight Consolidation (EWC) prevents forgetting by imposing a penalty, the diagonal of a Fisher information matrix computed on previous tasks, for adapting to a new task [29]. We can contextualize the EWC penalty within Bayesian transfer learning as a negative log Gaussian prior with diagonal covariance used for maximum a posteriori (MAP) estimation on sequential tasks, such as digit classification or RL [29]. In our experiments, we find that MAP estimation, diagonal covariance, and supervised pre-training are all suboptimal in the transfer learning setting. Leveraging Auxiliary Data and Knowledge Transfer in Bayesian Modeling. A number of works outside of deep learning have considered knowledge transfer in Bayesian modeling, especially in settings such as domain adaptation or homogeneous transfer learning in which source and target tasks contain similar feature and label spaces but differ in their marginal distributions. For example, Xuan et al. [52] consider Bayesian transfer learning methods for probabilistic graphical models, and Karbalayghareh et al. [28] develop a theoretical framework for understanding optimal Bayes classifiers given prior knowledge from other domains. Other works on Bayesian transfer learning learn a Dirichlet prior over naive Bayes classifiers [44]. Bayesian optimization methods can also find transferable hyperparameter settings [40] or select data which will yield transferable models on shifted domains [45]. Schnaus [47] uses the Laplace approximation to learn posteriors with Kronecker-factored covariance and then adjusts the posteriors by optimizing PAC-Bayes generalization bounds to create priors for downstream applications. Bayesian tools have additionally been used for leveraging auxiliary data or multiple domains in deep learning. Chandra and Kapoor [2] learn from multiple domains simultaneously using a round-robin task sampling procedure and a single-layer neural network. Bayesian methods for continual learning update the posterior to accommodate new tasks without forgetting how to perform previous ones [35, 49, 13, 27, 46], or develop kernels based on neural networks trained on previous tasks for Gaussian process inference [33, 37]. Semi-supervised algorithms can incorporate unlabeled data into the training pipelines of BNNs using biologically plausible Bayesian Confidence Propagation Neural Networks (BCPNN) which model the cortex [41], by perturbing weights and using consistency regularization [11], or via semi-supervised deep kernel learning [26]. Gao et al. [16] also show how to harness unlabeled data to learn reference priors, uninformative priors which depend on the amount of training data and allow labels to most efficiently inform inference. Deep kernel learning has also been used for Bayesian meta-learning in few-shot classification and regression [39]. In contrast to these works, we do not perform multi-task learning, nor is our goal to harness auxiliary unlabeled downstream data; we instead focus on transfer learning, leveraging pre-training data for expressive priors that maximize performance on a single downstream task.
3 Bayesian Transfer Learning with Pre-Trained Priors In order to transfer knowledge acquired through pre-training to downstream tasks, we adopt a pipeline with three simple components, composed of easy-to-use existing tools: 1. First, we fit a probability distribution with closed-form density to the posterior over feature extractor parameters using a pre-trained checkpoint and SWAG [34] (Section 3.1). 2. Second, we re-scale this distribution, viewed as a prior for new tasks, to reflect the mismatch between the pre-training and downstream tasks and to add coverage to parameter settings which might be consistent with the downstream, but not pre-training, task. To this end, we tune a single scalar coefficient on held-out validation data (Section 3.2). 3. Finally, we plug the re-scaled prior into a Bayesian inference algorithm, along with a zero-mean isotropic Gaussian prior over added parameters (e.g. classification head), to form a posterior on the downstream task. We then either optimize the posterior, or use it to perform full Bayesian inference with SGLD and SGHMC samplers [50, 5] (Section 3.3). We illustrate this pipeline in Figure 5 in the Appendix. The simplicity and modularity of this framework are key strengths: by carefully combining easy-to-use existing components, we will see in Section 4 that we can straightforwardly improve the default approaches to deep transfer learning. The potential for impact is significant: we can use this pipeline as a drop-in replacement for standard procedures used in deploying foundation models. At the same time, there is significant novelty: while Bayesian neural networks are typically used with simple zero-mean isotropic priors, we leverage the significant developments in self-supervised learning to produce highly informative priors. Moreover, in the following subsections (3.1, 3.2, and 3.3) we will gain a nuanced understanding of each of these three components and the key considerations for practical success. We discuss computational considerations in Section 3.4. For experiments in this section, we use a prior learned over the parameters of an ImageNet pre-trained SimCLR ResNet-50 feature extractor [9, 6, 18], and we choose CIFAR-10 and CIFAR-100 for downstream tasks [30]. We perform Bayesian inference with stochastic gradient Hamiltonian Monte Carlo (SGHMC) [5]. The experiments in this section are primarily intended to gain conceptual insights into each step of our approach. In Section 4, we present our main empirical evaluations. 3.1 Learning Transferable Priors We begin by building a probability distribution over the parameters of a feature extractor which represents knowledge we acquire from pre-training. To this end, we fit the distribution to the Bayesian posterior, or regularized loss function, on the pre-training task. This pre-training posterior will become the prior for downstream tasks and can be saved, or publicly released, for future use, similar to pre-trained weights. This stage requires two major design choices: a pre-training task and an algorithm for constructing the probability distribution. Our downstream inference procedures require that our prior is represented as a closed-form density function.
Thus, we must use a method that can provide a closed-form posterior approximation for the source task, which can then be re-scaled and used as a prior in the downstream task. We opt for SWA-Gaussian (SWAG) [34] due to its simplicity, scalability, popularity, and non-diagonal covariance. Other procedures that provide closed-form posterior approximations, such as the Laplace approximation or variational methods, could also be applied, though as we will see, a non-diagonal covariance is particularly important to the success of this approach. SWAG starts from a pre-trained model, and runs a small number M of fine-tuning epochs with a modified learning rate schedule [34]. The SWAG approximate posterior distribution is given by N(w̄, (1/2)Σdiag + (1/2)Σlow-rank), where w̄ = (1/M) ∑_{t=1}^{M} w_t is the SWA [23] solution, Σdiag = (1/(L−1)) ∑_{t=M−L+1}^{M} diag(w_t − w̄), and Σlow-rank = (1/(L−1)) ∑_{t=M−L+1}^{M} (w_t − w̄)(w_t − w̄)⊤, where L is a hyperparameter controlling the rank of the low-rank component of the covariance matrix. After obtaining a closed-form distribution using SWAG, we remove the head on top of the feature extractor, for example, a linear classification module, and we consider only the distribution's restriction to the parameters of the feature extractor. New layers that are added for downstream tasks receive a non-learned prior over their parameters. To highlight the versatility of Bayesian transfer learning, we will focus on both supervised image classification and self-supervised pre-training tasks, and leverage existing torchvision and SimCLR [6] checkpoints. For both torchvision and SimCLR models, we learn the prior using the associated loss function — cross-entropy and InfoNCE, respectively, regularized by weight decay. In both settings, the regularized loss function can be represented as the sum of a negative log-likelihood and the negative log Gaussian density (weight decay). While standard SGD-based transfer learning uses a learned parameter vector, our Bayesian transfer learning approach uses a covariance matrix over feature extractor parameters which contains information about the pre-training loss surface geometry. Can we really “learn” a prior? A data-dependent prior may sound odd, because a prior reflects our beliefs “before we see the data”. However, it is entirely principled to learn a prior, as long as it is not learned using exactly the same labeled data we use in our downstream likelihood for the downstream task. Indeed, any informative prior is based on data that has shaped our beliefs. Alignment of Pre-Training and Downstream Loss Geometry. If the covariance matrix of our learned prior is to be more effective than an isotropic one, then it must not favour poorly generalizing directions for the downstream task. Thus, we compare the alignment of the learned covariance matrix with a downstream task’s test loss. We begin by computing the top 5 leading singular vectors of a SWAG learned covariance matrix over parameters of the SimCLR ImageNet-trained ResNet-50 feature extractor. We then train a linear classifier head on CIFAR-10 training data on top of the fixed pre-trained feature extractor. Starting at the learned parameter vector, we perturb the feature extractor parameters in the direction of the singular vectors (fixing fully-connected classifier parameters), and measure the increase in test loss. We compare to the test loss when instead perturbing by each of 10 random vectors.
All perturbation distances are filter normalized (as in [31, 21]), to account for invariance with respect to filter-wise parameter rescaling. In Figure 2a, we see the CIFAR-10 test loss is far flatter in the directions of leading eigenvectors of the pre-trained covariance than in a random direction, indicating the learned prior indeed promotes directions consistent with the downstream task. Learned Covariance Outperforms Only a Learned Mean. After verifying that our pre-trained priors do in fact identify flat directions of the pre-training loss which are aligned with the downstream loss, we directly compare the performance benefits of our learned covariance over an isotropic covariance with a learned mean. To this end, we swap out our learned prior’s covariance with an isotropic version αI , where α is tuned on a held out validation set. Figure 2c shows that a learned covariance consistently outperforms its isotropic counterpart, indicating that the shape of the pre-training loss surface’s basin is informative for downstream tasks. The x-axis here denotes the number of training samples used for fine-tuning on the downstream task. We also see Bayesian model averaging provides further performance gains, which we discuss further in Section 3.3. We now dissect just how important the low-rank component is. How is the Prior Best Structured? In the context of continual learning, elastic weight consolidation (EWC) [29] uses purely diagonal covariance priors to help prevent forgetting in continual learning. In this paper, we are interested in transfer learning, and wish to understand the benefits of a lowrank component for capturing particularly important directions in the pre-training loss surface for transferring to downstream tasks. Omitting the low-rank component slightly reduces the memory footprint, but loses potentially important information about the shape of the pre-training posterior mode. In practice, the rank of the matrix is determined by the number of samples collected when running SWAG. In Figure 2b, we present the performance of a prior learned via SWAG on a SimCLR ResNet-50 feature extractor for transfer learning to CIFAR-10 and CIFAR-100. As the rank of the low-rank component increases from zero (diagonal covariance), we see the performance improves dramatically until it saturates, indicating that only a small number of dominant directions are important for the transferability of the prior. Note that performance saturation occurs later for CIFAR-100 than CIFAR-10 — likely due to the higher complexity of CIFAR-100 compared to CIFAR-10, which has 10× fewer classes. 3.2 Rescaling the Prior In incremental learning, it is common to acquire some data, form a posterior, and then use this posterior as our new prior in acquiring future data. This procedure is equivalent to forming the posterior from all of the data at once, assuming all data are drawn from the same distribution with the same likelihood. However, in transfer learning, we assume the data from the source and downstream task are drawn from different but related distributions. Thus we do not want to directly re-use a posterior from the source as a prior for the downstream task, without modification. For example, as we acquire more data from the source task, our posterior will become increasingly concentrated. This concentration is an issue, as certainty with respect to the ideal parameters for the pre-training task does not imply certainty on the ideal parameters for the downstream task. 
To address this consideration, we rescale the learned Gaussian prior by multiplying its covariance matrix by a scalar value. We select the highest performing scalar value across a grid on a holdout set from the downstream task. If we do not scale the covariance enough, our prior will become concentrated around parameters which are inconsistent with the downstream task, and if we scale the covariance too much, our prior will be diffuse and assign too much mass to regions of parameter space which are again inconsistent with the downstream task. We now put this intuition to the test. We fit the SimCLR pre-training loss using a Gaussian prior with mean µ and covariance matrix Σ. Given our uncertainty regarding the strength of the relationship between the pre-training and downstream tasks, it is unclear if our learned prior would lead to over-confidence on the downstream task. To rectify this problem, we instead assign prior covariance matrix λΣ with λ ≥ 1. Figure 2d shows the accuracy of our method on CIFAR-10 as a function of λ. As expected, we see that we need to make the prior more diffuse if we are to optimize performance. In fact, prior rescaling can be the difference between strong and poor performance. We also see there is an optimal scaling factor where the prior is neither overly diffuse nor concentrated around a solution which is inconsistent with the downstream task. Additionally, small scaling factors constrain ourselves to poor parameters, hurting performance much more than large scaling factors, which simply inducing a near-uniform prior, negating the benefits of transfer learning. 3.3 Bayesian Inference After learning a prior and re-scaling it, we finally must draw samples from the downstream task’s posterior over the parameters of the entire model, including additional modules, such as classification heads, which were added after pre-training specifically for a particular downstream task. We use a zero-mean isotropic Gaussian prior over these additional parameters, with a scaling that is again tuned on held-out training data from the downstream task. Since we obtain a closed-form prior, the re-scaled distributions we learn are compatible with a wide variety of Bayesian inference algorithms. In our experiments, we choose stochastic gradient Hamiltonian Monte Carlo (SGHMC) [5] and Stochastic Gradient Langevin Dynamics (SGLD) [50] since these samplers simultaneously provide strong performance and tractable computation costs. Once we have obtained samples from the downstream posterior, we use these samples to form a Bayesian model average for test-time predictions. In Figure 2c, we also observe the advantage of Bayesian inference (SGHMC) over MAP estimation (SGD) on the negative log-posterior yielded by our learned prior. Bayesian inference outperforms MAP estimation across all dataset sizes we consider. In general, Bayesian inference can benefit most from an expressive prior. Indeed, priors reshape the whole posterior landscape — and the distinctive feature of a Bayesian approach is that it marginalizes over the whole posterior, rather than simply using an optimum as with standard training. 3.4 Practical Considerations While our method requires a learned prior and a re-scaling coefficient, it is very easy to use and has minimal computational costs over standard fine-tuning routines. In particular, no expert intervention or knowledge of Bayesian methods is required, and only a single hyperparameter needs to be tuned. 
In the Appendix Section A, we evaluate the costs of each of the three stages of our pipeline in detail: (i) inferring the posterior on the source task; (ii) re-scaling the posterior to become an informative prior for the downstream task; (iii) using the informative prior in the downstream task. In short, (i) has a minor cost (e.g., 14 of an epoch on ImageNet) that is a one-time cost when a pre-trained posterior is publicly released, as we have done; (ii) involves a single hyperparameter that can be tuned in the same way as other hyperparameters (simple validation grid search), does not require specialized expertise, and adds 17 to the runtime of fine-tuning; (iii) has no additional cost if we do MAP optimization and has a cost comparable to deep ensembles if we run our implementation of Bayesian inference, both in terms of fine-tuning and test-time, and we show that Bayesian inference significantly outperforms deep ensembles. 4 Experiments We now conduct a thorough empirical evaluation of our transfer learning pipeline on image classification and semantic segmentation. We generally consider five approaches, which we find have the following performance ordering: (1) Bayesian inference with learned priors, (2) SGD with learned priors, (3) SGD with standard pre-training, (4) Bayesian inference with non-learned zero-mean priors, (5) SGD with non-learned zero-mean priors. We note that both (1) and (2) are part of our framework, and that SGD with learned priors (2) significantly outperforms standard transfer learning (3). 4.1 Experimental Setting We adopt the ResNet-50 and ResNet-101 architectures [18], and we scale the input image to 224×224 pixels to accommodate these feature extractors designed for ImageNet data. We use a SimCLR (SSL) ResNet-50 checkpoint [6] pre-trained on the ImageNet 1k dataset [9] and fit our prior to the SimCLR loss function. For the supervised setting, we use pre-trained torchvision ResNet-50 and ResNet101 backbones. We perform image classification experiments on four downstream tasks: CIFAR-10, CIFAR-100 [30], Oxford Flowers-102 [36], and Oxford-IIIT Pets [38]. On semantic segmentation, we use a DeepLabv3+ system [4] with ResNet-50 and ResNet-101 backbone architectures, and we evaluate performance on the Pascal VOC 2012 [14] and Cityscapes [7] datasets. All error bars represent one standard error over 5 runs. We evaluate over a variety of downstream training set sizes, as transfer learning often involves limited downstream data.We provide a detailed description of hyperparameters in Appendix C.1. 4.2 Performance Comparison In Figure 3, we compare the five approaches described above across various dataset sizes. We observe the following: (i) learned priors consistently outperform SGD transfer learning which in turn outperforms non-learned priors; (ii) conditioned on using the same prior, Bayesian inference often outperforms SGD training; (iii) Bayesian inference adds more value when used with informative priors; (iv) learned priors are relatively most valuable on intermediate data sizes for the downstream task. Point (iv) is particularly interesting: unlike standard SGD transfer learning, which involves an initialization from the pretraining task, the learned priors provide an explicit inductive bias, enabling the resulting model to learn more efficiently from downstream data. Once there is a sufficient amount of downstream data, the source pre-training becomes less important, and the methods become more similar in performance, although there is still a significant gap. 
Computation and comparison to deep ensembles. Our Bayesian model average (BMA) contains 10 samples from the posterior, and thus incurs a higher test-time cost than a single SGD-trained model. We therefore additionally compare to an equally sized ensemble of SGD-trained transfer learning models in Appendix B.3. We see that our BMA strongly outperforms the deep ensemble. In transfer learning, deep ensemble members are initialized with the same pre-trained checkpoint, and fine-tuning tends to stay in the same basin, such that the ensemble components are relatively homogenous in their predictions. Recall that SGD with our learned priors incurs essentially the same computational costs as standard transfer learning, but has much better performance. Comparison between self-supervised and supervised pre-training. In Appendix B.2 we provide additional evaluations with priors generated using torchvision ResNet-50 and ResNet-101 checkpoints. These priors differ from the SimCLR prior in that they are learned with labeled data rather than a self-supervision task. We see here also that learned supervised priors paired with Bayesian inference consistently outperforms all baselines. We also see, notably, that the SSL priors are more transferable and outperform their supervised counterparts. Evaluating uncertainty. We also measure predictive uncertainty via negative log test-likelihood (NLL) in Figure 4b, with additional results in Appendix B.4. We find that Bayesian transfer learning outperforms all other methods. However, we note that even though Bayesian inference with a non-learned prior has inferior accuracy to SGD with pre-training, it has superior test-likelihood — indicating that the confidence of SGD-based transfer learners is significantly miscalibrated, as likelihood accounts for both accuracy and uncertainty. We also evaluate the calibration of uncertainty using reliability diagrams (Figure 9 in the Appendix). These plots demonstrate that Bayesian inference with learned priors is the best calibrated among methods we consider. . 4.3 Out-of-Distribution Generalization CIFAR-10.1 [42] is a test set of 2000 natural images modeled after CIFAR-10 with the same classes and image dimensions, following a similar dataset creation process to that described in the original CIFAR-10 paper. Models trained on CIFAR-10 consistently perform much worse on CIFAR-10.1 despite the images being similarly easy for humans to classify. Figure 4a indicates that our method achieves superior performance to SGD-based transfer learning and Bayesian inference with non-learned priors on CIFAR-10.1 across training set sizes, where training sets are sampled from CIFAR-10 training data. 4.4 Semantic Segmentation with ImageNet Priors Popular segmentation methods use ImageNet pre-trained weights as an initialization for backbone parameters [4]. We observe that semantic segmentation models, which contain numerous parameters aside from those of the backbone, still benefit immensely from a learned prior over backbone parameters. Placing a strong prior, even if over only the backbone component of the segmentation system, enables us to more fully harness pre-training and boost performance across benchmark datasets PASCAL VOC 2012 and Cityscapes. When constructing a prior, we apply SWAG to torchvision (“Supervised”) and SimCLR (“SSL”) ResNet-50 models and a torchvision ResNet-101 model (the authors of SimCLR did not release a ResNet-101 model). 
We assign a zero-mean isotropic Gaussian prior to the decoder and atrous convolution parameters of DeepLabv3+ which are not pre-trained. In Table 1, we see that Bayesian inference with learned priors achieves superior Mean-IoU to both baselines (SGD and SGLD) without learned priors. Priors learned on the SimCLR objective again outperform those learned on labeled data. We also present experiments with MAP estimates (SGD) for the loss functions induced by both learned priors in Appendix C.2. We again find that our learned priors boost the performance of MAP estimates as well, and thus are preferable to standard transfer learning, even for practitioners who do not intend to perform Bayesian inference. 5 Discussion Our work reveals several new key insights about deep transfer learning: • Modifying the loss surface on the downstream task through informative priors leads to significant performance gains, with and without Bayesian inference. • Bayesian inference provides a particular performance boost with the informative priors. • Informative priors lead to more data-efficient performance on the downstream task. • The success of this approach depends on capturing key directions in the loss surface of the source task, which we represent through a low-rank plus diagonal covariance matrix. • Standard transfer learning can be significantly miscalibrated, even providing worse likeli- hood than Bayesian methods from scratch on the downstream task. • Priors learned via self-supervised pre-training transfer better than those learning via supervised learning. In short, pre-training your loss with care provides an easy drop-in replacement for conventional transfer learning that relies on initialization.
1. What is the main contribution of the paper regarding Bayesian learning for transfer learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its technical contribution and methodology? 3. Do you have any concerns or suggestions regarding the scaling factor adjustment method used in the proposed approach? 4. How does the reviewer assess the background review and relevance of the paper's content to Bayesian transfer learning? 5. What are the limitations of the paper regarding computational burden and target task performance?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper presents a Bayesian learning method for transfer learning on a target task. Specifically, the posterior approximation method SWAG is used to estimate the posterior distribution of a supervised or self-supervised pre-trained model. This distribution is used as the prior for downstream learning. Experiments and an ablation study demonstrate the effectiveness of learning a prior for transfer. Strengths And Weaknesses Strengths The problem of learning a prior for transfer with Bayesian learning is well motivated. The proposed method with three steps is reasonable. Experimental results of the proposed method on semantic segmentation are quite strong. Weaknesses This paper employs methods of posterior approximation and Bayesian learning and incorporates them into Bayesian transfer by learning a prior from source tasks. These existing methods have been well developed, so the overall technical contribution is incremental upon these. The method introduced in section 3.2 looks a bit arbitrary. In order to slightly smooth the prior distribution, this paper proposes to multiply the covariance matrix by a single scaling factor. This does not seem to be a principled way to adapt a prior distribution to a target task. Are there any other possible methods of adjusting the covariance matrix for transfer? The holdout validation method used to select this scaling factor also neglects other data information that is potentially useful for determining the scaling factor. The background review on Bayesian transfer learning seems not very thorough, which makes the overall contribution of the proposed transfer learning with Bayesian learning unclear. The only relevant review on this direction is about continual learning. The proposed method will introduce additional computational burden for transfer learning. Questions Q1: Most of the evaluations in experiments focus on the performance of the learned prior when transferred to a new target task. I'm not sure whether it is possible to use the Bayesian model average on the source task itself, as it is not clear how good the approximated posterior distribution is. Such a study of the posterior approximation on the source task could probably show the effectiveness of the closed-form density estimate as well as of Bayesian learning. Q2: The rescaling method proposed in section 3.2 is not well developed. Is there any related work on the proposed scaling method? Practically, from Fig. 2d, it seems that the scaling factor is very big. It seems that it is necessary to tune this scaling factor for every new target dataset. Is it possible to estimate this parameter without using a holdout validation set? Q3: Transfer learning results on target classification tasks are reported mainly using test error. A more common comparison is classification accuracy, since error depends on the loss function. As seen from the tables, the test errors of several baseline methods are quite close. What are the classification accuracies for these results? Limitations See weaknesses and questions.
NIPS
Title Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors Abstract Deep learning is increasingly moving towards a transfer learning paradigm whereby large foundation models are fine-tuned on downstream tasks, starting from an initialization learned on the source task. But an initialization contains relatively little information about the source task, and does not reflect the belief that our knowledge of the source task should affect the locations and shape of optima on the downstream task. Instead, we show that we can learn highly informative posteriors from the source task, through supervised or self-supervised approaches, which then serve as the basis for priors that modify the whole loss surface on the downstream task. This simple modular approach enables significant performance gains and more data-efficient learning on a variety of downstream classification and segmentation tasks, serving as a drop-in replacement for standard pre-training strategies. These highly informative priors can also be saved for future use, similar to pre-trained weights, and stand in contrast to the zero-mean isotropic uninformative priors that are typically used in Bayesian deep learning. 1 Introduction The ability to transfer what is learned from one task to another — learning to ride a bicycle to then ride a unicycle — has historically set apart biological intelligence from machine learning approaches. However, transfer learning is quickly becoming mainstream practice in deep learning. Typically, large “foundation models” are pre-trained on massive volumes of source data, and then the learned parameter vector is used as an initialization for training in a downstream task. While this approach has had great empirical success, reliance on an initialization is a very limited way to perform transfer learning. If we are doing a good job of optimization, then our final solution should be independent of initialization, barring local minima with identical training loss. Moreover, our knowledge of the source task should affect the locations and shapes of optima on the downstream task. We propose to instead use a re-scaled Bayesian parameter posterior from the first task as a pre-trained prior for the downstream task. Since the negative log posterior on the downstream task is our loss function, this procedure has the effect of reshaping our training objective on the downstream task to reflect more nuanced information that we have learned from the source task, as in Figure 1. The posterior on the source task is re-scaled before use as a prior on the downstream task to reflect the belief that the source and downstream tasks are drawn from different distributions. With our now highly informative prior for the downstream task, we can then proceed with optimization, or perform full Bayesian model averaging with the posterior on the downstream task. We find both procedures profoundly improve upon standard deep transfer learning, making use of a simple pipeline that involves easy-to-use pre-existing components. Indeed, the simplicity of the approach, combined with its promising empirical performance, is one of its greatest features — enabling its use as a drop-in replacement for standard approaches for deep transfer learning. The pre-training posterior can be saved as a prior for a wide array of downstream tasks, similar to pre-trained weights.
Despite the simplicity and effectiveness of this approach, Bayesian neural networks are almost always used with uninformative isotropic zero-mean priors [51, 25, 15], and have not harnessed recent advances in self-supervised learning. Furthermore, as we will show, the details of how we perform Bayesian transfer learning with modern neural networks are crucial to practical success. For example, it is important that the prior on the source task be represented with an informative covariance matrix, indicating which directions in parameter space we expect to provide good solutions on the source task, which significantly outperforms isotropic or diagonal counterparts. We also provide several key conceptual findings — for example, that the informative priors provide an inductive bias that enables more data-efficient fine-tuning than a pre-trained initialization, and that pre-trained priors based on self-supervised learning are more transferable than supervised pre-trained priors. Finally, we observe through extensive experiments across both image classification and semantic segmentation on multiple neural architectures, and on both supervised and self-supervised (SimCLR [6]) pre-training loss functions, that Bayesian inference is particularly advantageous in the transfer learning setting. We emphasize that the proposed approach outperforms standard transfer learning, without requiring any expert intervention or knowledge of Bayesian methods. One only needs to tune a single additional hyperparameter using standard cross-validation, which incurs minimal computational overhead. We release PyTorch pre-trained priors and code for learning and using priors for downstream inference: https://github.com/hsouri/BayesianTransferLearning. 2 Background Bayesian Neural Networks. Consider a neural network f with weights w and a training dataset D = {(x(i), y(i))} of independently drawn samples. In standard non-Bayesian neural network training, we minimize the negative log posterior, − log p(w|D, f) ∝ − log p(D|w, f) − log p(w|f), where the likelihood p(D|w, f) denotes the probability that a model with weights w would generate the observed labels {y(i)} (e.g., cross-entropy). The log prior log p(w|f) often takes the form of weight decay, corresponding to a zero-mean isotropic Gaussian prior. In Bayesian modeling, we instead make predictions with a Bayesian Model Average (BMA) of all models weighted by their posterior probabilities: p(y|x, D, f) = ∫ p(y|x, w, f) p(w|D, f) dw. There are many ways to approximate this integral, such as MCMC or variational approaches, which take a finite number of approximate samples w_j from the parameter posterior to form the simple Monte Carlo estimate: p(y|x, D, f) ≈ (1/J) ∑_j p(y|x, w_j, f). Almost always, one uses zero-mean isotropic priors [51]. There are also specialized priors, including heavy-tailed priors [15, 25], noise-contrastive priors for high uncertainty under distribution shift [17], and input-dependent priors for domain generalization [24]. Transfer Learning. In transfer learning, we wish to recycle the representation learned on one task to improve performance on another. Transfer learning is now widely applied in deep learning, and forms the basis for foundation models [1, 12, 8, 43, 4, 20, 10], which are exceptionally large neural networks pre-trained on massive volumes of data, and then fine-tuned on a downstream task.
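As a concrete illustration of the Bayesian model average defined above, p(y|x, D, f) ≈ (1/J) ∑_j p(y|x, w_j, f), the following sketch averages the predictive distributions of several weight samples for a toy classifier. The model, the Gaussian perturbations standing in for posterior samples, and all names here are illustrative placeholders, not the paper's released implementation.

```python
# Minimal sketch of a Bayesian model average (BMA): average the predictive
# distributions of J posterior weight samples. The tiny classifier and the
# noise-based "posterior samples" below are placeholders for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_classifier():
    # Toy stand-in for f(x; w): 2 inputs -> 3 classes.
    return nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 3))

# Pretend these are J samples w_j ~ p(w | D, f); here we just perturb one model.
base = make_classifier()
posterior_samples = []
for _ in range(10):
    model = make_classifier()
    model.load_state_dict(base.state_dict())
    with torch.no_grad():
        for p in model.parameters():
            p.add_(0.01 * torch.randn_like(p))  # placeholder "posterior" noise
    posterior_samples.append(model)

def bma_predict(x, models):
    """p(y|x, D) ~= (1/J) * sum_j softmax(f(x; w_j))."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)

x = torch.randn(5, 2)                      # a batch of 5 test inputs
print(bma_predict(x, posterior_samples))   # each row sums to 1
```

The key point is that the distributions are averaged, rather than the weights or the logits, which is what distinguishes the BMA from simply using a single optimized model.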
Recent work has found that self-supervised pre-training can transfer better than supervised pre-training [19], in line with our finding in Section 4 that self-supervised pre-trained priors transfer better. In continual learning, we wish to learn tasks sequentially without forgetting what has been learned previously. Elastic Weight Consolidation (EWC) prevents forgetting by imposing a penalty, the diagonal of a Fisher information matrix computed on previous tasks, for adapting to a new task [29]. We can contextualize the EWC penalty within Bayesian transfer learning as a negative log Gaussian prior with diagonal covariance used for maximum a posteriori (MAP) estimation on sequential tasks, such as digit classification or RL [29]. In our experiments, we find that MAP estimation, diagonal covariance, and supervised pre-training are all suboptimal in the transfer learning setting. Leveraging Auxiliary Data and Knowledge Transfer in Bayesian Modeling. A number of works outside of deep learning have considered knowledge transfer in Bayesian modeling, especially in settings such as domain adaptation or homogeneous transfer learning in which source and target tasks contain similar feature and label spaces but differ in their marginal distributions. For example, Xuan et al. [52] consider Bayesian transfer learning methods for probabilistic graphical models, and Karbalayghareh et al. [28] develop a theoretical framework for understanding optimal Bayes classifiers given prior knowledge from other domains. Other works on Bayesian transfer learning learn a Dirichlet prior over naive Bayes classifiers [44]. Bayesian optimization methods can also find transferable hyperparameter settings [40] or select data which will yield transferable models on shifted domains [45]. Schnaus [47] uses the Laplace approximation to learn posteriors with Kronecker-factored covariance and then adjusts the posteriors by optimizing PAC-Bayes generalization bounds to create priors for downstream applications. Bayesian tools have additionally been used for leveraging auxiliary data or multiple domains in deep learning. Chandra and Kapoor [2] learn from multiple domains simultaneously using a round-robin task sampling procedure and a single-layer neural network. Bayesian methods for continual learning update the posterior to accommodate new tasks without forgetting how to perform previous ones [35, 49, 13, 27, 46], or develop kernels based on neural networks trained on previous tasks for Gaussian process inference [33, 37]. Semi-supervised algorithms can incorporate unlabeled data into the training pipelines of BNNs using biologically plausible Bayesian Confidence Propagation Neural Networks (BCPNN) which model the cortex [41], by perturbing weights and using consistency regularization [11], or via semi-supervised deep kernel learning [26]. Gao et al. [16] also show how to harness unlabeled data to learn reference priors, uninformative priors which depend on the amount of training data and allow labels to most efficiently inform inference. Deep kernel learning has also been used for Bayesian meta-learning in few-shot classification and regression [39]. In contrast to these works, we do not perform multi-task learning nor is our goal to harness auxiliary unlabeled downstream data; we instead address transfer learning, in which we leverage pre-training data to build expressive priors that maximize performance on a single downstream task.
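To make the connection to EWC discussed above concrete, the sketch below writes the EWC penalty as the negative log of a Gaussian prior with diagonal covariance centered at a previous-task solution. The toy model, data, strength value, and the simple empirical Fisher estimate are assumptions made for illustration only.

```python
# Sketch of the EWC-style penalty viewed as a negative log Gaussian prior:
# penalty = 0.5 * strength * sum_i F_i * (w_i - w*_i)^2, where F is a diagonal
# Fisher estimate at the previous-task solution w*. Model and data are toy.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(5, 20), nn.ReLU(), nn.Linear(20, 3))
x_prev, y_prev = torch.randn(256, 5), torch.randint(0, 3, (256,))

# Diagonal Fisher estimate: average squared gradient of the log-likelihood.
fisher = [torch.zeros_like(p) for p in model.parameters()]
for i in range(256):
    model.zero_grad()
    log_probs = torch.log_softmax(model(x_prev[i:i + 1]), dim=-1)
    log_probs[0, y_prev[i]].backward()
    for f, p in zip(fisher, model.parameters()):
        f += p.grad.detach() ** 2 / 256

anchor = [p.detach().clone() for p in model.parameters()]   # previous-task solution w*

def ewc_penalty(model, strength=100.0):
    """Negative log of a Gaussian prior with diagonal precision strength * F."""
    return 0.5 * strength * sum(
        (f * (p - a) ** 2).sum()
        for f, p, a in zip(fisher, model.parameters(), anchor))

# During training on a new task, this term would be added to the new task's loss:
x_new, y_new = torch.randn(64, 5), torch.randint(0, 3, (64,))
loss = nn.functional.cross_entropy(model(x_new), y_new) + ewc_penalty(model)
print(float(loss))
```

The diagonal structure of this prior is exactly what Sections 3 and 4 compare against: the learned priors studied in this paper additionally carry a low-rank covariance component.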
3 Bayesian Transfer Learning with Pre-Trained Priors In order to transfer knowledge acquired through pre-training to downstream tasks, we adopt a pipeline with three simple components, composed of easy-to-use existing tools: 1. First, we fit a probability distribution with closed-form density to the posterior over feature extractor parameters using a pre-trained checkpoint and SWAG [34] (Section 3.1). 2. Second, we re-scale this distribution, viewed as a prior for new tasks, to reflect the mismatch between the pre-training and downstream tasks and to add coverage to parameter settings which might be consistent with the downstream, but not pre-training, task. To this end, we tune a single scalar coefficient on held-out validation data (Section 3.2). 3. Finally, we plug the re-scaled prior into a Bayesian inference algorithm, along with a zero-mean isotropic Gaussian prior over added parameters (e.g. classification head), to form a posterior on the downstream task. We then either optimize the posterior, or use it to perform full Bayesian inference with SGLD and SGHMC samplers [50, 5] (Section 3.3). We illustrate this pipeline in Figure 5 in the Appendix. The simplicity and modularity of this framework are key strengths: by carefully combining easy-to-use existing components, we will see in Section 4 that we can straightforwardly improve the default approaches to deep transfer learning. The potential for impact is significant: we can use this pipeline as a drop-in replacement for standard procedures used in deploying foundation models. At the same time, there is significant novelty: while Bayesian neural networks are typically used with simple zero-mean isotropic priors, we leverage the significant developments in self-supervised learning to produce highly informative priors. Moreover, in the following subsections (3.1, 3.2, and 3.3) we will gain a nuanced understanding of each of these three components and the key considerations for practical success. We discuss computational considerations in Section 3.4. For experiments in this section, we use a prior learned over the parameters of an ImageNet pre-trained SimCLR ResNet-50 feature extractor [9, 6, 18], and we choose CIFAR-10 and CIFAR-100 for downstream tasks [30]. We perform Bayesian inference with stochastic gradient Hamiltonian Monte Carlo (SGHMC) [5]. The experiments in this section are primarily intended to gain conceptual insights into each step of our approach. In Section 4, we present our main empirical evaluations. 3.1 Learning Transferable Priors We begin by building a probability distribution over the parameters of a feature extractor which represents knowledge we acquire from pre-training. To this end, we fit the distribution to the Bayesian posterior, or regularized loss function, on the pre-training task. This pre-training posterior will become the prior for downstream tasks and can be saved, or publicly released, for future use, similar to pre-trained weights. This stage requires two major design choices: a pre-training task and an algorithm for constructing the probability distribution. Our downstream inference procedures require that our prior is represented as a closed-form density function.
Thus, we must use a method that can provide a closed-form posterior approximation for the source task, which can then be re-scaled and used as a prior in the downstream task. We opt for SWA-Gaussian (SWAG) [34] due to its simplicity, scalability, popularity, and non-diagonal covariance. Other procedures that provide closed-form posterior approximations, such as the Laplace approximation or variational methods, could also be applied, though as we will see, a non-diagonal covariance is particularly important to the success of this approach. SWAG starts from a pre-trained model, and runs a small number M of fine-tuning epochs with a modified learning rate schedule [34]. The SWAG approximate posterior distribution is given by N(w̄, ½Σ_diag + ½Σ_low-rank), where w̄ = (1/M) ∑_{t=1}^{M} w_t is the SWA [23] solution, Σ_diag = (1/(L−1)) ∑_{t=M−L+1}^{M} diag((w_t − w̄)²), and Σ_low-rank = (1/(L−1)) ∑_{t=M−L+1}^{M} (w_t − w̄)(w_t − w̄)^⊤, where L is a hyperparameter controlling the rank of the low-rank component of the covariance matrix. After obtaining a closed-form distribution using SWAG, we remove the head, for example a linear classification module, from the top of the feature extractor, and we consider only the distribution's restriction to the parameters of the feature extractor. New layers that are added for downstream tasks receive a non-learned prior over their parameters. To highlight the versatility of Bayesian transfer learning, we focus on both supervised image classification and self-supervised learning as pre-training tasks, and leverage existing torchvision and SimCLR [6] checkpoints. For both torchvision and SimCLR models, we learn the prior using the associated loss function — cross-entropy and InfoNCE, respectively, regularized by weight decay. In both settings, the regularized loss function can be represented as the sum of a negative log-likelihood and the negative log Gaussian density (weight decay). While standard SGD-based transfer learning uses a learned parameter vector, our Bayesian transfer learning approach uses a covariance matrix over feature extractor parameters which contains information about the pre-training loss surface geometry. Can we really “learn” a prior? A data-dependent prior may sound odd, because a prior reflects our beliefs “before we see the data”. However, it is entirely principled to learn a prior, as long as it is not learned using exactly the same labeled data that we use in the likelihood for the downstream task. Indeed, any informative prior is based on data that has shaped our beliefs. Alignment of Pre-Training and Downstream Loss Geometry. If the covariance matrix of our learned prior is to be more effective than an isotropic one, then it must not favor poorly generalizing directions for the downstream task. Thus, we compare the alignment of the learned covariance matrix with a downstream task's test loss. We begin by computing the top 5 leading singular vectors of a SWAG learned covariance matrix over parameters of the SimCLR ImageNet-trained ResNet-50 feature extractor. We then train a linear classifier head on CIFAR-10 training data on top of the fixed pre-trained feature extractor. Starting at the learned parameter vector, we perturb the feature extractor parameters in the direction of the singular vectors (fixing fully-connected classifier parameters), and measure the increase in test loss. We compare to the test loss when instead perturbing by each of 10 random vectors.
All perturbation distances are filter normalized (as in [31, 21]), to account for invariance with respect to filter-wise parameter rescaling. In Figure 2a, we see the CIFAR-10 test loss is far flatter in the directions of leading eigenvectors of the pre-trained covariance than in a random direction, indicating the learned prior indeed promotes directions consistent with the downstream task. Learned Covariance Outperforms Only a Learned Mean. After verifying that our pre-trained priors do in fact identify flat directions of the pre-training loss which are aligned with the downstream loss, we directly compare the performance benefits of our learned covariance over an isotropic covariance with a learned mean. To this end, we swap out our learned prior's covariance with an isotropic version αI, where α is tuned on a held-out validation set. Figure 2c shows that a learned covariance consistently outperforms its isotropic counterpart, indicating that the shape of the pre-training loss surface's basin is informative for downstream tasks. The x-axis here denotes the number of training samples used for fine-tuning on the downstream task. We also see Bayesian model averaging provides further performance gains, which we discuss further in Section 3.3. We now dissect just how important the low-rank component is. How is the Prior Best Structured? In the context of continual learning, elastic weight consolidation (EWC) [29] uses purely diagonal covariance priors to help prevent forgetting. In this paper, we are interested in transfer learning, and wish to understand the benefits of a low-rank component for capturing particularly important directions in the pre-training loss surface for transferring to downstream tasks. Omitting the low-rank component slightly reduces the memory footprint, but loses potentially important information about the shape of the pre-training posterior mode. In practice, the rank of the matrix is determined by the number of samples collected when running SWAG. In Figure 2b, we present the performance of a prior learned via SWAG on a SimCLR ResNet-50 feature extractor for transfer learning to CIFAR-10 and CIFAR-100. As the rank of the low-rank component increases from zero (diagonal covariance), we see the performance improves dramatically until it saturates, indicating that only a small number of dominant directions are important for the transferability of the prior. Note that performance saturation occurs later for CIFAR-100 than CIFAR-10 — likely due to the higher complexity of CIFAR-100 compared to CIFAR-10, which has 10× fewer classes.
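Concretely, the diagonal-plus-low-rank covariance discussed in this subsection can be assembled from a handful of fine-tuning snapshots and then sampled from without ever materialising the full d x d matrix. The sketch below uses a toy model, synthetic data, and a simplified snapshot schedule; it illustrates the structure, and is not the released implementation.

```python
# Illustrative sketch: collect parameter snapshots w_1, ..., w_M during a short
# fine-tuning run, form SWAG-style mean / diagonal / low-rank components, and draw
# prior samples in the factored representation. All names and data are toy.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in feature extractor
opt = torch.optim.SGD(model.parameters(), lr=0.05)

def flat_params(m):
    return torch.cat([p.detach().reshape(-1) for p in m.parameters()])

snapshots = []
for step in range(20):                                # placeholder fine-tuning steps
    x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    snapshots.append(flat_params(model))              # SWAG collects roughly one snapshot per epoch

W = torch.stack(snapshots)                            # M x d matrix of snapshots
w_bar = W.mean(dim=0)                                 # SWA mean
L = 10                                                # rank of the low-rank component
D = W[-L:] - w_bar                                    # last L deviations, shape L x d
sigma_diag = (D ** 2).sum(dim=0) / (L - 1)            # diagonal variance estimate
# Sigma_lowrank = D^T D / (L - 1) is kept implicitly through the factor D.

def sample_prior(n):
    """Samples from N(w_bar, 0.5*diag(sigma_diag) + 0.5*Sigma_lowrank)."""
    z1 = torch.randn(n, w_bar.numel())
    z2 = torch.randn(n, L)
    return (w_bar
            + z1 * torch.sqrt(sigma_diag / 2.0)       # diagonal part
            + z2 @ D / (2.0 * (L - 1)) ** 0.5)        # low-rank part

print(sample_prior(4).shape)                          # (4, d)
```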
To address this consideration, we rescale the learned Gaussian prior by multiplying its covariance matrix by a scalar value. We select the highest performing scalar value across a grid on a holdout set from the downstream task. If we do not scale the covariance enough, our prior will become concentrated around parameters which are inconsistent with the downstream task, and if we scale the covariance too much, our prior will be diffuse and assign too much mass to regions of parameter space which are again inconsistent with the downstream task. We now put this intuition to the test. We fit a Gaussian with mean µ and covariance matrix Σ to the SimCLR pre-training loss. Given our uncertainty regarding the strength of the relationship between the pre-training and downstream tasks, it is unclear if our learned prior would lead to over-confidence on the downstream task. To rectify this problem, we instead assign prior covariance matrix λΣ with λ ≥ 1. Figure 2d shows the accuracy of our method on CIFAR-10 as a function of λ. As expected, we see that we need to make the prior more diffuse if we are to optimize performance. In fact, prior rescaling can be the difference between strong and poor performance. We also see there is an optimal scaling factor where the prior is neither overly diffuse nor concentrated around a solution which is inconsistent with the downstream task. Additionally, small scaling factors constrain us to poor parameters, hurting performance much more than large scaling factors, which simply induce a near-uniform prior and negate the benefits of transfer learning. 3.3 Bayesian Inference After learning a prior and re-scaling it, we finally must draw samples from the downstream task's posterior over the parameters of the entire model, including additional modules, such as classification heads, which were added after pre-training specifically for a particular downstream task. We use a zero-mean isotropic Gaussian prior over these additional parameters, with a scaling that is again tuned on held-out training data from the downstream task. Since we obtain a closed-form prior, the re-scaled distributions we learn are compatible with a wide variety of Bayesian inference algorithms. In our experiments, we choose stochastic gradient Hamiltonian Monte Carlo (SGHMC) [5] and Stochastic Gradient Langevin Dynamics (SGLD) [50] since these samplers simultaneously provide strong performance and tractable computation costs. Once we have obtained samples from the downstream posterior, we use these samples to form a Bayesian model average for test-time predictions. In Figure 2c, we also observe the advantage of Bayesian inference (SGHMC) over MAP estimation (SGD) on the negative log-posterior yielded by our learned prior. Bayesian inference outperforms MAP estimation across all dataset sizes we consider. In general, Bayesian inference can benefit most from an expressive prior. Indeed, priors reshape the whole posterior landscape — and the distinctive feature of a Bayesian approach is that it marginalizes over the whole posterior, rather than simply using an optimum as with standard training. 3.4 Practical Considerations While our method requires a learned prior and a re-scaling coefficient, it is very easy to use and has minimal computational costs over standard fine-tuning routines. In particular, no expert intervention or knowledge of Bayesian methods is required, and only a single hyperparameter needs to be tuned.
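To make the objective in Sections 3.2 and 3.3 concrete, the sketch below shows one way the re-scaled Gaussian prior term, -log N(w; µ, λΣ) up to a constant, can be evaluated when Σ is stored as a diagonal plus a low-rank factor, using the Woodbury identity so the full covariance is never inverted. The factored form, function names, and toy check are our own assumptions for illustration, not the paper's released code.

```python
# Sketch: evaluate 0.5 * (w - mu)^T (lam * Sigma)^{-1} (w - mu) for
# Sigma = diag(s) + B B^T, kept in factored form. This term would be added to the
# downstream training loss for MAP estimation, or to the log-posterior for sampling.
import torch

def prior_quadratic(w, mu, s, B, lam):
    """Quadratic term of the negative log Gaussian prior with covariance lam * (diag(s) + B B^T)."""
    diff = w - mu
    s_inv = 1.0 / s
    # Woodbury: Sigma^{-1} = S^{-1} - S^{-1} B (I + B^T S^{-1} B)^{-1} B^T S^{-1}
    Bt_Sinv_diff = B.t() @ (s_inv * diff)                       # shape (L,)
    M = torch.eye(B.shape[1]) + B.t() @ (s_inv[:, None] * B)    # shape (L, L)
    correction = Bt_Sinv_diff @ torch.linalg.solve(M, Bt_Sinv_diff)
    quad = (diff * s_inv * diff).sum() - correction
    return 0.5 * quad / lam

# Toy consistency check against the dense computation.
torch.manual_seed(0)
d, L, lam = 50, 5, 10.0
mu, w = torch.randn(d), torch.randn(d)
s = torch.rand(d) + 0.1
B = 0.3 * torch.randn(d, L)
dense = 0.5 * (w - mu) @ torch.linalg.solve(lam * (torch.diag(s) + B @ B.t()), w - mu)
print(prior_quadratic(w, mu, s, B, lam).item(), dense.item())   # the two values should match
```

In practice the scaling factor λ is the single hyperparameter mentioned above, selected by a validation grid search exactly as one would tune weight decay.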
In Appendix Section A, we evaluate the costs of each of the three stages of our pipeline in detail: (i) inferring the posterior on the source task; (ii) re-scaling the posterior to become an informative prior for the downstream task; (iii) using the informative prior in the downstream task. In short, (i) has a minor cost (e.g., 1/4 of an epoch on ImageNet) that is a one-time cost when a pre-trained posterior is publicly released, as we have done; (ii) involves a single hyperparameter that can be tuned in the same way as other hyperparameters (simple validation grid search), does not require specialized expertise, and adds 1/7 to the runtime of fine-tuning; (iii) has no additional cost if we do MAP optimization and has a cost comparable to deep ensembles if we run our implementation of Bayesian inference, both in terms of fine-tuning and test-time, and we show that Bayesian inference significantly outperforms deep ensembles. 4 Experiments We now conduct a thorough empirical evaluation of our transfer learning pipeline on image classification and semantic segmentation. We generally consider five approaches, which we find have the following performance ordering: (1) Bayesian inference with learned priors, (2) SGD with learned priors, (3) SGD with standard pre-training, (4) Bayesian inference with non-learned zero-mean priors, (5) SGD with non-learned zero-mean priors. We note that both (1) and (2) are part of our framework, and that SGD with learned priors (2) significantly outperforms standard transfer learning (3). 4.1 Experimental Setting We adopt the ResNet-50 and ResNet-101 architectures [18], and we scale the input image to 224×224 pixels to accommodate these feature extractors designed for ImageNet data. We use a SimCLR (SSL) ResNet-50 checkpoint [6] pre-trained on the ImageNet 1k dataset [9] and fit our prior to the SimCLR loss function. For the supervised setting, we use pre-trained torchvision ResNet-50 and ResNet-101 backbones. We perform image classification experiments on four downstream tasks: CIFAR-10, CIFAR-100 [30], Oxford Flowers-102 [36], and Oxford-IIIT Pets [38]. On semantic segmentation, we use a DeepLabv3+ system [4] with ResNet-50 and ResNet-101 backbone architectures, and we evaluate performance on the Pascal VOC 2012 [14] and Cityscapes [7] datasets. All error bars represent one standard error over 5 runs. We evaluate over a variety of downstream training set sizes, as transfer learning often involves limited downstream data. We provide a detailed description of hyperparameters in Appendix C.1. 4.2 Performance Comparison In Figure 3, we compare the five approaches described above across various dataset sizes. We observe the following: (i) learned priors consistently outperform SGD transfer learning, which in turn outperforms non-learned priors; (ii) conditioned on using the same prior, Bayesian inference often outperforms SGD training; (iii) Bayesian inference adds more value when used with informative priors; (iv) learned priors are relatively most valuable at intermediate data sizes for the downstream task. Point (iv) is particularly interesting: unlike standard SGD transfer learning, which involves an initialization from the pre-training task, the learned priors provide an explicit inductive bias, enabling the resulting model to learn more efficiently from downstream data. Once there is a sufficient amount of downstream data, the source pre-training becomes less important, and the methods become more similar in performance, although there is still a significant gap.
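The data-efficiency effect in point (iv) can be reproduced in miniature with a linear-model analogue of the pipeline, where every stage has a closed form: fit a Gaussian posterior on a source task, re-scale its covariance by a factor chosen on a holdout set, and use it as the prior for MAP estimation on a small, related downstream task. Everything below is synthetic and the grid of λ values is arbitrary; this is a conceptual sketch, not the paper's experimental setup.

```python
# Toy, linear-model analogue of the three-stage pipeline on synthetic data,
# illustrating why an informative (re-scaled) source posterior helps most when
# downstream data are scarce.
import numpy as np

rng = np.random.default_rng(0)
d, noise = 20, 0.1

# Stage 1: "pre-training" -- closed-form Gaussian posterior on the source task
# (Bayesian linear regression with prior N(0, I)).
X_src = rng.normal(size=(500, d))
w_src = rng.normal(size=d)
y_src = X_src @ w_src + noise * rng.normal(size=500)
post_cov = np.linalg.inv(np.eye(d) + X_src.T @ X_src / noise**2)
post_mean = post_cov @ X_src.T @ y_src / noise**2

# Downstream task: related but shifted weights, and very little data.
w_dst = w_src + 0.3 * rng.normal(size=d)
X_tr, X_val = rng.normal(size=(15, d)), rng.normal(size=(100, d))
y_tr = X_tr @ w_dst + noise * rng.normal(size=15)
y_val = X_val @ w_dst + noise * rng.normal(size=100)

def map_fit(prior_mean, prior_cov):
    prec = np.linalg.inv(prior_cov)
    return np.linalg.solve(prec + X_tr.T @ X_tr / noise**2,
                           prec @ prior_mean + X_tr.T @ y_tr / noise**2)

# Stage 2: choose the covariance re-scaling factor lambda on a holdout set.
best_lam = min([1, 3, 10, 30, 100, 300],
               key=lambda lam: np.mean((X_val @ map_fit(post_mean, lam * post_cov) - y_val) ** 2))

# Stage 3: MAP estimation under the re-scaled informative prior vs. an isotropic prior.
w_informative = map_fit(post_mean, best_lam * post_cov)
w_isotropic = map_fit(np.zeros(d), np.eye(d))
print("best lambda:", best_lam)
print("error with informative prior:", np.linalg.norm(w_informative - w_dst))
print("error with isotropic prior:  ", np.linalg.norm(w_isotropic - w_dst))
```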
Computation and comparison to deep ensembles. Our Bayesian model average (BMA) contains 10 samples from the posterior, and thus incurs a higher test-time cost than a single SGD-trained model. We therefore additionally compare to an equally sized ensemble of SGD-trained transfer learning models in Appendix B.3. We see that our BMA strongly outperforms the deep ensemble. In transfer learning, deep ensemble members are initialized with the same pre-trained checkpoint, and fine-tuning tends to stay in the same basin, such that the ensemble components are relatively homogeneous in their predictions. Recall that SGD with our learned priors incurs essentially the same computational costs as standard transfer learning, but has much better performance. Comparison between self-supervised and supervised pre-training. In Appendix B.2 we provide additional evaluations with priors generated using torchvision ResNet-50 and ResNet-101 checkpoints. These priors differ from the SimCLR prior in that they are learned with labeled data rather than a self-supervision task. We see here also that learned supervised priors paired with Bayesian inference consistently outperform all baselines. We also see, notably, that the SSL priors are more transferable and outperform their supervised counterparts. Evaluating uncertainty. We also measure predictive uncertainty via negative log test-likelihood (NLL) in Figure 4b, with additional results in Appendix B.4. We find that Bayesian transfer learning outperforms all other methods. However, we note that even though Bayesian inference with a non-learned prior has inferior accuracy to SGD with pre-training, it has superior test-likelihood — indicating that the confidence of SGD-based transfer learners is significantly miscalibrated, as likelihood accounts for both accuracy and uncertainty. We also evaluate the calibration of uncertainty using reliability diagrams (Figure 9 in the Appendix). These plots demonstrate that Bayesian inference with learned priors is the best calibrated among methods we consider. 4.3 Out-of-Distribution Generalization CIFAR-10.1 [42] is a test set of 2000 natural images modeled after CIFAR-10 with the same classes and image dimensions, following a similar dataset creation process to that described in the original CIFAR-10 paper. Models trained on CIFAR-10 consistently perform much worse on CIFAR-10.1 despite the images being similarly easy for humans to classify. Figure 4a indicates that our method achieves superior performance to SGD-based transfer learning and Bayesian inference with non-learned priors on CIFAR-10.1 across training set sizes, where training sets are sampled from CIFAR-10 training data. 4.4 Semantic Segmentation with ImageNet Priors Popular segmentation methods use ImageNet pre-trained weights as an initialization for backbone parameters [4]. We observe that semantic segmentation models, which contain numerous parameters aside from those of the backbone, still benefit immensely from a learned prior over backbone parameters. Placing a strong prior, even if over only the backbone component of the segmentation system, enables us to more fully harness pre-training and boost performance across the benchmark datasets PASCAL VOC 2012 and Cityscapes. When constructing a prior, we apply SWAG to torchvision (“Supervised”) and SimCLR (“SSL”) ResNet-50 models and a torchvision ResNet-101 model (the authors of SimCLR did not release a ResNet-101 model).
We assign a zero-mean isotropic Gaussian prior to the decoder and atrous convolution parameters of DeepLabv3+, which are not pre-trained. In Table 1, we see that Bayesian inference with learned priors achieves superior Mean-IoU to both baselines (SGD and SGLD) without learned priors. Priors learned on the SimCLR objective again outperform those learned on labeled data. We also present experiments with MAP estimates (SGD) for the loss functions induced by both learned priors in Appendix C.2. We again find that our learned priors boost the performance of MAP estimates as well, and thus are preferable to standard transfer learning, even for practitioners who do not intend to perform Bayesian inference. 5 Discussion Our work reveals several new key insights about deep transfer learning: • Modifying the loss surface on the downstream task through informative priors leads to significant performance gains, with and without Bayesian inference. • Bayesian inference provides a particular performance boost with the informative priors. • Informative priors lead to more data-efficient performance on the downstream task. • The success of this approach depends on capturing key directions in the loss surface of the source task, which we represent through a low-rank plus diagonal covariance matrix. • Standard transfer learning can be significantly miscalibrated, even providing worse likelihood than Bayesian methods from scratch on the downstream task. • Priors learned via self-supervised pre-training transfer better than those learned via supervised learning. In short, pre-training your loss with care provides an easy drop-in replacement for conventional transfer learning that relies on initialization.
1. What is the focus and contribution of the paper on transfer learning in deep neural networks? 2. What are the strengths of the proposed approach, particularly in terms of its relevance and organization? 3. What are the weaknesses of the paper, especially regarding its computational costs and limitations? 4. Do you have any concerns about the applicability of the proposed method to regular practitioners? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes a Bayesian perspective of transfer learning in deep neural networks, whereby a re-scaled Bayesian parameter posterior from a source task is used as a pre-trained prior for a target task. The proposed procedure effectively reshapes the training objective of the target task to more faithfully reflect knowledge learnt from the source task. The authors find that modifying the loss surface of target tasks through informative priors significantly improves performance and calibration, especially with Bayesian inference. Strengths And Weaknesses Strengths: Highly important and relevant topic for the research community; Clear and organized exposition; Well-motivated arguments and experiments. Weaknesses: Proposed approach can incur significant additional computational costs; May require expert knowledge to use effectively. Overall I like this paper. I find the arguments and experiments convincing enough and believe it would be valuable for the research community. One concern is whether the proposed Bayesian inference based transfer learning pipeline would be of limited utility to regular practitioners, as expert knowledge may be required to get these systems working stably as intended. There is also the incurred computational cost, which may deter uptake if the performance gain does not justify it. Questions Since transformers (visual or otherwise) are the main subjects of transfer learning nowadays, it would make a lot of sense to evaluate them too. It is also known that transformers have different loss landscapes than typical CNNs such as ResNets, so it would be interesting to see how performance, robustness and calibration are affected by the proposed transfer learning policy. Limitations See above.
NIPS
Title Multiresolution Kernel Approximation for Gaussian Process Regression Abstract Gaussian process regression generally does not scale to beyond a few thousand data points without applying some sort of kernel approximation method. Most approximations focus on the high eigenvalue part of the spectrum of the kernel matrix, K, which leads to bad performance when the length scale of the kernel is small. In this paper we introduce Multiresolution Kernel Approximation (MKA), the first true broad bandwidth kernel approximation algorithm. Important points about MKA are that it is memory efficient, and it is a direct method, which means that it also makes it easy to approximate K⁻¹ and det(K). 1 Introduction Gaussian Process (GP) regression, and its frequentist cousin, kernel ridge regression, are such natural and canonical algorithms that they have been reinvented many times by different communities under different names. In machine learning, GPs are considered one of the standard methods of Bayesian nonparametric inference [1]. Meanwhile, the same model, under the name Kriging or Gaussian Random Fields, is the de facto standard for modeling a range of natural phenomena from geophysics to biology [2]. One of the most appealing features of GPs is that, ultimately, the algorithm reduces to “just” having to compute the inverse of a kernel matrix, K. Unfortunately, this also turns out to be the algorithm's Achilles heel, since in the general case, the complexity of inverting a dense n×n matrix scales with O(n³), meaning that when the number of training examples exceeds 10⁴ ∼ 10⁵, GP inference becomes problematic on virtually any computer. (In the limited case of evaluating a GP with a fixed Gram matrix on a single training set, GP inference reduces to solving a linear system in K, which scales better with n, but might be problematic when the condition number of K is large.) Over the course of the last 15 years, devising approximations to address this problem has become a burgeoning field. The most common approach is to use one of the so-called Nyström methods [3], which select a small subset {x_{i_1}, . . . , x_{i_m}} of the original training data points as “anchors” and approximate K in the form K ≈ K_{∗,I} C K_{∗,I}^⊤, where K_{∗,I} is the submatrix of K consisting of columns {i_1, . . . , i_m}, and C is a matrix such as the pseudo-inverse of K_{I,I}. Nyström methods often work well in practice and have a mature literature offering strong theoretical guarantees. Still, Nyström is inherently a global low rank approximation, and, as pointed out in [4], a priori there is no reason to believe that K should be well approximable by a low rank matrix: for example, in the case of the popular Gaussian kernel k(x, x′) = exp(−(x − x′)²/(2ℓ²)), as ℓ decreases and the kernel becomes more and more “local”, the number of significant eigenvalues quickly increases. This observation has motivated alternative types of approximations, including local, hierarchical and distributed ones (see Section 2). In certain contexts involving translation invariant kernels yet other strategies may be applicable [5], but these are beyond the scope of the present paper. In this paper we present a new kernel approximation method, Multiresolution Kernel Approximation (MKA), which is inspired by a combination of ideas from hierarchical matrix decomposition algorithms and multiresolution analysis.
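As a quick numerical illustration of the length-scale observation above, the sketch below counts the eigenvalues of a Gaussian kernel matrix that exceed a (deliberately arbitrary) threshold for several values of ℓ on synthetic inputs; the low-rank assumption visibly breaks down as ℓ shrinks. This is only an illustration of the claim, not part of the MKA algorithm.

```python
# Illustration: as the Gaussian kernel's length scale l shrinks, the kernel matrix
# stops being well approximated by a low-rank matrix. Synthetic inputs; the
# "significant eigenvalue" threshold is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=(1500, 3))

def gaussian_kernel(x, ell):
    sq_dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * ell ** 2))

for ell in [1.0, 0.3, 0.1, 0.03]:
    eig = np.linalg.eigvalsh(gaussian_kernel(x, ell))
    significant = int((eig > 1e-6 * eig.max()).sum())
    print(f"l = {ell:4}: {significant:4d} significant eigenvalues out of {len(eig)}")
```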
Some of the important features of MKA are that (a) it is a broad spectrum algorithm that approximates the entire kernel matrix K, not just its top eigenvectors, and (b) it is a so-called “direct” method, i.e., it yields explicit approximations to K^{-1} and det(K). Notations. We define [n] = {1, 2, . . . , n}. Given a matrix A and a tuple I = (i_1, . . . , i_r), A_{I,∗} will denote the submatrix of A formed of rows indexed by i_1, . . . , i_r; similarly, given J = (j_1, . . . , j_p), A_{∗,J} will denote the submatrix formed of columns indexed by j_1, . . . , j_p, and A_{I,J} will denote the submatrix at the intersection of rows i_1, . . . , i_r and columns j_1, . . . , j_p. We extend these notations to the case when I and J are sets in the obvious way. If A is a blocked matrix then ⟦A⟧_{i,j} will denote its (i, j) block. 2 Local vs. global kernel approximation Recall that a Gaussian Process (GP) on a space X is a prior over functions f : X → R defined by a mean function µ(x) = E[f(x)] and covariance function k(x, x′) = Cov(f(x), f(x′)). Using the most elementary model y_i = f(x_i) + ε_i, where ε_i ∼ N(0, σ^2) and σ^2 is a noise parameter, given training data {(x_1, y_1), . . . , (x_n, y_n)}, the posterior is also a GP, with mean µ′(x) = µ(x) + k_x^⊤ (K + σ^2 I)^{-1} y, where k_x = (k(x, x_1), . . . , k(x, x_n)), y = (y_1, . . . , y_n), and covariance k′(x, x′) = k(x, x′) − k_{x′}^⊤ (K + σ^2 I)^{-1} k_x. (1) Thus (here and in the following assuming µ = 0 for simplicity), the maximum a posteriori (MAP) estimate of f is f̂(x) = k_x^⊤ (K + σ^2 I)^{-1} y. (2) Ridge regression, which is the frequentist analog of GP regression, yields the same formula, but regards f̂ as the solution to a regularized risk minimization problem over a Hilbert space H induced by k. We will use “GP” as the generic term to refer to both Bayesian GPs and ridge regression. Letting K′ = (K + σ^2 I), virtually all GP approximation approaches focus on trying to approximate the (augmented) kernel matrix K′ in such a way so as to make inverting it, solving linear systems in K′, or computing det(K′) easier. For the sake of simplicity, in the following we will actually discuss approximating K, since adding the diagonal term usually doesn’t make the problem any more challenging. 2.1 Global low rank methods As in other kernel methods, intuitively, K_{i,j} = k(x_i, x_j) encodes the degree of similarity or closeness between the two points x_i and x_j, as it relates to the degree of correlation/similarity between the value of f at x_i and at x_j. Given that k is often conceived of as a smooth, slowly varying function, one very natural idea is to take a smaller set {x_{i_1}, . . . , x_{i_m}} of “landmark points” or “pseudo-inputs” and approximate k(x, x′) in terms of the similarity of x to each of the landmarks, the relationship of the landmarks to each other, and the similarity of the landmarks to x′. Mathematically, k(x, x′) ≈ Σ_{s=1}^{m} Σ_{j=1}^{m} k(x, x_{i_s}) c_{i_s, i_j} k(x_{i_j}, x′), which, assuming that {x_{i_1}, . . . , x_{i_m}} is a subset of the original point set {x_1, . . . , x_n}, amounts to an approximation of the form K ≈ K_{∗,I} C K_{∗,I}^⊤, with I = {i_1, . . . , i_m}. The canonical choice for C is C = W^+, where W = K_{I,I}, and W^+ denotes the Moore–Penrose pseudoinverse of W. The resulting approximation K ≈ K_{∗,I} W^+ K_{∗,I}^⊤, (3) is known as the Nyström approximation, because it is analogous to the so-called Nyström extension used to extrapolate continuous operators from a finite number of quadrature points. Clearly, the choice of I is critical for a good quality approximation.
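As a concrete reference point for (2) and (3), here is a minimal NumPy sketch of the exact GP posterior mean and of the Nyström approximation. It is ours, not the paper's code; the function names, the Gaussian kernel choice and the uniformly random landmark selection are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, ell=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 ell^2)) for all pairs of rows of X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

def gp_posterior_mean(X, y, Xstar, ell=1.0, sigma2=0.1):
    # Exact MAP estimate (2): f_hat(x) = k_x^T (K + sigma^2 I)^{-1} y  -- O(n^3)
    K = gaussian_kernel(X, X, ell)
    alpha = np.linalg.solve(K + sigma2 * np.eye(len(X)), y)
    return gaussian_kernel(Xstar, X, ell) @ alpha

def nystrom_approx(X, I, ell=1.0):
    # Nystrom approximation (3): K ~= K_{*,I} W^+ K_{*,I}^T with W = K_{I,I}
    KXI = gaussian_kernel(X, X[I], ell)
    W = gaussian_kernel(X[I], X[I], ell)
    return KXI @ np.linalg.pinv(W) @ KXI.T

# usage: 1000 training points, 50 landmarks chosen uniformly at random
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)
I = rng.choice(1000, size=50, replace=False)
K_hat = nystrom_approx(X, I, ell=0.5)              # rank-50 surrogate for K
f_star = gp_posterior_mean(X, y, X[:5], ell=0.5)   # exact predictions at 5 points
```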
Starting with the pioneering papers [6, 3, 7], over the course of the last 15 years a sequence of different sampling strategies have been developed for obtaining I , several with rigorous approximation bounds [8, 9, 10, 11]. Further variations include the ensemble Nyström method [12] and the modified Nyström method [13]. Nyström methods have the advantage of being relatively simple, and having reliable performance bounds. A fundamental limitation, however, is that the approximation (3) is inherently low rank. As pointed out in [4], there is no reason to believe that kernel matrices in general should be close to low rank. An even more fundamental issue, which is less often discussed in the literature, relates to the specific form of (2). The appearance of K ′−1 in this formula suggests that it is the low eigenvalue eigenvectors of K ′ that should dominate the result of GP regression. On the other hand, multiplying the matrix by kx largely cancels this effect, since kx is effectively a row of a kernel matrix similar to K ′, and will likely concentrate most weight on the high eigenvalue eigenvectors. Therefore, ultimately, it is not K ′ itself, but the relationship between the eigenvectors of K ′ and the data vector y that determines which part of the spectrum of K ′ the result of GP regression is most sensitive to. Once again, intuition about the kernel helps clarify this point. In a setting where the function that we are regressing is smooth, and correspondingly, the kernel has a large length scale parameter, it is the global, long range relationships between data points that dominate GP regression, and that can indeed be well approximated by the landmark point method. In terms of the linear algebra, the spectral expansion of K ′ is dominated by a few large eigenvalue eigenvectors, we will call this the “PCA-like” scenario. In contrast, in situations where f varies more rapidly, a shorter lengthscale kernel is called for, local relationships between nearby points become more important, which the landmark point method is less well suited to capture. We call this the “k–nearest neighbor type” scenario. In reality, most non-trivial GP regression problems fall somewhere in between the above two extremes. In high dimensions data points tend to be all almost equally far from each other anyway, limiting the applicability of simple geometric interpretations. Nonetheless, the two scenarios are an illustration of the general point that one of the key challenges in large scale machine learning is integrating information from both local and global scales. 2.2 Local and hierarchical low rank methods Realizing the limitations of the low rank approach, local kernel approximation methods have also started appearing in the literature. Broadly, these algorithms: (1) first cluster the rows/columns of K with some appropriate fast clustering method, e.g., METIS [14] or GRACLUS [15] and block K accordingly; (2) compute a low rank, but relatively high accuracy, approximation JKKi,i ≈ UiΣiU>i to each diagonal block of K; (3) use the {Ui} bases to compute possibly coarser approximations to the JKKi,j off diagonal blocks. This idea appears in its purest form in [16], and is refined in [4] in a way that avoids having to form all rows/columns of the off-diagonal blocks in the first place. Recently, [17] proposed a related approach, where all the blocks in a given row share the same row basis but have different column bases. A major advantage of local approaches is that they are inherently parallelizable. 
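The three-step recipe above fits in a few lines. The sketch below is only illustrative: the clusters are assumed given (a stand-in for METIS/GRACLUS), every block is projected with the same rank r, and the full off-diagonal blocks are formed explicitly, which [4] specifically avoids.

```python
import numpy as np

def block_lowrank_approx(K, clusters, r):
    """Local low rank approximation: a rank-r eigenbasis per diagonal block,
    reused to compress every off-diagonal block. `clusters` is a list of
    index arrays partitioning the rows/columns of K."""
    U = []
    for ci in clusters:                                   # step (2): basis per diagonal block
        w, V = np.linalg.eigh(K[np.ix_(ci, ci)])
        U.append(V[:, np.argsort(-w)[:r]])
    K_hat = np.zeros_like(K)
    for i, ci in enumerate(clusters):                     # step (3): project all blocks
        for j, cj in enumerate(clusters):
            B = K[np.ix_(ci, cj)]
            K_hat[np.ix_(ci, cj)] = U[i] @ (U[i].T @ B @ U[j]) @ U[j].T
    return K_hat
```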
The clustering itself, however, is a delicate, and sometimes not very robust component of these methods. In fact, divide-and-conquer type algorithms such as [18] and [19] can also be included in the same category, even though in these cases the blocking is usually random. A natural extension of the blocking idea would be to apply the divide-and-conquer approach recursively, at multiple different scales. Geometrically, this is similar to recent multiresolution data analysis approaches such as [20]. In fact, hierarchical matrix approximations, including HODLR matrices,H–matrices [21],H2–matrices [22] and HSS matrices [23] are very popular in the numerical analysis literature. While the exact details vary, each of these methods imposes a specific type of block structure on the matrix and forces the off-diagonal blocks to be low rank (Figure 1 in the Supplement). Intuitively, nearby clusters interact in a richer way, but as we move farther away, data can be aggregated more and more coarsely, just as in the fast multipole method [24]. We know of only two applications of the hierarchical matrix methodology to kernel approximation: Börm and Garcke’s H2 matrix approach [25] and O’Neil et al.’s HODLR method [26]. The advantage of H2 matrices is their more intricate structure, allowing relatively tight interactions between neighboring clusters even when the two clusters are not siblings in the tree (e.g. blocks 8 and 9 in Figure 1c in the Supplement). However, the H2 format does not directly help with inverting K or computing its determinant: it is merely a memory-efficient way of storing K and performing matrix/vector multiplies inside an iterative method. HODLR matrices have a simpler structure, but admit a factorization that makes it possible to directly compute both the inverse and the determinant of the approximated matrix in just O(n log n) time. The reason that hierarchical matrix approximations have not become more popular in machine learning so far is that in the case of high dimensional, unstructured data, finding the way to organize {x1, . . . , xn} into a single hierarchy is much more challenging than in the setting of regularly spaced points in R2 or R3, where these methods originate: 1. Hierarchical matrices require making hard assignments of data points to clusters, since the block structure at each level corresponds to partitioning the rows/columns of the original matrix. 2. The hierarchy must form a single tree, which puts deep divisions between clusters whose closest common ancestor is high up in the tree. 3. Finding the hierarchy in the first place is by no means trivial. Most works use a top-down strategy which defeats the inherent parallelism of the matrix structure, and the actual algorithm used (kd-trees) is known to be problematic in high dimensions [27]. 3 Multiresolution Kernel Approximation Our goal in this paper is to develop a data adapted multiscale kernel matrix approximation method, Multiresolution Kernel Approximation (MKA), that reflects the “distant clusters only interact in a low rank fashion” insight of the fast multipole method, but is considerably more flexible than existing hierarchical matrix decompositions. The basic building blocks of MKA are local factorizations of a specific form, which we call core-diagonal compression. Definition 1 We say that a matrix H is c–core-diagonal if Hi,j = 0 unless either i, j ≤ c or i= j. 
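Definition 1 translates directly into a small truncation routine that will be used repeatedly in the stages described next; a minimal sketch (the function name is ours):

```python
import numpy as np

def truncate_core_diagonal(H, c):
    """Project a symmetric matrix onto c-core-diagonal form (Definition 1):
    keep the dense c x c core and the diagonal of the remaining block,
    zero out everything else."""
    out = np.zeros_like(H)
    out[:c, :c] = H[:c, :c]                  # dense core
    idx = np.arange(c, H.shape[0])
    out[idx, idx] = H[idx, idx]              # diagonal of the detail part
    return out
```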
Definition 2 A c–core-diagonal compression of a symmetric matrix A ∈ R^{m×m} is an approximation of the form A ≈ Q^⊤ H Q, (4) where Q is orthogonal and H is c–core-diagonal. Core-diagonal compression is to be contrasted with rank-c sketching, where H would have only the c × c core block, without the rest of the diagonal. From our multiresolution inspired point of view, however, the purpose of (4) is not just to sketch A, but also to split R^m into the direct sum of two subspaces: (a) the “detail space”, spanned by the last m−c rows of Q, responsible for capturing purely local interactions in A, and (b) the “scaling space”, spanned by the first c rows, capturing the overall structure of A and its relationship to other diagonal blocks. Hierarchical matrix methods apply low rank decompositions to many blocks of K in parallel, at different scales. MKA works similarly, by applying core-diagonal compressions. Specifically, the algorithm proceeds by taking K through a sequence of transformations K = K_0 ↦ K_1 ↦ . . . ↦ K_s, called stages. In the first stage: 1. Similar to other local methods, MKA first uses a fast clustering method to cluster the rows/columns of K_0 into clusters C_1^1, . . . , C_{p_1}^1. Using the corresponding permutation matrix C_1 (which maps the elements of the first cluster to (1, 2, . . . , |C_1^1|), the elements of the second cluster to (|C_1^1|+1, . . . , |C_1^1|+|C_2^1|), and so on) we form a blocked matrix K̄_0 = C_1 K_0 C_1^⊤, where ⟦K̄_0⟧_{i,j} = K_{C_i^1, C_j^1}. 2. Each diagonal block of K̄_0 is independently core-diagonally compressed as in (4) to yield H_i^1 = (Q_i^1 ⟦K̄_0⟧_{i,i} (Q_i^1)^⊤)_{CD(c_i^1)}, (5) where the subscript CD(c_i^1) stands for truncation to c_i^1–core-diagonal form. 3. The Q_i^1 local rotations are assembled into a single large orthogonal matrix Q_1 = ⊕_i Q_i^1 and applied to the full matrix to give H_1 = Q_1 K̄_0 Q_1^⊤. 4. The rows/columns of H_1 are rearranged by applying a permutation P_1 that maps the core part of each block to one of the first c_1 := c_1^1 + · · · + c_{p_1}^1 coordinates, and the diagonal part to the rest, giving H_1^{pre} = P_1 H_1 P_1^⊤. 5. Finally, H_1^{pre} is truncated into the core-diagonal form K_1 ⊕ D_1, where K_1 ∈ R^{c_1×c_1} is dense, while D_1 is diagonal. Effectively, K_1 is a compressed version of K_0, while D_1 is formed by concatenating the diagonal parts of each of the H_i^1 matrices. Together, this gives a global core-diagonal compression K_0 ≈ (P_1 Q_1 C_1)^⊤ (K_1 ⊕ D_1) (P_1 Q_1 C_1) of the entire original matrix K_0. The second and further stages of MKA consist of applying the above five steps to K_1, K_2, . . . , K_{s−1} in turn, so ultimately the algorithm yields a kernel approximation K̃ which has a telescoping form K̃ ≈ Q_1^⊤ (Q_2^⊤ (. . . Q_s^⊤ (K_s ⊕ D_s) Q_s . . . ⊕ D_2) Q_2 ⊕ D_1) Q_1, (6) where each Q_ℓ denotes the combined rotation/permutation of stage ℓ. The pseudocode of the full algorithm is in the Supplementary Material. MKA is really a meta-algorithm, in the sense that it can be used in conjunction with different core-diagonal compressors. The main requirements on the compressor are that (a) the core of H should capture the dominant part of A, in particular the subspace that most strongly interacts with other blocks, and (b) the first c rows of Q should be as sparse as possible. We consider two alternatives. Augmented Sparse PCA (SPCA). Sparse PCA algorithms explicitly set out to find a set of vectors {v_1, . . . , v_c} so as to maximize ‖V^⊤ A V‖_{Frob}, where V = [v_1, . . . , v_c], while constraining each vector to be as sparse as possible [28].
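The five steps of a single stage are straightforward to prototype. The sketch below is our own illustrative implementation, not the paper's: the clusters are produced by a crude spatial split (standing in for METIS/GRACLUS), and a dense eigendecomposition stands in for the SPCA/MMF compressors introduced next, so requirement (b) on sparsity is ignored.

```python
import numpy as np

def block_rotation(A):
    # Stand-in compressor: rows of Q are the eigenvectors of A, strongest first,
    # so the first c rows span the block's "scaling" (core) space.
    w, V = np.linalg.eigh(A)
    return V[:, np.argsort(-np.abs(w))].T

def mka_stage(K, clusters, gamma=0.5):
    """One MKA stage: block by cluster, compress each diagonal block, rotate the
    whole matrix, gather the cores, truncate. Returns the compressed core K1,
    the detail diagonal D1, and the bookkeeping needed to undo the stage."""
    n = K.shape[0]
    order = np.concatenate(clusters)                     # step 1: permutation C1
    Kb = K[np.ix_(order, order)]
    Q = np.zeros((n, n))
    core_idx, detail_idx, off = [], [], 0
    for cl in clusters:                                  # step 2: compress diagonal blocks
        m = len(cl)
        c = max(1, int(gamma * m))
        Q[off:off + m, off:off + m] = block_rotation(Kb[off:off + m, off:off + m])
        core_idx += list(range(off, off + c))            # step 3: Q1 = direct sum of the Qi
        detail_idx += list(range(off + c, off + m))
        off += m
    H = Q @ Kb @ Q.T
    P = np.array(core_idx + detail_idx)                  # step 4: move cores to the front
    Hp = H[np.ix_(P, P)]
    c1 = len(core_idx)
    K1 = Hp[:c1, :c1]                                    # step 5: dense core ...
    D1 = np.diag(Hp)[c1:].copy()                         # ... plus the detail diagonal
    return K1, D1, order, Q, P

# one-stage reconstruction check on a small Gaussian kernel matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 0.5)
split = X[:, 0] <= np.median(X[:, 0])                    # crude two-way clustering
K1, D1, order, Q, P = mka_stage(K, [np.where(split)[0], np.where(~split)[0]])
c1 = K1.shape[0]
H1 = np.zeros_like(K)
H1[:c1, :c1] = K1
H1[np.arange(c1, 60), np.arange(c1, 60)] = D1
Kb_approx = Q.T @ H1[np.ix_(np.argsort(P), np.argsort(P))] @ Q
K_approx = Kb_approx[np.ix_(np.argsort(order), np.argsort(order))]
print("relative error:", np.linalg.norm(K - K_approx) / np.linalg.norm(K))
```

Running further stages amounts to re-clustering the compressed K1 and repeating, which yields the telescoping form (6).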
While not all SPCAs guarantee orthogonality, this can be enforced a posteriori via, e.g., QR factorization, yielding Q_sc, the top c rows of Q in (4). Letting U be a basis for the complementary subspace, the optimal choice for the bottom m−c rows, in terms of minimizing the Frobenius norm error of the compression, is Q_wlet = U Ô, where Ô = argmax_{O^⊤ O = I} ‖diag(O^⊤ U^⊤ A U O)‖, the solution to which is of course given by the eigenvectors of U^⊤ A U. The main drawback of the SPCA approach is its computational cost: depending on the algorithm, the complexity of SPCA scales with m^3 or worse [29, 30]. Multiresolution Matrix Factorization (MMF) MMF is a recently introduced matrix factorization algorithm motivated by similar multiresolution ideas as the present work, but applied at the level of individual matrix entries rather than at the level of matrix blocks [31]. Specifically, MMF yields a factorization of the form A ≈ q_1^⊤ · · · q_L^⊤ H q_L · · · q_1 = Q^⊤ H Q, with Q = q_L · · · q_1, where, in the simplest case, the q_i’s are just Givens rotations. Typically, the number of rotations in MMF is O(m). MMF is efficient to compute, and sparsity is guaranteed by the sparsity of the individual q_i’s and the structure of the algorithm. Hence, MMF has complementary strengths to SPCA: it comes with strong bounds on sparsity and computation time, but the quality of the scaling/wavelet space split that it produces is less well controlled. Remarks. We make a few remarks about MKA. 1. Typically, low rank approximations reduce dimensionality quite aggressively. In contrast, in core-diagonal compression c is often on the order of m/2, leading to “gentler” and more faithful kernel approximations. 2. In hierarchical matrix methods, the block structure of the matrix is defined by a single tree, which, as discussed above, is potentially problematic. In contrast, by virtue of reclustering the rows/columns of K_ℓ before every stage, MKA affords a more flexible factorization. In fact, beyond the first stage, it is not even individual data points that MKA clusters, but subspaces defined by the earlier local compressions. 3. While C_ℓ and P_ℓ are presented as explicit permutations, they really just correspond to different ways of blocking K_ℓ, which is done implicitly in practice with relatively little overhead. 4. Step 3 of the algorithm is critical, because it extends the core-diagonal splits found in the diagonal blocks of the matrix to the off-diagonal blocks. Essentially the same is done in [4] and [17]. This operation reflects a structural assumption about K, namely that the same bases that pick out the dominant parts of the diagonal blocks (composed of the first c_i^ℓ rows of the Q_i^ℓ rotations) are also good for compressing the off-diagonal blocks. In the hierarchical matrix literature, for the case of specific kernels sampled in specific ways in low dimensions, it is possible to prove such statements. In our high dimensional and less structured setting, deriving analytical results is much more challenging. 5. MKA is an inherently bottom-up algorithm, including the clustering, and is thus naturally parallelizable and can be implemented in a distributed environment. 6. The hierarchical structure of MKA is similar to that of the parallel version of MMF (pMMF) [32], but the way that the compressions are calculated is different (pMMF tries to minimize an objective that relates to the entire matrix). 4 Complexity and application to GPs For MKA to be effective for large scale GP regression, it must be possible to compute the factorization fast.
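The elementary step of greedy-Jacobi MMF can be illustrated by a single Jacobi (Givens) rotation that annihilates one chosen off-diagonal entry. The pivot selection and the scaling/wavelet bookkeeping of real MMF are omitted, so this is only a sketch of the building block, not MMF itself.

```python
import numpy as np

def jacobi_rotation(A, p, q):
    # Givens rotation G in the (p, q) plane chosen so that (G.T @ A @ G)[p, q] = 0
    theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
    c, s = np.cos(theta), np.sin(theta)
    G = np.eye(A.shape[0])
    G[p, p] = c; G[q, q] = c
    G[p, q] = s; G[q, p] = -s
    return G

# one MMF-style elementary step on a random symmetric positive definite matrix
rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = B @ B.T
G = jacobi_rotation(A, 0, 3)
A1 = G.T @ A @ G
print(abs(A1[0, 3]))      # ~1e-16: the targeted off-diagonal entry has been zeroed
```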
In addition, the resulting approximation K̃ must be symmetric positive semi-definite (spsd) (MEKA, for example, fails to fulfill this [4]). We say that a matrix approximation algorithm A ↦ Ã is spsd preserving if Ã is spsd whenever A is. It is clear from its form that the Nyström approximation is spsd preserving, and so is augmented SPCA compression. MMF has different variants, but the core part of H is always derived by conjugating A by rotations, while the diagonal elements are guaranteed to be positive; therefore MMF is spsd preserving as well. Proposition 1 If the individual core-diagonal compressions in MKA are spsd preserving, then the entire algorithm is spsd preserving. The complexity of MKA depends on the complexity of the local compressions. Next, we assume that to leading order in m this cost is bounded by c_comp m^{α_comp} (with α_comp ≥ 1) and that each row of the Q matrix that is produced is c_sp–sparse. We assume that the MKA has s stages, the size of the final K_s “core matrix” is d_core × d_core, and that the size of the largest cluster is m_max. We assume that the maximum number of clusters in any stage is b_max and that the clustering is close to balanced in the sense that b_max = Θ(n/m_max) with a small constant. We ignore the cost of the clustering algorithm, which varies, but usually scales linearly in s n b_max. We also ignore the cost of permuting the rows/columns of K_ℓ, since this is a memory-bound operation that can be virtualized away. The following results are to leading order in m_max and are similar to those in [32] for parallel MMF. Proposition 2 With the above notations, the number of operations needed to compute the MKA of an n×n matrix is upper bounded by 2 s c_sp n^2 + s c_comp m_max^{α_comp−1} n. Assuming b_max–fold parallelism, this complexity reduces to 2 s c_sp n^2 / b_max + s c_comp m_max^{α_comp}. The memory cost of MKA is just the cost of storing the various matrices appearing in (6). We only include the number of non-zero reals that need to be stored, and not indices, etc. Proposition 3 The storage complexity of MKA is upper bounded by (s c_sp + 1) n + d_core^2. Rather than the general case, it is more informative to focus on MMF-based MKA, which is what we use in our experiments. We consider the simplest case of MMF, referred to as “greedy-Jacobi” MMF, in which each of the q_i elementary rotations is a Givens rotation. An additional parameter of this algorithm is the compression ratio γ, which in our notation is equal to c/n. Some of the special features of this type of core-diagonal compression are: (a) While any given row of the rotation Q produced by the algorithm is not guaranteed to be sparse, Q will be the product of exactly ⌊(1−γ)m⌋ Givens rotations. (b) The leading term in the cost is the m^3 cost of computing A^⊤A, but this is a BLAS operation, so it is fast. (c) Once A^⊤A has been computed, the cost of the rest of the compression scales with m^2. Together, these features result in very fast core-diagonal compressions and a very compact representation of the kernel matrix. Proposition 4 The complexity of computing the MMF-based MKA of an n×n dense matrix is upper bounded by 4 s n^2 + s m_max^2 n, where s = log(d_core/n)/(log γ). Assuming b_max–fold parallelism, this is reduced to 4 s n m_max + m_max^3. Proposition 5 The storage complexity of MMF-based MKA is upper bounded by (2s+1) n + d_core^2. Typically, d_core = O(1). Note that this implies O(n log n) storage complexity, which is similar to Nyström approximations with very low rank. Finally, we have the following results that are critical for using MKA in GPs.
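To get a feel for Propositions 4 and 5, the following back-of-the-envelope computation plugs in illustrative numbers (n, γ, d_core and m_max are our own choices, not values from the paper):

```python
import math

n, gamma, d_core, m_max = 100_000, 0.5, 500, 300
s = math.ceil(math.log(d_core / n) / math.log(gamma))   # number of stages (Proposition 4)
flops_bound = 4 * s * n**2 + s * m_max**2 * n            # Proposition 4 operation bound
storage_bound = (2 * s + 1) * n + d_core**2              # Proposition 5 storage bound
print(s, f"{flops_bound:.1e}", f"{storage_bound:.1e}")
# 8 stages, ~3.9e11 operations and ~2.0e6 stored numbers, versus ~1e15 operations
# for a dense inverse and 1e10 entries just to store the dense kernel matrix.
```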
Proposition 6 Given an approximate kernel K̃ in MMF-based MKA form (6), and a vector z ∈ R^n, the product K̃z can be computed in 4 s n + d_core^2 operations. With b_max–fold parallelism, this is reduced to 4 s m_max + d_core^2. Proposition 7 Given an approximate kernel K̃ in (MMF or SPCA-based) MKA form, the MKA form of K̃^α for any α can be computed in O(n + d_core^3) operations. The complexity of computing the matrix exponential exp(βK̃) for any β in MKA form and the complexity of computing det(K̃) are also O(n + d_core^3). 4.1 MKA–GPs and MKA Ridge Regression The most direct way of applying MKA to speed up GP regression (or ridge regression) is simply using it to approximate the augmented kernel matrix K′ = (K + σ^2 I) and then inverting this approximation using Proposition 7 (with α = −1). Note that the resulting K̃′^{-1} never needs to be evaluated fully, in matrix form. Instead, in equations such as (2), the matrix-vector product K̃′^{-1} y can be computed in “matrix-free” form by cascading y through the analog of (6). Assuming that d_core ≪ n and m_max is not too large, the serial complexity of each stage of this computation scales with at most n^2, which is the same as the complexity of computing K in the first place. One potential issue with the above approach, however, is that because MKA involves repeated truncation of the H_j^{pre} matrices, K̃′ will be a biased approximation to K′; therefore expressions such as (2), which mix an approximate K′ with an exact k_x, will exhibit some systematic bias. In Nyström-type methods (specifically, the so-called Subset of Regressors and Deterministic Training Conditional GP approximations) this problem is addressed by replacing k_x with its own Nyström approximation, k̂_x = K_{∗,I} W^+ k_x^I, where [k_x^I]_j = k(x, x_{i_j}). Although K̂′ = K_{∗,I} W^+ K_{∗,I}^⊤ + σ^2 I is a large matrix, expressions such as k̂_x^⊤ K̂′^{-1} can nonetheless be efficiently evaluated by using a variant of the Sherman–Morrison–Woodbury identity and the fact that W is low rank (see [33]). The same approach cannot be applied to MKA because K̃ is not low rank. Assuming that the testing set {x′_1, . . . , x′_p} is known at training time, however, instead of approximating K or K′, we compute the MKA approximation of the joint train/test kernel matrix with block structure (K  K_∗ ; K_∗^⊤  K_test), where K_{i,j} = k(x_i, x_j) + σ^2 δ_{i,j}, [K_∗]_{i,j} = k(x_i, x′_j) and [K_test]_{i,j} = k(x′_i, x′_j). Writing the inverse of this approximation in blocked form, K̃^{-1} = (A  B ; C  D), and taking the Schur complement of D now recovers an alternative approximation Ǩ^{-1} = A − B D^{-1} C to K^{-1} which is consistent with the off-diagonal block K_∗, leading to our final MKA–GP formula f̂ = K_∗^⊤ Ǩ^{-1} y, where f̂ = (f̂(x′_1), . . . , f̂(x′_p))^⊤. While conceptually this is somewhat more involved than naively estimating K′, assuming p ≪ n, the cost of inverting D is negligible, and the overall serial complexity of the algorithm remains (n+p)^2. In certain GP applications, the O(n^2) cost of writing down the kernel matrix is already forbidding. The one circumstance under which MKA can get around this problem is when the kernel matrix is a matrix polynomial in a sparse matrix L, which is the case most notably for diffusion kernels and certain other graph kernels. Specifically, in the case of MMF-based MKA, since the computational cost is dominated by computing local “Gram matrices” A^⊤A, when L is sparse, and this sparsity is retained from one compression to another, the MKA of sparse matrices can be computed very fast. In the case of graph Laplacians, empirically, the complexity is close to linear in n.
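The joint train/test construction reduces to a few lines once an (approximate) inverse of the joint matrix is available. In the sketch below a dense np.linalg.inv stands in for the MKA-form inverse of Proposition 7, purely to show the Schur-complement bookkeeping; the function name is ours.

```python
import numpy as np

def mka_gp_predict(K_train, K_cross, K_test, y, approx_inv=np.linalg.inv):
    """f_hat = K_*^T (A - B D^{-1} C) y, where [[A, B], [C, D]] is the (approximate)
    inverse of the joint train/test kernel matrix. K_train is the n x n train block
    including the sigma^2 I term, K_cross is n x p, K_test is p x p."""
    n = K_train.shape[0]
    K_joint = np.block([[K_train, K_cross],
                        [K_cross.T, K_test]])
    Kinv = approx_inv(K_joint)                       # MKA would supply this in factored form
    A, B = Kinv[:n, :n], Kinv[:n, n:]
    C, D = Kinv[n:, :n], Kinv[n:, n:]
    K_check_inv = A - B @ np.linalg.inv(D) @ C       # Schur complement of D
    return K_cross.T @ (K_check_inv @ y)
```

With the exact inverse this reproduces the usual GP mean K_∗^⊤ (K + σ^2 I)^{-1} y; the point of MKA is that the joint inverse never has to be formed densely.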
By Proposition 7, the diffusion kernel and certain other graph kernels can also be approximated in about O(n log n) time. 5 Experiments We compare MKA to five other methods: 1. Full: the full GP regression using Cholesky factorization [1]. 2. SOR: the Subset of Regressors method (also equivalent to DTC in mean) [1]. 3. FITC: the Fully Independent Training Conditional approximation, also called Sparse Gaussian Processes using Pseudo-inputs [34]. 4. PITC: the Partially Independent Training Conditional approximation method (also equivalent to PTC in mean) [33]. 5. MEKA: the Memory Efficient Kernel Approximation method [4]. KISS-GP [35] and other interpolation-based methods are not discussed in this paper because, we believe, they mostly apply only to low-dimensional settings. We used custom Matlab implementations [1] for Full, SOR, FITC, and PITC. We used the Matlab code provided by the authors for MEKA. Our algorithm, MKA, was implemented in C++ with a Matlab interface. To get an approximately fair comparison, we set d_core in MKA to be the number of pseudo-inputs. The parallel MMF algorithm was used as the compressor due to its computational strength [32]. The Gaussian kernel is used for all experiments, with one length scale for all input dimensions. Qualitative results. We show the qualitative behavior of each method on the 1D toy dataset from [34]. We sampled the ground truth from a Gaussian process with length scale ℓ = 0.5, and the number of pseudo-inputs (d_core) is 10. We applied cross-validation to select the parameters for each method to fit the data. Figure 1 shows that MKA fits the data almost as well as the Full GP does. As for the other approximate methods, although their fit to the data is smoother, this is to the detriment of capturing the local structure of the underlying data, which verifies MKA’s ability to capture the entire spectrum of the kernel matrix, not just its top eigenvectors. Real data. We tested the efficacy of GP regression on real-world datasets. The data are normalized to mean zero and variance one. We randomly selected 10% of each dataset to be used as a test set. On the other 90% we did five-fold cross-validation to learn the length scale and noise parameter for each method, and the regression results were averaged over five repetitions of this setting. All experiments were run on a 3.4 GHz 8-core machine with 8 GB of memory. Two distinct error measures are used to assess performance: (a) the standardized mean square error (SMSE), (1/n) Σ_{t=1}^{n} (ŷ_t − y_t)^2 / σ̂_⋆^2, where σ̂_⋆^2 is the variance of the test outputs, and (b) the mean negative log probability (MNLP), (1/n) Σ_{t=1}^{n} ((ŷ_t − y_t)^2 / σ̂_⋆^2 + log σ̂_⋆^2 + log 2π), which assess the quality of the predictive mean and the predictive variance, respectively. From Table 1, MKA is competitive in both error measures when the number of pseudo-inputs (d_core) is small, which reveals the low-rank methods’ inability to capture the local structure of the data. We also illustrate the performance sensitivity by varying the number of pseudo-inputs on selected datasets. In Figure 2, for the interval of pseudo-inputs considered, MKA’s performance is robust to d_core, while the low-rank-based methods’ performance changes rapidly, which shows MKA’s ability to achieve good regression results even at severe compression levels. The Supplementary Material gives a more detailed discussion of the datasets and experiments.
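For reference, the two error measures translate directly into code. One assumption is flagged in the comment: as printed, MNLP uses a single variance, whereas many GP papers use the per-test-point predictive variance.

```python
import numpy as np

def smse(y_true, y_pred):
    # standardized mean square error: MSE divided by the variance of the test outputs
    return np.mean((y_pred - y_true) ** 2) / np.var(y_true)

def mnlp(y_true, y_pred, sigma2):
    # mean negative log probability; sigma2 may be a scalar (as in the formula above)
    # or an array of per-point predictive variances (the more common convention)
    return np.mean((y_pred - y_true) ** 2 / sigma2 + np.log(sigma2) + np.log(2 * np.pi))
```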
6 Conclusions In this paper we made the case that whether a learning problem is low rank or not depends on the nature of the data, rather than just the spectral properties of the kernel matrix K. This is easiest to see in the case of Gaussian Processes, which is the algorithm that we focused on in this paper, but it is also true more generally. Most existing sketching algorithms used in GP regression force low rank structure on K, either globally or at the block level. When the nature of the problem is indeed low rank, this might actually act as an additional regularizer and improve performance. When the data does not have low rank structure, however, low rank approximations will fail. Inspired by recent work on multiresolution factorizations, we proposed a multiresolution meta-algorithm, MKA, for approximating kernel matrices, which assumes that the interaction between distant clusters is low rank, while avoiding forcing low rank structure on the data locally, at any scale. Importantly, MKA allows fast direct calculations of the inverse of the kernel matrix and its determinant, which are almost always the computational bottlenecks in GP problems. Acknowledgements This work was completed in part with resources provided by the University of Chicago Research Computing Center. The authors wish to thank Michael Stein for helpful suggestions.
1. How does the proposed multiresolution approximation method for the Gram matrix K differ from other approaches in the literature? 2. What are the advantages and limitations of an approximation that captures both local and global properties of K? 3. Can you provide more details on the clustering step in the methodology and its impact on the results? 4. How does the approximate matrix \tilde{K} converge to the true matrix K as the d_{core} parameter increases? 5. What is the effect of approximating K in multiple stages, and how does reclustering K_l before each stage help with this? 6. Which compression method was used in the experiments, and how did it affect the results? 7. Would it be possible to include an experimental comparison with the use of SPCA or MMF in the context of MKA? 8. Could you provide more information on the predictive variance in addition to the predictive mean in the experimental section? 9. Are there any minor issues or suggestions you have for improving the paper's presentation or clarity?
Review
Review The authors consider the problem of large-scale GP regression; they propose a multiresolution approximation method for the Gram matrix K. In the literature, most approximation approaches assume either (1) a low rank representation for K, which may not be supported by the data, or (2) a block-diagonal form for K, the structure of which has to be identified by clustering methods, which is not trivial for high-dimensional data. The current paper proposes MKA, a novel approximation approach that captures local and global properties of K. The Gram matrix K is approximated as a Kronecker sum of low-rank and diagonal matrices, a fact that significantly reduces the computational complexity of the linear algebra calculations required in the context of GP regression. The paper initiates a very interesting discussion on the nature of local and global kernel approximations, but I feel that certain aspects of the methodology proposed are not sufficiently clear. Below, I list some considerations that I had while reading the paper. What is the effect of the clustering mentioned in step 1 of the methodology? Is the method less sensitive to the result of clustering than the local-based methods in the literature? Does the approximate matrix \tilde{K} converge to the true matrix K as the d_{core} parameter is increased? By looking at the experiments of Figure 2, it appears that MKA is rather insensitive to the d_{core} value. The effect of approximating K in many stages as described in Section 3 is not obvious or trivial. The authors attribute the flexibility of MKA to the reclustering of K_l before every stage. However, I would suspect that a slight variation of the output at any stage would have dramatic consequences in the stages that follow. It is not clear which compression method was used in the experiments. I would think that the cost of SPCA is prohibitive, given the objective of the current paper. It could still be worth mentioning SPCA if there were some experimental comparison with the use of MMF in the context of MKA. I think that the experimental section would be stronger if there were also a demonstration of how well MKA approximates the original GP. Although we get an idea of the predictive mean in Figure 1, there is no information on the predictive variance. Minor comments: The line for the Full GP method in Figure 2 is barely visible.
NIPS
Title Multiresolution Kernel Approximation for Gaussian Process Regression Abstract Gaussian process regression generally does not scale to beyond a few thousands data points without applying some sort of kernel approximation method. Most approximations focus on the high eigenvalue part of the spectrum of the kernel matrix, K, which leads to bad performance when the length scale of the kernel is small. In this paper we introduce Multiresolution Kernel Approximation (MKA), the first true broad bandwidth kernel approximation algorithm. Important points about MKA are that it is memory efficient, and it is a direct method, which means that it also makes it easy to approximate K−1 and det(K). 1 Introduction Gaussian Process (GP) regression, and its frequentist cousin, kernel ridge regression, are such natural and canonical algorithms that they have been reinvented many times by different communities under different names. In machine learning, GPs are considered one of the standard methods of Bayesian nonparametric inference [1]. Meanwhile, the same model, under the name Kriging or Gaussian Random Fields, is the de facto standard for modeling a range of natural phenomena from geophyics to biology [2]. One of the most appealing features of GPs is that, ultimately, the algorithm reduces to “just” having to compute the inverse of a kernel matrix, K. Unfortunately, this also turns out to be the algorithm’s Achilles heel, since in the general case, the complexity of inverting a dense n×n matrix scales with O(n3), meaning that when the number of training examples exceeds 104 ∼ 105, GP inference becomes problematic on virtually any computer1. Over the course of the last 15 years, devising approximations to address this problem has become a burgeoning field. The most common approach is to use one of the so-called Nyström methods [3], which select a small subset {xi1 , . . . , xim} of the original training data points as “anchors” and approximate K in the form K ≈ K∗,ICK>∗,I , where K∗,I is the submatrix of K consisting of columns {i1, . . . , im}, and C is a matrix such as the pseudo-inverse of KI,I . Nyström methods often work well in practice and have a mature literature offering strong theoretical guarantees. Still, Nyström is inherently a global low rank approximation, and, as pointed out in [4], a priori there is no reason to believe that K should be well approximable by a low rank matrix: for example, in the case of the popular Gaussian kernel k(x, x′) = exp(−(x−x′)2/(2`2)), as ` decreases and the kernel becomes more and more “local” the number of significant eigenvalues quickly increases. This observation has motivated alternative types of approximations, including local, hierarchical and distributed ones (see Section 2). In certain contexts involving translation invariant kernels yet other strategies may be applicable [5], but these are beyond the scope of the present paper. In this paper we present a new kernel approximation method, Multiresolution Kernel Approximation (MKA), which is inspired by a combination of ideas from hierarchical matrix decomposition 1 In the limited case of evaluating a GP with a fixed Gram matrix on a single training set, GP inference reduces to solving a linear system in K, which scales better with n, but might be problematic behavior when the condition number of K is large. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. algorithms and multiresolution analysis. 
Some of the important features of MKA are that (a) it is a broad spectrum algorithm that approximates the entire kernel matrixK, not just its top eigenvectors, and (b) it is a so-called “direct” method, i.e., it yields explicit approximations to K−1 and det(K). Notations. We define [n] = {1, 2, . . . , n}. Given a matrix A, and a tuple I = (i1, . . . , ir), AI,∗ will denote the submatrix of A formed of rows indexed by i1, . . . , ir, similarly A∗,J will denote the submatrix formed of columns indexed by j1, . . . , jp, and AI,J will denote the submatrix at the intersection of rows i1, . . . , ir and columns j1, . . . , jp. We extend these notations to the case when I and J are sets in the obvious way. If A is a blocked matrix then JAKi,j will denote its (i, j) block. 2 Local vs. global kernel approximation Recall that a Gaussian Process (GP) on a space X is a prior over functions f : X → R defined by a mean function µ(x) = E[f(x)], and covariance function k(x, x′) = Cov(f(x), f(x′)). Using the most elementary model yi = f(xi) + where ∼ N (0, σ2) and σ2 is a noise parameter, given training data {(x1, y1), . . . , (xn, yn)}, the posterior is also a GP, with mean µ′(x) = µ(x)+k>x (K+ σ2I)−1y, where kx = (k(x, x1), . . . , k(x, xn)), y=(y1, . . . , yn), and covariance k′(x, x′) = k(x, x′)− k>x′(K + σ2I)−1kx. (1) Thus (here and in the following assuming µ = 0 for simplicity), the maximum a posteriori (MAP) estimate of f is f̂(x) = k>x (K + σ 2I)−1y. (2) Ridge regression, which is the frequentist analog of GP regression, yields the same formula, but regards f̂ as the solution to a regularized risk minimization problem over a Hilbert spaceH induced by k. We will use “GP” as the generic term to refer to both Bayesian GPs and ridge regression. Letting K ′ = (K+σ2I), virtually all GP approximation approaches focus on trying to approximate the (augmented) kernel matrix K ′ in such a way so as to make inverting it, solving K ′y =α or computing det(K ′) easier. For the sake of simplicity in the following we will actually discuss approximating K, since adding the diagonal term usually doesn’t make the problem any more challenging. 2.1 Global low rank methods As in other kernel methods, intuitively, Ki,j = k(xi, xj) encodes the degree of similarity or closeness between the two points xi and xj as it relates to the degree of correlation/similarity between the value of f at xi and at xj . Given that k is often conceived of as a smooth, slowly varying function, one very natural idea is to take a smaller set {xi1 , . . . , xim} of “landmark points” or “pseudo-inputs” and approximate k(x, x′) in terms of the similarity of x to each of the landmarks, the relationship of the landmarks to each other, and the similarity of the landmarks to x′. Mathematically, k(x, x′) ≈ m∑ s=1 m∑ j=1 k(x, xis) cis,ij k(xij , x ′), which, assuming that {xi1 , . . . , xim} is a subset of the original point set {x1, . . . , xn}, amounts to an approximation of the form K ≈ K∗,ICK>∗,I , with I = {i1, . . . , im}. The canonical choice for C is C = W+, where W = KI,I , and W+ denotes the Moore-Penrose pseudoinverse of W . The resulting approximation K ≈ K∗,IW+K>∗,I , (3) is known as the Nyström approximation, because it is analogous to the so-called Nyström extension used to extrapolate continuous operators from a finite number of quadrature points. Clearly, the choice of I is critical for a good quality approximation. 
Starting with the pioneering papers [6, 3, 7], over the course of the last 15 years a sequence of different sampling strategies have been developed for obtaining I , several with rigorous approximation bounds [8, 9, 10, 11]. Further variations include the ensemble Nyström method [12] and the modified Nyström method [13]. Nyström methods have the advantage of being relatively simple, and having reliable performance bounds. A fundamental limitation, however, is that the approximation (3) is inherently low rank. As pointed out in [4], there is no reason to believe that kernel matrices in general should be close to low rank. An even more fundamental issue, which is less often discussed in the literature, relates to the specific form of (2). The appearance of K ′−1 in this formula suggests that it is the low eigenvalue eigenvectors of K ′ that should dominate the result of GP regression. On the other hand, multiplying the matrix by kx largely cancels this effect, since kx is effectively a row of a kernel matrix similar to K ′, and will likely concentrate most weight on the high eigenvalue eigenvectors. Therefore, ultimately, it is not K ′ itself, but the relationship between the eigenvectors of K ′ and the data vector y that determines which part of the spectrum of K ′ the result of GP regression is most sensitive to. Once again, intuition about the kernel helps clarify this point. In a setting where the function that we are regressing is smooth, and correspondingly, the kernel has a large length scale parameter, it is the global, long range relationships between data points that dominate GP regression, and that can indeed be well approximated by the landmark point method. In terms of the linear algebra, the spectral expansion of K ′ is dominated by a few large eigenvalue eigenvectors, we will call this the “PCA-like” scenario. In contrast, in situations where f varies more rapidly, a shorter lengthscale kernel is called for, local relationships between nearby points become more important, which the landmark point method is less well suited to capture. We call this the “k–nearest neighbor type” scenario. In reality, most non-trivial GP regression problems fall somewhere in between the above two extremes. In high dimensions data points tend to be all almost equally far from each other anyway, limiting the applicability of simple geometric interpretations. Nonetheless, the two scenarios are an illustration of the general point that one of the key challenges in large scale machine learning is integrating information from both local and global scales. 2.2 Local and hierarchical low rank methods Realizing the limitations of the low rank approach, local kernel approximation methods have also started appearing in the literature. Broadly, these algorithms: (1) first cluster the rows/columns of K with some appropriate fast clustering method, e.g., METIS [14] or GRACLUS [15] and block K accordingly; (2) compute a low rank, but relatively high accuracy, approximation JKKi,i ≈ UiΣiU>i to each diagonal block of K; (3) use the {Ui} bases to compute possibly coarser approximations to the JKKi,j off diagonal blocks. This idea appears in its purest form in [16], and is refined in [4] in a way that avoids having to form all rows/columns of the off-diagonal blocks in the first place. Recently, [17] proposed a related approach, where all the blocks in a given row share the same row basis but have different column bases. A major advantage of local approaches is that they are inherently parallelizable. 
The clustering itself, however, is a delicate, and sometimes not very robust component of these methods. In fact, divide-and-conquer type algorithms such as [18] and [19] can also be included in the same category, even though in these cases the blocking is usually random. A natural extension of the blocking idea would be to apply the divide-and-conquer approach recursively, at multiple different scales. Geometrically, this is similar to recent multiresolution data analysis approaches such as [20]. In fact, hierarchical matrix approximations, including HODLR matrices,H–matrices [21],H2–matrices [22] and HSS matrices [23] are very popular in the numerical analysis literature. While the exact details vary, each of these methods imposes a specific type of block structure on the matrix and forces the off-diagonal blocks to be low rank (Figure 1 in the Supplement). Intuitively, nearby clusters interact in a richer way, but as we move farther away, data can be aggregated more and more coarsely, just as in the fast multipole method [24]. We know of only two applications of the hierarchical matrix methodology to kernel approximation: Börm and Garcke’s H2 matrix approach [25] and O’Neil et al.’s HODLR method [26]. The advantage of H2 matrices is their more intricate structure, allowing relatively tight interactions between neighboring clusters even when the two clusters are not siblings in the tree (e.g. blocks 8 and 9 in Figure 1c in the Supplement). However, the H2 format does not directly help with inverting K or computing its determinant: it is merely a memory-efficient way of storing K and performing matrix/vector multiplies inside an iterative method. HODLR matrices have a simpler structure, but admit a factorization that makes it possible to directly compute both the inverse and the determinant of the approximated matrix in just O(n log n) time. The reason that hierarchical matrix approximations have not become more popular in machine learning so far is that in the case of high dimensional, unstructured data, finding the way to organize {x1, . . . , xn} into a single hierarchy is much more challenging than in the setting of regularly spaced points in R2 or R3, where these methods originate: 1. Hierarchical matrices require making hard assignments of data points to clusters, since the block structure at each level corresponds to partitioning the rows/columns of the original matrix. 2. The hierarchy must form a single tree, which puts deep divisions between clusters whose closest common ancestor is high up in the tree. 3. Finding the hierarchy in the first place is by no means trivial. Most works use a top-down strategy which defeats the inherent parallelism of the matrix structure, and the actual algorithm used (kd-trees) is known to be problematic in high dimensions [27]. 3 Multiresolution Kernel Approximation Our goal in this paper is to develop a data adapted multiscale kernel matrix approximation method, Multiresolution Kernel Approximation (MKA), that reflects the “distant clusters only interact in a low rank fashion” insight of the fast multipole method, but is considerably more flexible than existing hierarchical matrix decompositions. The basic building blocks of MKA are local factorizations of a specific form, which we call core-diagonal compression. Definition 1 We say that a matrix H is c–core-diagonal if Hi,j = 0 unless either i, j ≤ c or i= j. 
Definition 2 A c–core-diagonal compression of a symmetric matrix A ∈ Rm×m is an approximation of the form A ≈ Q>H Q = ( )( )( ) , (4) where Q is orthogonal and H is c–core-diagonal. Core-diagonal compression is to be contrasted with rank c sketching, where H would just have the c× c block, without the rest of the diagonal. From our multiresolution inspired point of view, however, the purpose of (4) is not just to sketch A, but to also to split Rm into the direct sum of two subspaces: (a) the “detail space”, spanned by the last n−c rows of Q, responsible for capturing purely local interactions in A and (b) the “scaling space”, spanned by the first c rows, capturing the overall structure of A and its relationship to other diagonal blocks. Hierarchical matrix methods apply low rank decompositions to many blocks of K in parallel, at different scales. MKA works similarly, by applying core-diagonal compressions. Specifically, the algorithm proceeds by taking K through a sequence of transformations K = K0 7→ K1 7→ . . . 7→ Ks, called stages. In the first stage 1. Similar to other local methods, MKA first uses a fast clustering method to cluster the rows/columns of K0 into clusters C11 , . . . , C1p1 . Using the corresponding permutation matrix C1 (which maps the elements of the first cluster to (1, 2, . . . |C11 |), the elements of the second cluster to (|C11 |+ 1, . . . , |C11 |+ |C12 |), and so on) we form a blocked matrix K0 = C1K0C>1 , where JK0Ki,j = KC1i ,C1j . 2. Each diagonal block of K0 is independently core-diagonally compressed as in (4) to yield H1i = ( Q1i JK0Ki,i (Q1i )> ) CD(c1i ) (5) where CD(c1i ) in the index stands for truncation to c 1 i –core-diagonal form. 3. The Q1i local rotations are assembled into a single large orthogonal matrix Q1 = ⊕ iQ 1 i and applied to the full matrix to give H1 = Q1K0Q1 > . 4. The rows/columns of H1 are rearranged by applying a permutation P1 that maps the core part of each block to one of the first c1 := c11 + . . . c 1 p1 coordinates, and the diagonal part to the rest, giving Hpre1 = P1 H1 P > 1 . 5. Finally, Hpre1 is truncated into the core-diagonal form H1 = K1 ⊕ D1, where K1 ∈ Rc1×c1 is dense, while D1 is diagonal. Effectively, K1 is a compressed version of K0, while D1 is formed by concatenating the diagonal parts of each of the H1i matrices. Together, this gives a global core-diagonal compression K0 ≈ C>1 Q1>P>1︸ ︷︷ ︸ Q>1 (K1⊕D1) P1Q1C1︸ ︷︷ ︸ Q1 of the entire original matrix K0. The second and further stages of MKA consist of applying the above five steps toK1,K2, . . . ,Ks−1 in turn, so ultimately the algorithm yields a kernel approximation K̃ which has a telescoping form K̃ ≈ Q>1 (Q>2 (. . .Q>s (Ks⊕Ds)Qs . . .⊕D2)Q2⊕D1)Q1 (6) The pseudocode of the full algorithm is in the Supplementary Material. MKA is really a meta-algorithm, in the sense that it can be used in conjunction with different corediagonal compressors. The main requirements on the compressor are that (a) the core of H should capture the dominant part of A, in particular the subspace that most strongly interacts with other blocks, (b) the first c rows of Q should be as sparse as possible. We consider two alternatives. Augmented Sparse PCA (SPCA). Sparse PCA algorithms explicitly set out to find a set of vectors {v1, . . . ,vc} so as to maximize ‖V >AV ‖Frob, where V = [v1, . . . ,vc], while constraining each vector to be as sparse as possible [28]. 
While not all SPCAs guarantee orthogonality, this can be enforced a posteriori via e.g., QR factorization, yielding Qsc, the top c rows of Q in (4). Letting U be a basis for the complementary subspace, the optimal choice for the bottom m− c rows in terms of minimizing Frobenius norm error of the compression is Qwlet =UÔ, where Ô = argmax O>O=I ‖ diag(O>U>AUO)‖, the solution to which is of course given by the eigenvectors of U>AU . The main drawback of the SPCA approach is its computational cost: depending on the algorithm, the complexity of SPCA scales with m3 or worse [29, 30]. Multiresolution Matrix Factorization (MMF) MMF is a recently introduced matrix factorization algorithm motivated by similar multiresolution ideas as the present work, but applied at the level of individual matrix entries rather than at the level of matrix blocks [31]. Specifically, MMF yields a factorization of the form A ≈ q>1 . . . q>L︸ ︷︷ ︸ Q> H qL . . . q1︸ ︷︷ ︸ Q , where, in the simplest case, the qi’s are just Givens rotations. Typically, the number of rotations in MMF is O(m). MMF is efficient to compute, and sparsity is guaranteed by the sparsity of the individual qi’s and the structure of the algorithm. Hence, MMF has complementary strengths to SPCA: it comes with strong bounds on sparsity and computation time, but the quality of the scaling/wavelet space split that it produces is less well controlled. Remarks. We make a few remarks about MKA. 1. Typically, low rank approximations reduce dimensionality quite aggressively. In contrast, in core-diagonal compression c is often on the order of m/2, leading to “gentler” and more faithful, kernel approximations. 2. In hierarchical matrix methods, the block structure of the matrix is defined by a single tree, which, as discussed above, is potentially problematic. In contrast, by virtue of reclustering the rows/columns of K` before every stage, MKA affords a more flexible factorization. In fact, beyond the first stage, it is not even individual data points that MKA clusters, but subspaces defined by the earlier local compressions. 3. While C` and P` are presented as explicit permutations, they really just correspond to different ways of blocking Ks, which is done implicitly in practice with relatively little overhead. 4. Step 3 of the algorithm is critical, because it extends the core-diagonal splits found in the diagonal blocks of the matrix to the off-diagonal blocks. Essentially the same is done in [4] and [17]. This operation reflects a structural assumption about K, namely that the same bases that pick out the dominant parts of the diagonal blocks (composed of the first c`i rows of the Q ` i rotations) are also good for compressing the off-diagonal blocks. In the hierarchical matrix literature, for the case of specific kernels sampled in specific ways in low dimensions, it is possible to prove such statements. In our high dimensional and less structured setting, deriving analytical results is much more challenging. 5. MKA is an inherently bottom-up algorithm, including the clustering, thus it is naturally parallelizable and can be implemented in a distributed environment. 6. The hierarchical structure of MKA is similar to that of the parallel version of MMF (pMMF) [32], but the way that the compressions are calculated is different (pMMF tries to minimize an objective that relates to the entire matrix). 4 Complexity and application to GPs For MKA to be effective for large scale GP regression, it must be possible to compute the factorization fast. 
In addition, the resulting approximation K̃ must be symmetric positive semi-definite (spsd) (MEKA, for example, fails to fulfill this [4]). We say that a matrix approximation algorithm A 7→ à is spsd preserving if à is spsd whenever A is. It is clear from its form that the Nyström approximation is spsd preserving , so is augmented SPCA compression. MMF has different variants, but the core part of H is always derived by conjugating A by rotations, while the diagonal elements are guaranteed to be positive, therefore MMF is spsd preserving as well. Proposition 1 If the individual core-diagonal compressions in MKA are spsd preserving, then the entire algorithm is spsd perserving. The complexity of MKA depends on the complexity of the local compressions. Next, we assume that to leading order in m this cost is bounded by ccomp mαcomp (with αcomp≥ 1) and that each row of the Q matrix that is produced is csp–sparse. We assume that the MKA has s stages, the size of the final Ks “core matrix” is dcore × dcore, and that the size of the largest cluster is mmax. We assmue that the maximum number of clusters in any stage is bmax and that the clustering is close to balanced in the sense that that bmax = θ(n/mmax) with a small constant. We ignore the cost of the clustering algorithm, which varies, but usually scales linearly in snbmax. We also ignore the cost of permuting the rows/columns of K`, since this is a memory bound operation that can be virtualized away. The following results are to leading order in mmax and are similar to those in [32] for parallel MMF. Proposition 2 With the above notations, the number of operations needed to compute the MKA of an n×n matrix is upper bounded by 2scspn2 + sccompm αcomp−1 max n. Assuming bmax–fold parallelism, this complexity reduces to 2scspn2/bmax+sccompm αcomp max . The memory cost of MKA is just the cost of storing the various matrices appearing in (6). We only include the number of non-zero reals that need to be stored and not indices, etc.. Proposition 3 The storage complexity of MKA is upper bounded by (scsp +1)n+ d2core. Rather than the general case, it is more informative to focus on MMF based MKA, which is what we use in our experiments. We consider the simplest case of MMF, referred to as “greedy-Jacobi” MMF, in which each of the qi elementary rotations is a Given rotation. An additional parameter of this algorithm is the compression ratio γ, which in our notation is equal to c/n. Some of the special features of this type of core-diagonal compression are: (a) While any given row of the rotation Q produced by the algorithm is not guaranteed to be sparse, Q will be the product of exactly b(1−γ)mc Givens rotations. (b) The leading term in the cost is the m3 cost of computing A>A, but this is a BLAS operation, so it is fast. (c) Once A>A has been computed, the cost of the rest of the compression scales with m2. Together, these features result in very fast core-diagonal compressions and a very compact representation of the kernel matrix. Proposition 4 The complexity of computing the MMF-based MKA of an n×n dense matrix is upper bounded by 4sn2 + sm2maxn, where s = log(dcore/n)/(log γ). Assuming bmax–fold parallelism, this is reduced to 4snmmax +m3max. Proposition 5 The storage complexity of MMF-based MKA is upper bounded by (2s+1)n+ d2core. Typically, dcore = O(1). Note that this implies O(n log n) storage complexity, which is similar to Nyström approximations with very low rank. Finally, we have the following results that are critical for using MKA in GPs. 
Proposition 6 Given an approximate kernel K̃ in MMF-based MKA form (6) and a vector $z \in \mathbb{R}^n$, the product $\tilde K z$ can be computed in $4 s n + d_{\mathrm{core}}^2$ operations. With $b_{\max}$–fold parallelism, this is reduced to $4 s\, m_{\max} + d_{\mathrm{core}}^2$. Proposition 7 Given an approximate kernel K̃ in (MMF- or SPCA-based) MKA form, the MKA form of $\tilde K^{\alpha}$ for any $\alpha$ can be computed in $O(n + d_{\mathrm{core}}^3)$ operations. The complexity of computing the matrix exponential $\exp(\beta \tilde K)$ for any $\beta$ in MKA form and the complexity of computing $\det(\tilde K)$ are also $O(n + d_{\mathrm{core}}^3)$. 4.1 MKA–GPs and MKA Ridge Regression The most direct way of applying MKA to speed up GP regression (or ridge regression) is simply to use it to approximate the augmented kernel matrix $K' = K + \sigma^2 I$ and then invert this approximation using Proposition 7 (with $\alpha = -1$). Note that the resulting $\tilde K'^{-1}$ never needs to be evaluated fully, in matrix form. Instead, in equations such as (2), the matrix-vector product $\tilde K'^{-1} y$ can be computed in “matrix-free” form by cascading $y$ through the analog of (6). Assuming that $d_{\mathrm{core}} \ll n$ and $m_{\max}$ is not too large, the serial complexity of each stage of this computation scales with at most $n^2$, which is the same as the complexity of computing $K$ in the first place. One potential issue with the above approach, however, is that because MKA involves repeated truncation of the $H^{\mathrm{pre}}_j$ matrices, $\tilde K'$ will be a biased approximation to $K'$; therefore expressions such as (2), which mix an approximate $K'$ with an exact $k_x$, will exhibit some systematic bias. In Nyström-type methods (specifically, the so-called Subset of Regressors and Deterministic Training Conditional GP approximations) this problem is addressed by replacing $k_x$ with its own Nyström approximation, $\hat k_x = K_{*,I} W^{+} k^{I}_x$, where $[k^{I}_x]_j = k(x, x_{i_j})$. Although $\hat K' = K_{*,I} W^{+} K_{*,I}^{\top} + \sigma^2 I$ is a large matrix, expressions such as $\hat k_x^{\top} \hat K'^{-1}$ can nonetheless be efficiently evaluated by using a variant of the Sherman–Morrison–Woodbury identity and the fact that $W$ is low rank (see [33]). The same approach cannot be applied to MKA because K̃ is not low rank. Assuming that the test set $\{x'_1, \ldots, x'_p\}$ is known at training time, however, instead of approximating $K$ or $K'$, we compute the MKA approximation of the joint train/test kernel matrix $\mathcal{K} = \begin{pmatrix} K' & K_* \\ K_*^{\top} & K_{\mathrm{test}} \end{pmatrix}$, where $K'_{i,j} = k(x_i, x_j) + \sigma^2 \delta_{i,j}$, $[K_*]_{i,j} = k(x_i, x'_j)$, and $[K_{\mathrm{test}}]_{i,j} = k(x'_i, x'_j)$. Writing the inverse of the resulting approximation in blocked form $\tilde{\mathcal{K}}^{-1} = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$, and taking the Schur complement of $D$, now recovers an alternative approximation $\check K^{-1} = A - B D^{-1} C$ to $K'^{-1}$ which is consistent with the off-diagonal block $K_*$, leading to our final MKA–GP formula $\hat f = K_*^{\top} \check K^{-1} y$, where $\hat f = (\hat f(x'_1), \ldots, \hat f(x'_p))^{\top}$. While conceptually this is somewhat more involved than naively estimating $K'$, assuming $p \ll n$, the cost of inverting $D$ is negligible, and the overall serial complexity of the algorithm remains $(n+p)^2$. In certain GP applications, the $O(n^2)$ cost of writing down the kernel matrix is already prohibitive. The one circumstance under which MKA can get around this problem is when the kernel matrix is a matrix polynomial in a sparse matrix $L$, as is most notably the case for diffusion kernels and certain other graph kernels. Specifically, in the case of MMF-based MKA, since the computational cost is dominated by computing local “Gram matrices” $A^{\top}A$, when $L$ is sparse and this sparsity is retained from one compression to the next, the MKA of sparse matrices can be computed very fast. In the case of graph Laplacians, empirically, the complexity is close to linear in $n$. By Proposition 7, the diffusion kernel and certain other graph kernels can then also be approximated in about $O(n \log n)$ time.
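As a concrete, small-scale illustration of the joint train/test construction just described, the following sketch forms the blocked matrix, inverts it, and applies the Schur complement of D. Everything here uses exact numpy inverses purely for clarity; in MKA the inverse of the joint matrix would instead be obtained through the factorized form via Proposition 7. The Gaussian kernel and all sizes are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(A, B, ell=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 ell^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

def joint_schur_gp_predict(X_train, y, X_test, sigma2=0.1, ell=1.0):
    """Predictive mean via the joint train/test kernel and a Schur complement.

    Builds the blocked matrix [[K + sigma^2 I, K_*], [K_*^T, K_test]],
    inverts it (here exactly; with MKA this is where the factorized
    approximation would be used), and recovers
    K_check^{-1} = A - B D^{-1} C from the blocks of the inverse.
    """
    n = len(X_train)
    K = gaussian_kernel(X_train, X_train, ell) + sigma2 * np.eye(n)
    K_star = gaussian_kernel(X_train, X_test, ell)
    K_test = gaussian_kernel(X_test, X_test, ell)
    joint = np.block([[K, K_star], [K_star.T, K_test]])
    inv = np.linalg.inv(joint)                      # stand-in for the MKA inverse
    A, B = inv[:n, :n], inv[:n, n:]
    C, D = inv[n:, :n], inv[n:, n:]
    K_check_inv = A - B @ np.linalg.solve(D, C)     # Schur complement of D
    return K_star.T @ (K_check_inv @ y)             # f_hat = K_*^T K_check^{-1} y

rng = np.random.default_rng(0)
X_tr, X_te = rng.uniform(-3, 3, (50, 1)), np.linspace(-3, 3, 10)[:, None]
y = np.sin(X_tr[:, 0]) + 0.1 * rng.standard_normal(50)
print(joint_schur_gp_predict(X_tr, y, X_te))
```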
5 Experiments We compare MKA to five other methods: 1. Full: the full GP regression using Cholesky factorization [1]. 2. SOR: the Subset of Regressors method (also equivalent to DTC in mean) [1]. 3. FITC: the Fully Independent Training Conditional approximation, also called Sparse Gaussian Processes using Pseudo-inputs [34]. 4. PITC: the Partially Independent Training Conditional approximation method (also equivalent to PTC in mean) [33]. 5. MEKA: the Memory Efficient Kernel Approximation method [4]. KISS-GP [35] and other interpolation-based methods are not discussed in this paper because, we believe, they mostly apply only to low-dimensional settings. We used custom Matlab implementations [1] for Full, SOR, FITC, and PITC. We used the Matlab code provided by the authors for MEKA. Our algorithm, MKA, was implemented in C++ with a Matlab interface. To get an approximately fair comparison, we set $d_{\mathrm{core}}$ in MKA to be the number of pseudo-inputs. The parallel MMF algorithm was used as the compressor due to its computational strength [32]. The Gaussian kernel is used for all experiments, with one length scale for all input dimensions. Qualitative results. We show the qualitative behavior of each method on the 1D toy dataset from [34]. We sampled the ground truth from a Gaussian process with length scale $\ell = 0.5$; the number of pseudo-inputs ($d_{\mathrm{core}}$) is 10. We applied cross-validation to select the parameters for each method to fit the data. Figure 1 shows that MKA fits the data almost as well as the Full GP does. The other approximate methods produce smoother fits, but this comes at the cost of missing the local structure of the underlying data, which corroborates MKA's ability to capture the entire spectrum of the kernel matrix, not just its top eigenvectors. Real data. We tested the efficacy of GP regression on real-world datasets. The data are normalized to mean zero and variance one. We randomly selected 10% of each dataset to be used as a test set. On the other 90% we did five-fold cross validation to learn the length scale and noise parameter for each method, and the regression results were averaged over five repetitions of this setting. All experiments were run on a 3.4GHz 8-core machine with 8GB of memory. Two distinct error measures are used to assess performance: (a) the standardized mean squared error (SMSE), $\frac{1}{n}\sum_{t=1}^{n}(\hat y_t - y_t)^2/\hat\sigma_\star^2$, where $\hat\sigma_\star^2$ is the variance of the test outputs, and (b) the mean negative log probability (MNLP), $\frac{1}{n}\sum_{t=1}^{n}\big((\hat y_t - y_t)^2/\hat\sigma_\star^2 + \log\hat\sigma_\star^2 + \log 2\pi\big)$, which assess the quality of the predictive mean and the predictive variance, respectively (a minimal sketch of both measures is given at the end of this section). From Table 1, MKA is competitive in both error measures when the number of pseudo-inputs ($d_{\mathrm{core}}$) is small, which exposes the low-rank methods' inability to capture the local structure of the data. We also illustrate the performance sensitivity by varying the number of pseudo-inputs on selected datasets. In Figure 2, over the interval of pseudo-input counts considered, MKA's performance is robust to $d_{\mathrm{core}}$, while the performance of the low-rank based methods changes rapidly; this shows MKA's ability to achieve good regression results even at aggressive compression levels. The Supplementary Material gives a more detailed discussion of the datasets and experiments.
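For reference, a minimal sketch of the two error measures exactly as defined above (the normalizer is the variance of the test outputs, as in the text; the numbers are made up):

```python
import numpy as np

def smse(y_pred, y_true):
    # Standardized mean squared error: MSE divided by the variance of the test outputs.
    var_star = np.var(y_true)
    return np.mean((y_pred - y_true) ** 2) / var_star

def mnlp(y_pred, y_true):
    # Mean negative log probability, using the same normalizing variance.
    var_star = np.var(y_true)
    return np.mean((y_pred - y_true) ** 2 / var_star
                   + np.log(var_star) + np.log(2 * np.pi))

y_true = np.array([0.2, -1.1, 0.7, 1.5])
y_pred = np.array([0.1, -0.9, 0.8, 1.2])
print(smse(y_pred, y_true), mnlp(y_pred, y_true))
```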
6 Conclusions In this paper we made the case that whether a learning problem is low rank or not depends on the nature of the data, rather than just the spectral properties of the kernel matrix K. This is easiest to see in the case of Gaussian Processes, which is the setting we focused on in this paper, but it is also true more generally. Most existing sketching algorithms used in GP regression force low rank structure on K, either globally or at the block level. When the nature of the problem is indeed low rank, this might actually act as an additional regularizer and improve performance. When the data does not have low rank structure, however, low rank approximations will fail. Inspired by recent work on multiresolution factorizations, we proposed a multiresolution meta-algorithm, MKA, for approximating kernel matrices, which assumes that the interaction between distant clusters is low rank, while avoiding forcing low rank structure on the data locally, at any scale. Importantly, MKA allows fast direct calculation of the inverse of the kernel matrix and its determinant, which are almost always the computational bottlenecks in GP problems. Acknowledgements This work was completed in part with resources provided by the University of Chicago Research Computing Center. The authors wish to thank Michael Stein for helpful suggestions.
1. What is the focus and contribution of the paper on kernel approximation? 2. What are the strengths of the proposed approach, particularly in its local factorization and hierarchical factorization aspects? 3. Do you have any concerns regarding the method's superiority compared to other approaches in the field? 4. Can the two novel components, hierarchical factorization and factorization into a c-core-diagonal form, be used independently or only combined? 5. What are the criteria for clustering rows/columns in K_0, deciding the number of clusters p_l, and determining the size of c and corresponding indices CD(c)? 6. How can the presentation of the algorithm be improved? 7. Are there any minor errors or typos in the review that should be addressed?
Review
Review The paper introduces a new kernel approximation method enabling inversion of positive symmetric matrices with linear complexity. In contrast to previous methods, it is not based on a low rank approximation but instead uses local factorization on a so called c-core-diagonal form with a hierarchical factorization. The idea is interesting and the results are very promising. Nevertheless, I am lacking some more intuitive discussion why this approach is superior to other methods in the comparison. As I understand it there are two novel components in this paper. 1. The hierarchical factorization and 2. the factorization into a c-core-diagonal form. The authors have not fully explained why these ingredients are important. Also, can these two strategies be used separately or only combined? I am also lacking details such as: * Based on what criteria are the rows/columns in K_0 clustered? * How are the number of clusters p_l decided? * How do you decide the size of c and the corresponding indices CD(c)? Also, the presentation of the algorithm could be clearer. This would be improved if you move the pseudo algorithm in the supplementary material to the main paper I think. Minors: * The notation K' appears after eq (2) but never introduced (I assume K'=K+sigma^2 I ) * Row 248: "than than" -> "than"
NIPS
Title Multiresolution Kernel Approximation for Gaussian Process Regression Abstract Gaussian process regression generally does not scale to beyond a few thousands data points without applying some sort of kernel approximation method. Most approximations focus on the high eigenvalue part of the spectrum of the kernel matrix, K, which leads to bad performance when the length scale of the kernel is small. In this paper we introduce Multiresolution Kernel Approximation (MKA), the first true broad bandwidth kernel approximation algorithm. Important points about MKA are that it is memory efficient, and it is a direct method, which means that it also makes it easy to approximate K−1 and det(K). 1 Introduction Gaussian Process (GP) regression, and its frequentist cousin, kernel ridge regression, are such natural and canonical algorithms that they have been reinvented many times by different communities under different names. In machine learning, GPs are considered one of the standard methods of Bayesian nonparametric inference [1]. Meanwhile, the same model, under the name Kriging or Gaussian Random Fields, is the de facto standard for modeling a range of natural phenomena from geophyics to biology [2]. One of the most appealing features of GPs is that, ultimately, the algorithm reduces to “just” having to compute the inverse of a kernel matrix, K. Unfortunately, this also turns out to be the algorithm’s Achilles heel, since in the general case, the complexity of inverting a dense n×n matrix scales with O(n3), meaning that when the number of training examples exceeds 104 ∼ 105, GP inference becomes problematic on virtually any computer1. Over the course of the last 15 years, devising approximations to address this problem has become a burgeoning field. The most common approach is to use one of the so-called Nyström methods [3], which select a small subset {xi1 , . . . , xim} of the original training data points as “anchors” and approximate K in the form K ≈ K∗,ICK>∗,I , where K∗,I is the submatrix of K consisting of columns {i1, . . . , im}, and C is a matrix such as the pseudo-inverse of KI,I . Nyström methods often work well in practice and have a mature literature offering strong theoretical guarantees. Still, Nyström is inherently a global low rank approximation, and, as pointed out in [4], a priori there is no reason to believe that K should be well approximable by a low rank matrix: for example, in the case of the popular Gaussian kernel k(x, x′) = exp(−(x−x′)2/(2`2)), as ` decreases and the kernel becomes more and more “local” the number of significant eigenvalues quickly increases. This observation has motivated alternative types of approximations, including local, hierarchical and distributed ones (see Section 2). In certain contexts involving translation invariant kernels yet other strategies may be applicable [5], but these are beyond the scope of the present paper. In this paper we present a new kernel approximation method, Multiresolution Kernel Approximation (MKA), which is inspired by a combination of ideas from hierarchical matrix decomposition 1 In the limited case of evaluating a GP with a fixed Gram matrix on a single training set, GP inference reduces to solving a linear system in K, which scales better with n, but might be problematic behavior when the condition number of K is large. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. algorithms and multiresolution analysis. 
Some of the important features of MKA are that (a) it is a broad spectrum algorithm that approximates the entire kernel matrixK, not just its top eigenvectors, and (b) it is a so-called “direct” method, i.e., it yields explicit approximations to K−1 and det(K). Notations. We define [n] = {1, 2, . . . , n}. Given a matrix A, and a tuple I = (i1, . . . , ir), AI,∗ will denote the submatrix of A formed of rows indexed by i1, . . . , ir, similarly A∗,J will denote the submatrix formed of columns indexed by j1, . . . , jp, and AI,J will denote the submatrix at the intersection of rows i1, . . . , ir and columns j1, . . . , jp. We extend these notations to the case when I and J are sets in the obvious way. If A is a blocked matrix then JAKi,j will denote its (i, j) block. 2 Local vs. global kernel approximation Recall that a Gaussian Process (GP) on a space X is a prior over functions f : X → R defined by a mean function µ(x) = E[f(x)], and covariance function k(x, x′) = Cov(f(x), f(x′)). Using the most elementary model yi = f(xi) + where ∼ N (0, σ2) and σ2 is a noise parameter, given training data {(x1, y1), . . . , (xn, yn)}, the posterior is also a GP, with mean µ′(x) = µ(x)+k>x (K+ σ2I)−1y, where kx = (k(x, x1), . . . , k(x, xn)), y=(y1, . . . , yn), and covariance k′(x, x′) = k(x, x′)− k>x′(K + σ2I)−1kx. (1) Thus (here and in the following assuming µ = 0 for simplicity), the maximum a posteriori (MAP) estimate of f is f̂(x) = k>x (K + σ 2I)−1y. (2) Ridge regression, which is the frequentist analog of GP regression, yields the same formula, but regards f̂ as the solution to a regularized risk minimization problem over a Hilbert spaceH induced by k. We will use “GP” as the generic term to refer to both Bayesian GPs and ridge regression. Letting K ′ = (K+σ2I), virtually all GP approximation approaches focus on trying to approximate the (augmented) kernel matrix K ′ in such a way so as to make inverting it, solving K ′y =α or computing det(K ′) easier. For the sake of simplicity in the following we will actually discuss approximating K, since adding the diagonal term usually doesn’t make the problem any more challenging. 2.1 Global low rank methods As in other kernel methods, intuitively, Ki,j = k(xi, xj) encodes the degree of similarity or closeness between the two points xi and xj as it relates to the degree of correlation/similarity between the value of f at xi and at xj . Given that k is often conceived of as a smooth, slowly varying function, one very natural idea is to take a smaller set {xi1 , . . . , xim} of “landmark points” or “pseudo-inputs” and approximate k(x, x′) in terms of the similarity of x to each of the landmarks, the relationship of the landmarks to each other, and the similarity of the landmarks to x′. Mathematically, k(x, x′) ≈ m∑ s=1 m∑ j=1 k(x, xis) cis,ij k(xij , x ′), which, assuming that {xi1 , . . . , xim} is a subset of the original point set {x1, . . . , xn}, amounts to an approximation of the form K ≈ K∗,ICK>∗,I , with I = {i1, . . . , im}. The canonical choice for C is C = W+, where W = KI,I , and W+ denotes the Moore-Penrose pseudoinverse of W . The resulting approximation K ≈ K∗,IW+K>∗,I , (3) is known as the Nyström approximation, because it is analogous to the so-called Nyström extension used to extrapolate continuous operators from a finite number of quadrature points. Clearly, the choice of I is critical for a good quality approximation. 
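To make equations (2) and (3) concrete, here is a small sketch; the Gaussian kernel, the data, and the uniformly sampled landmark set are all assumptions made purely for illustration:

```python
import numpy as np

def gaussian_kernel(A, B, ell=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 ell^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

def gp_mean(x_query, X, y, sigma2=0.1, ell=1.0):
    # Equation (2): f_hat(x) = k_x^T (K + sigma^2 I)^{-1} y
    K = gaussian_kernel(X, X, ell)
    k_x = gaussian_kernel(X, x_query[None, :], ell)[:, 0]
    return k_x @ np.linalg.solve(K + sigma2 * np.eye(len(X)), y)

def nystrom(K, landmark_idx):
    # Equation (3): K ~ K_{*,I} W^+ K_{*,I}^T with W = K_{I,I}
    K_star_I = K[:, landmark_idx]
    W = K[np.ix_(landmark_idx, landmark_idx)]
    return K_star_I @ np.linalg.pinv(W) @ K_star_I.T

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = np.sin(X[:, 0])
K = gaussian_kernel(X, X)
I = rng.choice(len(X), size=20, replace=False)       # uniformly sampled landmarks
print(gp_mean(X[0], X, y),
      np.linalg.norm(K - nystrom(K, I)) / np.linalg.norm(K))
```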
Starting with the pioneering papers [6, 3, 7], over the course of the last 15 years a sequence of different sampling strategies have been developed for obtaining I , several with rigorous approximation bounds [8, 9, 10, 11]. Further variations include the ensemble Nyström method [12] and the modified Nyström method [13]. Nyström methods have the advantage of being relatively simple, and having reliable performance bounds. A fundamental limitation, however, is that the approximation (3) is inherently low rank. As pointed out in [4], there is no reason to believe that kernel matrices in general should be close to low rank. An even more fundamental issue, which is less often discussed in the literature, relates to the specific form of (2). The appearance of K ′−1 in this formula suggests that it is the low eigenvalue eigenvectors of K ′ that should dominate the result of GP regression. On the other hand, multiplying the matrix by kx largely cancels this effect, since kx is effectively a row of a kernel matrix similar to K ′, and will likely concentrate most weight on the high eigenvalue eigenvectors. Therefore, ultimately, it is not K ′ itself, but the relationship between the eigenvectors of K ′ and the data vector y that determines which part of the spectrum of K ′ the result of GP regression is most sensitive to. Once again, intuition about the kernel helps clarify this point. In a setting where the function that we are regressing is smooth, and correspondingly, the kernel has a large length scale parameter, it is the global, long range relationships between data points that dominate GP regression, and that can indeed be well approximated by the landmark point method. In terms of the linear algebra, the spectral expansion of K ′ is dominated by a few large eigenvalue eigenvectors, we will call this the “PCA-like” scenario. In contrast, in situations where f varies more rapidly, a shorter lengthscale kernel is called for, local relationships between nearby points become more important, which the landmark point method is less well suited to capture. We call this the “k–nearest neighbor type” scenario. In reality, most non-trivial GP regression problems fall somewhere in between the above two extremes. In high dimensions data points tend to be all almost equally far from each other anyway, limiting the applicability of simple geometric interpretations. Nonetheless, the two scenarios are an illustration of the general point that one of the key challenges in large scale machine learning is integrating information from both local and global scales. 2.2 Local and hierarchical low rank methods Realizing the limitations of the low rank approach, local kernel approximation methods have also started appearing in the literature. Broadly, these algorithms: (1) first cluster the rows/columns of K with some appropriate fast clustering method, e.g., METIS [14] or GRACLUS [15] and block K accordingly; (2) compute a low rank, but relatively high accuracy, approximation JKKi,i ≈ UiΣiU>i to each diagonal block of K; (3) use the {Ui} bases to compute possibly coarser approximations to the JKKi,j off diagonal blocks. This idea appears in its purest form in [16], and is refined in [4] in a way that avoids having to form all rows/columns of the off-diagonal blocks in the first place. Recently, [17] proposed a related approach, where all the blocks in a given row share the same row basis but have different column bases. A major advantage of local approaches is that they are inherently parallelizable. 
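The following sketch illustrates the three-step local recipe just outlined on a toy example; a crude sign-based clustering stands in for METIS/GRACLUS, and truncated eigendecompositions stand in for the per-block low rank approximations, so this is a schematic rather than any particular published method:

```python
import numpy as np

def block_lowrank_approx(K, labels, r):
    """Cluster-blocked low rank approximation in the spirit of the methods above.

    For each cluster i, compute a rank-r eigenbasis U_i of the diagonal block
    K_ii, then approximate every block K_ij by U_i (U_i^T K_ij U_j) U_j^T.
    """
    clusters = [np.where(labels == c)[0] for c in np.unique(labels)]
    bases = []
    for idx in clusters:
        w, V = np.linalg.eigh(K[np.ix_(idx, idx)])
        bases.append(V[:, np.argsort(-np.abs(w))[:r]])    # top-r eigenvectors
    K_hat = np.zeros_like(K)
    for i, idx_i in enumerate(clusters):
        for j, idx_j in enumerate(clusters):
            core = bases[i].T @ K[np.ix_(idx_i, idx_j)] @ bases[j]
            K_hat[np.ix_(idx_i, idx_j)] = bases[i] @ core @ bases[j].T
    return K_hat

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 0.5)
labels = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0)    # crude 4-way "clustering"
print(np.linalg.norm(K - block_lowrank_approx(K, labels, r=20)) / np.linalg.norm(K))
```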
The clustering itself, however, is a delicate, and sometimes not very robust component of these methods. In fact, divide-and-conquer type algorithms such as [18] and [19] can also be included in the same category, even though in these cases the blocking is usually random. A natural extension of the blocking idea would be to apply the divide-and-conquer approach recursively, at multiple different scales. Geometrically, this is similar to recent multiresolution data analysis approaches such as [20]. In fact, hierarchical matrix approximations, including HODLR matrices,H–matrices [21],H2–matrices [22] and HSS matrices [23] are very popular in the numerical analysis literature. While the exact details vary, each of these methods imposes a specific type of block structure on the matrix and forces the off-diagonal blocks to be low rank (Figure 1 in the Supplement). Intuitively, nearby clusters interact in a richer way, but as we move farther away, data can be aggregated more and more coarsely, just as in the fast multipole method [24]. We know of only two applications of the hierarchical matrix methodology to kernel approximation: Börm and Garcke’s H2 matrix approach [25] and O’Neil et al.’s HODLR method [26]. The advantage of H2 matrices is their more intricate structure, allowing relatively tight interactions between neighboring clusters even when the two clusters are not siblings in the tree (e.g. blocks 8 and 9 in Figure 1c in the Supplement). However, the H2 format does not directly help with inverting K or computing its determinant: it is merely a memory-efficient way of storing K and performing matrix/vector multiplies inside an iterative method. HODLR matrices have a simpler structure, but admit a factorization that makes it possible to directly compute both the inverse and the determinant of the approximated matrix in just O(n log n) time. The reason that hierarchical matrix approximations have not become more popular in machine learning so far is that in the case of high dimensional, unstructured data, finding the way to organize {x1, . . . , xn} into a single hierarchy is much more challenging than in the setting of regularly spaced points in R2 or R3, where these methods originate: 1. Hierarchical matrices require making hard assignments of data points to clusters, since the block structure at each level corresponds to partitioning the rows/columns of the original matrix. 2. The hierarchy must form a single tree, which puts deep divisions between clusters whose closest common ancestor is high up in the tree. 3. Finding the hierarchy in the first place is by no means trivial. Most works use a top-down strategy which defeats the inherent parallelism of the matrix structure, and the actual algorithm used (kd-trees) is known to be problematic in high dimensions [27]. 3 Multiresolution Kernel Approximation Our goal in this paper is to develop a data adapted multiscale kernel matrix approximation method, Multiresolution Kernel Approximation (MKA), that reflects the “distant clusters only interact in a low rank fashion” insight of the fast multipole method, but is considerably more flexible than existing hierarchical matrix decompositions. The basic building blocks of MKA are local factorizations of a specific form, which we call core-diagonal compression. Definition 1 We say that a matrix H is c–core-diagonal if Hi,j = 0 unless either i, j ≤ c or i= j. 
Definition 2 A c–core-diagonal compression of a symmetric matrix A ∈ Rm×m is an approximation of the form A ≈ Q>H Q = ( )( )( ) , (4) where Q is orthogonal and H is c–core-diagonal. Core-diagonal compression is to be contrasted with rank c sketching, where H would just have the c× c block, without the rest of the diagonal. From our multiresolution inspired point of view, however, the purpose of (4) is not just to sketch A, but to also to split Rm into the direct sum of two subspaces: (a) the “detail space”, spanned by the last n−c rows of Q, responsible for capturing purely local interactions in A and (b) the “scaling space”, spanned by the first c rows, capturing the overall structure of A and its relationship to other diagonal blocks. Hierarchical matrix methods apply low rank decompositions to many blocks of K in parallel, at different scales. MKA works similarly, by applying core-diagonal compressions. Specifically, the algorithm proceeds by taking K through a sequence of transformations K = K0 7→ K1 7→ . . . 7→ Ks, called stages. In the first stage 1. Similar to other local methods, MKA first uses a fast clustering method to cluster the rows/columns of K0 into clusters C11 , . . . , C1p1 . Using the corresponding permutation matrix C1 (which maps the elements of the first cluster to (1, 2, . . . |C11 |), the elements of the second cluster to (|C11 |+ 1, . . . , |C11 |+ |C12 |), and so on) we form a blocked matrix K0 = C1K0C>1 , where JK0Ki,j = KC1i ,C1j . 2. Each diagonal block of K0 is independently core-diagonally compressed as in (4) to yield H1i = ( Q1i JK0Ki,i (Q1i )> ) CD(c1i ) (5) where CD(c1i ) in the index stands for truncation to c 1 i –core-diagonal form. 3. The Q1i local rotations are assembled into a single large orthogonal matrix Q1 = ⊕ iQ 1 i and applied to the full matrix to give H1 = Q1K0Q1 > . 4. The rows/columns of H1 are rearranged by applying a permutation P1 that maps the core part of each block to one of the first c1 := c11 + . . . c 1 p1 coordinates, and the diagonal part to the rest, giving Hpre1 = P1 H1 P > 1 . 5. Finally, Hpre1 is truncated into the core-diagonal form H1 = K1 ⊕ D1, where K1 ∈ Rc1×c1 is dense, while D1 is diagonal. Effectively, K1 is a compressed version of K0, while D1 is formed by concatenating the diagonal parts of each of the H1i matrices. Together, this gives a global core-diagonal compression K0 ≈ C>1 Q1>P>1︸ ︷︷ ︸ Q>1 (K1⊕D1) P1Q1C1︸ ︷︷ ︸ Q1 of the entire original matrix K0. The second and further stages of MKA consist of applying the above five steps toK1,K2, . . . ,Ks−1 in turn, so ultimately the algorithm yields a kernel approximation K̃ which has a telescoping form K̃ ≈ Q>1 (Q>2 (. . .Q>s (Ks⊕Ds)Qs . . .⊕D2)Q2⊕D1)Q1 (6) The pseudocode of the full algorithm is in the Supplementary Material. MKA is really a meta-algorithm, in the sense that it can be used in conjunction with different corediagonal compressors. The main requirements on the compressor are that (a) the core of H should capture the dominant part of A, in particular the subspace that most strongly interacts with other blocks, (b) the first c rows of Q should be as sparse as possible. We consider two alternatives. Augmented Sparse PCA (SPCA). Sparse PCA algorithms explicitly set out to find a set of vectors {v1, . . . ,vc} so as to maximize ‖V >AV ‖Frob, where V = [v1, . . . ,vc], while constraining each vector to be as sparse as possible [28]. 
While not all SPCAs guarantee orthogonality, this can be enforced a posteriori via e.g., QR factorization, yielding Qsc, the top c rows of Q in (4). Letting U be a basis for the complementary subspace, the optimal choice for the bottom m− c rows in terms of minimizing Frobenius norm error of the compression is Qwlet =UÔ, where Ô = argmax O>O=I ‖ diag(O>U>AUO)‖, the solution to which is of course given by the eigenvectors of U>AU . The main drawback of the SPCA approach is its computational cost: depending on the algorithm, the complexity of SPCA scales with m3 or worse [29, 30]. Multiresolution Matrix Factorization (MMF) MMF is a recently introduced matrix factorization algorithm motivated by similar multiresolution ideas as the present work, but applied at the level of individual matrix entries rather than at the level of matrix blocks [31]. Specifically, MMF yields a factorization of the form A ≈ q>1 . . . q>L︸ ︷︷ ︸ Q> H qL . . . q1︸ ︷︷ ︸ Q , where, in the simplest case, the qi’s are just Givens rotations. Typically, the number of rotations in MMF is O(m). MMF is efficient to compute, and sparsity is guaranteed by the sparsity of the individual qi’s and the structure of the algorithm. Hence, MMF has complementary strengths to SPCA: it comes with strong bounds on sparsity and computation time, but the quality of the scaling/wavelet space split that it produces is less well controlled. Remarks. We make a few remarks about MKA. 1. Typically, low rank approximations reduce dimensionality quite aggressively. In contrast, in core-diagonal compression c is often on the order of m/2, leading to “gentler” and more faithful, kernel approximations. 2. In hierarchical matrix methods, the block structure of the matrix is defined by a single tree, which, as discussed above, is potentially problematic. In contrast, by virtue of reclustering the rows/columns of K` before every stage, MKA affords a more flexible factorization. In fact, beyond the first stage, it is not even individual data points that MKA clusters, but subspaces defined by the earlier local compressions. 3. While C` and P` are presented as explicit permutations, they really just correspond to different ways of blocking Ks, which is done implicitly in practice with relatively little overhead. 4. Step 3 of the algorithm is critical, because it extends the core-diagonal splits found in the diagonal blocks of the matrix to the off-diagonal blocks. Essentially the same is done in [4] and [17]. This operation reflects a structural assumption about K, namely that the same bases that pick out the dominant parts of the diagonal blocks (composed of the first c`i rows of the Q ` i rotations) are also good for compressing the off-diagonal blocks. In the hierarchical matrix literature, for the case of specific kernels sampled in specific ways in low dimensions, it is possible to prove such statements. In our high dimensional and less structured setting, deriving analytical results is much more challenging. 5. MKA is an inherently bottom-up algorithm, including the clustering, thus it is naturally parallelizable and can be implemented in a distributed environment. 6. The hierarchical structure of MKA is similar to that of the parallel version of MMF (pMMF) [32], but the way that the compressions are calculated is different (pMMF tries to minimize an objective that relates to the entire matrix). 4 Complexity and application to GPs For MKA to be effective for large scale GP regression, it must be possible to compute the factorization fast. 
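Before analyzing this cost, the following sketch assembles a single MKA stage as described in Section 3. An eigendecomposition-based core-diagonal compressor and a naive balanced clustering are used as stand-ins for MMF/SPCA and METIS/GRACLUS; it illustrates the structure of a stage (blocking, per-block compression, assembling Q, permuting, and truncating to K1 ⊕ D1), not the actual compressors used in the experiments:

```python
import numpy as np
from scipy.linalg import block_diag

def core_diagonal_rotation(A):
    # Stand-in compressor: rotate A into its eigenbasis, ordered so that the
    # leading rows (largest |eigenvalue|) will play the role of the "core".
    w, V = np.linalg.eigh(A)
    return V[:, np.argsort(-np.abs(w))].T

def mka_stage(K0, clusters, ratio=0.5):
    """One stage: block by cluster, compress diagonal blocks, rotate, permute, truncate."""
    Qs, core_idx, detail_idx, offset = [], [], [], 0
    for idx in clusters:
        Qs.append(core_diagonal_rotation(K0[np.ix_(idx, idx)]))   # step 2
        c = max(1, int(ratio * len(idx)))                         # per-block core size c_i
        core_idx += list(range(offset, offset + c))
        detail_idx += list(range(offset + c, offset + len(idx)))
        offset += len(idx)
    perm0 = np.concatenate(clusters)                              # step 1: block by cluster
    Q = block_diag(*Qs)                                           # step 3: Q = direct sum of Q_i
    H = Q @ K0[np.ix_(perm0, perm0)] @ Q.T
    perm1 = np.array(core_idx + detail_idx)                       # step 4: cores first
    Hpre = H[np.ix_(perm1, perm1)]
    c_tot = len(core_idx)
    K1 = Hpre[:c_tot, :c_tot]                                     # step 5: dense core ...
    D1 = np.diag(np.diag(Hpre)[c_tot:])                           # ... plus kept diagonal
    return K1, D1

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 2))
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
clusters = np.array_split(np.arange(64), 4)                       # naive balanced clustering
K1, D1 = mka_stage(K, clusters)
print(K1.shape, D1.shape)                                         # (32, 32) (32, 32)
```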
In addition, the resulting approximation K̃ must be symmetric positive semi-definite (spsd) (MEKA, for example, fails to fulfill this [4]). We say that a matrix approximation algorithm A 7→ à is spsd preserving if à is spsd whenever A is. It is clear from its form that the Nyström approximation is spsd preserving , so is augmented SPCA compression. MMF has different variants, but the core part of H is always derived by conjugating A by rotations, while the diagonal elements are guaranteed to be positive, therefore MMF is spsd preserving as well. Proposition 1 If the individual core-diagonal compressions in MKA are spsd preserving, then the entire algorithm is spsd perserving. The complexity of MKA depends on the complexity of the local compressions. Next, we assume that to leading order in m this cost is bounded by ccomp mαcomp (with αcomp≥ 1) and that each row of the Q matrix that is produced is csp–sparse. We assume that the MKA has s stages, the size of the final Ks “core matrix” is dcore × dcore, and that the size of the largest cluster is mmax. We assmue that the maximum number of clusters in any stage is bmax and that the clustering is close to balanced in the sense that that bmax = θ(n/mmax) with a small constant. We ignore the cost of the clustering algorithm, which varies, but usually scales linearly in snbmax. We also ignore the cost of permuting the rows/columns of K`, since this is a memory bound operation that can be virtualized away. The following results are to leading order in mmax and are similar to those in [32] for parallel MMF. Proposition 2 With the above notations, the number of operations needed to compute the MKA of an n×n matrix is upper bounded by 2scspn2 + sccompm αcomp−1 max n. Assuming bmax–fold parallelism, this complexity reduces to 2scspn2/bmax+sccompm αcomp max . The memory cost of MKA is just the cost of storing the various matrices appearing in (6). We only include the number of non-zero reals that need to be stored and not indices, etc.. Proposition 3 The storage complexity of MKA is upper bounded by (scsp +1)n+ d2core. Rather than the general case, it is more informative to focus on MMF based MKA, which is what we use in our experiments. We consider the simplest case of MMF, referred to as “greedy-Jacobi” MMF, in which each of the qi elementary rotations is a Given rotation. An additional parameter of this algorithm is the compression ratio γ, which in our notation is equal to c/n. Some of the special features of this type of core-diagonal compression are: (a) While any given row of the rotation Q produced by the algorithm is not guaranteed to be sparse, Q will be the product of exactly b(1−γ)mc Givens rotations. (b) The leading term in the cost is the m3 cost of computing A>A, but this is a BLAS operation, so it is fast. (c) Once A>A has been computed, the cost of the rest of the compression scales with m2. Together, these features result in very fast core-diagonal compressions and a very compact representation of the kernel matrix. Proposition 4 The complexity of computing the MMF-based MKA of an n×n dense matrix is upper bounded by 4sn2 + sm2maxn, where s = log(dcore/n)/(log γ). Assuming bmax–fold parallelism, this is reduced to 4snmmax +m3max. Proposition 5 The storage complexity of MMF-based MKA is upper bounded by (2s+1)n+ d2core. Typically, dcore = O(1). Note that this implies O(n log n) storage complexity, which is similar to Nyström approximations with very low rank. Finally, we have the following results that are critical for using MKA in GPs. 
Proposition 6 Given an approximate kernel K̃ in MMF-based MKA form (6), and a vector z ∈Rn the product K̃z can be computed in 4sn+ d2core operations. With bmax–fold parallelism, this is reduced to 4smmax + d2core. Proposition 7 Given an approximate kernel K̃ in (MMF or SPCA-based) MKA form, the MKA form of K̃α for any α can be computed in O(n + d3core) operations. The complexity of computing the matrix exponential exp(βK̃) for any β in MKA form and the complexity of computing det(K̃) are also O(n+ d3core). 4.1 MKA–GPs and MKA Ridge Regression The most direct way of applying MKA to speed up GP regression (or ridge regression) is simply using it to approximate the augmented kernel matrix K ′ = (K + σ2I) and then inverting this approximation using Proposition 7 (with α = −1). Note that the resulting K̃ ′−1 never needs to be evaluated fully, in matrix form. Instead, in equations such as (2), the matrix-vector product K̃ ′−1y can be computed in “matrix-free” form by cascading y through the analog of (6). Assuming that dcore n and mmax is not too large, the serial complexity of each stage of this computation scales with at most n2, which is the same as the complexity of computing K in the first place. One potential issue with the above approach however is that because MKA involves repeated truncation of the Hprej matrices, K̃ ′ will be a biased approximation to K, therefore expressions such as (2) which mix an approximate K ′ with an exact kx will exhibit some systematic bias. In Nyström type methods (specifically, the so-called Subset of Regressors and Deterministic Training Conditional GP approximations) this problem is addressed by replacing kx with its own Nyström approximation, k̂x = K∗,IW+kIx, where [k̂ I x]j = k(x, xij ). Although K̂ ′ = K∗,IW +K>∗,I +σ 2I is a large matrix, expressions such as k̂>x K̂ ′−1 can nonetheless be efficiently evaluated by using a variant of the Sherman–Morrison–Woodbury identity and the fact that W is low rank (see [33]). The same approach cannot be applied to MKA because K̃ is not low rank. Assuming that the testing set {x1, . . . , xp} is known at training time, however, instead of approximatingK orK ′, we compute the MKA approximation of the joint train/test kernel matrix K = ( K K∗ K>∗ Ktest ) where Ki,j = k(xi, xj) + σ 2 [K∗]i,j = k(xi, x ′ j) [Ktest]i,j = k(x ′ i, x ′ j). Writing K−1 in blocked form K̃−1 = ( A B C D ) , and taking the Schur complement of D now recovers an alternative approximation Ǩ−1 = A − BD−1C to K−1 which is consistent with the off-diagonal block K∗ leading to our final MKA–GP formula f̂ = K>∗ Ǩ −1y, where f̂ = (f̂(x′1), . . . , f̂(x ′ p)) >. While conceptually this is somewhat more involved than naively estimating K ′, assuming p n, the cost of inverting D is negligible, and the overall serial complexity of the algorithm remains (n+ p)2. In certain GP applications, the O(n2) cost of writing down the kernel matrix is already forbidding. The one circumstance under which MKA can get around this problem is when the kernel matrix is a matrix polynomial in a sparse matrix L, which is most notably for diffusion kernels and certain other graph kernels. Specifically in the case of MMF-based MKA, since the computational cost is dominated by computing local “Gram matrices”A>A, when L is sparse, and this sparsity is retained from one compression to another, the MKA of sparse matrices can be computed very fast. In the case of graph Laplacians, empirically, the complexity is close to linear in n. 
By Proposition 7, the diffusion kernel and certain other graph kernels can also be approximated in about O(n log n) time. 5 Experiments We compare MKA to five other methods: 1. Full: the full GP regression using Cholesky factorization [1]. 2. SOR: the Subset of Regressors method (also equivalent to DTC in mean) [1]. 3. FITC: the Fully Independent Training Conditional approximation, also called Sparse Gaussian Processes using Pseudo-inputs [34]. 4. PITC: the Partially Independent Training Conditional approximation method (also equivalent to PTC in mean) [33]. 5. MEKA: the Memory Efficient Kernel Approximation method [4]. The KISS-GP [35] and other interpolation based methods are not discussed in this paper, because, we believe, they mostly only apply to low dimensional settings. We used custom Matlab implementations [1] for Full, SOR, FITC, and PITC. We used the Matlab codes provided by the author for MEKA. Our algorithm MKA was implemented in C++ with the Matlab interface. To get an approximately fair comparison, we set dcore in MKA to be the number of pseudo-inputs. The parallel MMF algorithm was used as the compressor due to its computational strength [32]. The Gaussian kernel is used for all experiments with one length scale for all input dimensions. Qualitative results. We show the qualitative behavior of each method on the 1D toy dataset from [34]. We sampled the ground truth from a Gaussian processes with length scale ` = 0.5 and number of pseudo-inputs (dcore) is 10. We applied cross-validation to select the parameters for each method to fit the data. Figure 1 shows that MKA fits the data almost as well as the Full GP does. In terms of the other approximate methods, although their fit to the data is smoother, this is to the detriment of capturing the local structure of the underlying data, which verifies MKA’s ability to capture the entire spectrum of the kernel matrix, not just its top eigenvectors. Real data. We tested the efficacy of GP regression on real-world datasets. The data are normalized to mean zero and variance one. We randomly selected 10% of each dataset to be used as a test set. On the other 90% we did five-fold cross validation to learn the length scale and noise parameter for each method and the regression results were averaged over repeating this setting five times. All experiments were ran on a 3.4GHz 8 core machine with 8GB of memory. Two distinct error measures are used to assess performance: (a) standardized mean square error (SMSE), 1n ∑n t=1(ŷt− yt) 2/σ̂2?, where σ̂ 2 ? is the variance of test outputs, and (2) mean negative log probability (MNLP) 1 n ∑n t=1 ( (ŷt − yt)2/σ̂2? + log σ̂2? + log 2π ) , each of which corresponds to the predictive mean and variance in error assessment. From Table 1, we are competitive in both error measures when the number of pseudo-inputs (dcore) is small, which reveals low-rank methods’ inability in capturing the local structure of the data. We also illustrate the performance sensitivity by varying the number of pseudo-inputs on selected datasets. In Figure 2, for the interval of pseudo-inputs considered, MKA’s performance is robust to dcore, while low-rank based methods’ performance changes rapidly, which shows MKA’s ability to achieve good regression results even with a crucial compression level. The Supplementary Material gives a more detailed discussion of the datasets and experiments. 
6 Conclusions In this paper we made the case that whether a learning problem is low rank or not depends on the nature of the data, rather than just the spectral properties of the kernel matrix K. This is easiest to see in the case of Gaussian Processes, which is the setting we focused on in this paper, but it is also true more generally. Most existing sketching algorithms used in GP regression force low rank structure on K, either globally or at the block level. When the nature of the problem is indeed low rank, this might actually act as an additional regularizer and improve performance. When the data does not have low rank structure, however, low rank approximations will fail. Inspired by recent work on multiresolution factorizations, we proposed a multiresolution meta-algorithm, MKA, for approximating kernel matrices, which assumes that the interaction between distant clusters is low rank, while avoiding forcing low rank structure on the data locally, at any scale. Importantly, MKA allows fast direct calculation of the inverse of the kernel matrix and its determinant, which are almost always the computational bottlenecks in GP problems. Acknowledgements This work was completed in part with resources provided by the University of Chicago Research Computing Center. The authors wish to thank Michael Stein for helpful suggestions.
1. What is the focus of the paper in terms of kernel-based methods? 2. What is the novelty of the proposed approach compared to other methods? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. What are the strengths of the proposed algorithm regarding its computational complexity? 5. Are there any concerns or limitations regarding the applicability of the method to real-world data?
Review
Review The paper applies a fast hierarchical matrix decomposition to the task for inverting and finding the determinant of the kernel matrix of kernel based methods. The overall complexity of the algorithm is equivalent to or lower then calculating the values in K, O(n^2). In contrast to inducing point based methods, here the kernel is decomposed in such a way as to capture both the long range (dubbed "PCA-like") and short-range (dubbed "k-nearest neighbour type") interactions in data. The paper is clearly written and gives good intuitions as why the approach works, a good description of the algorithm and compelling results on real data.
NIPS
Title Novelty Search in Representational Space for Sample Efficient Exploration Abstract We present a new approach for efficient exploration which leverages a lowdimensional encoding of the environment learned with a combination of modelbased and model-free objectives. Our approach uses intrinsic rewards that are based on the distance of nearest neighbors in the low dimensional representational space to gauge novelty. We then leverage these intrinsic rewards for sampleefficient exploration with planning routines in representational space for hard exploration tasks with sparse rewards. One key element of our approach is the use of information theoretic principles to shape our representations in a way so that our novelty reward goes beyond pixel similarity. We test our approach on a number of maze tasks, as well as a control problem and show that our exploration approach is more sample-efficient compared to strong baselines. 1 Introduction In order to solve a task efficiently in Reinforcement Learning (RL), one of the main challenges is to gather informative experiences via an efficient exploration of the state space. A common approach to exploration is to leverage intrinsic rewards correlated with some metric or score for novelty (Schmidhuber, 2010; Stadie et al., 2015; Houthooft et al., 2016). With intrinsic rewards, an agent can be incentivized to efficiently explore its state space. A direct approach to calculating these novelty scores is to derive a reward based on the observations, such as a count-based reward (Bellemare et al., 2016; Ostrovski et al., 2017) or a prediction-error based reward (Burda et al., 2018b). However, an issue occurs when measuring novelty directly from the raw observations, as some information in pixel space (such as randomness or backgrounds) may be irrelevant. In this case, if an agent wants to efficiently explore its state space it should only focus on meaningful and novel information. In this work, we propose a method of sample-efficient exploration by leveraging intrinsic rewards in a meaningful latent state space. To build a meaningful state abstraction, we view Model-based RL (MBRL) from an information theoretic perspective - we optimize our dynamics learning through the Information Bottleneck (Tishby et al., 2000) principle. We also combine both model-based and model-free components through a joint representation. This method encodes high-dimensional observations into lower-dimensional representations such that states that are close in dynamics are brought close together in representation space (François-Lavet et al., 2018). We also add additional constraints to ensure that a measure of distance between abstract states is meaningful. We leverage these properties of our representation to formulate a novelty score based on Euclidean distance in low-dimensional representation space and we then use this score to generate intrinsic rewards that we can exploit for efficient exploration. One important element of our exploration algorithm is that we take a Model Predictive Control (MPC) approach (Garcia et al., 1989) and perform actions only after our model is sufficiently accurate (and hence ensure an accurate novelty heuristic). Through this training scheme, our agent is 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. also able to learn a meaningful representation of its state space in a sample-efficient manner. The code with all experiments is available 1. 
2 Problem setting An agent interacts with its environment over discrete timesteps, modeled as a Markov Decision Process (MDP), defined by the 6-tuple (S,S0,A, τ,R,G) (Puterman, 1994). In this setting, S is the state space, S0 is the initial state distribution, A is the discrete action space, τ : S × A → S is the transition function that is assumed deterministic (with the possibility of extension to stochastic environments with generative methods), R : S × A → R is the reward function (R = [−1, 1]), G : S × A → [0, 1) is the per timestep discount factor. At timestep t in state st ∈ S, the agent chooses an action at ∈ A based on policy π : S × A → [0, 1], such that at ∼ π(st, ·). After taking at, the agent is in state st+1 = τ(st, at) and receives reward rt ∼ R(st, at) and a discount factor γt ∼ G(st, at). Over n environment steps, we define the buffer of previously visited states as B = (s1, . . . , sn), where si ∈ S ∀i ∈ N. In RL, the usual objective is to maximize the sum of expected future rewards Vπ(s) = Eπ [ rt + ∑∞ i=1 (∏i−1 j=0 γt+j ) rt+i|s = st ] . To learn a policy π that maximizes the expected return, an RL agent has to efficiently explore its environment (reach novel states in as few steps as possible). In this paper, we consider tasks with sparse rewards or even no rewards, and are interested in exploration strategies that require as few steps as possible to explore the state space. 3 Abstract state representations We focus on learning a lower-dimensional representation of state when our state (or observations in the partially observable case (Kaelbling et al., 1998)) is high-dimensional (Dayan, 1993; Tamar et al., 2016; Silver et al., 2016; Oh et al., 2017; de Bruin et al., 2018; Ha and Schmidhuber, 2018; François-Lavet et al., 2018; Hafner et al., 2018; Gelada et al., 2019). 3.1 Information Bottleneck We first motivate our methods for model learning. To do so, we consider the Information Bottleneck (IB) (Tishby et al., 2000) principle. Let Z denote the original source message space and Z̃ denote its compressed representation. As opposed to traditional lossless compression where we seek to find corresponding encodings Z̃ that compresses all aspects of Z, in IB we seek to preserve only relevant information in Z̃ with regards to another relevance variable, Y . For example when looking to compress speech waveforms (Z) if our task at hand is speech recognition, then our relevance variable Y would be a transcript of the speech. Our representation Z̃ would only need to maximize relevant information about the transcript Y instead of its full form including tone, pitch, background noise etc. We can formulate this objective by minimizing the following functional with respect to p(z̃ | z): L(p(z̃ | z)) = I[Z; Z̃]− βI[Z̃;Y ] where I[·; ·] is the Mutual Information (MI) between two random variables. β is the Lagrange multiplier for the amount of information our encoding Z̃ is allowed to quantify about Y . This corresponds to a trade-off between minimizing the encoding rate I[Z; Z̃] and maximizing the mutual information between the encoding and our random variable Y . We now apply this principle to representation learning of state in MBRL. If our source message space is our state S′ and our encoded message is X ′, then to distill the most relevant information with regards to the dynamics of our environment one choice of relevance variable is {X,A}, i.e. our encoded state in the previous timestep together with the presence of an action. 
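Before doing so, a toy numeric illustration of the decomposition in Equation (2) may be helpful. The sketch below estimates H[X'], H[X'|X,A] and their difference from empirical counts on a small made-up discrete MDP (the transition rule is invented purely for illustration):

```python
import numpy as np
from collections import Counter

def entropy(counts):
    # Shannon entropy (in bits) of an empirical distribution given by counts.
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

# Made-up transitions (x, a, x') from a 4-state, 2-action toy MDP:
# action 0 moves deterministically, action 1 jumps to a random state.
rng = np.random.default_rng(0)
transitions = []
for _ in range(5000):
    x, a = int(rng.integers(4)), int(rng.integers(2))
    x_next = (x + 1) % 4 if a == 0 else int(rng.integers(4))
    transitions.append((x, a, x_next))

H_xnext = entropy(Counter(xn for _, _, xn in transitions))          # H[X']
H_joint = entropy(Counter(transitions))                              # H[X, A, X']
H_xa = entropy(Counter((x, a) for x, a, _ in transitions))           # H[X, A]
H_cond = H_joint - H_xa                                              # H[X' | X, A]
mi = H_xnext - H_cond                                                # I[{X,A}; X']
print(f"H[X']={H_xnext:.3f}  H[X'|X,A]={H_cond:.3f}  I={mi:.3f}")
```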
This gives us the functional L(p(x′ | s′)) = I[S′;X ′]− βI[X ′; {X,A}]. (1) In our work, we look to find methods to minimize this functional for an encoding that maximizes the predictive ability of our dynamics model. We first aim to minimize our encoding rate I[S′;X ′]. Since encoding rate is a measure of the amount of bits transmitted per message S′, representation dimension is analogous to number of bits per message. This principle of minimizing encoding rate guides our selection of representation dimension 1https://github.com/taodav/nsrs - for every environment, we try to choose the smallest representation dimension possible such that the representation can still encapsulate model dynamics as we understand them. For example, in a simple Gridworld example, we look to only encode agent position in the grid-world. Now let us consider the second term in Equation 1. Our goal is to learn an optimally predictive model of our environment. To do so we first consider the MI between the random variable denoting our state representation X , in the presence of the random variable representing actions A and the random variable denoting the state representation in the next timestep X ′ (Still, 2009). Note that MI is a metric and is symmetric: I[{X,A} ; X ′] = Ep(x′,x,a) [ log ( p(x′ | x, a) p(x′) )] = H[X ′]−H[X ′ | X,A] (2) This quantity is a measure of our dynamics model’s predictive ability. If we consider the two entropy terms (denoted H[·]), we see that H[X ′] constitutes the entropy of our state representation and H[X ′ | X,A] as the entropy of the next state X ′ given our current state X and an action A. Recall that we are trying to minimize I[X ′;S′] and maximize I[X ′; {X,A}] with respect to some encoding function X = e(S). In the next section, we describe our approach for this encoding function as well as dynamics learning in MBRL. 3.2 Encoding and dynamics learning For our purposes, we use a neural encoder ê : S → X parameterized by θê to map our highdimensional state space into lower-dimensional abstract representations, where X ⊆ RnX . The dynamics are learned via the following functions: a transition function τ̂ : X × A→ X parameterized by θτ̂ , a reward function r̂ : X ×A→ [−1, 1] parameterized by θr̂, and a per timestep discount factor function γ̂ : X × A → [0, 1) parameterized by θγ̂ . This discount factor is only learned to predict terminal states, where γ = 0. In order to leverage all past experiences, we use an off-policy learning algorithm that samples transition tuples (s, a, r, γ, s′) from a replay buffer. We first encode our current and next states with our encoder to get x← ê(s; θê), x′ ← ê(s′; θê). The Q-function is learned using the DDQN algorithm (van Hasselt et al., 2015), which uses the target: Y = r + γQ(ê(s′; θê−), argmax a′∈A Q(x′, a′; θQ); θQ−), where θQ− and θê− are parameters of an earlier buffered Q-function (or our target Q-function) and encoder respectively. The agent then minimizes the following loss: LQ(θQ) = (Q(x, a; θQ)− Y )2. We learn the dynamics of our environment through the following losses: LR(θê, θr̂) = |r − r̂(x, a; θr̂)|2 , LG(θê, θγ̂) = |γ − γ̂(x, a; θγ̂)|2 and our transition loss Lτ (θê, θτ̂ ) = ||[x+ τ̂(x, a; θτ̂ )]− x′||22. (3) Note that our transition function learns the difference (given an action) between previous state x and current state x′. By jointly learning the weights of the encoder and the different components, the abstract representation is shaped in a meaningful way according to the dynamics of the environment. 
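A condensed sketch of how the losses above might be computed on a batch is given below. This is a hedged illustration rather than the authors' released code: the encoder, transition, reward, discount and Q networks are assumed to be small torch.nn modules defined elsewhere, actions are assumed one-hot, and shapes are as indicated in the comments.

```python
import torch
import torch.nn.functional as F

def model_losses(batch, encoder, transition, reward_fn, discount_fn,
                 q_net, q_target, encoder_target):
    """Losses of Section 3.2 on one batch (a sketch).

    batch: (s, a, r, g, s_next) with s, s_next of shape [B, obs_dim],
           a one-hot of shape [B, n_actions], r and g of shape [B].
    """
    s, a, r, g, s_next = batch
    x = encoder(s)                                   # x  <- e(s; theta_e)
    x_next = encoder(s_next)                         # x' <- e(s'; theta_e)
    xa = torch.cat([x, a], dim=1)

    # model-based losses: reward, discount, and the Eq. (3) transition loss
    loss_r = F.mse_loss(reward_fn(xa).squeeze(-1), r)
    loss_g = F.mse_loss(discount_fn(xa).squeeze(-1), g)
    loss_tau = F.mse_loss(x + transition(xa), x_next)

    # model-free DDQN loss: argmax under the online Q, evaluation under targets
    with torch.no_grad():
        a_star = q_net(x_next).argmax(dim=1, keepdim=True)
        y = r + g * q_target(encoder_target(s_next)).gather(1, a_star).squeeze(-1)
    q_sa = q_net(x).gather(1, a.argmax(dim=1, keepdim=True)).squeeze(-1)
    loss_q = F.mse_loss(q_sa, y)

    return loss_r + loss_g + loss_tau + loss_q
```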
In particular, by minimizing the loss given in Equation 3 with respect to the encoder parameters θê (or p(x | s)), we minimize our entropy H[X ′|X,A]. In order to maximize the entropy of our learnt abstracted state representations H[X ′], we minimize the expected pairwise Gaussian potential (Borodachov et al., 2019) between states: Ld1(θê) = Es1,s2∼p(s) [ exp(−Cd1||ê(s1; θê)− ê(s2; θê)||22) ] (4) with Cd1 as a hyperparameter. Losses in Equation 3 and Equation 4 are reminiscent of the modelbased losses in François-Lavet et al. (2018) and correspond respectively to the alignment and uniformity contrastive loss formulation in Wang and Isola (2020), where alignment ensures that similar states are close together (in encoded representation space) and uniformity ensures that all states are spread uniformly throughout this low-dimensional representation space. The losses Lτ (θê) and Ld1(θê) maximizes the I[{X,A};X ′] term and selecting smaller dimension for our representation minimizes I[X ′, S′]. Put together, our method is trying to minimize L(p(x′|s′)) as per Equation 1. 3.3 Distance measures in representational space For practical purposes, since we are looking to use a distance metric within X to leverage as a score for novelty, we ensure well-defined distances between states by constraining the `2 distance between two consecutive states: Lcsc(θê) = max(‖ê(s1; θe)− ê(s2; θe)‖2 − ω, 0) (5) where Lcsc is a soft constraint between consecutive states s1 and s2 that tends to enforce two consecutive encoded representations to be at a distance ω apart. We add Lcsc to ensure a well-defined `2 distance between abstract states for use in our intrinsic reward calculation (a discussion of this loss is provided in Appendix B). We discuss how we use ω to evaluate model accuracy for our MPC updates in Appendix A. Finally, we minimize the sum of all the aforementioned losses through gradient descent: L = LR(θê, θr̂) + LG(θê, θγ̂) + Lτ (θê, θτ̂ ) + LQ(θQ) + Ld1(θê) + Lcsc(θê). (6) Through these losses, the agent learns a low-dimensional representation of the environment that is meaningful in terms of the `2 norm in representation space. We then employ a planning technique that combines the knowledge of the model and the value function which we use to maximize intrinsic rewards, as detailed in the next section and Section 4.3. 4 Novelty Search in abstract representational space Our approach for exploration uses intrinsic motivation (Schmidhuber, 1990; Chentanez et al., 2005; Achiam and Sastry, 2017) where an agent rewards itself based on the fact that it gathers interesting experiences. In a large state space setting, states are rarely visited and the count for any state after n steps is almost always 0. While Bellemare et al. (2016) solves this issue with density estimation using pseudo-counts directly from the high-dimensional observations, we aim to estimate some function of novelty in our learnt lower-dimensional representation space. 4.1 Sparsity in representation space as a measure for novelty Through the minimization of Equation 1, states that are close together in dynamics are pushed close together in our abstract state space X . Ideally, we want an agent that efficiently explores the dynamics of its environment. To do so, we reward our agent for exploring areas in lower-dimensional representation space that are less visited and ideally as far apart from the dynamics that we currently know. 
4 Novelty Search in abstract representational space

Our approach for exploration uses intrinsic motivation (Schmidhuber, 1990; Chentanez et al., 2005; Achiam and Sastry, 2017), where an agent rewards itself for gathering interesting experiences. In a large state space setting, states are rarely visited and the count for any state after n steps is almost always 0. While Bellemare et al. (2016) solve this issue with density estimation using pseudo-counts directly from the high-dimensional observations, we aim to estimate some function of novelty in our learnt lower-dimensional representation space.

4.1 Sparsity in representation space as a measure for novelty

Through the minimization of Equation 1, states that are close together in dynamics are pushed close together in our abstract state space X. Ideally, we want an agent that efficiently explores the dynamics of its environment. To do so, we reward our agent for exploring areas in lower-dimensional representation space that are less visited and ideally as far apart as possible from the dynamics that we currently know.

Given a point x in representation space, we define a reward function that considers the sparsity of states around x. We do so with the average distance between x and its k nearest neighbors in the agent's visitation history buffer B:
ρ̂_X(x) = (1/k) Σ_{i=1}^{k} d(x, x_i),     (7)
where x ≐ ê(s; θ_ê) is a given encoded state, k ∈ Z^+, d(·, ·) is some distance metric in R^{n_X}, and x_i ≐ ê(s_i; θ_ê), where s_i ∈ B for i = 1, ..., k are the k nearest neighbors of x (obtained by encoding the states in B into representational space) according to the distance metric d(·, ·). Implicit in this measure is the reliance on the agent's visitation history buffer B.

An important factor in this score is which distance metric to use. With the losses used in Section 3, we use the ℓ2 distance because of the structure imposed on the abstract state space by Equations 4 and 5. As we show in Appendix D, this novelty reward is reminiscent of recoding probabilities (Bellemare et al., 2016; Cover and Thomas, 2012) and is in fact inversely proportional to these probabilities, suggesting that our novelty heuristic estimates visitation count. This is also the same score used to gauge "sparseness" in behavior space in Lehman and Stanley (2011). With this reward function, we present the pseudo-code for our exploration algorithm in Algorithm 1.

Algorithm 1: The Novelty Search algorithm in abstract representational space.
1  Initialization: transition buffer B, agent policy π;
2  Sample n_init initial random transitions, let t = n_init;
3  while t ≤ n_max do
     // We update our dynamics model and Q-function every n_freq steps
4    if t mod n_freq == 0 then
5      while j ≤ n_iters or L_τ ≤ (ω/δ)^2 do
6        Sample a batch of transitions (s, a, r_extr, r_intr, γ, s′) ∈ B;
7        Train the dynamics model with (s, a, r_extr, γ, s′);
8        Train the Q-function with (s, a, r_extr + r_intr, γ, s′);
9      end
10     ∀(s, a, r_extr, r_intr, γ, s′) ∈ B, set r_intr ← ρ̂_X(ê(s′; θ_ê));
11   end
12   a_t ∼ π(s_t);
13   Take an action in the environment: s_{t+1} ← τ(s_t, a_t), r_{t,extr} ← R(s_t, a_t), γ_t ← G(s_t, a_t);
14   Calculate the intrinsic reward: r_{t,intr} ← ρ̂_X(ê(s_{t+1}; θ_ê));
15   B ← B ∪ {(s_t, a_t, r_{t,extr}, r_{t,intr}, γ_t, s_{t+1})};
16 end

4.2 Asymptotic behavior

This reward function also exhibits favorable asymptotic behavior, as it decreases to 0 once most of the state space has been visited. We show this in Theorem 1.

Theorem 1. Assume we have a finite state space S ⊆ R^d, a history of states B = (s_1, ..., s_N), an encoded state space X ⊆ R^{n_X}, a deterministic mapping f : R^d → R^{n_X}, and a novelty reward defined as ρ̂_X(x). With an optimal policy with respect to the rewards of the novelty heuristic, our agent will tend towards states with higher intrinsic rewards. If we assume a communicating MDP setting (Puterman, 1994), we have that
lim_{N→∞} ρ̂_X(f(s)) = 0, ∀s ∈ S.

Proof. We prove this theorem in Appendix E.

4.3 Combining model-free and model-based components for exploration policies

Similarly to previous works (e.g. Oh et al., 2017; Chebotar et al., 2017), we use a combination of model-based planning and model-free Q-learning to obtain a good policy. We calculate rollout estimates of next states based on our transition model τ̂ and sum up the corresponding rewards, which we denote as r : X × A → [0, R_max] and which can be a combination of both intrinsic and extrinsic rewards. We calculate expected returns based on the discounted rewards of our d-depth rollouts:
Q̂_d(x, a) = r(x, a) + γ̂(x, a; θ_γ̂) · max_{a′∈A} Q̂_{d−1}(τ̂(x, a; θ_τ̂), a′),   if d > 0;
Q̂_d(x, a) = Q(x, a; θ_Q),   if d = 0.     (8)
Note that we simulate only the b best options at each expansion step based on Q(x, a; θ_Q), where b ≤ |A|.
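Before describing how the planner's action is finally selected, the snippet below sketches the two computational pieces just introduced: the k-NN novelty score of Equation 7 and the depth-d rollout of Equation 8. The function signatures, the NumPy buffer layout, and the way the b-best expansion is implemented are illustrative assumptions.

# Hedged sketch of the k-NN novelty score (Eq. 7) and the depth-d rollout (Eq. 8).
import numpy as np

def novelty_score(x, encoded_buffer, k=5):
    """Eq. 7: mean L2 distance from x to its k nearest neighbours in the encoded buffer."""
    dists = np.linalg.norm(encoded_buffer - x, axis=1)   # distances to all buffered states
    k = min(k, len(dists))
    return np.sort(dists)[:k].mean()

def q_rollout(x, a, d, q_fn, trans_fn, reward_fn, discount_fn, n_actions, b=None):
    """Eq. 8: depth-limited expansion mixing the learned model with the Q-function."""
    if d == 0:
        return q_fn(x)[a]
    x_next = x + trans_fn(x, a)                          # residual transition model
    actions = range(n_actions)
    if b is not None:                                    # expand only the b best actions
        actions = np.argsort(q_fn(x_next))[::-1][:b]
    best = max(q_rollout(x_next, a2, d - 1, q_fn, trans_fn,
                         reward_fn, discount_fn, n_actions, b) for a2 in actions)
    return reward_fn(x, a) + discount_fn(x, a) * best

Here q_fn, trans_fn, reward_fn, and discount_fn stand for the learned Q-function, transition, reward, and discount models evaluated on a single encoded state; the recursion terminates at depth 0 by falling back on the model-free Q-value.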
In this work, we only use full expansions. The estimated optimal action is given by a* = argmax_{a∈A} Q̂_d(x, a). The actual action chosen at each step follows an ε-greedy strategy (ε ∈ [0, 1]), where the agent follows the estimated optimal action with probability 1 − ε and a random action with probability ε.

5 Experiments

We conduct experiments on environments of varying difficulty. All experiments use a training scheme where we first train the parameters to converge to an accurate representation of the already experienced transitions before taking an environment step. We optimize the losses (over multiple training iterations) given in Section 3. We discuss all environment-specific hyperparameters in Appendix J.

5.1 Labyrinth exploration

We consider two 21 × 21 versions of the grid-world environment (Figure 7 in the Appendix). The first is an open labyrinth grid-world, with no walls except for the bordering walls. The second is a similarly sized grid-world split into four connected rooms. In these environments the action space A is the set of four cardinal directions. These environments have no rewards or terminal states, and the goal is to explore, agnostic of any task. We use two metrics to gauge exploration in this environment: the first is the ratio of states visited only once, and the second is the proportion of total states visited.

5.1.1 Open labyrinth

In the open labyrinth experiments (Figure 2a), we compare a number of variations of our approach with a random baseline and a count-based baseline (Bellemare et al., 2016) (as we can count states in this tabular setting). Variations of the policy include an argmax over state values (d = 0) and planning depths of d ∈ {1, 5}. All variations of our method outperform the two baselines in this task, with a slight increase in performance as the planning depth d increases. In the open labyrinth, our agent is able to reach 100% of possible states (a total of 19 × 19 = 361 unique states) in approximately 800 steps, and 80% of possible states (≈ 290 states) in approximately 500 steps. These counts also include the n_init random steps taken preceding training. Our agent is also able to learn highly interpretable abstract representations in very few environment steps (as shown in Figure 1a) as it explores its state space. In addition, after visiting most unseen states in its environment, our agent tends to explore its state space uniformly due to the nature of our novelty heuristic. A visualisation of this effect is available in Appendix H.

[Figure 2: Labyrinth results over 10 trials, showing means and standard deviations; (a) open labyrinth and (b) 4-room labyrinth, comparing different variations on policies to baselines.]

5.1.2 4-room labyrinth

We now consider the 4-room labyrinth environment, a more challenging version of the open labyrinth environment (Figure 1a). As before, our encoder ê is able to take a high-dimensional input and compress it to a low-dimensional representation. In the case of both labyrinth environments, the representation incorporates knowledge related to the position of the agent in two dimensions, which we call primary features. In the 4-room labyrinth environment, the agent also has to learn other information such as its surroundings (walls, open space, etc.), but it does so only via the transition function learned through experience. We call this extraneous but necessary information secondary features.
As most of these secondary features are encoded only in the dynamics model τ̂, our agent has to experience a transition in order to accurately represent both primary and secondary features. In this environment specifically, our dynamics model might over-generalize for walls between rooms and can sometimes fail at first to try out transitions in the passageways between rooms. However, because our agent tends to visit uniformly all the states that are reachable within the known rooms, the ε-greedy policy of our approach still ensures that the agent explores passageways efficiently even in the cases where it has over-generalized to the surrounding walls. We run the same experiments on the 4-room labyrinth domain as we do on the open labyrinth and report the results in Figure 2b. In both cases, our method outperforms the two baselines in this domain (random and count-based).

5.2 Control and sub-goal exploration

In order to test the efficacy of our method beyond fixed mazes, we conduct experiments on the control-based environment Acrobot (Brockman et al., 2016) and a multi-step maze environment. Our method (with planning depth d = 5) is compared to strong exploration baselines of different archetypes:
1. prediction-error incentivized exploration (Stadie et al., 2015);
2. hash count-based exploration (Tang et al., 2016);
3. Random Network Distillation (Burda et al., 2018b);
4. Bootstrap DQN (BDQN, Osband et al. (2016)).
In order to maintain consistency in our results, we use the same deep learning architectures throughout. Since we experiment in the deterministic setting, we exclude baselines that require some form of stochasticity or density estimation (for example, Shyam et al. (2018) and Osband et al. (2017)). A specificity of our approach is that we run multiple training iterations in between each environment step for all experiments, which allows the agent to use orders of magnitude fewer samples compared to most model-free RL algorithms (all within the same episode).

5.2.1 Acrobot

We now test our approach on Acrobot (Brockman et al., 2016), which has a continuous state space unlike the labyrinth environments. We specifically choose this control task because the nature of this environment makes exploration inherently difficult. The agent only has control of the actuator for the inner joint and has to transfer enough energy into the second joint in order to swing it to its goal state. We modify this environment so that each episode is at most 3000 environment steps. While this environment does admit an extrinsic reward, we ignore these rewards entirely. To measure the performance of our exploration approach, we measure the average number of steps per episode that the agent takes to move its second joint above a given line as per Figure 3a. To demonstrate the ability of our method to learn a low-dimensional abstract representation from pixel inputs, we use 4 consecutive pixel frames as input instead of the 6-dimensional full state vector. We use a 4-dimensional abstract representation of our state, and results from the experiments are shown in Table 1. Our method reaches the goal state more efficiently than the baselines.

5.2.2 Multi-step goal maze

We also test our method on a more complex maze with the sub-task of picking up a key that opens the door to an area with a reward. We build our environment with the Pycolab game engine (Stepleton, 2017). The environment can be seen in Figure 3b, where the input to our agent is a top-down view of the environment.
While this environment does admit an extrinsic reward (1 for picking up the key, 10 for reaching the final state), we ignore these rewards and focus only on intrinsic rewards. In our experiments, we show that our agent is able to learn an interpretable representation of the environment in a sample-efficient manner. Figure 1c shows an example of the learnt representations in this domain after reaching the goal: we observe that positions in the maze correspond to a nearly identical structure in the lower-dimensional representation. Our representation also nicely captures internal state information (whether the key has been picked up) by separating the two sets of states (states where the key has been picked up and states where it has not). Similar positions in both sets of states are also mapped closely together in lower-dimensional space (i.e., (1, 1, with key) is close in ℓ2 to (1, 1, without key)), suggesting good generalization between similar states.

6 Related work

The proposed exploration strategy falls under the category of directed exploration (Thrun, 1992), which makes use of past interactions with the environment to guide the discovery of new states. This work is inspired by the Novelty Search algorithm (Lehman and Stanley, 2011), which uses a nearest-neighbor scoring approach to gauge novelty in policy space. Our approach leverages this scoring to traverse dynamics space, which we motivate theoretically.

Exploration strategies have been investigated with both model-free and model-based approaches. In Bellemare et al. (2016) and Ostrovski et al. (2017), a model-free algorithm provides the notion of novelty through a pseudo-count from an arbitrary density model, which provides an estimate of how many times an action has been taken in similar states. Recently, Taiga et al. (2020) performed a thorough comparison between bonus-based exploration methods in model-free RL and showed that architectural changes may be more important to agent performance (based on extrinsic rewards) than differing exploration strategies.

Several exploration strategies have also used a model of the environment along with planning. Hester and Stone (2012) employ a two-part strategy to calculate intrinsic rewards, combining model uncertainty (from a random-forest-based model) and a novelty reward based on L1 distance in feature space. A strategy investigated in Salge et al. (2014); Mohamed and Rezende (2015); Gregor et al. (2016); Chiappa et al. (2017) is to have the agent choose, by planning, a sequence of actions that leads to a representation of state as different as possible from the current state. In Pathak et al. (2017); Haber et al. (2018), the agent optimizes both a model of its environment and a separate model that predicts the error/uncertainty of its own model. Burda et al. (2018a) similarly use an intrinsic reward based on the uncertainty of the agent's dynamics model. In Shyam et al. (2018), forward models of the environment are used to measure novelty derived from disagreement between future states. Still and Precup (2012) take an information-theoretic approach to exploration that chooses a policy maximizing the predictive power of the agent's own behavior and environment rewards. In Badia et al. (2020), an intrinsic reward from the k-NN over the agent's experience is also employed for exploration; they instead employ a self-supervised inverse dynamics model to learn the embeddings, as opposed to our approach.
Beyond improved efficiency in exploration, the interpretability of our approach could also lead to human-in-the-loop techniques (Mandel et al., 2017; Abel et al., 2017) for exploration, with the possibility for the agent to better utilize feedback thanks to its interpretability in representation space.

7 Discussion

In this paper, we formulate the task of dynamics learning in MBRL through the Information Bottleneck principle. We present methods to optimize the IB equation through low-dimensional abstract representations of state. We further develop a novelty score based on these learnt representations, which we leverage as an intrinsic reward that enables efficient exploration. By using this novelty score with a combination of model-based and model-free approaches for planning, we show more efficient exploration across multiple environments with our learnt representations and novelty rewards.

As with most methods, our approach also has limitations. One limitation is the scalability of non-parametric methods such as k-NN density estimation, since our method scales linearly with the number of environment steps. A possible solution to this problem would be to use a sampling scheme that draws a fixed number of observations for the calculation of our novelty heuristic. Another issue that arises from using a very low-dimensional space to represent state is generalization. In some cases, the model can over-generalize, with the consequence that the low-dimensional representation loses information that is crucial for the exploration of the entire state space. An interesting direction for future work would be to find ways of incorporating secondary features such as those mentioned in Section 5.1.2. An interesting possibility would be to use a similar IB method, but using a full history of states as the conditioning variable. Beyond these points, we discuss limitations and potential improvements to this work in Appendix K. Finally, we show preliminary results of our method on a more complex task, Montezuma's Revenge, in Appendix G. With the theory and methods developed in this paper, we hope to see future work done on larger tasks with more complex environment dynamics.

Broader Impact

Algorithms for exploring an environment are a central piece of learning efficient policies for unknown sequential decision-making tasks. In this section, we discuss the wider impacts of our research both in the Machine Learning (ML) field and beyond. We first consider the benefits and risks of our method for ML applications. Efficient exploration in unknown environments has the potential to improve methods for tasks that require accurate knowledge of their environment. By exploring states that are more novel, agents gather a more robust dataset. For control tasks, our method improves the sample efficiency of learning by finding states that are more novel in terms of dynamics for use in training. Our learnt low-dimensional representation also helps the interpretability of our decision-making agents (as seen in Figure 1). More interpretable agents have potential benefits for many areas of ML, including allowing human understandability and intervention in human-in-the-loop approaches. With such applications in mind, we consider the societal impacts of our method, along with potential future work that could improve these societal impacts. One specific instance of how efficient exploration and environment modeling might help is in disaster relief settings.
With the advent of robotic systems for disaster area exploration, autonomous agents need to efficiently explore their unknown surroundings. Further research into scaling these MBRL approaches could allow such robotic agents to find points of interest (survivors, etc.) efficiently. One potential risk of our application is safe exploration. Our method finds and learns from states that are novel in terms of their dynamics. Without safety mechanisms, our agent could view potentially harmful scenarios as novel due to the rarity of such situations. For example, a car crash might be seen as a highly novel state. To mitigate this safety concern we look to the literature on safety in RL (García and Fernández, 2015). In particular, developing a risk metric based on the interpretability of our approach may be a direction worth pursuing.

Acknowledgements

We would like to thank Emmanuel Bengio for the helpful discussions and feedback on early drafts of this work. We would also like to thank all the reviewers for their constructive and helpful comments.
1. What is the focus and contribution of the paper regarding exploration algorithms? 2. What are the strengths of the proposed approach, particularly in its novelty measure and representation learning? 3. What are the weaknesses of the paper, especially regarding its comparisons and empirical evaluations? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes a novel exploration algorithm that computes the novelty of a state in a learned representation space and uses it as a reward in a mix of model-free and model-based approaches. The main novelty introduced in this paper is the method to learn a representation from which the novelty measure is meaningful.

Strengths
The claims are sound and well justified. The paper is well organized and well written. The empirical evaluation studies several environments. Efficient exploration is a hot topic in RL. The code and all necessary details are provided.

Weaknesses
The main weaknesses of the paper are:
* the lack of comparisons to other model-based exploration approaches to disentangle the impact of the model-based aspect vs the novelty measure on performance;
* the lack of empirical comparisons and discussion of ICM (Pathak et al., 2017) and RND (Burda, 2018);
* the lack of statistical tests to support empirical comparisons;
* the lack of an ablative study with respect to the 6 different losses used by the proposed approach.
NIPS
Title
Novelty Search in Representational Space for Sample Efficient Exploration

Abstract
We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives. Our approach uses intrinsic rewards that are based on the distance of nearest neighbors in the low-dimensional representational space to gauge novelty. We then leverage these intrinsic rewards for sample-efficient exploration with planning routines in representational space for hard exploration tasks with sparse rewards. One key element of our approach is the use of information theoretic principles to shape our representations in a way so that our novelty reward goes beyond pixel similarity. We test our approach on a number of maze tasks, as well as a control problem, and show that our exploration approach is more sample-efficient compared to strong baselines.

1 Introduction
In order to solve a task efficiently in Reinforcement Learning (RL), one of the main challenges is to gather informative experiences via an efficient exploration of the state space. A common approach to exploration is to leverage intrinsic rewards correlated with some metric or score for novelty (Schmidhuber, 2010; Stadie et al., 2015; Houthooft et al., 2016). With intrinsic rewards, an agent can be incentivized to efficiently explore its state space. A direct approach to calculating these novelty scores is to derive a reward based on the observations, such as a count-based reward (Bellemare et al., 2016; Ostrovski et al., 2017) or a prediction-error based reward (Burda et al., 2018b). However, an issue occurs when measuring novelty directly from the raw observations, as some information in pixel space (such as randomness or backgrounds) may be irrelevant. In this case, if an agent wants to efficiently explore its state space it should only focus on meaningful and novel information.

In this work, we propose a method of sample-efficient exploration by leveraging intrinsic rewards in a meaningful latent state space. To build a meaningful state abstraction, we view Model-based RL (MBRL) from an information theoretic perspective: we optimize our dynamics learning through the Information Bottleneck (Tishby et al., 2000) principle. We also combine both model-based and model-free components through a joint representation. This method encodes high-dimensional observations into lower-dimensional representations such that states that are close in dynamics are brought close together in representation space (François-Lavet et al., 2018). We also add additional constraints to ensure that a measure of distance between abstract states is meaningful. We leverage these properties of our representation to formulate a novelty score based on Euclidean distance in low-dimensional representation space, and we then use this score to generate intrinsic rewards that we can exploit for efficient exploration. One important element of our exploration algorithm is that we take a Model Predictive Control (MPC) approach (Garcia et al., 1989) and perform actions only after our model is sufficiently accurate (which ensures an accurate novelty heuristic). Through this training scheme, our agent is also able to learn a meaningful representation of its state space in a sample-efficient manner. The code with all experiments is available at https://github.com/taodav/nsrs.
2 Problem setting
An agent interacts with its environment over discrete timesteps, modeled as a Markov Decision Process (MDP), defined by the 6-tuple (S, S_0, A, τ, R, G) (Puterman, 1994). In this setting, S is the state space, S_0 is the initial state distribution, A is the discrete action space, τ : S × A → S is the transition function that is assumed deterministic (with the possibility of extension to stochastic environments with generative methods), R : S × A → R is the reward function (with rewards in [−1, 1]), and G : S × A → [0, 1) is the per-timestep discount factor. At timestep t in state s_t ∈ S, the agent chooses an action a_t ∈ A based on a policy π : S × A → [0, 1], such that a_t ∼ π(s_t, ·). After taking a_t, the agent is in state s_{t+1} = τ(s_t, a_t) and receives a reward r_t ∼ R(s_t, a_t) and a discount factor γ_t ∼ G(s_t, a_t). Over n environment steps, we define the buffer of previously visited states as B = (s_1, ..., s_n), where s_i ∈ S for all i ∈ N.

In RL, the usual objective is to maximize the sum of expected future rewards
V_π(s) = E_π [ r_t + Σ_{i=1}^{∞} ( Π_{j=0}^{i−1} γ_{t+j} ) r_{t+i} | s = s_t ].
To learn a policy π that maximizes the expected return, an RL agent has to efficiently explore its environment (reach novel states in as few steps as possible). In this paper, we consider tasks with sparse rewards or even no rewards, and we are interested in exploration strategies that require as few steps as possible to explore the state space.

3 Abstract state representations
We focus on learning a lower-dimensional representation of state when our state (or observations in the partially observable case (Kaelbling et al., 1998)) is high-dimensional (Dayan, 1993; Tamar et al., 2016; Silver et al., 2016; Oh et al., 2017; de Bruin et al., 2018; Ha and Schmidhuber, 2018; François-Lavet et al., 2018; Hafner et al., 2018; Gelada et al., 2019).

3.1 Information Bottleneck
We first motivate our methods for model learning. To do so, we consider the Information Bottleneck (IB) (Tishby et al., 2000) principle. Let Z denote the original source message space and Z̃ denote its compressed representation. As opposed to traditional lossless compression, where we seek encodings Z̃ that compress all aspects of Z, in IB we seek to preserve in Z̃ only the information that is relevant with regard to another relevance variable, Y. For example, when looking to compress speech waveforms (Z), if our task at hand is speech recognition, then our relevance variable Y would be a transcript of the speech. Our representation Z̃ would only need to maximize relevant information about the transcript Y instead of its full form including tone, pitch, background noise, etc. We can formulate this objective by minimizing the following functional with respect to p(z̃ | z):
L(p(z̃ | z)) = I[Z; Z̃] − β I[Z̃; Y],
where I[·; ·] is the Mutual Information (MI) between two random variables and β is the Lagrange multiplier controlling the amount of information our encoding Z̃ is allowed to retain about Y. This corresponds to a trade-off between minimizing the encoding rate I[Z; Z̃] and maximizing the mutual information between the encoding and our relevance variable Y.

We now apply this principle to representation learning of state in MBRL. If our source message space is our state S′ and our encoded message is X′, then to distill the most relevant information with regard to the dynamics of our environment, one choice of relevance variable is {X, A}, i.e., our encoded state in the previous timestep together with the presence of an action.
1. What is the main contribution of the paper, and how does it differ from previous works? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical foundation and empirical results? 3. What are the weaknesses of the paper, especially regarding the loss function and experimental design? 4. How could the authors improve the motivation and explanation of certain components of the method? 5. Are there any limitations or potential drawbacks of the proposed approach that the authors did not discuss?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes a model-based exploration method based on an encoding of the environment. The method creates a low-dimensional representation of the space and then rewards the agent for seeking novel states that are farther apart in dynamics. Results are tested empirically in multiple grid-type domains to evaluate state coverage, as well as two control domains to evaluate the improvement of the search for novelty on the agent's ability to perform well on control tasks. In both the grid domains and the control domains the agent performed better than the baselines it was compared against.

Strengths
The paper is well written and explains each of the components of the proposed algorithm very well. The theory (and its sub-concepts, like the Information Bottleneck) was explained clearly. The environments chosen for empirical evaluation were good demonstrations of the strengths of the method. The evaluation of the results themselves was clear.

Weaknesses
The loss function (Eq. 6) is extremely long and it is not clear from the experiments whether different parts of the loss were more important than others. It would have been nice to have a discussion about this and an ablation study of the different components. The empirical section chose domains that were clear in their ability to show the effects of the method. However, only having 5 runs in a small domain led to results that were not significantly different from each other. It was difficult to know the difference between the methods based on the small number of runs and overlapping error bars. While the paper was generally clear, some of the components of the method were not clearly motivated. For instance, why parameterize and learn the discount function? It was not clear why this choice was made and what effect it would have on the method.
NIPS
Title Novelty Search in Representational Space for Sample Efficient Exploration Abstract We present a new approach for efficient exploration which leverages a lowdimensional encoding of the environment learned with a combination of modelbased and model-free objectives. Our approach uses intrinsic rewards that are based on the distance of nearest neighbors in the low dimensional representational space to gauge novelty. We then leverage these intrinsic rewards for sampleefficient exploration with planning routines in representational space for hard exploration tasks with sparse rewards. One key element of our approach is the use of information theoretic principles to shape our representations in a way so that our novelty reward goes beyond pixel similarity. We test our approach on a number of maze tasks, as well as a control problem and show that our exploration approach is more sample-efficient compared to strong baselines. 1 Introduction In order to solve a task efficiently in Reinforcement Learning (RL), one of the main challenges is to gather informative experiences via an efficient exploration of the state space. A common approach to exploration is to leverage intrinsic rewards correlated with some metric or score for novelty (Schmidhuber, 2010; Stadie et al., 2015; Houthooft et al., 2016). With intrinsic rewards, an agent can be incentivized to efficiently explore its state space. A direct approach to calculating these novelty scores is to derive a reward based on the observations, such as a count-based reward (Bellemare et al., 2016; Ostrovski et al., 2017) or a prediction-error based reward (Burda et al., 2018b). However, an issue occurs when measuring novelty directly from the raw observations, as some information in pixel space (such as randomness or backgrounds) may be irrelevant. In this case, if an agent wants to efficiently explore its state space it should only focus on meaningful and novel information. In this work, we propose a method of sample-efficient exploration by leveraging intrinsic rewards in a meaningful latent state space. To build a meaningful state abstraction, we view Model-based RL (MBRL) from an information theoretic perspective - we optimize our dynamics learning through the Information Bottleneck (Tishby et al., 2000) principle. We also combine both model-based and model-free components through a joint representation. This method encodes high-dimensional observations into lower-dimensional representations such that states that are close in dynamics are brought close together in representation space (François-Lavet et al., 2018). We also add additional constraints to ensure that a measure of distance between abstract states is meaningful. We leverage these properties of our representation to formulate a novelty score based on Euclidean distance in low-dimensional representation space and we then use this score to generate intrinsic rewards that we can exploit for efficient exploration. One important element of our exploration algorithm is that we take a Model Predictive Control (MPC) approach (Garcia et al., 1989) and perform actions only after our model is sufficiently accurate (and hence ensure an accurate novelty heuristic). Through this training scheme, our agent is 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. also able to learn a meaningful representation of its state space in a sample-efficient manner. The code with all experiments is available 1. 
2 Problem setting An agent interacts with its environment over discrete timesteps, modeled as a Markov Decision Process (MDP), defined by the 6-tuple $(\mathcal{S}, \mathcal{S}_0, \mathcal{A}, \tau, R, G)$ (Puterman, 1994). In this setting, $\mathcal{S}$ is the state space, $\mathcal{S}_0$ is the initial state distribution, $\mathcal{A}$ is the discrete action space, $\tau : \mathcal{S} \times \mathcal{A} \to \mathcal{S}$ is the transition function, assumed deterministic (with the possibility of extension to stochastic environments with generative methods), $R : \mathcal{S} \times \mathcal{A} \to [-1, 1]$ is the reward function, and $G : \mathcal{S} \times \mathcal{A} \to [0, 1)$ is the per timestep discount factor. At timestep $t$ in state $s_t \in \mathcal{S}$, the agent chooses an action $a_t \in \mathcal{A}$ based on policy $\pi : \mathcal{S} \times \mathcal{A} \to [0, 1]$, such that $a_t \sim \pi(s_t, \cdot)$. After taking $a_t$, the agent is in state $s_{t+1} = \tau(s_t, a_t)$ and receives reward $r_t \sim R(s_t, a_t)$ and a discount factor $\gamma_t \sim G(s_t, a_t)$. Over $n$ environment steps, we define the buffer of previously visited states as $B = (s_1, \ldots, s_n)$, where $s_i \in \mathcal{S}\ \forall i \in \mathbb{N}$. In RL, the usual objective is to maximize the sum of expected future rewards $V_\pi(s) = \mathbb{E}_\pi\!\left[ r_t + \sum_{i=1}^{\infty} \big( \prod_{j=0}^{i-1} \gamma_{t+j} \big)\, r_{t+i} \,\middle|\, s = s_t \right]$. To learn a policy $\pi$ that maximizes the expected return, an RL agent has to efficiently explore its environment (reach novel states in as few steps as possible). In this paper, we consider tasks with sparse rewards or even no rewards, and are interested in exploration strategies that require as few steps as possible to explore the state space. 3 Abstract state representations We focus on learning a lower-dimensional representation of state when our state (or observations in the partially observable case (Kaelbling et al., 1998)) is high-dimensional (Dayan, 1993; Tamar et al., 2016; Silver et al., 2016; Oh et al., 2017; de Bruin et al., 2018; Ha and Schmidhuber, 2018; François-Lavet et al., 2018; Hafner et al., 2018; Gelada et al., 2019). 3.1 Information Bottleneck We first motivate our methods for model learning. To do so, we consider the Information Bottleneck (IB) (Tishby et al., 2000) principle. Let $Z$ denote the original source message space and $\tilde{Z}$ denote its compressed representation. As opposed to traditional lossless compression, where we seek encodings $\tilde{Z}$ that compress all aspects of $Z$, in IB we seek to preserve in $\tilde{Z}$ only the information that is relevant with regards to another relevance variable, $Y$. For example, when compressing speech waveforms ($Z$) for a speech recognition task, the relevance variable $Y$ would be a transcript of the speech; the representation $\tilde{Z}$ would only need to retain information relevant to the transcript $Y$ rather than the full signal, including tone, pitch, background noise, etc. We can formulate this objective by minimizing the following functional with respect to $p(\tilde{z} \mid z)$: $\mathcal{L}(p(\tilde{z} \mid z)) = I[Z; \tilde{Z}] - \beta I[\tilde{Z}; Y]$, where $I[\cdot\,;\cdot]$ is the Mutual Information (MI) between two random variables and $\beta$ is the Lagrange multiplier controlling the amount of information our encoding $\tilde{Z}$ is allowed to retain about $Y$. This corresponds to a trade-off between minimizing the encoding rate $I[Z; \tilde{Z}]$ and maximizing the mutual information between the encoding and our relevance variable $Y$. We now apply this principle to representation learning of state in MBRL. If our source message space is our state $S'$ and our encoded message is $X'$, then to distill the most relevant information with regards to the dynamics of our environment one choice of relevance variable is $\{X, A\}$, i.e., our encoded state in the previous timestep together with the action.
This gives us the functional $\mathcal{L}(p(x' \mid s')) = I[S'; X'] - \beta I[X'; \{X, A\}]$. (1) In our work, we look to find methods to minimize this functional for an encoding that maximizes the predictive ability of our dynamics model. We first aim to minimize our encoding rate $I[S'; X']$. Since the encoding rate is a measure of the number of bits transmitted per message $S'$, the representation dimension is analogous to the number of bits per message. This principle of minimizing the encoding rate guides our selection of representation dimension: for every environment, we try to choose the smallest representation dimension possible such that the representation can still encapsulate the model dynamics as we understand them. For example, in a simple grid-world, we look to encode only the agent's position in the grid. Now let us consider the second term in Equation 1. Our goal is to learn an optimally predictive model of our environment. To do so, we first consider the MI between the random variable denoting our state representation $X$, in the presence of the random variable representing actions $A$, and the random variable denoting the state representation in the next timestep $X'$ (Still, 2009). Note that MI is symmetric: $I[\{X, A\}; X'] = \mathbb{E}_{p(x', x, a)}\!\left[ \log \frac{p(x' \mid x, a)}{p(x')} \right] = H[X'] - H[X' \mid X, A]$. (2) This quantity is a measure of our dynamics model's predictive ability. Considering the two entropy terms (denoted $H[\cdot]$), $H[X']$ is the entropy of our state representation and $H[X' \mid X, A]$ is the entropy of the next state $X'$ given our current state $X$ and an action $A$. Recall that we are trying to minimize $I[X'; S']$ and maximize $I[X'; \{X, A\}]$ with respect to some encoding function $X = e(S)$. In the next section, we describe our approach for this encoding function as well as dynamics learning in MBRL. 3.2 Encoding and dynamics learning For our purposes, we use a neural encoder $\hat{e} : \mathcal{S} \to \mathcal{X}$ parameterized by $\theta_{\hat{e}}$ to map our high-dimensional state space into lower-dimensional abstract representations, where $\mathcal{X} \subseteq \mathbb{R}^{n_{\mathcal{X}}}$. The dynamics are learned via the following functions: a transition function $\hat{\tau} : \mathcal{X} \times \mathcal{A} \to \mathcal{X}$ parameterized by $\theta_{\hat{\tau}}$, a reward function $\hat{r} : \mathcal{X} \times \mathcal{A} \to [-1, 1]$ parameterized by $\theta_{\hat{r}}$, and a per timestep discount factor function $\hat{\gamma} : \mathcal{X} \times \mathcal{A} \to [0, 1)$ parameterized by $\theta_{\hat{\gamma}}$. This discount factor is only learned to predict terminal states, where $\gamma = 0$. In order to leverage all past experiences, we use an off-policy learning algorithm that samples transition tuples $(s, a, r, \gamma, s')$ from a replay buffer. We first encode our current and next states with our encoder to get $x \leftarrow \hat{e}(s; \theta_{\hat{e}})$, $x' \leftarrow \hat{e}(s'; \theta_{\hat{e}})$. The Q-function is learned using the DDQN algorithm (van Hasselt et al., 2015), which uses the target $Y = r + \gamma\, Q\big(\hat{e}(s'; \theta_{\hat{e}^-}), \operatorname{argmax}_{a' \in \mathcal{A}} Q(x', a'; \theta_Q); \theta_{Q^-}\big)$, where $\theta_{Q^-}$ and $\theta_{\hat{e}^-}$ are parameters of an earlier buffered Q-function (our target Q-function) and encoder, respectively. The agent then minimizes the loss $\mathcal{L}_Q(\theta_Q) = (Q(x, a; \theta_Q) - Y)^2$. We learn the dynamics of our environment through the following losses: $\mathcal{L}_R(\theta_{\hat{e}}, \theta_{\hat{r}}) = |r - \hat{r}(x, a; \theta_{\hat{r}})|^2$, $\mathcal{L}_G(\theta_{\hat{e}}, \theta_{\hat{\gamma}}) = |\gamma - \hat{\gamma}(x, a; \theta_{\hat{\gamma}})|^2$, and our transition loss $\mathcal{L}_\tau(\theta_{\hat{e}}, \theta_{\hat{\tau}}) = \|[x + \hat{\tau}(x, a; \theta_{\hat{\tau}})] - x'\|_2^2$. (3) Note that our transition function learns the difference (given an action) between the previous state $x$ and the current state $x'$. By jointly learning the weights of the encoder and the different components, the abstract representation is shaped in a meaningful way according to the dynamics of the environment.
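To make the learning signal concrete, the following is a minimal PyTorch-style sketch of the DDQN target and the losses of Equation 3 on a toy batch; the plain linear modules, toy dimensions, and one-hot action handling are our own illustrative assumptions rather than the authors' architecture (see their repository for the actual implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sizes and plain linear modules; the real encoder/heads are conv/MLP nets.
n_s, n_x, n_a, B = 16, 3, 4, 32
enc, enc_tgt = nn.Linear(n_s, n_x), nn.Linear(n_s, n_x)   # e(s) and buffered copy
q_net, q_tgt = nn.Linear(n_x, n_a), nn.Linear(n_x, n_a)   # Q(x, .) and target Q
trans = nn.Linear(n_x + n_a, n_x)                         # predicts x' - x
rew_head = nn.Linear(n_x + n_a, 1)
disc_head = nn.Linear(n_x + n_a, 1)

# A fake batch of transitions (s, a, r, gamma, s') standing in for the replay buffer.
s, s_next = torch.randn(B, n_s), torch.randn(B, n_s)
a = torch.randint(n_a, (B,))
r = torch.rand(B)
gamma = torch.full((B,), 0.95)

x, x_next = enc(s), enc(s_next)
a_onehot = F.one_hot(a, n_a).float()
xa = torch.cat([x, a_onehot], dim=1)

# DDQN target: the online Q picks the argmax action, the target Q (with the
# buffered encoder) evaluates it.
with torch.no_grad():
    a_star = q_net(x_next).argmax(dim=1, keepdim=True)
    y = r + gamma * q_tgt(enc_tgt(s_next)).gather(1, a_star).squeeze(1)
# Whether this loss also updates the encoder is an implementation detail we gloss over.
loss_q = F.mse_loss(q_net(x).gather(1, a.unsqueeze(1)).squeeze(1), y)

# Eq. 3: reward, discount and transition losses, all backpropagating into e(s).
loss_r = F.mse_loss(rew_head(xa).squeeze(1), r)
loss_g = F.mse_loss(disc_head(xa).squeeze(1), gamma)
loss_tau = ((x + trans(xa) - x_next) ** 2).sum(dim=1).mean()
print(loss_q.item(), loss_r.item(), loss_g.item(), loss_tau.item())
```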
In particular, by minimizing the loss given in Equation 3 with respect to the encoder parameters $\theta_{\hat{e}}$ (or $p(x \mid s)$), we minimize the entropy $H[X' \mid X, A]$. In order to maximize the entropy of our learnt abstract state representations $H[X']$, we minimize the expected pairwise Gaussian potential (Borodachov et al., 2019) between states: $\mathcal{L}_{d1}(\theta_{\hat{e}}) = \mathbb{E}_{s_1, s_2 \sim p(s)}\!\left[ \exp\!\big(-C_{d1}\, \|\hat{e}(s_1; \theta_{\hat{e}}) - \hat{e}(s_2; \theta_{\hat{e}})\|_2^2\big) \right]$ (4) with $C_{d1}$ as a hyperparameter. The losses in Equation 3 and Equation 4 are reminiscent of the model-based losses in François-Lavet et al. (2018) and correspond respectively to the alignment and uniformity contrastive loss formulation in Wang and Isola (2020), where alignment ensures that similar states are close together (in encoded representation space) and uniformity ensures that all states are spread uniformly throughout this low-dimensional representation space. The losses $\mathcal{L}_\tau(\theta_{\hat{e}})$ and $\mathcal{L}_{d1}(\theta_{\hat{e}})$ maximize the $I[\{X, A\}; X']$ term, and selecting a smaller dimension for our representation minimizes $I[X'; S']$. Put together, our method is trying to minimize $\mathcal{L}(p(x' \mid s'))$ as per Equation 1. 3.3 Distance measures in representational space For practical purposes, since we are looking to use a distance metric within $\mathcal{X}$ as a score for novelty, we ensure well-defined distances between states by constraining the $\ell_2$ distance between two consecutive states: $\mathcal{L}_{csc}(\theta_{\hat{e}}) = \max(\|\hat{e}(s_1; \theta_{\hat{e}}) - \hat{e}(s_2; \theta_{\hat{e}})\|_2 - \omega,\ 0)$ (5) where $\mathcal{L}_{csc}$ is a soft constraint between consecutive states $s_1$ and $s_2$ that tends to enforce two consecutive encoded representations to be at a distance $\omega$ apart. We add $\mathcal{L}_{csc}$ to ensure a well-defined $\ell_2$ distance between abstract states for use in our intrinsic reward calculation (a discussion of this loss is provided in Appendix B). We discuss how we use $\omega$ to evaluate model accuracy for our MPC updates in Appendix A. Finally, we minimize the sum of all the aforementioned losses through gradient descent: $\mathcal{L} = \mathcal{L}_R(\theta_{\hat{e}}, \theta_{\hat{r}}) + \mathcal{L}_G(\theta_{\hat{e}}, \theta_{\hat{\gamma}}) + \mathcal{L}_\tau(\theta_{\hat{e}}, \theta_{\hat{\tau}}) + \mathcal{L}_Q(\theta_Q) + \mathcal{L}_{d1}(\theta_{\hat{e}}) + \mathcal{L}_{csc}(\theta_{\hat{e}})$. (6) (A short code sketch of the uniformity and consecutive-state terms is given at the end of this passage.) Through these losses, the agent learns a low-dimensional representation of the environment that is meaningful in terms of the $\ell_2$ norm in representation space. We then employ a planning technique that combines the knowledge of the model and the value function, which we use to maximize intrinsic rewards, as detailed in the next section and Section 4.3. 4 Novelty Search in abstract representational space Our approach to exploration uses intrinsic motivation (Schmidhuber, 1990; Chentanez et al., 2005; Achiam and Sastry, 2017), where an agent rewards itself for gathering interesting experiences. In a large state space setting, states are rarely visited and the count for any state after $n$ steps is almost always 0. While Bellemare et al. (2016) solve this issue with density estimation using pseudo-counts computed directly from the high-dimensional observations, we aim to estimate a measure of novelty in our learnt lower-dimensional representation space. 4.1 Sparsity in representation space as a measure for novelty Through the minimization of Equation 1, states that are close together in dynamics are pushed close together in our abstract state space $\mathcal{X}$. Ideally, we want an agent that efficiently explores the dynamics of its environment. To do so, we reward our agent for exploring areas in lower-dimensional representation space that are less visited and ideally far apart from the dynamics that we currently know.
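Before the novelty reward itself is defined below, here is a small sketch of how the uniformity term (Eq. 4) and the consecutive-state constraint (Eq. 5) could be computed on a mini-batch; the batch shapes and the constants c_d1 and omega are illustrative assumptions, not the values used in the paper.

```python
import torch

def uniformity_loss(x_batch, c_d1=1.0):
    # Eq. 4: mini-batch estimate of the expected pairwise Gaussian potential
    # between encoded states; c_d1 stands in for the C_d1 hyperparameter.
    sq_dists = torch.cdist(x_batch, x_batch, p=2) ** 2
    return torch.exp(-c_d1 * sq_dists).mean()

def consecutive_constraint(x, x_next, omega=0.5):
    # Eq. 5: hinge penalty on consecutive encodings that are more than omega apart.
    dist = torch.norm(x - x_next, dim=1)
    return torch.clamp(dist - omega, min=0.0).mean()

# Toy usage on random encodings; in training these terms are added to the
# reward, discount, transition and Q losses to form the total objective of Eq. 6.
x, x_next = torch.randn(32, 3), torch.randn(32, 3)
total = uniformity_loss(x) + consecutive_constraint(x, x_next)
print(total.item())
```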
Given a point $x$ in representation space, we define a reward function that considers the sparsity of states around $x$; we do so with the average distance between $x$ and its k-nearest-neighbors in its visitation history buffer $B$: $\hat{\rho}_{\mathcal{X}}(x) = \frac{1}{k} \sum_{i=1}^{k} d(x, x_i)$, (7) where $x \doteq \hat{e}(s; \theta_{\hat{e}})$ is a given encoded state, $k \in \mathbb{Z}^+$, $d(\cdot, \cdot)$ is some distance metric in $\mathbb{R}^{n_{\mathcal{X}}}$, and $x_i \doteq \hat{e}(s_i; \theta_{\hat{e}})$, where $s_i \in B$ for $i = 1, \ldots, k$ are the $k$ nearest neighbors (by encoding states in $B$ to representational space) of $x$ according to the distance metric $d(\cdot, \cdot)$. Implicit in this measure is the reliance on the agent's visitation history buffer $B$. An important factor in this score is which distance metric to use. With the losses used in Section 3, we use the $\ell_2$ distance because of the structure imposed on the abstract state space with Equations 4 and 5. As we show in Appendix D, this novelty reward is reminiscent of recoding probabilities (Bellemare et al., 2016; Cover and Thomas, 2012) and is in fact inversely proportional to these probabilities, suggesting that our novelty heuristic estimates visitation count. This is also the same score used to gauge “sparseness” in behavior space in Lehman and Stanley (2011). With this reward function, we present the pseudo-code for our exploration algorithm in Algorithm 1.

Algorithm 1: The Novelty Search algorithm in abstract representational space.
  Initialization: transition buffer $B$, agent policy $\pi$;
  Sample $n_{init}$ initial random transitions, let $t = n_{init}$;
  while $t \le n_{max}$ do
    // We update our dynamics model and Q-function every $n_{freq}$ steps
    if $t \bmod n_{freq} == 0$ then
      while $j \le n_{iters}$ or $\mathcal{L}_\tau \le (\omega / \delta)^2$ do
        Sample batch of transitions $(s, a, r_{extr}, r_{intr}, \gamma, s') \in B$;
        Train dynamics model with $(s, a, r_{extr}, \gamma, s')$;
        Train Q-function with $(s, a, r_{extr} + r_{intr}, \gamma, s')$;
      end
      $\forall (s, a, r_{extr}, r_{intr}, \gamma, s') \in B$, set $r_{intr} \leftarrow \hat{\rho}_{\mathcal{X}}(\hat{e}(s'; \theta_{\hat{e}}))$;
    end
    $a_t \sim \pi(s_t)$;
    Take action in environment: $s_{t+1} \leftarrow \tau(s_t, a_t)$, $r_{t,extr} \leftarrow R(s_t, a_t)$, $\gamma_t \leftarrow G(s_t, a_t)$;
    Calculate intrinsic reward: $r_{t,intr} \leftarrow \hat{\rho}_{\mathcal{X}}(\hat{e}(s_{t+1}; \theta_{\hat{e}}))$;
    $B \leftarrow B \cup \{(s_t, a_t, r_{t,extr}, r_{t,intr}, \gamma_t, s_{t+1})\}$;
  end

4.2 Asymptotic behavior This reward function also exhibits favorable asymptotic behavior, as it decreases to 0 as most of the state space is visited. We show this in Theorem 1. Theorem 1. Assume we have a finite state space $\mathcal{S} \subseteq \mathbb{R}^d$, a history of states $B = (s_1, \ldots, s_N)$, an encoded state space $\mathcal{X} \subseteq \mathbb{R}^{n_{\mathcal{X}}}$, a deterministic mapping $f : \mathbb{R}^d \to \mathbb{R}^{n_{\mathcal{X}}}$, and a novelty reward defined as $\hat{\rho}_{\mathcal{X}}(x)$. With an optimal policy with respect to the rewards of the novelty heuristic, our agent will tend towards states with higher intrinsic rewards. If we assume a communicating MDP setting (Puterman, 1994), we have that $\lim_{N \to \infty} \hat{\rho}_{\mathcal{X}}(f(s)) = 0,\ \forall s \in \mathcal{S}$. Proof. We prove this theorem in Appendix E. 4.3 Combining model-free and model-based components for exploration policies Similarly to previous works (e.g. Oh et al., 2017; Chebotar et al., 2017), we use a combination of model-based planning with model-free Q-learning to obtain a good policy. We calculate rollout estimates of next states based on our transition model $\hat{\tau}$ and sum up the corresponding rewards, which we denote as $r : \mathcal{X} \times \mathcal{A} \to [0, R_{max}]$ and which can be a combination of both intrinsic and extrinsic rewards. We calculate expected returns based on the discounted rewards of our d-depth rollouts: $\hat{Q}_d(x, a) = \begin{cases} r(x, a) + \hat{\gamma}(x, a; \theta_{\hat{\gamma}}) \cdot \max_{a' \in \mathcal{A}} \hat{Q}_{d-1}(\hat{\tau}(x, a; \theta_{\hat{\tau}}), a'), & \text{if } d > 0 \\ Q(x, a; \theta_Q), & \text{if } d = 0 \end{cases}$ (8) Note that we simulate only the b-best options at each expansion step based on $Q(x, a; \theta_Q)$, where $b \le |\mathcal{A}|$.
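The two computational pieces of the resulting exploration loop, the k-NN novelty reward of Equation 7 and the depth-d rollout value of Equation 8 (together with the ε-greedy selection described next), can be sketched as follows; the brute-force nearest-neighbor search, the stand-in model callables, and the default b = 2 are our own simplifying assumptions (the paper uses full expansions, i.e. b = |A|).

```python
import random
import numpy as np

def novelty_reward(x, buffer_x, k=5):
    """Eq. 7: mean L2 distance from encoding x to its k nearest neighbours
    among previously visited encodings (rows of buffer_x)."""
    dists = np.linalg.norm(buffer_x - x, axis=1)
    k = min(k, len(dists))
    return float(np.sort(dists)[:k].mean())

def rollout_value(x, a, d, trans, reward, discount, q0, actions, b=2):
    """Eq. 8: depth-d rollout estimate of Q(x, a), expanding only the b actions
    ranked best by the model-free Q at each step (b = |actions| for full expansions)."""
    if d == 0:
        return q0(x, a)
    x_next = trans(x, a)
    best = sorted(actions, key=lambda a2: q0(x_next, a2), reverse=True)[:b]
    return reward(x, a) + discount(x, a) * max(
        rollout_value(x_next, a2, d - 1, trans, reward, discount, q0, actions, b)
        for a2 in best)

def epsilon_greedy_action(x, d, eps, trans, reward, discount, q0, actions, b=2):
    """Random action with probability eps, otherwise the action with the
    highest depth-d rollout value."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: rollout_value(x, a, d, trans, reward,
                                                    discount, q0, actions, b))

# Toy usage with dummy stand-ins for the learned encoder outputs and models.
buffer_x = np.random.randn(200, 3)               # encodings of visited states
x_new = np.random.randn(3)
print(novelty_reward(x_new, buffer_x, k=5))
print(epsilon_greedy_action(x_new, d=2, eps=0.1,
                            trans=lambda x, a: x + 0.1 * a,
                            reward=lambda x, a: novelty_reward(x, buffer_x),
                            discount=lambda x, a: 0.95,
                            q0=lambda x, a: float(a),
                            actions=[0, 1, 2, 3]))
```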
In this work, we only use full expansions. The estimated optimal action is given by $a^* = \operatorname{argmax}_{a \in \mathcal{A}} \hat{Q}_d(x, a)$. The actual action chosen at each step follows an ε-greedy strategy (ε ∈ [0, 1]), where the agent follows the estimated optimal action with probability 1 − ε and a random action with probability ε. 5 Experiments We conduct experiments on environments of varying difficulty. All experiments use a training scheme where we first train parameters to converge on an accurate representation of the already experienced transitions before taking an environment step. We optimize the losses (over multiple training iterations) given in Section 3. We discuss all environment-specific hyperparameters in Appendix J. 5.1 Labyrinth exploration We consider two 21 × 21 versions of the grid-world environment (Figure 7 in Appendix). The first is an open labyrinth grid-world, with no walls except for bordering walls. The second is a similar sized grid-world split into four connected rooms. In these environments the action space $\mathcal{A}$ is the set of four cardinal directions. These environments have no rewards or terminal states and the goal is to explore, agnostic of any task. We use two metrics to gauge exploration for this environment: the first is the ratio of states visited only once, the second is the proportion of total states visited. 5.1.1 Open labyrinth In the open labyrinth experiments (Figure 2a), we compare a number of variations of our approach with a random baseline and a count-based baseline (Bellemare et al., 2016) (as we can count states in this tabular setting). Variations of the policy include an argmax over state values (d = 0) and planning depths of d ∈ {1, 5}. All variations of our method outperform the two baselines in this task, with a slight increase in performance as planning depth d increases. In the open labyrinth, our agent is able to reach 100% of possible states (a total of 19 × 19 = 361 unique states) in approximately 800 steps, and 80% of possible states (≈ 290 states) in approximately 500 steps. These counts also include the $n_{init}$ number of random steps taken preceding training. Our agent is also able to learn highly interpretable abstract representations in very few environment steps (as shown in Figure 1a) as it explores its state space. In addition, after visiting most unseen states in its environment, our agent tends to uniformly explore its state space due to the nature of our novelty heuristic. A visualisation of this effect is available in Appendix H. 5.1.2 4-room labyrinth We now consider the 4-room labyrinth environment, a more challenging version of the open labyrinth environment (Figure 1a). As before, our encoder $\hat{e}$ is able to take a high-dimensional input and compress it to a low-dimensional representation. In the case of both labyrinth environments, the representation incorporates knowledge related to the position of the agent in 2-dimensions that we call primary features. In the 4-room labyrinth environment, it also has to learn other information such as agent surroundings (walls, open space), etc., but it does so only via the transition function learned through experience. We call this extraneous but necessary information secondary features. [Figure 2: Labyrinth results for both the open labyrinth and the 4-room labyrinth over 10 trials, showing means and standard deviations. (a) Results for the open labyrinth and different variations on policies compared to baselines. (b) Results for the 4-room labyrinth and different variations on policies compared to baselines.]
As most of these secondary features are encoded only in the dynamics model τ̂, our agent has to experience a transition in order to accurately represent both primary and secondary features. In this environment specifically, our dynamics model might over-generalize for walls between rooms and can sometimes fail at first to try out transitions in the passageways between rooms. However, because our agent tends to visit uniformly all the states that are reachable within the known rooms, the ε-greedy policy of our approach still ensures that the agent explores passageways efficiently, even in the cases where it has over-generalized to the surrounding walls. We run the same experiments on the 4-room labyrinth domain as we do on the open labyrinth and report results in Figure 2b. In both cases, our method outperforms the two baselines in this domain (random and count-based). 5.2 Control and sub-goal exploration In order to test the efficacy of our method beyond fixed mazes, we conduct experiments on the control-based environment Acrobot (Brockman et al., 2016) and a multi-step maze environment. Our method (with planning depth d = 5) is compared to strong exploration baselines with different archetypes: (1) prediction error incentivized exploration (Stadie et al., 2015); (2) hash count-based exploration (Tang et al., 2016); (3) Random Network Distillation (Osband et al., 2017); (4) Bootstrap DQN (BDQN, Osband et al. (2016)). In order to maintain consistency in our results, we use the same deep learning architectures throughout. Since we experiment in the deterministic setting, we exclude baselines that require some form of stochasticity or density estimation (for example, Shyam et al. (2018) and Osband et al. (2017)). A specificity of our approach is that we run multiple training iterations in between each environment step for all experiments, which allows the agent to use orders of magnitude fewer samples compared to most model-free RL algorithms (all within the same episode). 5.2.1 Acrobot We now test our approach on Acrobot (Brockman et al., 2016), which has a continuous state space unlike the labyrinth environment. We specifically choose this control task because the nature of this environment makes exploration inherently difficult. The agent only has control of the actuator for the inner joint and has to transfer enough energy into the second joint in order to swing it to its goal state. We modify this environment so that each episode is at most 3000 environment steps. While this environment does admit an extrinsic reward, we ignore these rewards entirely. To measure the performance of our exploration approach, we measure the average number of steps per episode that the agent takes to move its second joint above a given line, as per Figure 3a. To demonstrate the ability of our method to learn a low-dimensional abstract representation from pixel inputs, we use 4 consecutive pixel frames as input instead of the 6-dimensional full state vector. We use a 4-dimensional abstract representation of our state, and results from experiments are shown in Table 1. Our method reaches the goal state more efficiently than the baselines. 5.2.2 Multi-step goal maze We also test our method on a more complex maze with the sub-task of picking up a key that opens the door to an area with a reward. We build our environment with the Pycolab game engine (Stepleton, 2017). The environment can be seen in Figure 3b, where the input to our agent is a top-down view of the environment.
While this environment does admit an extrinsic reward (1 for picking up the key, 10 for reaching the final state), we ignore these rewards and only focus on intrinsic rewards. In our experiments, we show that our agent is able to learn an interpretable representation of the environment in a sample-efficient manner. Figure 1c shows an example of learnt representations in this domain after reaching the goal: we observe that positions in the maze correspond to a nearly identical structure in the lower-dimensional representation. Our representation also nicely captures internal state information (whether the key has been picked up) by separating the two sets of states (states where the key has been picked up and states where it has not). Similar positions in both sets of states are also mapped closely together in lower-dimensional space (i.e., (1, 1, with key) is close in $\ell_2$ to (1, 1, without key)), suggesting good generalization between similar states. 6 Related work The proposed exploration strategy falls under the category of directed exploration (Thrun, 1992) that makes use of past interactions with the environment to guide the discovery of new states. This work is inspired by the Novelty Search algorithm (Lehman and Stanley, 2011), which uses a nearest-neighbor scoring approach to gauge novelty in policy space. Our approach leverages this scoring to traverse dynamics space, which we motivate theoretically. Exploration strategies have been investigated with both model-free and model-based approaches. In Bellemare et al. (2016) and Ostrovski et al. (2017), a model-free algorithm provides the notion of novelty through a pseudo-count from an arbitrary density model that provides an estimate of how many times an action has been taken in similar states. Recently, Taiga et al. (2020) perform a thorough comparison between bonus-based exploration methods in model-free RL and show that architectural changes may be more important to agent performance (based on extrinsic rewards) than differing exploration strategies. Several exploration strategies have also used a model of the environment along with planning. Hester and Stone (2012) employ a two-part strategy to calculate intrinsic rewards, combining model uncertainty (from a random-forest based model) and a novelty reward based on L1 distance in feature space. A strategy investigated in Salge et al. (2014); Mohamed and Rezende (2015); Gregor et al. (2016); Chiappa et al. (2017) is to have the agent choose a sequence of actions by planning that leads to a representation of state as different as possible from the current state. In Pathak et al. (2017); Haber et al. (2018), the agent optimizes both a model of its environment and a separate model that predicts the error/uncertainty of its own model. Burda et al. (2018a) similarly use an intrinsic reward based on the uncertainty of the dynamics model. In Shyam et al. (2018), forward models of the environment are used to measure novelty derived from disagreement between future states. Still and Precup (2012) take an information theoretic approach to exploration, choosing a policy that maximizes the predictive power of the agent's own behavior and environment rewards. In Badia et al. (2020), an intrinsic reward from the k-NN over the agent's experience is also employed for exploration; they instead employ a self-supervised inverse dynamics model to learn the embeddings, as opposed to our approach.
Beyond improved efficiency in exploration, the interpretability of our approach could also lead to human-in-the-loop techniques (Mandel et al., 2017; Abel et al., 2017) for exploration, with the possibility for the agent to better utilize feedback thanks to the interpretability of its representation space. 7 Discussion In this paper, we formulate the task of dynamics learning in MBRL through the Information Bottleneck principle. We present methods to optimize the IB equation through low-dimensional abstract representations of state. We further develop a novelty score based on these learnt representations, which we leverage as an intrinsic reward to enable efficient exploration. By using this novelty score with a combination of model-based and model-free approaches for planning, we show more efficient exploration across multiple environments with our learnt representations and novelty rewards. As with most methods, our approach also has limitations. One limitation is the scalability of non-parametric methods such as k-NN density estimation, since our method scales linearly with the number of environment steps. A possible solution to this problem would be to use some sampling scheme to sample a fixed number of observations for the calculation of our novelty heuristic. Another issue that has arisen from using a very low-dimensional space to represent state is generalization. In some cases, the model can over-generalize, with the consequence that the low-dimensional representation loses information that is crucial for the exploration of the entire state space. An interesting direction for future work would be to find ways of incorporating secondary features such as those mentioned in Section 5.1.2. An interesting possibility would be to use a similar IB method, but using a full history of states as the conditioning variable. Beyond these points, we discuss limitations and potential improvements to this work in Appendix K. Finally, we show preliminary results of our method on a more complex task, Montezuma's Revenge, in Appendix G. With the theory and methods developed in this paper, we hope to see future work done on larger tasks with more complex environment dynamics. Broader Impact Algorithms for exploring an environment are a central piece of learning efficient policies for unknown sequential decision-making tasks. In this section, we discuss the wider impacts of our research both in the Machine Learning (ML) field and beyond. We first consider the benefits and risks of our method for ML applications. Efficient exploration in unknown environments has the potential to improve methods for tasks that require accurate knowledge of the environment. By exploring states that are more novel, agents gather a more robust dataset. For control tasks, our method improves the sample efficiency of learning by finding states that are more novel in terms of dynamics for use in training. Our learnt low-dimensional representation also helps the interpretability of our decision-making agents (as seen in Figure 1). More interpretable agents have potential benefits for many areas of ML, including allowing human understandability and intervention in human-in-the-loop approaches. With such applications in mind, we consider societal impacts of our method, along with potential future work that could be done to improve these societal impacts. One specific instance of how efficient exploration and environment modeling might help is in disaster relief settings.
With the advent of robotic systems for disaster area exploration, autonomous agents need to efficiently explore their unknown surroundings. Further research into scaling these MBRL approaches could allow such robotic agents to find points of interest (survivors, etc.) efficiently. One potential risk of our application is safe exploration. Our method finds and learns from states that are novel in terms of their dynamics. Without safety mechanisms, our agent could view potentially harmful scenarios as novel due to the rarity of such situations. For example, a car crash might be seen as a highly novel state. To mitigate this safety concern we look to the literature on Safety in RL (García and Fernández, 2015). In particular, developing a risk metric based on the interpretability of our approach may be an area of research worth pursuing. Acknowledgements We would like to thank Emmanuel Bengio for the helpful discussions and feedback on early drafts of this work. We would also like to thank all the reviewers for their constructive and helpful comments.
1. What is the focus and contribution of the paper regarding state space compression? 2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness? 3. What are the weaknesses of the paper, especially regarding the experiment section? 4. Do you have any concerns about the state space representation used in the paper? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper presents a method for compressing a large state space into a small one that is well-suited for novelty-based exploration. The state space is constructed by maximizing compression while retaining an accurate dynamics model, and additionally enforcing a single-step distance between successive states. This is evaluated qualitatively on some maze domains and then in terms of return on two smaller domains. Strengths The approach is relatively simple and clean, and works surprisingly well. This is an extremely important problem where existing approaches are typically a little ad-hoc and problem motivated (which is counter to the point). I think the qualitative demonstrations are very compelling. Weaknesses I have two main complaints. One (which can be resolved in rebuttal) is: what exactly is the state space for the environments in Figure 1? I think it's an image, but I don't see where that is explained. The second is that the quantitative experiments on RL tasks are limited to a very small maze and Acrobot, which borders on fatally insufficient.
NIPS
Title Novelty Search in Representational Space for Sample Efficient Exploration Abstract We present a new approach for efficient exploration which leverages a lowdimensional encoding of the environment learned with a combination of modelbased and model-free objectives. Our approach uses intrinsic rewards that are based on the distance of nearest neighbors in the low dimensional representational space to gauge novelty. We then leverage these intrinsic rewards for sampleefficient exploration with planning routines in representational space for hard exploration tasks with sparse rewards. One key element of our approach is the use of information theoretic principles to shape our representations in a way so that our novelty reward goes beyond pixel similarity. We test our approach on a number of maze tasks, as well as a control problem and show that our exploration approach is more sample-efficient compared to strong baselines. 1 Introduction In order to solve a task efficiently in Reinforcement Learning (RL), one of the main challenges is to gather informative experiences via an efficient exploration of the state space. A common approach to exploration is to leverage intrinsic rewards correlated with some metric or score for novelty (Schmidhuber, 2010; Stadie et al., 2015; Houthooft et al., 2016). With intrinsic rewards, an agent can be incentivized to efficiently explore its state space. A direct approach to calculating these novelty scores is to derive a reward based on the observations, such as a count-based reward (Bellemare et al., 2016; Ostrovski et al., 2017) or a prediction-error based reward (Burda et al., 2018b). However, an issue occurs when measuring novelty directly from the raw observations, as some information in pixel space (such as randomness or backgrounds) may be irrelevant. In this case, if an agent wants to efficiently explore its state space it should only focus on meaningful and novel information. In this work, we propose a method of sample-efficient exploration by leveraging intrinsic rewards in a meaningful latent state space. To build a meaningful state abstraction, we view Model-based RL (MBRL) from an information theoretic perspective - we optimize our dynamics learning through the Information Bottleneck (Tishby et al., 2000) principle. We also combine both model-based and model-free components through a joint representation. This method encodes high-dimensional observations into lower-dimensional representations such that states that are close in dynamics are brought close together in representation space (François-Lavet et al., 2018). We also add additional constraints to ensure that a measure of distance between abstract states is meaningful. We leverage these properties of our representation to formulate a novelty score based on Euclidean distance in low-dimensional representation space and we then use this score to generate intrinsic rewards that we can exploit for efficient exploration. One important element of our exploration algorithm is that we take a Model Predictive Control (MPC) approach (Garcia et al., 1989) and perform actions only after our model is sufficiently accurate (and hence ensure an accurate novelty heuristic). Through this training scheme, our agent is 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. also able to learn a meaningful representation of its state space in a sample-efficient manner. The code with all experiments is available 1. 
2 Problem setting An agent interacts with its environment over discrete timesteps, modeled as a Markov Decision Process (MDP), defined by the 6-tuple (S,S0,A, τ,R,G) (Puterman, 1994). In this setting, S is the state space, S0 is the initial state distribution, A is the discrete action space, τ : S × A → S is the transition function that is assumed deterministic (with the possibility of extension to stochastic environments with generative methods), R : S × A → R is the reward function (R = [−1, 1]), G : S × A → [0, 1) is the per timestep discount factor. At timestep t in state st ∈ S, the agent chooses an action at ∈ A based on policy π : S × A → [0, 1], such that at ∼ π(st, ·). After taking at, the agent is in state st+1 = τ(st, at) and receives reward rt ∼ R(st, at) and a discount factor γt ∼ G(st, at). Over n environment steps, we define the buffer of previously visited states as B = (s1, . . . , sn), where si ∈ S ∀i ∈ N. In RL, the usual objective is to maximize the sum of expected future rewards Vπ(s) = Eπ [ rt + ∑∞ i=1 (∏i−1 j=0 γt+j ) rt+i|s = st ] . To learn a policy π that maximizes the expected return, an RL agent has to efficiently explore its environment (reach novel states in as few steps as possible). In this paper, we consider tasks with sparse rewards or even no rewards, and are interested in exploration strategies that require as few steps as possible to explore the state space. 3 Abstract state representations We focus on learning a lower-dimensional representation of state when our state (or observations in the partially observable case (Kaelbling et al., 1998)) is high-dimensional (Dayan, 1993; Tamar et al., 2016; Silver et al., 2016; Oh et al., 2017; de Bruin et al., 2018; Ha and Schmidhuber, 2018; François-Lavet et al., 2018; Hafner et al., 2018; Gelada et al., 2019). 3.1 Information Bottleneck We first motivate our methods for model learning. To do so, we consider the Information Bottleneck (IB) (Tishby et al., 2000) principle. Let Z denote the original source message space and Z̃ denote its compressed representation. As opposed to traditional lossless compression where we seek to find corresponding encodings Z̃ that compresses all aspects of Z, in IB we seek to preserve only relevant information in Z̃ with regards to another relevance variable, Y . For example when looking to compress speech waveforms (Z) if our task at hand is speech recognition, then our relevance variable Y would be a transcript of the speech. Our representation Z̃ would only need to maximize relevant information about the transcript Y instead of its full form including tone, pitch, background noise etc. We can formulate this objective by minimizing the following functional with respect to p(z̃ | z): L(p(z̃ | z)) = I[Z; Z̃]− βI[Z̃;Y ] where I[·; ·] is the Mutual Information (MI) between two random variables. β is the Lagrange multiplier for the amount of information our encoding Z̃ is allowed to quantify about Y . This corresponds to a trade-off between minimizing the encoding rate I[Z; Z̃] and maximizing the mutual information between the encoding and our random variable Y . We now apply this principle to representation learning of state in MBRL. If our source message space is our state S′ and our encoded message is X ′, then to distill the most relevant information with regards to the dynamics of our environment one choice of relevance variable is {X,A}, i.e. our encoded state in the previous timestep together with the presence of an action. 
This gives us the functional L(p(x′ | s′)) = I[S′;X ′]− βI[X ′; {X,A}]. (1) In our work, we look to find methods to minimize this functional for an encoding that maximizes the predictive ability of our dynamics model. We first aim to minimize our encoding rate I[S′;X ′]. Since encoding rate is a measure of the amount of bits transmitted per message S′, representation dimension is analogous to number of bits per message. This principle of minimizing encoding rate guides our selection of representation dimension 1https://github.com/taodav/nsrs - for every environment, we try to choose the smallest representation dimension possible such that the representation can still encapsulate model dynamics as we understand them. For example, in a simple Gridworld example, we look to only encode agent position in the grid-world. Now let us consider the second term in Equation 1. Our goal is to learn an optimally predictive model of our environment. To do so we first consider the MI between the random variable denoting our state representation X , in the presence of the random variable representing actions A and the random variable denoting the state representation in the next timestep X ′ (Still, 2009). Note that MI is a metric and is symmetric: I[{X,A} ; X ′] = Ep(x′,x,a) [ log ( p(x′ | x, a) p(x′) )] = H[X ′]−H[X ′ | X,A] (2) This quantity is a measure of our dynamics model’s predictive ability. If we consider the two entropy terms (denoted H[·]), we see that H[X ′] constitutes the entropy of our state representation and H[X ′ | X,A] as the entropy of the next state X ′ given our current state X and an action A. Recall that we are trying to minimize I[X ′;S′] and maximize I[X ′; {X,A}] with respect to some encoding function X = e(S). In the next section, we describe our approach for this encoding function as well as dynamics learning in MBRL. 3.2 Encoding and dynamics learning For our purposes, we use a neural encoder ê : S → X parameterized by θê to map our highdimensional state space into lower-dimensional abstract representations, where X ⊆ RnX . The dynamics are learned via the following functions: a transition function τ̂ : X × A→ X parameterized by θτ̂ , a reward function r̂ : X ×A→ [−1, 1] parameterized by θr̂, and a per timestep discount factor function γ̂ : X × A → [0, 1) parameterized by θγ̂ . This discount factor is only learned to predict terminal states, where γ = 0. In order to leverage all past experiences, we use an off-policy learning algorithm that samples transition tuples (s, a, r, γ, s′) from a replay buffer. We first encode our current and next states with our encoder to get x← ê(s; θê), x′ ← ê(s′; θê). The Q-function is learned using the DDQN algorithm (van Hasselt et al., 2015), which uses the target: Y = r + γQ(ê(s′; θê−), argmax a′∈A Q(x′, a′; θQ); θQ−), where θQ− and θê− are parameters of an earlier buffered Q-function (or our target Q-function) and encoder respectively. The agent then minimizes the following loss: LQ(θQ) = (Q(x, a; θQ)− Y )2. We learn the dynamics of our environment through the following losses: LR(θê, θr̂) = |r − r̂(x, a; θr̂)|2 , LG(θê, θγ̂) = |γ − γ̂(x, a; θγ̂)|2 and our transition loss Lτ (θê, θτ̂ ) = ||[x+ τ̂(x, a; θτ̂ )]− x′||22. (3) Note that our transition function learns the difference (given an action) between previous state x and current state x′. By jointly learning the weights of the encoder and the different components, the abstract representation is shaped in a meaningful way according to the dynamics of the environment. 
In particular, by minimizing the loss given in Equation 3 with respect to the encoder parameters θê (or p(x | s)), we minimize our entropy H[X ′|X,A]. In order to maximize the entropy of our learnt abstracted state representations H[X ′], we minimize the expected pairwise Gaussian potential (Borodachov et al., 2019) between states: Ld1(θê) = Es1,s2∼p(s) [ exp(−Cd1||ê(s1; θê)− ê(s2; θê)||22) ] (4) with Cd1 as a hyperparameter. Losses in Equation 3 and Equation 4 are reminiscent of the modelbased losses in François-Lavet et al. (2018) and correspond respectively to the alignment and uniformity contrastive loss formulation in Wang and Isola (2020), where alignment ensures that similar states are close together (in encoded representation space) and uniformity ensures that all states are spread uniformly throughout this low-dimensional representation space. The losses Lτ (θê) and Ld1(θê) maximizes the I[{X,A};X ′] term and selecting smaller dimension for our representation minimizes I[X ′, S′]. Put together, our method is trying to minimize L(p(x′|s′)) as per Equation 1. 3.3 Distance measures in representational space For practical purposes, since we are looking to use a distance metric within X to leverage as a score for novelty, we ensure well-defined distances between states by constraining the `2 distance between two consecutive states: Lcsc(θê) = max(‖ê(s1; θe)− ê(s2; θe)‖2 − ω, 0) (5) where Lcsc is a soft constraint between consecutive states s1 and s2 that tends to enforce two consecutive encoded representations to be at a distance ω apart. We add Lcsc to ensure a well-defined `2 distance between abstract states for use in our intrinsic reward calculation (a discussion of this loss is provided in Appendix B). We discuss how we use ω to evaluate model accuracy for our MPC updates in Appendix A. Finally, we minimize the sum of all the aforementioned losses through gradient descent: L = LR(θê, θr̂) + LG(θê, θγ̂) + Lτ (θê, θτ̂ ) + LQ(θQ) + Ld1(θê) + Lcsc(θê). (6) Through these losses, the agent learns a low-dimensional representation of the environment that is meaningful in terms of the `2 norm in representation space. We then employ a planning technique that combines the knowledge of the model and the value function which we use to maximize intrinsic rewards, as detailed in the next section and Section 4.3. 4 Novelty Search in abstract representational space Our approach for exploration uses intrinsic motivation (Schmidhuber, 1990; Chentanez et al., 2005; Achiam and Sastry, 2017) where an agent rewards itself based on the fact that it gathers interesting experiences. In a large state space setting, states are rarely visited and the count for any state after n steps is almost always 0. While Bellemare et al. (2016) solves this issue with density estimation using pseudo-counts directly from the high-dimensional observations, we aim to estimate some function of novelty in our learnt lower-dimensional representation space. 4.1 Sparsity in representation space as a measure for novelty Through the minimization of Equation 1, states that are close together in dynamics are pushed close together in our abstract state space X . Ideally, we want an agent that efficiently explores the dynamics of its environment. To do so, we reward our agent for exploring areas in lower-dimensional representation space that are less visited and ideally as far apart from the dynamics that we currently know. 
Given a point x in representation space, we define a reward function that considers the sparsity of states around x - we do so with the average distance between x and its k-nearest-neighbors in its visitation history buffer B: ρ̂X (x) = 1 k k∑ i=1 d(x, xi), (7) where x =̇ ê(s; θê) is a given encoded state, k ∈ Z+, d(·, ·) is some distance metric in RnX and xi =̇ ê(si; θê), where si ∈ B for i = 1 . . . k are the k nearest neighbors (by encoding states in B to representational space) of x according to the distance metric d(·, ·). Implicit in this measure is the reliance on the agent’s visitation history buffer B. An important factor in this score is which distance metric to use. With the losses used in Section 3, we use `2 distance because of the structure imposed on the abstract state space with Equations 4 and 5. As we show in Appendix D, this novelty reward is reminiscent of recoding probabilities (Bellemare et al., 2016; Cover and Thomas, 2012) and is in fact inversely proportional to these probabilities, suggesting that our novelty heuristic estimates visitation count. This is also the same score used to gauge “sparseness” in behavior space in Lehman and Stanley (2011). With this reward function, we present the pseudo-code for our exploration algorithm in Algorithm 1. Algorithm 1: The Novelty Search algorithm in abstract representational space. 1 Initialization: transition buffer B, agent policy π; 2 Sample ninit initial random transitions, let t = ninit; 3 while t ≤ nmax do // We update our dynamics model and Q-function every nfreq steps 4 if t mod nfreq == 0 then 5 while j ≤ niters or Lτ ≤ ( ω δ )2 do 6 Sample batch of transitions (s, a, rextr, rintr, γ, s′) ∈ B; 7 Train dynamics model with (s, a, rextr, γ, s′); 8 Train Q-function with (s, a, rextr + rintr, γ, s′); 9 end 10 ∀(s, a, rextr, rintr, γ, s′) ∈ B, set rintr ← ρ̂X (ê(s′; θê)); 11 end 12 at ∼ π(st); 13 Take action in environment: st+1 ← τ(st, at), rt,extr ← R(st, at), γt ← G(st, at); 14 Calculate intrinsic reward: rt,intr ← ρ̂X (ê(st+1; θê)) 15 B ← B ∪ {(st, at, rt,extr, rt,intr, γt, st+1)}; 16 end 4.2 Asymptotic behavior This reward function also exhibits favorable asymptotic behavior, as it decreases to 0 as most of the state space is visited. We show this in Theorem 1. Theorem 1. Assume we have a finite state space S ⊆ Rd, history of states B = (s1, . . . , sN ), encoded state space X ⊆ RnX , deterministic mapping f : Rd → RnX and a novelty reward defined as ρ̂X (x). With an optimal policy with respect to the rewards of the novelty heuristic, our agent will tend towards states with higher intrinsic rewards. If we assume a communicating MDP setting (Puterman, 1994), we have that lim N→∞ ρ̂X (f(s)) = 0, ∀s ∈ S. Proof. We prove this theorem in Appendix E. 4.3 Combining model-free and model-based components for exploration policies Similarly to previous works (e.g. Oh et al., 2017; Chebotar et al., 2017), we use a combination of model-based planning with model-free Q-learning to obtain a good policy. We calculate rollout estimates of next states based on our transition model τ̂ and sum up the corresponding rewards, which we denote as r : X × A → [0, Rmax] and can be a combination of both intrinsic and extrinsic rewards. We calculate expected returns based on the discounted rewards of our d-depth rollouts: Q̂d(x, a) = r(x, a) + γ̂(x, a; θγ̂)× max a′∈A Q̂d−1(τ(x, a; θτ̂ ), a ′), if d > 0 Q(x, a; θQ), if d = 0 (8) Note that we simulate only b-best options at each expansion step based on Q(x, a; θQ), where b ≤ |A|. 
In this work, we only use full expansions. The estimated optimal action is given by a∗ = argmax a∈A Q̂d(x, a). The actual action chosen at each step follows an -greedy strategy ( ∈ [0, 1]), where the agent follows the estimated optimal action with probability 1 − and a random action with probability . 5 Experiments We conduct experiments on environments of varying difficulty. All experiments use a training scheme where we first train parameters to converge on an accurate representation of the already experienced transitions before taking an environment step. We optimize the losses (over multiple training iterations) given in Section 3. We discuss all environment-specific hyperparameters in Appendix J. 5.1 Labyrinth exploration We consider two 21 × 21 versions of the grid-world environment (Figure 7 in Appendix). The first is an open labyrinth grid-world, with no walls except for bordering walls. The second is a similar sized grid-world split into four connected rooms. In these environments the action spaceA is the set of four cardinal directions. These environments have no rewards or terminal states and the goal is to explore, agnostic of any task. We use two metrics to gauge exploration for this environment: the first is the ratio of states visited only once, the second is the proportion of total states visited. 5.1.1 Open labyrinth In the open labyrinth experiments (Figure 2a), we compare a number of variations of our approach with a random baseline and a count-based baseline (Bellemare et al., 2016) (as we can count states in this tabular setting). Variations of the policy include an argmax over state values (d = 0) and planning depths of d ∈ {1, 5}. All variations of our method outperform the two baselines in this task, with a slight increase in performance as planning depth d increases. In the open labyrinth, our agent is able to reach 100% of possible states (a total of 19 × 19 = 361 unique states) in approximately 800 steps, and 80% of possible states (≈ 290 states) in approximately 500 steps. These counts also include the ninit number of random steps taken preceding training. Our agent is also able to learn highly interpretable abstract representations in very few environment steps (as shown in Figure 1a) as it explores its state space. In addition, after visiting most unseen states in its environment, our agent tends to uniformly explore its state space due to the nature of our novelty heuristic. A visualisation of this effect is available in Appendix H. 5.1.2 4-room labyrinth We now consider the 4-room labyrinth environment, a more challenging version of the open labyrinth environment (Figure 1a). As before, our encoder ê is able to take a high-dimensional input and compress it to a low-dimensional representation. In the case of both labyrinth environments, the representation incorporates knowledge related to the position of the agent in 2-dimensions that we call primary features. In the 4-room labyrinth environment, it also has to learn other information (a) Results for open labyrinth and different variations on policies compared to baselines. (b) Results for the 4-room labyrinth and different variations on policies compared to baselines. Figure 2: Labyrinth results for both open labyrinth and 4-room labyrinth over 10 trials, showing mean and standard deviations. such as agent surroundings (walls, open space) etc., but it does so only via the transition function learned through experience. We call this extraneous but necessary information secondary features. 
As most of these secondary features are encoded only in the dynamics model τ̂ , our agent has to experience a transition in order to accurately represent both primary and secondary features. In this environment specifically, our dynamics model might over-generalize for walls between rooms and can sometimes fail at first to try out transitions in the passageways between rooms. However, because our agent tends to visit uniformly all the states that are reachable within the known rooms, the -greedy policy of our approach still ensures that the agent explores passageways efficiently even in the cases where it has over-generalized to the surrounding walls. We run the same experiments on the 4-room labyrinth domain as we do on the open labyrinth and report results in Figure 2b. In both cases, our method outperforms the two baselines in this domain (random and count-based). 5.2 Control and sub-goal exploration In order to test the efficacy of our method beyond fixed mazes, we conduct experiments on the control-based environment Acrobot (Brockman et al., 2016) and a multi-step maze environment. Our method (with planning depth d = 5) is compared to strong exploration baselines with different archetypes: 1. Prediction error incentivized exploration (Stadie et al., 2015) 2. Hash count-based exploration (Tang et al., 2016) 3. Random Network Distillation (Osband et al., 2017) 4. Bootstrap DQN (BDQN, Osband et al. (2016)) In order to maintain consistency in our results, we use the same deep learning architectures throughout. Since we experiment in the deterministic setting, we exclude baselines that require some form of stochasticity or density estimation as baselines (for example, Shyam et al. (2018) and Osband et al. (2017)). A specificity of our approach is that we run multiple training iterations in between each environment step for all experiments, which allows the agent to use orders of magnitude less samples as compared to most model-free RL algorithms (all within the same episode). 5.2.1 Acrobot We now test our approach on Acrobot (Brockman et al., 2016), which has a continuous state space unlike the labyrinth environment. We specifically choose this control task because the nature of this environment makes exploration inherently difficult. The agent only has control of the actuator for the inner joint and has to transfer enough energy into the second joint in order to swing it to its goal state. We modify this environment so that each episode is at most 3000 environment steps. While this environment does admit an extrinsic reward, we ignore these rewards entirely. To measure the performance of our exploration approach, we measure the average number of steps per episode that the agent takes to move its second joint above a given line as per Figure 3a. To demonstrate the ability of our method to learn a low dimensional abstract representation from pixel inputs, we use 4 consecutive pixel frames as input instead of the 6-dimensional full state vector. We use a 4-dimensional abstract representation of our state and results from experiments are shown in Table 1. Our method reaches the goal state more efficiently than the baselines. 5.2.2 Multi-step goal maze We also test our method on a more complex maze with the sub-task of picking up a key that opens the door to an area with a reward. We build our environment with the Pycolab game engine (Stepleton, 2017). The environment can be seen in Figure 3b, where the input to our agent is a top-down view of the environment. 
While this environment does admit an extrinsic reward (1 for picking up the key, 10 for reaching the final state), we ignore these rewards and only focus on intrinsic rewards. In our experiments, we show that our agent is able to learn an interpretable representation of the environment in a sample-efficient manner. Figure 1c shows an example of learnt representations in this domain after reaching the goal - we observe that positions in the maze correspond to a nearly identical structure in the lower-dimensional representation. Our representation also nicely captures internal state information (whether the key has been picked up) by separating the two sets of states (states when the key has been picked up and states when the key has not been picked up). Similar positions in both sets of states are also mapped closely together in lower-dimensional space (ie. (1, 1, with key) is close in `2 to (1, 1, without key)), suggesting good generalization between similar states. 6 Related work The proposed exploration strategy falls under the category of directed exploration (Thrun, 1992) that makes use of the past interactions with the environment to guide the discovery of new states. This work is inspired by the Novelty Search algorithm (Lehman and Stanley, 2011) that uses a nearest-neighbor scoring approach to gauge novelty in policy space. Our approach leverages this scoring to traverse dynamics space, which we motivate theoretically. Exploration strategies have been investigated with both model-free and model-based approaches. In Bellemare et al. (2016) and Ostrovski et al. (2017), a model-free algorithm provides the notion of novelty through a pseudocount from an arbitrary density model that provides an estimate of how many times an action has been taken in similar states. Recently, Taiga et al. (2020) do a thorough comparison between bonusbased exploration methods in model-free RL and show that architectural changes may be more important to agent performance (based on extrinsic rewards) as opposed to differing exploration strategies. Several exploration strategies have also used a model of the environment along with planning. Hester and Stone (2012) employ a two-part strategy to calculate intrinsic rewards, combining model uncertainty (from a random-forest based model) and a novelty reward based on L1 distance in feature space. A strategy investigated in Salge et al. (2014); Mohamed and Rezende (2015); Gregor et al. (2016); Chiappa et al. (2017) is to have the agent choose a sequence of actions by planning that leads to a representation of state as different as possible to the current state. In Pathak et al. (2017); Haber et al. (2018), the agent optimizes both a model of its environment and a separate model that predicts the error/uncertainty of its own model. Burda et al. (2018a) similarly uses an intrinsic reward based on the uncertainty of its dynamics model. In Shyam et al. (2018), forward models of the environment are used to measure novelty derived from disagreement between future states. Still and Precup (2012) take an information theoretic approach to exploration, that chooses a policy which maximizes the predictive power of the agent’s own behavior and environment rewards. In Badia et al. (2020), an intrinsic reward from the k-NN over the agent’s experience is also employed for exploration. They instead employ a self-supervised inverse dynamics model to learn the embeddings as opposed to our approach. 
Beyond improved efficiency in exploration, the interpretability of our approach could also lead to human-in-the-loop techniques (Mandel et al., 2017; Abel et al., 2017) for exploration, with the possibility for the agent to better utilize human feedback informed by the interpretability of its representation space. 7 Discussion In this paper, we formulate the task of dynamics learning in MBRL through the Information Bottleneck principle. We present methods to optimize the IB equation through low-dimensional abstract representations of state. We further develop a novelty score based on these learnt representations that we leverage as an intrinsic reward that enables efficient exploration. By using this novelty score with a combination of model-based and model-free approaches for planning, we show more efficient exploration across multiple environments with our learnt representations and novelty rewards. As with most methods, our approach also has limitations. One limitation is the scalability of non-parametric methods such as k-NN density estimation, since the cost of our novelty computation scales linearly with the number of environment steps. A possible solution to this problem would be to use a sampling scheme that selects a fixed number of observations for the calculation of our novelty heuristic. Another issue that has arisen from using a very low-dimensional space to represent state is generalization. In some cases, the model can over-generalize, with the consequence that the low-dimensional representation loses information that is crucial for the exploration of the entire state space. An interesting direction for future work would be to find ways of incorporating secondary features such as those mentioned in Section 5.1.2. An interesting possibility would be to use a similar IB method, but using a full history of states as the conditioning variable. Beyond these points, we discuss limitations and potential improvements to this work in Appendix K. Finally, we show preliminary results of our method on a more complex task - Montezuma’s Revenge - in Appendix G. With the theory and methods developed in this paper, we hope to see future work done on larger tasks with more complex environment dynamics. Broader Impact Algorithms for exploring an environment are a central piece of learning efficient policies for unknown sequential decision-making tasks. In this section, we discuss the wider impacts of our research both in the Machine Learning (ML) field and beyond. We first consider the benefits and risks of our method for ML applications. Efficient exploration in unknown environments has the potential to improve methods for tasks that require accurate knowledge of their environment. By exploring states that are more novel, agents collect a more robust dataset. For control tasks, our method improves the sample efficiency of learning by finding more novel states in terms of dynamics for use in training. Our learnt low-dimensional representation also helps the interpretability of our decision-making agents (as seen in Figure 1). More interpretable agents have potential benefits for many areas of ML, including allowing human understandability and intervention in human-in-the-loop approaches. With such applications in mind, we consider societal impacts of our method, along with potential future work that could improve these impacts. One specific instance of how efficient exploration and environment modeling might help is in disaster relief settings.
With the advent of robotic systems for disaster area exploration, autonomous agents need to efficiently explore their unknown surroundings. Further research into scaling these MBRL approaches could allow such robotic agents to find points of interest (survivors, etc.) efficiently. One potential risk of our application is safe exploration. Our method finds and learns from states that are novel in terms of their dynamics. Without safety mechanisms, our agent could view potentially harmful scenarios as novel due to the rarity of such situations. For example, a car crash might be seen as a highly novel state. To mitigate this safety concern, we look to the literature on Safety in RL (García and Fernández, 2015). In particular, developing a risk metric based on the interpretability of our approach may be an area of research worth pursuing. Acknowledgements We would like to thank Emmanuel Bengio for the helpful discussions and feedback on early drafts of this work. We would also like to thank all the reviewers for their constructive and helpful comments.
1. What is the main contribution of the paper regarding learning low-dimensional encodings of states? 2. What are the strengths of the proposed approach, particularly in terms of its ability to capture global geometric structure and environment dynamics? 3. What are the weaknesses of the paper, especially regarding the choice of baselines and the complexity of the tasks used for evaluation? 4. How could the authors improve their experiments to provide more convincing evidence for the effectiveness of their method? 5. Are there any other approaches to intrinsic motivation that the authors could consider as stronger baselines?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This work presents a novel approach for learning low-dimensional encodings of states motivated by the information bottleneck principle. These encodings are leveraged simultaneously for generating novelty-based intrinsic rewards, as well as planning. The method is assessed in terms of state coverage and performance on several toy environments and shown to exhibit consistent improvement over the baselines. Strengths The visualizations of the abstract state representations learned in selected gridworld domains demonstrate a promising ability to capture global geometric structure and environment dynamics. The experiments provide convincing evidence that the proposed novelty search and planning improves state coverage and performance in simple cases. Weaknesses I think the general method is interesting, but the paper would greatly benefit from experiments on some harder tasks and comparison to stronger baselines. In particular: - The authors include visualizations on Montezuma’s Revenge but do not report performance. Including this would provide more convincing evidence that the proposed method can generalize to more complex domains. - It would be interesting to evaluate this method in the presence of noisy observations or predictable but uncontrollable dynamics (e.g., a clock in the observation). - I would expect both Random Network Distillation and ICM (Curiosity-driven Exploration by Self-supervised Prediction) to provide stronger baselines for intrinsic motivation than the prediction error or hash-count methods reported in the paper.
NIPS
Title Consistent Non-Parametric Methods for Maximizing Robustness Abstract Learning classifiers that are robust to adversarial examples has received a great deal of recent attention. A major drawback of the standard robust learning framework is that there is an artificial robustness radius r that applies to all inputs. This ignores the fact that data may be highly heterogeneous, in which case it is plausible that robustness regions should be larger in some regions of data, and smaller in others. In this paper, we address this limitation by proposing a new limit classifier, called the neighborhood optimal classifier, that extends the Bayes optimal classifier outside its support by using the label of the closest in-support point. We then argue that this classifier maximizes the size of its robustness regions subject to the constraint of having accuracy equal to the Bayes optimal. We then present sufficient conditions under which general non-parametric methods that can be represented as weight functions converge towards this limit, and show that both nearest neighbors and kernel classifiers satisfy them under certain conditions. 1 Introduction Adversarially robust classification, which has been of much recent interest, is typically formulated as follows. We are given data drawn from an underlying distribution D, a metric d, as well as a pre-specified robustness radius r. We say that a classifier c is r-robust at an input x if it predicts the same label on a ball of radius r around x. Our goal in robust classification is to find a classifier c that maximizes astuteness, which is defined as accuracy on those examples where c is also r-robust. While this formulation has inspired a great deal of recent work, both theoretical and empirical [5, 17, 19, 20, 26, 15, 18, 21, 22, 23, 30], a major limitation is that enforcing a pre-specified robustness radius r may lead to sub-optimal accuracy and robustness. To see this, consider what an ideally robust classifier would be for the example in Figure 1. For simplicity, suppose that we know the data distribution. In this case, a classifier that has a uniformly large robustness radius r will misclassify some points from the blue cluster on the left, leading to lower accuracy. This is illustrated in panel (a), in which a large robustness radius leads to intersecting robustness regions. On the other hand, in panel (b), the blue cluster on the right is highly separated from the red cluster, and could be accurately classified with a high margin. But this will not happen if the robustness radius is set small enough to avoid the problems posed in panel (a). Thus, enforcing a fixed robustness radius that applies to the entire dataset may lead to lower accuracy and lower robustness. In this work, we propose an alternative formulation of robust classification that ensures that in the large sample limit, there is no robustness-accuracy trade-off, and that regions of space with higher separation are classified more robustly. An extra advantage is that our formulation is achievable by existing methods. In particular, we show that two very common non-parametric algorithms – nearest neighbors and kernel classifiers – achieve these properties in the large sample limit. Our formulation is built on the notion of a new large-sample limit. In the standard statistical learning framework, the large-sample ideal is the Bayes optimal classifier, which maximizes accuracy on the data distribution and is undefined outside of it.
Since this is not always robust with radius r, prior work introduces the notion of an r-optimal classifier [31] that maximizes accuracy on points where it is also r-robust. However, this classifier also suffers from the same challenges as the example in Figure 1. We depart from both by introducing a new limit that we call the neighborhood preserving Bayes optimal classifier, described as follows. Given an input x that lies in the support of the data distribution D, it predicts the same label as the Bayes optimal. On an x outside the support, it outputs the prediction of the Bayes optimal on the nearest neighbor of x within the support of D. The first property ensures that there is no loss of accuracy – since it always agrees with the Bayes optimal within the data distribution. The second ensures higher robustness in regions that are better separated. Our goal is now to design classifiers that converge to the neighborhood preserving Bayes optimal in the large sample limit; this ensures that with enough data, the classifier will have accuracy approaching that of the Bayes optimal, as well as higher robustness where possible without sacrificing accuracy. We next investigate how to design classifiers with this convergence property. Our starting point is classical statistical theory [25] that shows that a class of methods known as weight functions will converge to the Bayes optimal in the large sample limit provided certain conditions hold; these include k-nearest neighbors under certain conditions on k and n, certain kinds of decision trees, as well as kernel classifiers. Through an analysis of weight functions, we next establish precise conditions under which they converge to the neighborhood preserving Bayes optimal in the large sample limit. As expected, these are stronger than standard convergence to the Bayes optimal. In the large sample limit, we show that k_n-nearest neighbors converge to the neighborhood preserving Bayes optimal provided k_n = ω(log n), and kernel classifiers converge to the neighborhood preserving Bayes optimal provided certain technical conditions hold (such as the bandwidth shrinking sufficiently slowly). By contrast, certain types of histograms do not converge to the neighborhood preserving Bayes optimal, even if they do converge to the Bayes optimal. We round these off with a lower bound that shows that for nearest neighbors, the condition k_n = ω(log n) is tight. In particular, for k_n = O(log n), there exist distributions for which k_n-nearest neighbors provably fails to converge towards the neighborhood preserving Bayes optimal (despite converging towards the standard Bayes optimal). In summary, the contributions of the paper are as follows. First, we propose a new large sample limit, the neighborhood preserving Bayes optimal, and a new formulation for robust classification. We then establish conditions under which weight functions, a class of non-parametric methods, converge to the neighborhood preserving Bayes optimal in the large sample limit. Using these conditions, we show that k_n-nearest neighbors satisfy them when k_n = ω(log n), and kernel classifiers satisfy them provided the kernel function K has faster than polynomial decay and the bandwidth parameter h_n decreases sufficiently slowly. To complement these results, we also include negative examples of non-parametric classifiers that do not converge. We provide an example where histograms do not converge to the neighborhood preserving Bayes optimal with increasing n.
We also show a lower bound for nearest neighbors, indicating that k_n = ω(log n) is both necessary and sufficient for convergence towards the neighborhood preserving Bayes optimal. Our results indicate that the neighborhood preserving Bayes optimal formulation shows promise and has some interesting theoretical properties. We leave open the question of coming up with other alternative formulations that can better balance both robustness and accuracy for all kinds of data distributions, as well as being achievable algorithmically. We believe that addressing this would greatly help address the challenges in adversarial robustness. 2 Preliminaries We consider binary classification over R^d × {±1}, and let ρ denote any distance metric on R^d. We let µ denote the measure over R^d corresponding to the probability distribution over which instances x ∈ R^d are drawn. Each instance x is then labeled as +1 with probability η(x) and −1 with probability 1 − η(x). Together, µ and η comprise our data distribution D = (µ, η) over R^d × {±1}. For comparison to the robust case, for a classifier f : R^d → {±1} and a distribution D over R^d × {±1}, it will be instructive to consider its accuracy, denoted A(f, D), which is defined as the fraction of examples from D that f labels correctly. Accuracy is maximized by the Bayes optimal classifier, which we denote by g. It can be shown that for any x ∈ supp(µ), g(x) = 1 if η(x) ≥ 1/2, and g(x) = −1 otherwise. Our goal is to build classifiers R^d → {±1} that are both accurate and robust to small perturbations. For any example x, perturbations to it are constrained to taking place in the robustness region of x, denoted U_x. We will let U = {U_x : x ∈ R^d} denote the collection of all robustness regions. We say that a classifier f : R^d → {±1} is robust at x if for all x′ ∈ U_x, f(x′) = f(x). Combining robustness and accuracy, we say that a classifier is astute at a point x if it is both accurate and robust. Formally, we have the following definition. Definition 1. A classifier f : R^d → {±1} is said to be astute at (x, y) with respect to robustness collection U if f(x) = y and f is robust at x with respect to U. If D is a data distribution over R^d × {±1}, the astuteness of f over D with respect to U, denoted A_U(f, D), is the fraction of examples (x, y) ∼ D for which f is astute at (x, y) with respect to U. Thus A_U(f, D) = P_{(x,y)∼D}[f(x′) = y, ∀x′ ∈ U_x]. Non-parametric Classifiers We now briefly review several kinds of non-parametric classifiers that we will consider throughout this paper. We begin with weight functions, which are a general class of non-parametric algorithms that encompass many classic algorithms, including nearest neighbors and kernel classifiers. Weight functions are built from training sets S = {(x_1, y_1), (x_2, y_2), . . . , (x_n, y_n)} by assigning a function w_i^S : R^d → [0, 1] that essentially scores how relevant the training point (x_i, y_i) is to the example being classified. The functions w_i^S are allowed to depend on x_1, . . . , x_n but must be independent of the labels y_1, . . . , y_n. Given these functions, a point x is classified by just checking whether ∑_i y_i w_i^S(x) ≥ 0 or not. If it is nonnegative, we output +1 and otherwise −1. A complete description of weight functions is included in the appendix. Next, we enumerate several common non-parametric classifiers that can be construed as weight functions. Details can be found in the appendix.
Histogram classifiers partition the domain R^d into cells recursively by splitting cells that contain a sufficiently large number of points x_i. This corresponds to a weight function in which w_i^S(x) = 1/k_x if x_i is in the same cell as x, where k_x denotes the number of points in the cell containing x. k_n-nearest neighbors corresponds to a weight function in which w_i^S(x) = 1/k_n if x_i is one of the k_n nearest neighbors of x, and w_i^S(x) = 0 otherwise. Kernel-similarity classifiers are weight functions built from a kernel function K : R_{≥0} → R_{≥0} and a window size sequence (h_n)_{n=1}^∞ such that w_i^S(x) ∝ K(ρ(x, x_i)/h_n) (we normalize by dividing by ∑_{j=1}^n K(ρ(x, x_j)/h_n)). 3 The Neighborhood preserving Bayes optimal classifier Robust classification is typically studied by setting the robustness regions, U = {U_x}_{x∈R^d}, to be balls of radius r centered at x, U_x = {x′ : ρ(x, x′) ≤ r}. The quantity r is the robustness radius, and is typically set by the practitioner (before any training has occurred). This method has a limitation with regard to trade-offs between accuracy and robustness. To increase the margin or robustness, we must have a large robustness radius (thus allowing us to defend from larger adversarial attacks). However, with large robustness radii, this can come at a cost of accuracy, as it is not possible to robustly give different labels to points with intersecting robustness regions. For an illustration, consider Figure 1. Here we consider a data distribution D = (µ, η) in which the blue regions denote all points with η(x) > 0.5 (and thus should be labeled +), and the red regions denote all points with η(x) < 0.5 (and thus should be labeled −). Observe that it is not possible to be simultaneously accurate and robust at points A, B while enforcing a large robustness radius, as demonstrated by the intersecting balls. While this can be resolved by using a smaller radius, this results in losing out on potential robustness at point C. In principle, we should be able to afford a large margin of robustness about C due to its relatively far distance from the red regions. Motivated by this issue, we seek to find a formalism for robustness that allows us to simultaneously avoid paying for any accuracy-robustness trade-offs and adaptively size robustness regions (thus allowing us to defend against a larger range of adversarial attacks at points that are located in more homogeneous zones of the distribution support). To approach this, we will first provide an ideal limit object: a classifier that has the same accuracy as the Bayes optimal (thus meeting our first criterion) and that has good robustness properties. We call this the neighborhood preserving Bayes optimal classifier, defined as follows. Definition 2. Let D = (µ, η) be a distribution over R^d × {±1}. Then the neighborhood preserving Bayes optimal classifier of D, denoted g_neighbor, is the classifier defined as follows. Let µ^+ = {x : η(x) ≥ 1/2} and µ^− = {x : η(x) < 1/2}. Then for any x ∈ R^d, g_neighbor(x) = +1 if ρ(x, µ^+) ≤ ρ(x, µ^−), and g_neighbor(x) = −1 otherwise. This classifier can be thought of as the most robust classifier that matches the accuracy of the Bayes optimal. We call it neighborhood preserving because it extends the Bayes optimal classifier into a local neighborhood about every point in the support. For an illustration, refer to Figure 2, which plots the decision boundary of the neighborhood preserving Bayes optimal for an example distribution.
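To ground the weight-function view above, the following sketch implements the k_n-nearest-neighbor and kernel-similarity weights together with the sign-of-weighted-sum prediction rule. It is a minimal illustration of the definitions as stated; the helper names, the Gaussian kernel choice, and the toy data are our own assumptions rather than details from the paper.

```python
import numpy as np

def knn_weights(x, X_train, k):
    """w_i^S(x) = 1/k for the k nearest training points, 0 otherwise."""
    dists = np.linalg.norm(X_train - x, axis=1)
    w = np.zeros(len(X_train))
    w[np.argsort(dists)[:k]] = 1.0 / k
    return w

def kernel_weights(x, X_train, h, K=lambda t: np.exp(-t ** 2)):
    """w_i^S(x) proportional to K(rho(x, x_i) / h), normalized to sum to one."""
    raw = K(np.linalg.norm(X_train - x, axis=1) / h)
    return raw / raw.sum()

def weight_function_predict(x, X_train, y_train, weight_fn):
    """Output +1 if sum_i y_i * w_i^S(x) >= 0 and -1 otherwise (labels in {+1, -1})."""
    return 1 if np.dot(y_train, weight_fn(x, X_train)) >= 0 else -1

# Hypothetical usage on toy data:
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1, 1, -1, -1])
x0 = np.array([0.4, 0.1])
print(weight_function_predict(x0, X, y, lambda q, Xt: knn_weights(q, Xt, k=3)))
print(weight_function_predict(x0, X, y, lambda q, Xt: kernel_weights(q, Xt, h=0.5)))
```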
Next, we turn our attention towards measuring its robustness, which must be done with respect to some set of robustness regions U = {U_x}. While these regions U_x can be nearly arbitrary, we seek regions U_x such that A_U(g_neighbor, D) = A(g, D) (our astuteness equals the maximum possible accuracy) and the U_x are “as large as possible" (representing large robustness). To this end, we propose the following regions. Definition 3. Let D = (µ, η) be a data distribution over R^d × {±1}. Let µ^+ = {x : η(x) > 1/2}, µ^− = {x : η(x) < 1/2}, and µ^{1/2} = {x : η(x) = 1/2}. For x ∈ µ^+, we define the neighborhood preserving robustness region, denoted V_x, as V_x = {x′ : ρ(x, x′) < ρ(µ^− ∪ µ^{1/2}, x′)}. It consists of all points that are closer to x than they are to µ^− ∪ µ^{1/2} (points oppositely labeled from x). We can use a similar definition for x ∈ µ^−. Finally, if x ∈ µ^{1/2}, we simply set V_x = {x}. These robustness regions take advantage of the structure of the neighborhood preserving Bayes optimal. They can essentially be thought of as regions that maximally extend from any point x in the support of D to the decision boundary of the neighborhood preserving Bayes optimal. We include an illustration of the regions V_x for an example distribution in Figure 2. As a technical note, for x ∈ supp(D) with η(x) = 0.5, we assign a trivial robustness region. The rationale for doing this is that η(x) = 0.5 is an edge case whose classification is arbitrary, and consequently a robustness region at such a point is both arbitrary and difficult to enforce. We now formalize the robustness and accuracy guarantees of the neighborhood preserving Bayes optimal classifier with the following two results. Theorem 4. (Accuracy) Let D be a data distribution. Let V denote the collection of neighborhood preserving robustness regions, and let g denote the Bayes optimal classifier. Then the neighborhood preserving Bayes optimal classifier, g_neighbor, satisfies A_V(g_neighbor, D) = A(g, D), where A(g, D) denotes the accuracy of the Bayes optimal. Thus, g_neighbor maximizes accuracy. Theorem 5. (Robustness) Let D be a data distribution, let f be a classifier, and let U be a set of robustness regions. Suppose that A_U(f, D) = A(g, D), where g denotes the Bayes optimal classifier. Then there exists x ∈ supp(D) such that V_x ⊄ U_x, where V_x denotes the neighborhood preserving robustness region about x. In particular, we cannot have V_x be a strict subset of U_x for all x. Theorem 4 shows that the neighborhood preserving Bayes optimal classifier achieves maximal accuracy, while Theorem 5 shows that achieving strictly higher robustness (while maintaining accuracy) is not possible; while it is possible to make accurate classifiers which have higher robustness than g_neighbor in some regions of space, it is not possible for this to hold across all regions. Thus, the neighborhood preserving Bayes optimal classifier can be thought of as a local maximum to the constrained optimization problem of maximizing robustness subject to having maximum (equal to the Bayes optimal) accuracy. 3.1 Neighborhood Consistency Having defined the neighborhood preserving Bayes optimal classifier, we now turn our attention towards building classifiers that converge towards it. Before doing this, we must precisely define what it means to converge. Intuitively, this consists of building classifiers whose robustness regions “approach" the robustness regions of the neighborhood preserving Bayes optimal classifier. This motivates the definition of partial neighborhood preserving robustness regions. Definition 6.
Let 0 < κ < 1 be a real number, and let D = (µ, η) be a data distribution over R^d × {±1}. Let µ^+ = {x : η(x) > 1/2}, µ^− = {x : η(x) < 1/2}, and µ^{1/2} = {x : η(x) = 1/2}. For x ∈ µ^+, we define the partial neighborhood preserving robustness region, denoted V_x^κ, as V_x^κ = {x′ : ρ(x, x′) < κ ρ(µ^− ∪ µ^{1/2}, x′)}. It consists of all points that are closer to x than they are to µ^− ∪ µ^{1/2} (points oppositely labeled from x) by a factor of κ. We can use a similar definition for x ∈ µ^−. Finally, if η(x) = 1/2, we simply set V_x^κ = {x}. Observe that V_x^κ ⊂ V_x for all 0 < κ < 1, and thus being robust with respect to V_x^κ is a milder condition than being robust with respect to V_x. Using this notion, we can now define neighborhood consistency. Definition 7. A learning algorithm A is said to be neighborhood consistent if the following holds for any data distribution D. For any 0 < ε, δ, κ < 1, there exists N such that for all n ≥ N, with probability at least 1 − δ over S ∼ D^n, A_{V^κ}(A_S, D) ≥ A(g, D) − ε, where g denotes the Bayes optimal classifier and A_S denotes the classifier learned by algorithm A from dataset S. This condition essentially says that the astuteness of the classifier learned by the algorithm converges towards the accuracy of the Bayes optimal classifier. Furthermore, we stipulate that this holds as long as the astuteness is measured with respect to some V^κ. Observe that as κ → 1, these regions converge towards the neighborhood preserving robustness regions, thus giving us a classifier with robustness effectively equal to that of the neighborhood preserving Bayes optimal classifier. 4 Neighborhood Consistent Non-Parametric Classifiers Having defined neighborhood consistency, we turn to the following question: which non-parametric algorithms are neighborhood consistent? Our starting point will be the standard literature for the convergence of non-parametric classifiers with regard to accuracy. We begin by considering the standard conditions for k_n-nearest neighbors to converge (in accuracy) towards the Bayes optimal. k_n-nearest neighbors is consistent if and only if the following two conditions are met: lim_{n→∞} k_n = ∞, and lim_{n→∞} k_n/n = 0. The first condition guarantees that each point is classified by using an increasing number of nearest neighbors (thus making the probability of a misclassification small), and the second condition guarantees that each point is classified using only points very close to it. We will refer to the first condition as precision, and the second condition as locality. A natural question is whether the same principles suffice for neighborhood consistency as well. We begin by showing that without any additional constraints, the answer is no. Theorem 8. Let D = (µ, η) be the data distribution where µ denotes the uniform distribution over [0, 1] and η is defined as η(x) = x. Over this space, let ρ be the Euclidean distance metric. Suppose k_n = O(log n) for 1 ≤ n < ∞. Then k_n-nearest neighbors is not neighborhood consistent with respect to D. The issue in the example above is that for smaller k_n, k_n-nearest neighbors lacks sufficient precision. For neighborhood consistency, points must be labeled using even more training points than are needed for accuracy. This is because the classifier must be uniformly correct across the entirety of V_x^κ. Thus, to build neighborhood consistent classifiers, we must bolster the precision beyond the standard amount used for standard consistency. To do this, we begin by introducing splitting numbers, a useful tool for bolstering the precision of weight functions.
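Before moving on to splitting numbers, the short snippet below gives one concrete reading of Definition 6: it checks membership in the partial robustness region V_x^κ when the oppositely labeled support is represented by a finite point set (roughly how the regions are approximated for the synthetic data in Section 5). The set representation, function names, and example points are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def dist_to_set(x, points):
    """rho(S, x): Euclidean distance from x to the closest point of a finite set S."""
    return float(np.min(np.linalg.norm(points - x, axis=1)))

def in_partial_region(x, x_prime, opposite_support, kappa):
    """Check x' in V_x^kappa, i.e. rho(x, x') < kappa * rho(opposite_support, x').

    `opposite_support` is a finite point-set stand-in for mu^- union mu^{1/2}
    when x lies in mu^+ (and symmetrically when x lies in mu^-)."""
    return np.linalg.norm(x - x_prime) < kappa * dist_to_set(x_prime, opposite_support)

# Hypothetical 1-D example: x in mu^+ near 0, negative support near 1.
mu_minus = np.array([[1.0], [1.2]])
x = np.array([0.0])
print(in_partial_region(x, np.array([0.3]), mu_minus, kappa=0.5))  # True: 0.3 < 0.5 * 0.7
print(in_partial_region(x, np.array([0.8]), mu_minus, kappa=0.5))  # False: 0.8 >= 0.5 * 0.2
```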
4.1 Splitting Numbers We will now generalize beyond nearest neighbors to consider weight functions. Doing so will allow us to simultaneously analyze nearest neighbors and kernel classifiers. To do so, we must first rigorously substantiate our intuitions about increasing precision into concrete requirements. This will require several technical definitions. Definition 9. Let µ be a probability measure over R^d. For any x ∈ R^d, the probability radius r_p(x) is the smallest radius for which B(x, r_p(x)) has probability mass at least p. More precisely, r_p(x) = inf{r : µ(B(x, r)) ≥ p}. Definition 10. Let W be a weight function and let S = {x_1, x_2, . . . , x_n} be any finite subset of R^d. For any x ∈ R^d, α ≥ 0, and 0 ≤ β ≤ 1, let W_{x,α,β} = {i : ρ(x, x_i) ≤ α, w_i^S(x) ≥ β}. Then the splitting number of W with respect to S, denoted T(W, S), is the number of distinct subsets generated by W_{x,α,β} as x ranges over R^d, α ranges over [0, ∞), and β ranges over [0, 1]. Thus T(W, S) = |{W_{x,α,β} : x ∈ R^d, α ≥ 0, 0 ≤ β ≤ 1}|. Splitting numbers allow us to ensure high amounts of precision over a weight function. To prove neighborhood consistency, it is necessary for a classifier to be correct at all points in a given region. Consequently, techniques that consider a single point will be insufficient. The splitting number provides a mechanism for studying entire regions simultaneously. For more details on splitting numbers, we include several examples in the appendix. 4.2 Sufficient Conditions for Neighborhood Consistency We now state our main result. Theorem 11. Let W be a weight function, D a distribution over R^d × {±1}, U a neighborhood preserving collection, and (t_n)_{n=1}^∞ a sequence of positive integers such that the following four conditions hold. 1. W is consistent (with respect to accuracy) with respect to D. 2. For any 0 < p < 1, lim_{n→∞} E_{S∼D^n}[sup_{x∈R^d} ∑_{i=1}^n w_i^S(x) 1{ρ(x, x_i) > r_p(x)}] = 0. 3. lim_{n→∞} E_{S∼D^n}[t_n sup_{x∈R^d} w_i^S(x)] = 0. 4. lim_{n→∞} E_{S∼D^n}[log T(W, S)/t_n] = 0. Then W is neighborhood consistent with respect to D. Remarks: Condition 1 is necessary because neighborhood consistency implies standard consistency – or, convergence in accuracy to the Bayes optimal. Standard consistency has been well studied for non-parametric classifiers, and there are a variety of results that can be used to ensure it – for example, Stone’s Theorem (included in the appendix). Conditions 2 and 3 are stronger versions of conditions 2 and 3 of Stone’s theorem. In particular, both include a supremum taken over all x ∈ R^d as opposed to simply considering a random point x ∼ D. This is necessary for ensuring correct labels on entire regions of points simultaneously. We also note that the dependence on r_p(x) (as opposed to some fixed r) is a key property used for adaptive robustness. This allows the algorithm to adjust to potentially differing distance scales over different regions in R^d. This idea is reminiscent of the analysis given in [6], which also considers probability radii. Condition 4 is an entirely new condition which allows us to simultaneously consider all T(W, S) subsets of S. This is needed for analyzing weighted sums with arbitrary weights. Next, we apply Theorem 11 to get specific examples of neighborhood consistent non-parametric algorithms. 4.3 Nearest Neighbors and Kernel Classifiers We now provide sufficient conditions for k_n-nearest neighbors to be neighborhood consistent. Corollary 12. Suppose (k_n)_{n=1}^∞ satisfies (1) lim_{n→∞} k_n/n = 0, and (2) lim_{n→∞} (log n)/k_n = 0. Then k_n-nearest neighbors is neighborhood consistent.
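To illustrate the growth conditions in Corollary 12, the snippet below uses a hypothetical schedule k_n = ⌈(log n)²⌉, which is ω(log n) while still satisfying k_n/n → 0, and prints both ratios for a few sample sizes. The specific schedule is our own illustrative choice, not one prescribed by the paper.

```python
import math

def k_n(n: int) -> int:
    """Illustrative schedule satisfying Corollary 12: k_n = ceil((log n)^2)."""
    return max(1, math.ceil(math.log(n) ** 2))

for n in [10**3, 10**5, 10**7]:
    k = k_n(n)
    # Both ratios should shrink as n grows: (log n)/k_n -> 0 and k_n/n -> 0.
    print(n, k, round(math.log(n) / k, 4), round(k / n, 8))
```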
As a result of Theorem 8, Corollary 12 is tight for nearest neighbors. Thus k_n-nearest neighbors is neighborhood consistent if and only if k_n = ω(log n). Next, we give sufficient conditions for a kernel-similarity classifier. Corollary 13. Let W be a kernel classifier over R^d × {±1} constructed from K : R^+ → R^+ and (h_n). Suppose the following properties hold. 1. K is decreasing, and satisfies ∫_{R^d} K(||x||) dx < ∞. 2. lim_{n→∞} h_n = 0 and lim_{n→∞} n h_n^d = ∞. 3. For any c > 1, lim_{x→∞} K(cx)/K(x) = 0. 4. For any x ≥ 0, lim_{n→∞} (n/log n) K(x/h_n) = ∞. Then W is neighborhood consistent. Observe that conditions 1, 2, and 3 are satisfied by many common kernel functions such as the Gaussian kernel (K(x) = exp(−x^2)) or the exponential kernel (K(x) = exp(−x)). Condition 4 can be similarly satisfied by taking h_n to be sufficiently large. Overall, this corollary states that kernel classification is neighborhood consistent as long as the bandwidth shrinks slowly enough. 4.4 Histogram Classifiers Having discussed neighborhood consistent nearest neighbor and kernel classifiers, we now turn our attention towards another popular weight function, histogram classifiers. Recall that histogram classifiers operate by partitioning their input space into increasingly small cells, and then classifying each cell by using a majority vote over the training examples within that cell (a detailed description can be found in the appendix). We seek to answer the following question: is increasing precision sufficient for making histogram classifiers neighborhood consistent? Unfortunately, the answer turns out to be no. The main issue is that histogram classifiers have no mechanism for performing classification outside the support of the data distribution. For an example of this, refer to Figure 3. Here we see a distribution being classified by a histogram classifier. Observe that the cell labeled A contains points that are strictly closer to µ^+ than µ^−, and consequently, for sufficiently large κ, V_x^κ will intersect A for some point x ∈ µ^+. A similar argument holds for the cells labeled B and C. However, since A, B, C are all in cells that will never contain any data, they will never be labeled in a meaningful way. Because of this, histogram classifiers are not neighborhood consistent. 5 Validation To complement our theoretical large sample results for non-parametric classifiers, we now include several experiments to understand their behavior for finite samples. We seek to understand how quickly non-parametric classifiers converge towards the neighborhood preserving Bayes optimal. We focus our attention on kernel classifiers and use two different kernel similarity functions: the first, an exponential kernel, and the second, a polynomial kernel. These classifiers were chosen so that the former meets the conditions of Corollary 13, and the latter does not. Full details on these classifiers can be found in the appendix. To be able to measure performance with increasing data size, we look at a simple synthetic dataset of overlaid circles (see Figure 5 for an illustration) with support designed so that the data is intrinsically multiscaled. In particular, this calls for different levels of robustness in different regions. For simplicity, we use a global label noise parameter of 0.2, meaning that any sample drawn from this distribution is labeled differently from its support label with probability 0.2. Further details about our dataset are given in Section D. Performance Measure.
For a given classifier, we evaluate its astuteness at a test point x with respect to the robustness region V_x^κ (Definition 6). While these regions are not computable in practice due to their dependence on the support of the data distribution, we are able to approximate them for this synthetic example due to our explicit knowledge of the data distribution. Details for doing this can be found in the appendix. To compute the empirical astuteness of a kernel classifier W_K about test point x, we perform a grid search over all points in V_x^κ to ensure that all points in the robustness region are labeled correctly. For each classifier, we measure the empirical astuteness by using three trials of 20 test points and taking the average. While this is a relatively small amount of test data, it suffices as our purpose is just to verify that the algorithm roughly converges towards the optimal possible astuteness. Recall that for any neighborhood consistent algorithm, as n → ∞, A_{V^κ} should converge towards A*, the accuracy of the Bayes optimal classifier, for any 0 < κ < 1. Thus, to verify this holds, we use κ = 0.1, 0.3, 0.5. For each of these values, we plot the empirical astuteness as the training sample size n gets larger and larger. As a baseline, we also plot the classifiers’ standard accuracy on the test set. Results and Discussion: The results are presented in Figure 4; the left panel is for the exponential kernel, while the right one is for the polynomial kernel. As predicted by our theory, we see that in all cases, the exponential kernel converges towards the maximum astuteness regardless of the value of κ: the only difference is that the rate of convergence is slower for larger values of κ. This is, of course, expected because larger values of κ entail larger robustness regions. By contrast, the polynomial kernel performs progressively worse for larger values of κ. This kernel was selected specifically to violate the conditions of Corollary 13, and in particular fails criterion 3. However, note that the polynomial kernel nevertheless performs well with respect to accuracy, thus giving another example demonstrating the added difficulty of neighborhood consistency. Our results bridge the gap between our asymptotic theoretical results and finite sample regimes. In particular, we see that kernel classifiers that meet the conditions of Corollary 13 are able to converge in astuteness towards the neighborhood preserving Bayes optimal classifier, while classifiers that do not meet these conditions fail. 6 Related Work There is a wealth of literature on robust classification, most of which imposes the same robustness radius r on the entire data. [5, 17, 19, 20, 26, 15, 16, 18, 21, 22, 23], among others, focus primarily on neural networks, and robustness regions that are ℓ1, ℓ2, or ℓ∞ norm balls of a given radius r. [7] and [12] show how to train neural networks with different robustness radii at different points by trading off robustness and accuracy; their work differs from ours in that they focus on neural networks, their robustness regions are still norm balls, and their work is largely empirical. Our framework is also related to large margin classification – in the sense that the robustness regions U induce a margin constraint on the decision boundary. The most popular large margin classifier is the Support Vector Machine [9, 3, 14] – a large margin linear classifier that maximizes the worst-case margin over the training data.
Similar ideas have also been used to design classifiers that are more flexible than linear ones; for example, [27] shows how to build large margin Lipschitz classifiers by rounding globally Lipschitz functions. Finally, there has also been purely empirical work on achieving large margins for more complex classifiers – such as [13] for deep neural networks, which optimizes the worst-case margin, and [29] for metric learning to find large margin nearest neighbors. Our work differs from these in that our goal is to ensure a high enough local margin at each x (by considering the neighborhood preserving regions V_x), as opposed to optimizing a global margin. Finally, our analysis builds on prior work on robust classification for non-parametric methods in the standard framework. [1, 24, 28, 31] provide adversarial attacks on non-parametric methods. Wang et al. [28] develop a defense for 1-NN that removes a subset of the training set to ensure higher robustness. Yang et al. [31] propose the r-optimal classifier – which is the maximally astute classifier in the standard robustness framework – and propose a defense called Adversarial Pruning. Theoretically, [4] provide conditions under which weight functions converge towards the r-optimal classifier in the large sample limit. They show that for r-separated distributions, where points from different classes are at least distance 2r apart, nearest neighbors and kernel classifiers satisfy these conditions. In the more general case, they use Adversarial Pruning as a preprocessing step to ensure that the training data is r-separated, and show that this preprocessing step followed by nearest neighbors or kernel classifiers leads to solutions that are robust and accurate in the large sample limit. Our result fundamentally differs from theirs in that we analyze a different algorithm, and our proof techniques are quite different. In particular, the fundamental differences between the r-optimal classifier and the neighborhood preserving Bayes optimal classifier call for different algorithms and different analysis techniques. In concurrent work, [8] proposes a similar limit to the neighborhood preserving Bayes optimal, which they refer to as the margin canonical Bayes. However, their work then focuses on a data augmentation technique that leads to convergence, whereas we focus on proving the neighborhood consistency of classical non-parametric classifiers.
1. What is the focus and contribution of the paper regarding robust learning? 2. What are the strengths of the proposed approach, particularly in its theoretical analysis? 3. Do you have any concerns about the practicality of the proposed framework, especially in real-world classification problems? 4. How do the theoretical results inform robust learning using non-parametric methods, and can they extend beyond those methods? 5. What are your thoughts on the synthetic experiments provided, and how might the paper benefit from including experiments on other datasets?
Summary Of The Paper Review
Summary Of The Paper This paper argues that the standard robust learning method ignores data heterogeneity: the robustness radius is fixed at some artificial value r for all examples. The authors then propose a new neighborhood optimal classifier to address this problem, and theoretically study the convergence conditions for a class of non-parametric methods, including histogram classifiers, nearest neighbors, and kernel classifiers. Synthetic experiments are provided to verify their theoretical results. Review The proposed robustness framework is well motivated and clearly defined. At a high level, introducing heterogeneous robustness regions for different examples seems to be a promising direction for a better robustness definition. The theoretical results are also clearly presented and easy to follow. Nevertheless, my main concern with this paper is the practicality of the proposed framework: the proposed definition of the neighborhood optimal classifier requires knowledge of the support of the conditional distribution, which seems impractical for typical classification problems. More specifically, I have the following questions: How can Definition 3 be applied to a more typical finite-sample classification problem? Are there ways to estimate the support of the ground-truth conditional distribution? What do the theoretical results suggest for robust learning using non-parametric methods? How can these results generalize beyond non-parametric methods? The dataset of overlaid circles considered in Section 5 seems too artificial to me. Including experimental results on other datasets, such as Gaussian mixtures or even some image benchmarks, would strengthen the paper.
NIPS
Title Consistent Non-Parametric Methods for Maximizing Robustness Abstract Learning classifiers that are robust to adversarial examples has received a great deal of recent attention. A major drawback of the standard robust learning framework is there is an artificial robustness radius r that applies to all inputs. This ignores the fact that data may be highly heterogeneous, in which case it is plausible that robustness regions should be larger in some regions of data, and smaller in others. In this paper, we address this limitation by proposing a new limit classifier, called the neighborhood optimal classifier, that extends the Bayes optimal classifier outside its support by using the label of the closest in-support point. We then argue that this classifier maximizes the size of its robustness regions subject to the constraint of having accuracy equal to the Bayes optimal. We then present sufficient conditions under which general non-parametric methods that can be represented as weight functions converge towards this limit, and show that both nearest neighbors and kernel classifiers satisfy them under certain conditions. 1 Introduction Adversarially robust classification, that has been of much recent interest, is typically formulated as follows. We are given data drawn from an underlying distribution D, a metric d, as well as a pre-specified robustness radius r. We say that a classifier c is r-robust at an input x if it predicts the same label on a ball of radius r around x. Our goal in robust classification is to find a classifier c that maximizes astuteness, which is defined as accuracy on those examples where c is also r-robust. While this formulation has inspired a great deal of recent work, both theoretical and empirical [5, 17, 19, 20, 26, 15, 18, 21, 22, 23, 30], a major limitation is that enforcing a pre-specified robustness radius r may lead to sub-optimal accuracy and robustness. To see this, consider what would be an ideally robust classifier the example in Figure 1. For simplicity, suppose that we know the data distribution. In this case, a classifier that has an uniformly large robustness radius r will misclassify some points from the blue cluster on the left, leading to lower accuracy. This is illustrated in panel (a), in which large robustness radius leads to intersecting robustness regions. On the other hand, in panel (b), the blue cluster on the right is highly separated from the red cluster, and could be accurately classified with a high margin. But this will not happen if the robustness radius is set small enough to avoid the problems posed in panel (a). Thus, enforcing a fixed robustness radius that applies to the entire dataset may lead to lower accuracy and lower robustness. In this work, we propose an alternative formulation of robust classification that ensures that in the large sample limit, there is no robustness-accuracy trade off, and that regions of space with higher separation are classified more robustly. An extra advantage is that our formulation is achievable by existing methods. In particular, we show that two very common non-parametric algorithms – nearest neighbors and kernel classifiers – achieve these properties in the large sample limit. Our formulation is built on the notion of a new large-sample limit. In the standard statistical learning framework, the large-sample ideal is the Bayes optimal classifier that maximizes accuracy on the 35th Conference on Neural Information Processing Systems (NeurIPS 2021). data distribution, and is undefined outside. 
Since this is not always robust with radius r, prior work introduces the notion of an r-optimal classifier [31] that maximizes accuracy on points where it is also r-robust. However, this classifier also suffers from the same challenges as the example in Figure 1. We depart from both by introducing a new limit that we call the neighborhood preserving Bayes optimal classifier, described as follows. Given an input x that lies in the support of the data distribution D, it predicts the same label as the Bayes optimal. On an x outside the support, it outputs the prediction of the Bayes Optimal on the nearest neighbor of x within the support of D. The first property ensures that there is no loss of accuracy – since it always agrees with the Bayes Optimal within the data distribution. The second ensures higher robustness in regions that are better separated. Our goal is now to design classifiers that converge to the neighborhood preserving Bayes optimal in the large sample limit; this ensures that with enough data, the classifier will have accuracy approaching that of the Bayes optimal, as well as higher robustness where possible without sacrificing accuracy. We next investigate how to design classifiers with this convergence property. Our starting point is classical statistical theory [25] that shows that a class of methods known as weight functions will converge to a Bayes optimal in the large sample limit provided certain conditions hold; these include k-nearest neighbors under certain conditions on k and n, certain kinds of decision trees as well as kernel classifiers. Through an analysis of weight functions, we next establish precise conditions under which they converge to the neighborhood preserving Bayes optimal in the large sample limit. As expected, these are stronger than standard convergence to the Bayes optimal. In the large sample limit, we show that kn-nearest neighbors converge to the neighborhood preserving Bayes optimal provided kn = ω(log n), and kernel classifiers converge to the neighborhood preserving Bayes optimal provided certain technical conditions (such as the bandwidth shrinking sufficiently slowly). By contrast, certain types of histograms do not converge to the neighborhood preserving Bayes optimal, even if they do converge to the Bayes optimal. We round these off with a lower bound that shows that for nearest neighbor, the condition that kn = ω(log n) is tight. In particular, for kn = O(log n), there exist distributions for which kn-nearest neighbors provably fails to converge towards the neighborhood preserving Bayes optimal (despite converging towards the standard Bayes optimal). In summary, the contributions of the paper are as follows. First, we propose a new large sample limit the neighborhood preserving Bayes optimal and a new formulation for robust classification. We then establish conditions under which weight functions, a class of non-parametric methods, converge to the neighborhood preserving Bayes optimal in the large sample limit. Using these conditions, we show that kn-nearest neighbors satisfy these conditions when kn = ω(log n), and kernel classifiers satisfy these conditions provided the kernel function K has faster than polynomial decay, and the bandwidth parameter hn decreases sufficiently slowly. To complement these results, we also include negative examples of non-parametric classifiers that do not converge. We provide an example where histograms do not converge to the neighborhood preserving Bayes optimal with increasing n. 
We also show a lower bound for nearest neighbors, indi- cating that kn = ω(log n) is both necessary and sufficient for convergence towards the neighborhood preserving Bayes optimal. Our results indicate that the neighborhood preserving Bayes optimal formulation shows promise and has some interesting theoretical properties. We leave open the question of coming up with other alternative formulations that can better balance both robustness and accuracy for all kinds of data distributions, as well as are achievable algorithmically. We believe that addressing this would greatly help address the challenges in adversarial robustness. 2 Preliminaries We consider binary classification over Rd × {±1}, and let ρ denote any distance metric on Rd. We let µ denote the measure over Rd corresponding to the probability distribution over which instances x ∈ Rd are drawn. Each instance x is then labeled as +1 with probability η(x) and −1 with probability 1− η(x). Together, µ and η comprise our data distribution D = (µ, η) over Rd × {±1}. For comparison to the robust case, for a classifier f : Rd → {±1} and a distribution D over Rd × {±1}, it will be instructive to consider its accuracy, denoted A(f,D), which is defined as the fraction of examples from D that f labels correctly. Accuracy is maximized by the Bayes Optimal classifier: which we denote by g. It can be shown that for any x ∈ supp(µ), g(x) = 1 if η(x) ≥ 12 , and g(x) = −1 otherwise. Our goal is to build classifiers Rd → {±1} that are both accurate and robust to small perturbations. For any example x, perturbations to it are constrained to taking place in the robustness region of x, denoted Ux. We will let U = {Ux : x ∈ Rd} denote the collections of all robustness regions. We say that a classifier f : Rd → {±1} is robust at x if for all x′ ∈ Ux, f(x′) = f(x). Combining robustness and accuracy, we say that classifier is astute at a point x if it is both accurate and robust. Formally, we have the following definition. Definition 1. A classifier f : Rd → {±1} is said to be astute at (x, y) with respect to robustness collection U if f(x) = y and f is robust at x with respect to U . If D is a data distribution over Rd × {±1}, the astuteness of f over D with respect to U , denoted AU (f,D), is the fraction of examples (x, y) ∼ D for which f is astute at (x, y) with respect to U . Thus AU (f,D) = P(x,y)∼D[f(x′) = y,∀x′ ∈ Ux]. Non-parametric Classifiers We now briefly review several kinds of non-parametric classifiers that we will consider throughout this paper. We begin with weight functions, which are a general class of non-parametric algorithms that encompass many classic algorithms, including nearest neighbors and kernel classifiers. Weight functions are built from training sets, S = {(x1, y1), (x2, y2, ), . . . , (xn, yn)} by assigning a function wSi : Rd → [0, 1] that essentially scores how relevant the training point (xi, yi) is to the example being classified. The functions wSi are allowed to depend on x1, . . . , xn but must be independent of the labels y1, . . . , yn. Given these functions, a point x is classified by just checking whether ∑ yiw S i (x) ≥ 0 or not. If it is nonnegative, we output +1 and otherwise −1. A complete description of weight functions is included in the appendix. Next, we enumerate several common Non-parametric classifiers that can be construed as weight functions. Details can be found in the appendix. 
Histogram classifiers partition the domain Rd into cells recursively by splitting cells that contain a sufficiently large number of points xi. This corresponds to a weight function in which wSi (x) = 1 kx if xi is in the same cell as x, where kx denotes the number of points in the cell containing x. kn-nearest neighbors corresponds to a weight function in which wSi (x) = 1 kn if xi is one of the kn nearest neighbors of x, and wSi (x) = 0 otherwise. Kernel-Similarity classifiers are weight functions built from a kernel function K : R≥0 → R≥0 and a window size (hn)∞1 such that w S i (x) ∝ K(ρ(x, xi)/hn) (we normalize by dividing by∑n 1 K((ρ(x, xi)/hn))). 3 The Neighborhood preserving Bayes optimal classifier Robust classification is typically studied by setting the robustness regions, U = {Ux}x∈Rd , to be balls of radius r centered at x, Ux = {x′ : ρ(x, x′) ≤ r}. The quantity r is the robustness radius, and is typically set by the practitioner (before any training has occurred). This method has a limitation with regards to trade-offs between accuracy and robustness. To increase the margin or robustness, we must have a large robustness radius (thus allowing us to defend from larger adversarial attacks). However, with large robustness radii, this can come at a cost of accuracy, as it is not possible to robustly give different labels to points with intersecting robustness regions. For an illustration, consider Figure 1. Here we consider a data distribution D = (µ, η) in which the blue regions denote all points with η(x) > 0.5 (and thus should be labeled +), and the red regions denote all points with η(x) < 0.5 (and thus should be labeled −). Observe that it is not possible to be simultaneously accurate and robust at points A,B while enforcing a large robustness radius, as demonstrated by the intersecting balls. While this can be resolved by using a smaller radius, this results in losing out on potential robustness at point C. In principal, we should be able to afford a large margin of robustness about C due to its relatively far distance from the red regions. Motivated by this issue, we seek to find a formalism for robustness that allows us to simultaneously avoid paying for any accuracy-robustness trade-offs and adaptively size robustness regions (thus allowing us to defend against a larger range of adversarial attacks at points that are located in more homogenous zones of the distribution support). To approach this, we will first provide an ideal limit object: a classifier that has the same accuracy as the Bayes optimal (thus meeting our first criteria) that has good robustness properties. We call this the the neighborhood preserving Bayes optimal classifier, defined as follows. Definition 2. Let D = (µ, η) be a distribution over Rd ×{±1}. Then the neighborhood preserving Bayes optimal classifier of D, denoted gneighbor, is the classifier defined as follows. Let µ+ = {x : η(x) ≥ 12} and µ − = {x : η(x) < 12}. Then for any x ∈ R d, gneighbor(x) = +1 if ρ(x, µ+) ≤ ρ(x, µ−), and gneighbor(x) = −1 otherwise. This classifier can be thought of as the most robust classifier that matches the accuracy of the Bayes optimal. We call it neighborhood preserving because it extends the Bayes optimal classifier into a local neighborhood about every point in the support. For an illustration, refer to Figure 2, which plots the decision boundary of the neighborhood preserving Bayes optimal for an example distribution. 
Next, we turn our attention towards measuring its robustness, which must be done with respect to some set of robustness regions U = {Ux}. While these regions Ux can be nearly arbitrary, we seek regions Ux such that A_U(g_neighbor, D) = A(g, D) (our astuteness equals the maximum possible accuracy) and such that the Ux are "as large as possible" (representing large robustness). To this end, we propose the following regions.

Definition 3. Let D = (µ, η) be a data distribution over R^d × {±1}. Let µ+ = {x : η(x) > 1/2}, µ− = {x : η(x) < 1/2}, and µ^{1/2} = {x : η(x) = 1/2}. For x ∈ µ+, we define the neighborhood preserving robustness region, denoted Vx, as Vx = {x′ : ρ(x, x′) < ρ(µ− ∪ µ^{1/2}, x′)}. It consists of all points that are closer to x than they are to µ− ∪ µ^{1/2} (points oppositely labeled from x). We use a similar definition for x ∈ µ−. Finally, if x ∈ µ^{1/2}, we simply set Vx = {x}.

These robustness regions take advantage of the structure of the neighborhood preserving Bayes optimal. They can essentially be thought of as regions that maximally extend from any point x in the support of D to the decision boundary of the neighborhood preserving Bayes optimal. We include an illustration of the regions Vx for an example distribution in Figure 2. As a technical note, points x ∈ supp(D) with η(x) = 0.5 are given a trivial robustness region. The rationale for doing this is that η(x) = 0.5 is an edge case whose classification is arbitrary, and consequently enforcing a robustness region at such a point is both arbitrary and difficult. We now formalize the robustness and accuracy guarantees of the neighborhood preserving Bayes optimal classifier with the following two results.

Theorem 4. (Accuracy) Let D be a data distribution. Let V denote the collection of neighborhood preserving robustness regions, and let g denote the Bayes optimal classifier. Then the neighborhood preserving Bayes optimal classifier, g_neighbor, satisfies A_V(g_neighbor, D) = A(g, D), where A(g, D) denotes the accuracy of the Bayes optimal. Thus, g_neighbor maximizes accuracy.

Theorem 5. (Robustness) Let D be a data distribution, let f be a classifier, and let U be a set of robustness regions. Suppose that A_U(f, D) = A(g, D), where g denotes the Bayes optimal classifier. Then there exists x ∈ supp(D) such that Vx ⊄ Ux, where Vx denotes the neighborhood preserving robustness region about x. In particular, we cannot have Vx be a strict subset of Ux for all x.

Theorem 4 shows that the neighborhood preserving Bayes optimal classifier achieves maximal accuracy, while Theorem 5 shows that achieving strictly higher robustness (while maintaining accuracy) is not possible; while it is possible to construct accurate classifiers that have higher robustness than g_neighbor in some regions of space, it is not possible for this to hold across all regions. Thus, the neighborhood preserving Bayes optimal classifier can be thought of as a local maximum of the constrained optimization problem of maximizing robustness subject to having maximum (equal to the Bayes optimal) accuracy.

3.1 Neighborhood Consistency

Having defined the neighborhood preserving Bayes optimal classifier, we now turn our attention towards building classifiers that converge towards it. Before doing this, we must precisely define what it means to converge. Intuitively, this consists of building classifiers whose robustness regions "approach" the robustness regions of the neighborhood preserving Bayes optimal classifier. This motivates the definition of partial neighborhood preserving robustness regions. Definition 6.
Let 0 < κ < 1 be a real number, and let D = (µ, η) be a data distribution over R^d × {±1}. Let µ+ = {x : η(x) > 1/2}, µ− = {x : η(x) < 1/2}, and µ^{1/2} = {x : η(x) = 1/2}. For x ∈ µ+, we define the partial neighborhood preserving robustness region, denoted V_x^κ, as V_x^κ = {x′ : ρ(x, x′) < κ · ρ(µ− ∪ µ^{1/2}, x′)}. It consists of all points x′ whose distance to x is less than κ times their distance to µ− ∪ µ^{1/2} (points oppositely labeled from x). We use a similar definition for x ∈ µ−. Finally, if η(x) = 1/2, we simply set V_x^κ = {x}.

Observe that V_x^κ ⊂ Vx for all 0 < κ < 1, and thus being robust with respect to V_x^κ is a milder condition than being robust with respect to Vx. Using this notion, we can now define neighborhood consistency.

Definition 7. A learning algorithm A is said to be neighborhood consistent if the following holds for any data distribution D. For any 0 < ε, δ, κ < 1, there exists N such that for all n ≥ N, with probability at least 1 − δ over S ∼ D^n, A_{V^κ}(A_S, D) ≥ A(g, D) − ε, where g denotes the Bayes optimal classifier and A_S denotes the classifier learned by algorithm A from dataset S.

This condition essentially says that the astuteness of the classifier learned by the algorithm converges towards the accuracy of the Bayes optimal classifier. Furthermore, we stipulate that this holds as long as the astuteness is measured with respect to some V^κ with 0 < κ < 1. Observe that as κ → 1, these regions converge towards the neighborhood preserving robustness regions, thus giving us a classifier with robustness effectively equal to that of the neighborhood preserving Bayes optimal classifier.

4 Neighborhood Consistent Non-Parametric Classifiers

Having defined neighborhood consistency, we turn to the following question: which non-parametric algorithms are neighborhood consistent? Our starting point will be the standard literature on the convergence of non-parametric classifiers with regard to accuracy. We begin by considering the standard conditions for kn-nearest neighbors to converge (in accuracy) towards the Bayes optimal. kn-nearest neighbors is consistent if and only if the following two conditions are met: lim_{n→∞} kn = ∞, and lim_{n→∞} kn/n = 0. The first condition guarantees that each point is classified using an increasing number of nearest neighbors (thus making the probability of a misclassification small), and the second condition guarantees that each point is classified using only points very close to it. We will refer to the first condition as precision, and the second condition as locality. A natural question is whether the same principles suffice for neighborhood consistency as well. We begin by showing that without any additional constraints, the answer is no.

Theorem 8. Let D = (µ, η) be the data distribution where µ denotes the uniform distribution over [0, 1] and η is defined as η(x) = x. Over this space, let ρ be the Euclidean distance metric. Suppose kn = O(log n) for 1 ≤ n < ∞. Then kn-nearest neighbors is not neighborhood consistent with respect to D.

The issue in the example above is that for smaller kn, kn-nearest neighbors lacks sufficient precision. For neighborhood consistency, points must be labeled using even more training points than are needed for accuracy. This is because the classifier must be uniformly correct across the entirety of V_x^κ. Thus, to build neighborhood consistent classifiers, we must bolster the precision beyond the standard amount used for standard consistency. To do this, we begin by introducing splitting numbers, a useful tool for bolstering the precision of weight functions.
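Before turning to splitting numbers, the following sketch illustrates how the regions of Definition 6, and astuteness with respect to them, might be checked numerically on synthetic data where the support is known (this mirrors the grid-search evaluation described later in Section 5). All names, and the use of a finite candidate set of perturbations rather than a grid, are illustrative assumptions on our part.

```python
import numpy as np

def in_partial_region(x, x_prime, opposite_support, kappa, rho=None):
    """Membership test for the partial region V_x^kappa of Definition 6:
    rho(x, x_prime) < kappa * rho(opposite_support, x_prime).
    opposite_support is a finite sample standing in for the oppositely labeled
    support (together with mu^{1/2}); it is known only in synthetic settings."""
    if rho is None:
        rho = lambda a, b: np.linalg.norm(a - b)
    d_to_x = rho(x, x_prime)
    d_to_opposite = min(rho(s, x_prime) for s in opposite_support)
    return d_to_x < kappa * d_to_opposite

def astute_on_candidates(f, x, y, candidates, opposite_support, kappa):
    """Approximate check of astuteness at (x, y) w.r.t. V_x^kappa (Definition 1):
    f must label x, and every candidate perturbation inside the region, as y."""
    region = [c for c in candidates if in_partial_region(x, c, opposite_support, kappa)]
    return all(f(c) == y for c in [x] + region)
```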
4.1 Splitting Numbers

We will now generalize beyond nearest neighbors to consider weight functions. Doing so will allow us to simultaneously analyze nearest neighbors and kernel classifiers. To do so, we must first rigorously substantiate our intuitions about increasing precision into concrete requirements. This will require several technical definitions.

Definition 9. Let µ be a probability measure over R^d. For any x ∈ R^d, the probability radius rp(x) is the smallest radius for which B(x, rp(x)) has probability mass at least p. More precisely, rp(x) = inf{r : µ(B(x, r)) ≥ p}.

Definition 10. Let W be a weight function and let S = {x1, x2, . . . , xn} be any finite subset of R^d. For any x ∈ R^d, α ≥ 0, and 0 ≤ β ≤ 1, let W_{x,α,β} = {i : ρ(x, xi) ≤ α, w_i^S(x) ≥ β}. Then the splitting number of W with respect to S, denoted T(W, S), is the number of distinct subsets generated by W_{x,α,β} as x ranges over R^d, α ranges over [0, ∞), and β ranges over [0, 1]. Thus T(W, S) = |{W_{x,α,β} : x ∈ R^d, 0 ≤ α, 0 ≤ β ≤ 1}|.

Splitting numbers allow us to ensure high amounts of precision over a weight function. To prove neighborhood consistency, it is necessary for a classifier to be correct at all points in a given region. Consequently, techniques that consider a single point will be insufficient. The splitting number provides a mechanism for studying entire regions simultaneously. For more details on splitting numbers, we include several examples in the appendix.

4.2 Sufficient Conditions for Neighborhood Consistency

We now state our main result.

Theorem 11. Let W be a weight function, D a distribution over R^d × {±1}, U a neighborhood preserving collection, and (tn)_{n=1}^∞ a sequence of positive integers such that the following four conditions hold.
1. W is consistent (with respect to accuracy) with respect to D.
2. For any 0 < p < 1, lim_{n→∞} E_{S∼D^n}[sup_{x∈R^d} ∑_{i=1}^n w_i^S(x) 1[ρ(x, xi) > rp(x)]] = 0.
3. lim_{n→∞} E_{S∼D^n}[tn · sup_{x∈R^d, 1≤i≤n} w_i^S(x)] = 0.
4. lim_{n→∞} E_{S∼D^n}[log T(W, S) / tn] = 0.
Then W is neighborhood consistent with respect to D.

Remarks: Condition 1 is necessary because neighborhood consistency implies standard consistency – that is, convergence in accuracy to the Bayes optimal. Standard consistency has been well studied for non-parametric classifiers, and there are a variety of results that can be used to ensure it – for example, Stone's theorem (included in the appendix). Conditions 2 and 3 are stronger versions of conditions 2 and 3 of Stone's theorem. In particular, both include a supremum taken over all x ∈ R^d as opposed to simply considering a random point x ∼ D. This is necessary for ensuring correct labels on entire regions of points simultaneously. We also note that the dependence on rp(x) (as opposed to some fixed r) is a key property used for adaptive robustness. This allows the algorithm to adjust to potentially differing distance scales over different regions of R^d. This idea is reminiscent of the analysis given in [6], which also considers probability radii. Condition 4 is an entirely new condition which allows us to simultaneously consider all T(W, S) subsets of S. This is needed for analyzing weighted sums with arbitrary weights. Next, we apply Theorem 11 to get specific examples of neighborhood consistent non-parametric algorithms.

4.3 Nearest Neighbors and Kernel Classifiers

We now provide sufficient conditions for kn-nearest neighbors to be neighborhood consistent.

Corollary 12. Suppose (kn)_{n=1}^∞ satisfies (1) lim_{n→∞} kn/n = 0, and (2) lim_{n→∞} (log n)/kn = 0. Then kn-nearest neighbors is neighborhood consistent.
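To make Corollary 12 concrete, the sketch below implements kn-nearest neighbors in the weight-function form of Section 2 and pairs it with one choice of kn that grows faster than log n but slower than n. The particular schedule kn ≈ (log n)² is an illustrative assumption of ours, not one prescribed by the corollary; any schedule with (log n)/kn → 0 and kn/n → 0 would do.

```python
import numpy as np

def knn_weights(x, train_X, k):
    """Weight-function view of k-NN: w_i^S(x) = 1/k for the k nearest training
    points and 0 otherwise (see the preliminaries in Section 2)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = np.zeros(len(train_X))
    weights[nearest] = 1.0 / k
    return weights

def knn_predict(x, train_X, train_y, k):
    """Classify by the sign of the weighted label sum, as for any weight function."""
    weights = knn_weights(x, train_X, k)
    return 1 if np.dot(weights, train_y) >= 0 else -1

def k_schedule(n):
    """One schedule compatible with Corollary 12: (log n)/k_n -> 0 and k_n/n -> 0."""
    return min(n, max(1, int(np.ceil(np.log(max(n, 2)) ** 2))))
```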
As a result of Theorem 8, Corollary 12 is tight for nearest neighbors. Thus kn-nearest neighbors is neighborhood consistent if and only if kn = ω(log n). Next, we give sufficient conditions for a kernel-similarity classifier.

Corollary 13. Let W be a kernel classifier over R^d × {±1} constructed from K : R+ → R+ and hn. Suppose the following properties hold.
1. K is decreasing, and satisfies ∫_{R^d} K(||x||) dx < ∞.
2. lim_{n→∞} hn = 0 and lim_{n→∞} n·hn^d = ∞.
3. For any c > 1, lim_{x→∞} K(cx)/K(x) = 0.
4. For any x ≥ 0, lim_{n→∞} (n/log n)·K(x/hn) = ∞.
Then W is neighborhood consistent.

Observe that conditions 1, 2, and 3 are satisfied by many common kernel functions such as the Gaussian kernel (K(x) = exp(−x²)) or the exponential kernel (K(x) = exp(−x)). Condition 4 can similarly be satisfied by letting hn shrink sufficiently slowly. Overall, this theorem states that kernel classification is neighborhood consistent as long as the bandwidth shrinks slowly enough.

4.4 Histogram Classifiers

Having discussed neighborhood consistent nearest neighbor and kernel classifiers, we now turn our attention towards another popular weight function, histogram classifiers. Recall that histogram classifiers operate by partitioning their input space into increasingly small cells, and then classifying each cell by using a majority vote over the training examples within that cell (a detailed description can be found in the appendix). We seek to answer the following question: is increasing precision sufficient for making histogram classifiers neighborhood consistent? Unfortunately, the answer to this turns out to be no. The main issue is that histogram classifiers have no mechanism for performing classification outside the support of the data distribution. For an example of this, refer to Figure 3. Here we see a distribution being classified by a histogram classifier. Observe that the cell labeled A contains points that are strictly closer to µ+ than to µ−, and consequently, for sufficiently large κ, V_x^κ will intersect A for some point x ∈ µ+. A similar argument holds for the cells labeled B and C. However, since A, B, C are all cells that will never contain any data, they will never be labeled in a meaningful way. Because of this, histogram classifiers are not neighborhood consistent.

5 Validation

To complement our theoretical large sample results for non-parametric classifiers, we now include several experiments to understand their behavior for finite samples. We seek to understand how quickly non-parametric classifiers converge towards the neighborhood preserving Bayes optimal. We focus our attention on kernel classifiers and use two different kernel similarity functions: the first, an exponential kernel, and the second, a polynomial kernel. These classifiers were chosen so that the former meets the conditions of Corollary 13, and the latter does not. Full details on these classifiers can be found in the appendix. To be able to measure performance with increasing data size, we look at a simple synthetic dataset over overlaid circles (see Figure 5 for an illustration) with support designed so that the data is intrinsically multiscaled. In particular, this calls for different levels of robustness in different regions. For simplicity, we use a global label noise parameter of 0.2, meaning that any sample drawn from this distribution is labeled differently from the label of its support region with probability 0.2. Further details about our dataset are given in Section D.
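A minimal sketch of the kind of kernel weight-function classifier used in these experiments is given below, with an exponential kernel satisfying conditions 1–3 of Corollary 13. The bandwidth schedule shown is an illustrative assumption of ours, chosen to shrink slowly in the spirit of condition 4; the exact kernels and settings used in the experiments are described in the appendix.

```python
import numpy as np

def kernel_predict(x, train_X, train_y, h):
    """Kernel-similarity weight function with the exponential kernel K(t) = exp(-t).
    Weights are K(rho(x, x_i)/h_n), normalized to sum to one, and the prediction is
    the sign of the weighted label sum (see the preliminaries in Section 2)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    weights = np.exp(-dists / h)
    weights = weights / weights.sum()
    return 1 if np.dot(weights, train_y) >= 0 else -1

def bandwidth_schedule(n, d):
    """One slowly shrinking bandwidth, h_n = (log n)^(-1/(2d)); this choice is an
    assumption made for illustration, not the schedule used in the paper."""
    return float(np.log(max(n, 3))) ** (-1.0 / (2 * d))
```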
Performance Measure. For a given classifier, we evaluate its astuteness at a test point x with respect to the robustness region V_x^κ (Definition 6). While these regions are not computable in practice due to their dependence on the support of the data distribution, we are able to approximate them for this synthetic example due to our explicit knowledge of the data distribution. Details for doing this can be found in the appendix. To compute the empirical astuteness of a kernel classifier W_K about a test point x, we perform a grid search over all points in V_x^κ to ensure that all points in the robustness region are labeled correctly. For each classifier, we measure the empirical astuteness by using three trials of 20 test points and taking the average. While this is a relatively small amount of test data, it suffices as our purpose is just to verify that the algorithm roughly converges towards the optimal possible astuteness. Recall that for any neighborhood consistent algorithm, as n → ∞, A_{V^κ} should converge towards A*, the accuracy of the Bayes optimal classifier, for any 0 < κ < 1. Thus, to verify this holds, we use κ = 0.1, 0.3, 0.5. For each of these values, we plot the empirical astuteness as the training sample size n gets larger and larger. As a baseline, we also plot the classifiers' standard accuracy on the test set.

Results and Discussion: The results are presented in Figure 4; the left panel is for the exponential kernel, while the right one is for the polynomial kernel. As predicted by our theory, we see that in all cases, the exponential kernel converges towards the maximum astuteness regardless of the value of κ; the only difference is that the rate of convergence is slower for larger values of κ. This is, of course, expected because larger values of κ entail larger robustness regions. By contrast, the polynomial kernel performs progressively worse for larger values of κ. This kernel was selected specifically to violate the conditions of Corollary 13, and in particular fails condition 3. However, note that the polynomial kernel nevertheless performs well with respect to accuracy, thus giving another example demonstrating the added difficulty of neighborhood consistency. Our results bridge the gap between our asymptotic theoretical results and finite sample regimes. In particular, we see that kernel classifiers that meet the conditions of Corollary 13 are able to converge in astuteness towards the neighborhood preserving Bayes optimal classifier, while classifiers that do not meet these conditions fail.

6 Related Work

There is a wealth of literature on robust classification, most of which imposes the same robustness radius r on the entire data. [5, 17, 19, 20, 26, 15, 16, 18, 21, 22, 23], among others, focus primarily on neural networks, and on robustness regions that are ℓ1, ℓ2, or ℓ∞ norm balls of a given radius r. [7] and [12] show how to train neural networks with different robustness radii at different points by trading off robustness and accuracy; their work differs from ours in that they focus on neural networks, their robustness regions are still norm balls, and their work is largely empirical. Our framework is also related to large margin classification – in the sense that the robustness regions U induce a margin constraint on the decision boundary. The most popular large margin classifier is the Support Vector Machine [9, 3, 14] – a large margin linear classifier that maximizes the worst-case (minimum) margin over the training data.
Similar ideas have also been used to design classifiers that are more flexible than linear; for example, [27] shows how to build large margin Lipschitz classifiers by rounding globally Lipschitz functions. There has also been purely empirical work on achieving large margins for more complex classifiers – such as [13], which maximizes the worst-case margin for deep neural networks, and [29], which uses metric learning to find large margin nearest neighbors. Our work differs from these in that our goal is to ensure a high enough local margin at each x (by considering the neighborhood preserving regions Vx) as opposed to optimizing a global margin. Finally, our analysis builds on prior work on robust classification for non-parametric methods in the standard framework. [1, 24, 28, 31] provide adversarial attacks on non-parametric methods. Wang et al. [28] develop a defense for 1-NN that removes a subset of the training set to ensure higher robustness. Yang et al. [31] propose the r-optimal classifier – which is the maximally astute classifier in the standard robustness framework – and propose a defense called Adversarial Pruning. Theoretically, [4] provide conditions under which weight functions converge towards the r-optimal classifier in the large sample limit. They show that for r-separated distributions, where points from different classes are at least distance 2r apart, nearest neighbors and kernel classifiers satisfy these conditions. In the more general case, they use Adversarial Pruning as a preprocessing step to ensure that the training data is r-separated, and show that this preprocessing step followed by nearest neighbors or kernel classifiers leads to solutions that are robust and accurate in the large sample limit. Our result fundamentally differs from theirs in that we analyze a different algorithm, and our proof techniques are quite different. In particular, the fundamental differences between the r-optimal classifier and the neighborhood preserving Bayes optimal classifier call for different algorithms and different analysis techniques. In concurrent work, [8] proposes a limit similar to the neighborhood preserving Bayes optimal, which they refer to as the margin canonical Bayes. However, their work then focuses on a data augmentation technique that leads to convergence, whereas we focus on proving the neighborhood consistency of classical non-parametric classifiers.
1. What are the strengths and weaknesses of the paper regarding its contribution to studying astute classification rules?
2. How does the reviewer assess the paper's clarity, quality, novelty, and reproducibility?
3. Are there any questions or concerns regarding the paper's definitions, motivations, and proposed solutions?
4. Can the authors provide further analysis or commentary on the potential extension of their results to other settings or approaches?
5. Are there any minor errors or typos in the paper that should be addressed?
Summary Of The Paper Review
Summary Of The Paper

This paper studies classification rules that are "astute", that is, those that are accurate and robust. Importantly, the authors note that common definitions of robust classifiers, which are given in terms of a fixed parameter (typically the size of a perturbation set), are not astute, as they do not fully exploit the fact that margins are not uniform across the data distribution. This paper first studies conditions under which their so-called "neighborhood optimal classifier" maximizes robustness while retaining the Bayes classifier accuracy on the unperturbed distribution. Furthermore, the authors show that several popular non-parametric classifiers provide consistent estimates of this classifier. Finally, this is demonstrated numerically in a simple experimental setting.

Review

This paper is very interesting, and a pleasure to read. The limitation of current approaches to study adversarial robustness is clear from the presented concepts, and the proposed solutions are intuitive and elegant. I have a few minor comments:

Main comments:
- While the authors present results of consistency of certain non-parametric classifiers, their finite-sample performance is only evaluated numerically. Can the authors comment, at least briefly, on the potential of extending the analysis of convergence rates of non-parametric classifiers (such as those in [Doring et al, 2018]) to the setting presented here?
- In the paragraph on line 154, the authors motivate their definition of "neighborhood preserving robustness regions". To do so, they explain that they seek regions U_x so that the astute accuracy of g_max equals the accuracy of the Bayes classifier. However, it's not totally clear what g_max is. Is it the one that maximizes the measure of U_x? A clarification would be appreciated.
- On the paragraph following Theorem 5, the authors comment that g_neighbor can be thought of as a local maximum of a constrained optimization problem. However, it is not clear why this is not a global maximum - how could one obtain a classifier with larger robustness constrained to having equal accuracy as the Bayes?

Minor comments:
- Line 57: space missing on "Instead [3]"
- The paragraphs on line 71 and 78 feel a bit repetitive, given that the preceding paragraphs just gave a summary of the presented contributions.
- Line 96: comma instead of semicolon?
- Line 137: principal -> principle
- Line 144: "the the"
- Line 171: the authors write "max-margin Bayes" classifier, but I believe they mean the neighborhood preserving Bayes classifier? This might be equivalent, but has not been defined as "max-margin" explicitly.
- The notation of algorithm (A_S) and Accuracy is slightly confusing.
- Line 222: "consistnecy" -> consistency
- Line 222: "needed accuracy" -> "needed for accuracy"
- On Corollary 12, the authors use the notation k_n^\infty_1. There's a parenthesis that seems out of place. But also, this notation has not been defined?
- On Corollary 13, the authors use h_n, but this seems undefined. Do they simply refer to the rule k_nn on the computed features?
- Line 293: "The answer this" -> the answer to this
- Figure 3 seems to have some strange gray edges on its bottom and left sides.
- Line 335: "performs will" -> performs well.
NIPS
1. What is the main contribution of the paper regarding robust learning frameworks? 2. What are the strengths and weaknesses of the paper's theoretical results? 3. How does the reviewer assess the writing quality and readability of the paper? 4. What motivations or real-world applications are lacking in the paper? 5. Are there any concerns about the finiteness of sample size in the paper's results?
Summary Of The Paper Review
Summary Of The Paper This paper aims to give a new robust learning framework where the radius of adversarial attacks at different points can adaptively change. The main purpose is "to increase robustness, without sacrificing accuracy", as claimed by the author(s). In order to achieve this goal, the author(s) have utilized a number of consistent non-parametric learning algorithms which have already been proved to converge to an optimal classifier as the sample size increases (optimality could be defined in various senses; in this paper the focus is mostly on Bayes optimality). Then, a new modification has been proposed in order to help such learning techniques preserve as much adversarial robustness as they can, while still converging to the Bayes optimal classifier. The main results are (most probably) solid, and maybe even interesting, especially when we look at them from a purely theoretical standpoint. Moreover, I checked all the theoretical results and some of their proofs, and didn't find any notable technical mistakes or counter-intuitive conclusions. However, the paper suffers from uninformative writing (especially in the introduction section) and a lack of proper justification for some of the core ideas in real-world applications. The above problems, in addition to the fact that the results are mostly asymptotic and centered around "consistency" of classifiers rather than their finite-sample performance, are the main weaknesses of the paper. My vote at the current stage is weak-reject. Review As far as I have understood, the main contribution of this paper is as follows: When the number of samples n goes toward infinity and consequently the true data distribution becomes known to the learner, then "the most robust classifier which coincides with the Bayes optimal classifier inside the support" would be trivial and easy to achieve. This paper gives a methodical approach so that some classes of non-parametric classifiers (like k-nearest neighbor) can be computed for finite n, while also being guaranteed to converge to the trivial robust classifiers of n = ∞ described above (the guarantees are all asymptotic in this work). There are some other contributions in proving the consistency of certain non-parametric classes of learners, which I assume to be additional results. Main comments: The paper can greatly benefit from improved writing, especially in the Introduction section. Some terms, such as "accuracy and astuteness", have been carelessly used in this section without being properly defined (they are both defined later in subsequent sections). This makes the Intro part somewhat uninformative. Also, some motivations from the beginning of the paper become lost and buried under tons of theorems and definitions inside the manuscript. For example, the abstract of the paper gives some motivation w.r.t. a varying radius of attacks r for different regions of the distribution. However, I do not see the results as being that well aligned with this particular motif. Maybe the abstract and parts of the introduction section should be rewritten. Another issue is that the paper lacks real-world justification for its main idea: Having robustness to out-of-distribution samples (for example, off-manifold data points which are usually the result of adversarial attacks) is interesting, but the discussion around this issue in the paper is slim to none. In other words, why should any reader be concerned with samples like x which are not in the support of the data distribution, i.e. x ∉ supp(µ)?
One might assume the answer to be crystal clear to a well-informed audience, but that's not the case for general readers. Another fundamental problem is that the results in this work are asymptotic. No non-asymptotic bounds have been proposed to guarantee any type of performance measure when the number of samples n is finite. Guarantees on adversarial robustness when n is infinite and the true underlying data distribution (or at least the true data manifold) is revealed are not that interesting. It would be nice if the authors could give some certificate for robustness when n is finite. Some of the cited works in this paper (Sinha et al., for example) have already done that for distributionally robust scenarios. Minor comments: -(Line 144): "the" has been repeated.
NIPS
Title Consistent Non-Parametric Methods for Maximizing Robustness Abstract Learning classifiers that are robust to adversarial examples has received a great deal of recent attention. A major drawback of the standard robust learning framework is that there is an artificial robustness radius r that applies to all inputs. This ignores the fact that data may be highly heterogeneous, in which case it is plausible that robustness regions should be larger in some regions of data, and smaller in others. In this paper, we address this limitation by proposing a new limit classifier, called the neighborhood optimal classifier, that extends the Bayes optimal classifier outside its support by using the label of the closest in-support point. We then argue that this classifier maximizes the size of its robustness regions subject to the constraint of having accuracy equal to the Bayes optimal. We then present sufficient conditions under which general non-parametric methods that can be represented as weight functions converge towards this limit, and show that both nearest neighbors and kernel classifiers satisfy them under certain conditions. 1 Introduction Adversarially robust classification, which has been of much recent interest, is typically formulated as follows. We are given data drawn from an underlying distribution D, a metric d, as well as a pre-specified robustness radius r. We say that a classifier c is r-robust at an input x if it predicts the same label on a ball of radius r around x. Our goal in robust classification is to find a classifier c that maximizes astuteness, which is defined as accuracy on those examples where c is also r-robust. While this formulation has inspired a great deal of recent work, both theoretical and empirical [5, 17, 19, 20, 26, 15, 18, 21, 22, 23, 30], a major limitation is that enforcing a pre-specified robustness radius r may lead to sub-optimal accuracy and robustness. To see this, consider what an ideally robust classifier would look like for the example in Figure 1. For simplicity, suppose that we know the data distribution. In this case, a classifier that has a uniformly large robustness radius r will misclassify some points from the blue cluster on the left, leading to lower accuracy. This is illustrated in panel (a), in which a large robustness radius leads to intersecting robustness regions. On the other hand, in panel (b), the blue cluster on the right is highly separated from the red cluster, and could be accurately classified with a high margin. But this will not happen if the robustness radius is set small enough to avoid the problems posed in panel (a). Thus, enforcing a fixed robustness radius that applies to the entire dataset may lead to lower accuracy and lower robustness. In this work, we propose an alternative formulation of robust classification that ensures that in the large sample limit, there is no robustness-accuracy trade-off, and that regions of space with higher separation are classified more robustly. An extra advantage is that our formulation is achievable by existing methods. In particular, we show that two very common non-parametric algorithms – nearest neighbors and kernel classifiers – achieve these properties in the large sample limit. Our formulation is built on the notion of a new large-sample limit. In the standard statistical learning framework, the large-sample ideal is the Bayes optimal classifier that maximizes accuracy on the data distribution, and is undefined outside it.
Since this is not always robust with radius r, prior work introduces the notion of an r-optimal classifier [31] that maximizes accuracy on points where it is also r-robust. However, this classifier also suffers from the same challenges as the example in Figure 1. We depart from both by introducing a new limit that we call the neighborhood preserving Bayes optimal classifier, described as follows. Given an input x that lies in the support of the data distribution D, it predicts the same label as the Bayes optimal. On an x outside the support, it outputs the prediction of the Bayes Optimal on the nearest neighbor of x within the support of D. The first property ensures that there is no loss of accuracy – since it always agrees with the Bayes Optimal within the data distribution. The second ensures higher robustness in regions that are better separated. Our goal is now to design classifiers that converge to the neighborhood preserving Bayes optimal in the large sample limit; this ensures that with enough data, the classifier will have accuracy approaching that of the Bayes optimal, as well as higher robustness where possible without sacrificing accuracy. We next investigate how to design classifiers with this convergence property. Our starting point is classical statistical theory [25] that shows that a class of methods known as weight functions will converge to a Bayes optimal in the large sample limit provided certain conditions hold; these include k-nearest neighbors under certain conditions on k and n, certain kinds of decision trees as well as kernel classifiers. Through an analysis of weight functions, we next establish precise conditions under which they converge to the neighborhood preserving Bayes optimal in the large sample limit. As expected, these are stronger than standard convergence to the Bayes optimal. In the large sample limit, we show that kn-nearest neighbors converge to the neighborhood preserving Bayes optimal provided kn = ω(log n), and kernel classifiers converge to the neighborhood preserving Bayes optimal provided certain technical conditions (such as the bandwidth shrinking sufficiently slowly). By contrast, certain types of histograms do not converge to the neighborhood preserving Bayes optimal, even if they do converge to the Bayes optimal. We round these off with a lower bound that shows that for nearest neighbor, the condition that kn = ω(log n) is tight. In particular, for kn = O(log n), there exist distributions for which kn-nearest neighbors provably fails to converge towards the neighborhood preserving Bayes optimal (despite converging towards the standard Bayes optimal). In summary, the contributions of the paper are as follows. First, we propose a new large sample limit the neighborhood preserving Bayes optimal and a new formulation for robust classification. We then establish conditions under which weight functions, a class of non-parametric methods, converge to the neighborhood preserving Bayes optimal in the large sample limit. Using these conditions, we show that kn-nearest neighbors satisfy these conditions when kn = ω(log n), and kernel classifiers satisfy these conditions provided the kernel function K has faster than polynomial decay, and the bandwidth parameter hn decreases sufficiently slowly. To complement these results, we also include negative examples of non-parametric classifiers that do not converge. We provide an example where histograms do not converge to the neighborhood preserving Bayes optimal with increasing n. 
We also show a lower bound for nearest neighbors, indi- cating that kn = ω(log n) is both necessary and sufficient for convergence towards the neighborhood preserving Bayes optimal. Our results indicate that the neighborhood preserving Bayes optimal formulation shows promise and has some interesting theoretical properties. We leave open the question of coming up with other alternative formulations that can better balance both robustness and accuracy for all kinds of data distributions, as well as are achievable algorithmically. We believe that addressing this would greatly help address the challenges in adversarial robustness. 2 Preliminaries We consider binary classification over Rd × {±1}, and let ρ denote any distance metric on Rd. We let µ denote the measure over Rd corresponding to the probability distribution over which instances x ∈ Rd are drawn. Each instance x is then labeled as +1 with probability η(x) and −1 with probability 1− η(x). Together, µ and η comprise our data distribution D = (µ, η) over Rd × {±1}. For comparison to the robust case, for a classifier f : Rd → {±1} and a distribution D over Rd × {±1}, it will be instructive to consider its accuracy, denoted A(f,D), which is defined as the fraction of examples from D that f labels correctly. Accuracy is maximized by the Bayes Optimal classifier: which we denote by g. It can be shown that for any x ∈ supp(µ), g(x) = 1 if η(x) ≥ 12 , and g(x) = −1 otherwise. Our goal is to build classifiers Rd → {±1} that are both accurate and robust to small perturbations. For any example x, perturbations to it are constrained to taking place in the robustness region of x, denoted Ux. We will let U = {Ux : x ∈ Rd} denote the collections of all robustness regions. We say that a classifier f : Rd → {±1} is robust at x if for all x′ ∈ Ux, f(x′) = f(x). Combining robustness and accuracy, we say that classifier is astute at a point x if it is both accurate and robust. Formally, we have the following definition. Definition 1. A classifier f : Rd → {±1} is said to be astute at (x, y) with respect to robustness collection U if f(x) = y and f is robust at x with respect to U . If D is a data distribution over Rd × {±1}, the astuteness of f over D with respect to U , denoted AU (f,D), is the fraction of examples (x, y) ∼ D for which f is astute at (x, y) with respect to U . Thus AU (f,D) = P(x,y)∼D[f(x′) = y,∀x′ ∈ Ux]. Non-parametric Classifiers We now briefly review several kinds of non-parametric classifiers that we will consider throughout this paper. We begin with weight functions, which are a general class of non-parametric algorithms that encompass many classic algorithms, including nearest neighbors and kernel classifiers. Weight functions are built from training sets, S = {(x1, y1), (x2, y2, ), . . . , (xn, yn)} by assigning a function wSi : Rd → [0, 1] that essentially scores how relevant the training point (xi, yi) is to the example being classified. The functions wSi are allowed to depend on x1, . . . , xn but must be independent of the labels y1, . . . , yn. Given these functions, a point x is classified by just checking whether ∑ yiw S i (x) ≥ 0 or not. If it is nonnegative, we output +1 and otherwise −1. A complete description of weight functions is included in the appendix. Next, we enumerate several common Non-parametric classifiers that can be construed as weight functions. Details can be found in the appendix. 
Histogram classifiers partition the domain Rd into cells recursively by splitting cells that contain a sufficiently large number of points xi. This corresponds to a weight function in which wSi (x) = 1 kx if xi is in the same cell as x, where kx denotes the number of points in the cell containing x. kn-nearest neighbors corresponds to a weight function in which wSi (x) = 1 kn if xi is one of the kn nearest neighbors of x, and wSi (x) = 0 otherwise. Kernel-Similarity classifiers are weight functions built from a kernel function K : R≥0 → R≥0 and a window size (hn)∞1 such that w S i (x) ∝ K(ρ(x, xi)/hn) (we normalize by dividing by∑n 1 K((ρ(x, xi)/hn))). 3 The Neighborhood preserving Bayes optimal classifier Robust classification is typically studied by setting the robustness regions, U = {Ux}x∈Rd , to be balls of radius r centered at x, Ux = {x′ : ρ(x, x′) ≤ r}. The quantity r is the robustness radius, and is typically set by the practitioner (before any training has occurred). This method has a limitation with regards to trade-offs between accuracy and robustness. To increase the margin or robustness, we must have a large robustness radius (thus allowing us to defend from larger adversarial attacks). However, with large robustness radii, this can come at a cost of accuracy, as it is not possible to robustly give different labels to points with intersecting robustness regions. For an illustration, consider Figure 1. Here we consider a data distribution D = (µ, η) in which the blue regions denote all points with η(x) > 0.5 (and thus should be labeled +), and the red regions denote all points with η(x) < 0.5 (and thus should be labeled −). Observe that it is not possible to be simultaneously accurate and robust at points A,B while enforcing a large robustness radius, as demonstrated by the intersecting balls. While this can be resolved by using a smaller radius, this results in losing out on potential robustness at point C. In principal, we should be able to afford a large margin of robustness about C due to its relatively far distance from the red regions. Motivated by this issue, we seek to find a formalism for robustness that allows us to simultaneously avoid paying for any accuracy-robustness trade-offs and adaptively size robustness regions (thus allowing us to defend against a larger range of adversarial attacks at points that are located in more homogenous zones of the distribution support). To approach this, we will first provide an ideal limit object: a classifier that has the same accuracy as the Bayes optimal (thus meeting our first criteria) that has good robustness properties. We call this the the neighborhood preserving Bayes optimal classifier, defined as follows. Definition 2. Let D = (µ, η) be a distribution over Rd ×{±1}. Then the neighborhood preserving Bayes optimal classifier of D, denoted gneighbor, is the classifier defined as follows. Let µ+ = {x : η(x) ≥ 12} and µ − = {x : η(x) < 12}. Then for any x ∈ R d, gneighbor(x) = +1 if ρ(x, µ+) ≤ ρ(x, µ−), and gneighbor(x) = −1 otherwise. This classifier can be thought of as the most robust classifier that matches the accuracy of the Bayes optimal. We call it neighborhood preserving because it extends the Bayes optimal classifier into a local neighborhood about every point in the support. For an illustration, refer to Figure 2, which plots the decision boundary of the neighborhood preserving Bayes optimal for an example distribution. 
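To make Definition 2 concrete, here is a minimal sketch of the neighborhood preserving Bayes optimal prediction rule, under the assumption that the regions µ+ and µ− are represented by finite point samples; the array names and the toy data below are illustrative, not part of the paper.

```python
import numpy as np

def neighborhood_preserving_bayes(x, support_pos, support_neg):
    """Sketch of g_neighbor from Definition 2.

    support_pos, support_neg: (m, d) arrays approximating the regions
    mu+ = {x : eta(x) >= 1/2} and mu- = {x : eta(x) < 1/2}.
    Predicts +1 if x is at least as close to mu+ as to mu-, else -1.
    """
    dist_pos = np.min(np.linalg.norm(support_pos - x, axis=1))
    dist_neg = np.min(np.linalg.norm(support_neg - x, axis=1))
    return 1 if dist_pos <= dist_neg else -1

# Toy usage: two well-separated clusters standing in for mu+ and mu-.
rng = np.random.default_rng(0)
support_pos = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(200, 2))
support_neg = rng.normal(loc=[3.0, 0.0], scale=0.1, size=(200, 2))
print(neighborhood_preserving_bayes(np.array([0.5, 0.0]), support_pos, support_neg))  # +1
print(neighborhood_preserving_bayes(np.array([2.4, 0.0]), support_pos, support_neg))  # -1
```

Inside the support this simply reproduces the Bayes optimal label; outside the support it copies the label of the nearest in-support point, exactly as Definition 2 prescribes.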
Next, we turn our attention towards measuring its robustness, which must be done with respect to some set of robustness regions U = {Ux}. While these regions Ux can be nearly arbitrary, we seek regions Ux such that AU (gmax,D) = A(gbayes,D) (our astuteness equals the maximum possible accuracy) and Ux are “as large as possible" (representing large robustness). To this end, we propose the following regions. Definition 3. Let D = (µ, η) be a data distribution over Rd × {±1}. Let µ+ = {x : η(x) > 12}, µ− = {x : η(x) < 12}, and µ 1/2 = {x : η(x) = 12}. For x ∈ µ +, we define the neighborhood preserving robustness region, denoted Vx, as Vx = {x′ : ρ(x, x′) < ρ(µ− ∪ µ 1 2 , x′)}. It consists of all points that are closer to x than they are to µ− ∪ µ1/2 (points oppositely labeled from x). We can use a similar definition for x ∈ µ−. Finally, if x ∈ µ1/2, we simply set Vx = {x}. These robustness regions take advantage of the structure of the neighborhood preserving Bayes optimal. They can essentially be thought of as regions that maximally extend from any point x in the support of D to the decision boundary of the neighborhood preserving Bayes optimal. We include an illustration of the regions Vx for an example distribution in Figure 2. As a technical note, for x ∈ supp(D) with η(x) = 0.5, we give them a trivial robustness region. The rational for doing this is that η(x) = 0.5 is an edge case that is arbitrary to classify, and consequently enforcing a robustness region at that point is arbitrary and difficult to enforce. We now formalize the robustness and accuracy guarantees of the max-margin Bayes optimal classifier with the following two results. Theorem 4. (Accuracy) Let D be a data distribution. Let V denote the collection of neighborhood preserving robustness regions, and let g denote the Bayes optimal classifier. Then the neighborhood preserving Bayes optimal classifier, gneighbor, satisfies AV(gneighbor,D) = A(g,D), where A(g,D) denotes the accuracy of the Bayes optimal. Thus, gneighbor maximizes accuracy. Theorem 5. (Robustness) Let D be a data distribution, let f be a classifier, and let U be a set of robustness regions. Suppose that AU (f,D) = A(g,D), where g denotes the Bayes optimal classifier. Then there exists x ∈ supp(D) such that Vx 6⊂ Ux, where Vx denotes the neighborhood preserving robustness region about x. In particular, we cannot have Vx be a strict subset of Ux for all x. Theorem 4 shows that the neighborhood preserving Bayes classifier achieves maximal accuracy, while Theorem 5 shows that achieving a strictly higher robustness (while maintaining accuracy) is not possible; while it is possible to make accurate classifiers which have higher robustness than gneighbor in some regions of space, it is not possible for this to hold across all regions. Thus, the neighborhood preserving Bayes optimal classifier can be thought of as a local maximum to the constrained optimization problem of maximizing robustness subject to having maximum (equal to the Bayes optimal) accuracy. 3.1 Neighborhood Consistency Having defined the neighborhood preserving Bayes optimal classifier, we now turn our attention towards building classifiers that converge towards it. Before doing this, we must precisely define what it means to converge. Intuitively, this consists of building classifiers whose robustness regions “approach" the robustness regions of the neighborhood preserving Bayes optimal classifier. This motivates the definition of partial neighborhood preserving robustness regions. Definition 6. 
Let 0 < κ < 1 be a real number, and let D = (µ, η) be a data distribution over Rd × {±1}. Let µ+ = {x : η(x) > 12}, µ − = {x : η(x) < 12}, and µ 1/2 = {x : η(x) = 12}. For x ∈ µ+, we define the neighborhood preserving robustness region, denoted Vx, as Vx = {x′ : ρ(x, x′) < κρ(µ− ∪ µ 1 2 , x′)}. It consists of all points that are closer to x than they are to µ− ∪ µ1/2 (points oppositely labeled from x) by a factor of κ. We can use a similar definition for x ∈ µ−. Finally, if η(x) = 12 , we simply set V κx = {x}. Observe that V κx ⊂ Vx for all 0 < κ < 1, and thus being robust with respect to V κx is a milder condition than Vx. Using this notion, we can now define margin consistency. Definition 7. A learning algorithm A is said to be neighborhood consistent if the following holds for any data distribution D. For any 0 < , δ, κ < 1, there exists N such that for all n ≥ N , with probability at least 1− δ over S ∼ Dn, AVκ(AS , D) ≥ A(g,D)− , where g denotes the Bayes optimal classifier and AS denotes the classifier learned by algorithm A from dataset S. This condition essentially says that the astuteness of the classifier learned by the algorithm converges towards the accuracy of the Bayes optimal classifier. Furthermore, we stipulate that this holds as long as the astuteness is measured with respect to some Vκ. Observe that as κ→ 1, these regions converge towards the neighborhood preserving robustness regions, thus giving us a classifier with robustness effectively equal to that of the neighborhood preserving Bayes optimal classifier. 4 Neighborhood Consistent Non-Parametric Classifiers Having defined neighborhood consistency, we turn to the following question: which non-parametric algorithms are neighborhood consistent? Our starting point will be the standard literature for the convergence of non-parametric classifiers with regard to accuracy. We begin by considering the standard conditions for kn-nearest neighbors to converge (in accuracy) towards the Bayes optimal. kn-nearest neighbors is consistent if and only if the following two conditions are met: limn→∞ kn = ∞, and limn→∞ knn = 0. The first condition guarantees that each point is classified by using an increasing number of nearest neighbors (thus making the probability of a misclassification small), and the second condition guarantees that each point is classified using only points very close to it. We will refer to the first condition as precision, and the second condition as locality. A natural question is whether the same principles suffice for neighborhood consistency as well. We began by showing that without any additional constraints, the answer is no. Theorem 8. Let D = (µ, η) be the data distribution where µ denotes the uniform distribution over [0, 1] and η is defined as: η(x) = x. Over this space, let ρ be the euclidean distance metric. Suppose kn = O(log n) for 1 ≤ n < ∞. Then kn-nearest neighbors is not neighborhood consistent with respect to D. The issue in the example above is that for smaller kn, kn-nearest neighbors lacks sufficient precision. For neighborhood consistnecy, points must be labeled using even more training points than are needed accuracy. This is because the classifier must be uniformly correct across the entirety of V κx . Thus, to build neighborhood consistent classifiers, we must bolster the precision from the standard amount used for standard consistency. To do this, we begin by introducing splitting numbers, a useful tool for bolstering the precision of weight functions. 
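Before moving on, the following small sketch makes Definitions 6 and 7 concrete: it tests membership in the partial robustness region V^κ_x and checks astuteness of a classifier at a single labeled point, assuming the oppositely labeled support is again represented by a finite sample and candidate perturbations are supplied externally; all names here are illustrative stand-ins.

```python
import numpy as np

def in_partial_region(x, x_prime, opposite_support, kappa):
    """Membership test for V^kappa_x (Definition 6): x' belongs to the
    region iff rho(x, x') < kappa * rho(opposite_support, x')."""
    d_to_x = np.linalg.norm(x_prime - x)
    d_to_opposite = np.min(np.linalg.norm(opposite_support - x_prime, axis=1))
    return d_to_x < kappa * d_to_opposite

def empirically_astute(clf_predict, x, y, opposite_support, kappa, candidates):
    """Crude astuteness check at (x, y): the classifier must predict y on
    every candidate perturbation that falls inside V^kappa_x, and on x itself."""
    for x_prime in candidates:
        if in_partial_region(x, x_prime, opposite_support, kappa):
            if clf_predict(x_prime) != y:
                return False
    return clf_predict(x) == y
```

Averaging this indicator over test points gives an empirical estimate of A_{V^κ}, the quantity that Definition 7 requires to approach the Bayes optimal accuracy as n grows.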
4.1 Splitting Numbers We will now generalize beyond nearest neighbors to consider weight functions. Doing so will allow us to simultaneously analyze nearest neighbors and kernel classifiers. To do so, we must first rigorously substantiate our intuitions about increasing precision into concrete requirements. This will require several technical definitions. Definition 9. Let µ be a probability measure over Rd. For any x ∈ Rd, the probability radius rp(x) is the smallest radius for which B(x, rp(x)) has probability mass at least p. More precisely, rp(x) = inf{r : µ(B(x, r)) ≥ p}. Definition 10. Let W be a weight function and let S = {x1, x2, . . . , xn} be any finite subset of Rd. For any x ∈ Rd, α ≥ 0, and 0 ≤ β ≤ 1, let Wx,α,β = {i : ρ(x, xi) ≤ α,wSi (x) ≥ β}. Then the splitting number of W with respect to S, denoted as T (W,S) is the number of distinct subsets generated by Wx,αβ as x ranges over Rd, α ranges over [0,∞), and β ranges over [0, 1]. Thus T (W,S) = |{Wx,α,β : x ∈ Rd, 0 ≤ α, 0 ≤ β ≤ 1}|. Splitting numbers allow us to ensure high amounts of precision over a weight function. To prove neighborhood consistency, it is necessary for a classifier to be correct at all points in a given region. Consequently, techniques that consider a single point will be insufficient. The splitting number provides a mechanism for studying entire regions simultaneously. For more details on splitting numbers, we include several examples in the appendix. 4.2 Sufficient Conditions for Neighborhood Consistency We now state our main result. Theorem 11. Let W be a weight function, D a distribution over Rd × {±1}, U a neighborhood preserving collection, and (tn)∞1 be a sequence of positive integers such that the following four conditions hold. 1. W is consistent (with resp. to accuracy) with resp. to D. 2. For any 0 < p < 1, limn→∞ES∼Dn [supx∈Rd ∑n 1 w S i (x)1ρ(x,xi)>rp(x)] = 0. 3. limn→∞ES∼Dn [tn supx∈Rd w S i (x)] = 0. 4. limn→∞ES∼Dn log T (W,S) tn = 0. Then W is neighborhood consistent with respect to D. Remarks: Condition 1 is necessary because neighborhood consistency implies standard consistency – or, convergence in accuracy to the Bayes Optimal. Standard consistency has been well studied for non-parametric classifiers, and there are a variety of results that can be used to ensure it – for example, Stone’s Theorem (included in the appendix). Conditions 2. and 3. are stronger version of conditions 2. and 3. of Stone’s theorem. In particular, both include a supremum taken over all x ∈ Rd as opposed to simply considering a random point x ∼ D. This is necessary for ensuring correct labels on entire regions of points simultaneously. We also note that the dependence on rp(x) (as opposed to some fixed r) is a key property used for adaptive robustness. This allows the algorithm to adjust to potential differing distance scales over different regions in Rd. This idea is reminiscent of the analysis given in [6], which also considers probability radii. Condition 4. is an entirely new condition which allows us to simultaneously consider all T (W,S) subsets of S. This is needed for analyzing weighted sums with arbitrary weights. Next, we apply Theorem 11 to get specific examples of margin consistent non-parametric algorithms. 4.3 Nearest Neighbors and Kernel Classifiers We now provide sufficient conditions for kn-nearest neighbors to be neighborhood consistent. Corollary 12. Suppose (kn)∞1 satisfies (1) limn→∞ kn n = 0, and (2) limn→∞ logn kn = 0. Then kn-nearest neighbors is neighborhood consistent. 
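As a concrete illustration of Corollary 12, the schedule k_n = ⌈log² n⌉ satisfies both conditions, since k_n/n → 0 while log n / k_n = 1/log n → 0. The sketch below plugs such a schedule into an off-the-shelf k-NN implementation purely for illustration; it is not the experimental setup used in the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def kn_schedule(n):
    """k_n = ceil(log^2 n): grows faster than log n but slower than n,
    matching the two conditions of Corollary 12."""
    return max(1, int(np.ceil(np.log(n) ** 2)))

def fit_neighborhood_consistent_knn(X_train, y_train):
    n = len(X_train)
    clf = KNeighborsClassifier(n_neighbors=min(kn_schedule(n), n))
    return clf.fit(X_train, y_train)

# e.g., n = 10,000 training points gives k_n = ceil(log(10000)^2) = 85.
```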
As a result of Theorem 8, Corollary 12 is tight for nearest neighbors. Thus kn-nearest neighbors is neighborhood consistent if and only if kn = ω(log n). Next, we give sufficient conditions for a kernel-similarity classifier. Corollary 13. Let W be a kernel classifier over Rd × {±1} constructed from K : R+ → R+ and hn. Suppose the following properties hold. 1. K is decreasing, and satisfies ∫_{Rd} K(||x||) dx < ∞. 2. lim_{n→∞} hn = 0 and lim_{n→∞} n hn^d = ∞. 3. For any c > 1, lim_{x→∞} K(cx)/K(x) = 0. 4. For any x ≥ 0, lim_{n→∞} (n / log n) K(x/hn) = ∞. Then W is neighborhood consistent. Observe that conditions 1, 2, and 3 are satisfied by many common kernel functions such as the Gaussian or exponential kernel (K(x) = exp(−x²) or K(x) = exp(−x), respectively). Condition 4 can similarly be satisfied by taking hn to be sufficiently large, i.e., by letting the bandwidth shrink sufficiently slowly. Overall, this theorem states that kernel classification is neighborhood consistent as long as the bandwidth shrinks slowly enough. 4.4 Histogram Classifiers Having discussed neighborhood consistent nearest neighbors and kernel classifiers, we now turn our attention towards another popular weight function, histogram classifiers. Recall that histogram classifiers operate by partitioning their input space into increasingly small cells, and then classifying each cell by using a majority vote from the training examples within that cell (a detailed description can be found in the appendix). We seek to answer the following question: is increasing precision sufficient for making histogram classifiers neighborhood consistent? Unfortunately, the answer turns out to be no. The main issue is that histogram classifiers have no mechanism for performing classification outside the support of the data distribution. For an example of this, refer to Figure 3. Here we see a distribution being classified by a histogram classifier. Observe that the cell labeled A contains points that are strictly closer to µ+ than µ−, and consequently, for sufficiently large κ, V^κ_x will intersect A for some point x ∈ µ+. A similar argument holds for the cells labeled B and C. However, since A, B, and C are all in cells that will never contain any data, they will never be labeled in a meaningful way. Because of this, histogram classifiers are not neighborhood consistent. 5 Validation To complement our theoretical large sample results for non-parametric classifiers, we now include several experiments to understand their behavior for finite samples. We seek to understand how quickly non-parametric classifiers converge towards the neighborhood preserving Bayes optimal. We focus our attention on kernel classifiers and use two different kernel similarity functions: the first, an exponential kernel, and the second, a polynomial kernel. These classifiers were chosen so that the former meets the conditions of Corollary 13, and the latter does not. Full details on these classifiers can be found in the appendix. To be able to measure performance with increasing data size, we look at a simple synthetic dataset of overlaid circles (see Figure 5 for an illustration) with support designed so that the data is intrinsically multiscaled. In particular, this calls for different levels of robustness in different regions. For simplicity, we use a global label noise parameter of 0.2, meaning that any sample drawn from this distribution is labeled differently than its support region indicates with probability 0.2. Further details about our dataset are given in section D. Performance Measure.
For a given classifier, we evaluate its astuteness at a test point x with respect to the robustness region V^κ_x (Definition 6). While these regions are not computable in practice due to their dependency on the support of the data distribution, we are able to approximate them for this synthetic example due to our explicit knowledge of the data distribution. Details for doing this can be found in the appendix. To compute the empirical astuteness of a kernel classifier WK about a test point x, we perform a grid search over all points in V^κ_x to ensure that all points in the robustness region are labeled correctly. For each classifier, we measure the empirical astuteness by using three trials of 20 test points and taking the average. While this is a relatively small amount of test data, it suffices, as our purpose is just to verify that the algorithm roughly converges towards the optimal possible astuteness. Recall that for any neighborhood consistent algorithm, as n→∞, A_{V^κ} should converge towards A∗, the accuracy of the Bayes optimal classifier, for any 0 < κ < 1. Thus, to verify this holds, we use κ = 0.1, 0.3, 0.5. For each of these values, we plot the empirical astuteness as the training sample size n gets larger and larger. As a baseline, we also plot their standard accuracy on the test set. Results and Discussion: The results are presented in Figure 4; the left panel is for the exponential kernel, while the right one is for the polynomial kernel. As predicted by our theory, we see that in all cases, the exponential kernel converges towards the maximum astuteness regardless of the value of κ: the only difference is that the rate of convergence is slower for larger values of κ. This is, of course, expected because larger values of κ entail larger robustness regions. By contrast, the polynomial kernel performs progressively worse for larger values of κ. This kernel was selected specifically to violate the conditions of Corollary 13, and in particular fails criterion 3. However, note that the polynomial kernel nevertheless performs well with respect to accuracy, giving another example that demonstrates the added difficulty of neighborhood consistency. Our results bridge the gap between our asymptotic theoretical results and finite sample regimes. In particular, we see that kernel classifiers that meet the conditions of Corollary 13 are able to converge in astuteness towards the neighborhood preserving Bayes optimal classifier, while classifiers that do not meet these conditions fail. 6 Related Work There is a wealth of literature on robust classification, most of which imposes the same robustness radius r on all of the data. [5, 17, 19, 20, 26, 15, 16, 18, 21, 22, 23], among others, focus primarily on neural networks, and on robustness regions that are ℓ1, ℓ2, or ℓ∞ norm balls of a given radius r. [7] and [12] show how to train neural networks with different robustness radii at different points by trading off robustness and accuracy; their work differs from ours in that they focus on neural networks, their robustness regions are still norm balls, and their work is largely empirical. Our framework is also related to large margin classification – in the sense that the robustness regions U induce a margin constraint on the decision boundary. The most popular large margin classifier is the Support Vector Machine [9, 3, 14] – a large margin linear classifier that minimizes the worst-case margin over the training data.
Similar ideas have also been used to design classifiers that are more flexible than linear; for example, [27] shows how to build large margin Lipschitz classifiers by rounding globally Lipschitz functions. Finally, there has also been purely empirical work on achieving large margins for more complex classifiers – such as [13] for deep neural networks that minimizes the worst case margin, and [29] for metric learning to find large margin nearest neighbors. Our work differs from these in that our goal is to ensure a high enough local margin at each x, (by considering the neighborhood preserving regions Vx) as opposed to optimizing a global margin. Finally, our analysis builds on prior work on robust classification for non-parametric methods in the standard framework. [1, 24, 28, 31] provide adversarial attacks on non-parametric methods. Wang et. al. [28] develops a defense for 1-NN that removes a subset of the training set to ensure higher robustness. Yang et. al [31] proposes the r-optimal classifier – which is the maximally astute classifier in the standard robustness framework – and proposes a defense called Adversarial Pruning. Theoretically, [4] provide conditions under which weight functions converge towards the r-optimal classifier in the large sample limit. They show that for r-separated distributions, where points from different classes are at least distance 2r or more apart, nearest neighbors and kernel classifiers satisfy these conditions. In the more general case, they use Adversarial Pruning as a preprocessing step to ensure that the training data is r-separated, and show that this preprocessing step followed by nearest neighbors or kernel classifiers leads to solutions that are robust and accurate in the large sample limit. Our result fundamentally differs from theirs in that we analyze a different algorithm, and our proof techniques are quite different. In particular, the fundamental differences between the r-optimal classifier and the neighborhood preserving Bayes optimal classifier call for different algorithms and different analysis techniques. In concurrent work, [8] proposes a similar limit to the neighborhood preserving Bayes optimal which they refer to as the margin canonical Bayes. However, their work then focuses on a data augmentation technique that leads to convergence whereas we focus on proving the neighborhood consistency of classical non-parametric classifiers.
1. What is the focus of the paper regarding robust classification? 2. What is the proposed approach to ensure no robustness-accuracy tradeoff? 3. How does the neighborhood preserving Bayes optimal (NPBO) classifier work? 4. What are the sufficient conditions for non-parametric methods to converge towards the limit object? 5. Can you explain the concept of robustness regions and how they are affected by the proposed approach?
Summary Of The Paper Review
Summary Of The Paper The authors argue that robustness regions should be larger in some regions of data, and smaller in others, and propose a new limit classifier, called the neighborhood optimal classifier, that extends the Bayes optimal classifier outside its support by using the label of the closest in-support point. They argue that this classifier maximizes the size of its robustness regions subject to the constraint of having accuracy equal to the Bayes optimal, then present sufficient conditions under which general non-parametric methods that can be represented as weight functions converge towards this limit object, and show that both nearest neighbors and kernel classifiers (under certain assumptions) suffice. Review The authors propose an alternative formulation of robust classification that ensures that in the large sample limit, there is no robustness-accuracy tradeoff, and that regions of space with higher separation are classified more robustly. To this end, the authors introduce a new large-sample limit, the neighborhood preserving Bayes optimal (NPBO) classifier. For an input x, it outputs the prediction of the Bayes Optimal on the nearest neighbor (including itself) in the support of the data distribution D. They show that k_n-nearest neighbors converge to the NPBO provided k_n = ω(log n), and kernel classifiers converge to the NPBO provided certain conditions (the kernel function K has faster than polynomial decay, and the bandwidth parameter h_n decreases sufficiently slowly), but certain types of histogram classifiers do not converge to the NPBO, even if they do converge to the Bayes optimal. They also provide experiments to validate their claims. I feel this work is novel in motivation and solid in theory. I have the following comments for improving the writing and organization: In Section 1, "astuteness" is used without any citation in the first paragraph. In Section 2, there are many conceptions that are existed in previous papers, but the authors did not mention or give references, for instance, Definition 1 "astute and astuteness", weight functions, histogram classfiers, etc. Line 259: Conditions 2. and 3. are ..., conditions 2. and 3. of ... . I think the period "." is not needed after 2 and 3. Line 144: We call this the the neighborhood preserving ...
NIPS
Title Incremental Few-Shot Learning with Attention Attractor Networks Abstract Machine learning classifiers are often trained to recognize a set of pre-defined classes. However, in many applications, it is often desirable to have the flexibility of learning additional concepts, with limited data and without re-training on the full training set. This paper addresses this problem, incremental few-shot learning, where a regular classification network has already been trained to recognize a set of base classes, and several extra novel classes are being considered, each with only a few labeled examples. After learning the novel classes, the model is then evaluated on the overall classification performance on both base and novel classes. To this end, we propose a meta-learning model, the Attention Attractor Network, which regularizes the learning of novel classes. In each episode, we train a set of new weights to recognize novel classes until they converge, and we show that the technique of recurrent back-propagation can back-propagate through the optimization process and facilitate the learning of these parameters. We demonstrate that the learned attractor network can help recognize novel classes while remembering old classes without the need to review the original training set, outperforming various baselines. 1 Introduction The availability of large scale datasets with detailed annotation, such as ImageNet [30], played a significant role in the recent success of deep learning. The need for such a large dataset is however a limitation, since its collection requires intensive human labor. This is also strikingly different from human learning, where new concepts can be learned from very few examples. One line of work that attempts to bridge this gap is few-shot learning [16, 36, 33], where a model learns to output a classifier given only a few labeled examples of the unseen classes. While this is a promising line of work, its practical usability is a concern, because few-shot models only focus on learning novel classes, ignoring the fact that many common classes are readily available in large datasets. An approach that aims to enjoy the best of both worlds, the ability to learn from large datasets for common classes with the flexibility of few-shot learning for others, is incremental few-shot learning [9]. This combines incremental learning where we want to add new classes without catastrophic forgetting [20], with few-shot learning when the new classes, unlike the base classes, only have a small amount of examples. One use case to illustrate the problem is a visual aid system. Most objects of interest are common to all users, e.g., cars, pedestrian signals; however, users would also like to augment the system with additional personalized items or important landmarks in their area. Such a system needs to be able to learn new classes from few examples, without harming the performance on the original classes and typically without access to the dataset used to train the original classes. In this work we present a novel method for incremental few-shot learning where during meta-learning we optimize a regularizer that reduces catastrophic forgetting from the incremental few-shot learning. Our proposed regularizer is inspired by attractor networks [42] and can be thought of as a memory of the base classes, adapted to the new classes. 
We also show how this regularizer can be optimized, using recurrent back-propagation [18, 1, 25] to back-propagate through the few-shot optimization 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. stage. Finally, we show empirically that our proposed method can produce state-of-the-art results in incremental few-shot learning on mini-ImageNet [36] and tiered-ImageNet [29] tasks. 2 Related Work Recently, there has been a surge in interest in few-shot learning [16, 36, 33, 17], where a model for novel classes is learned with only a few labeled examples. One family of approaches for fewshot learning, including Deep Siamese Networks [16], Matching Networks [36] and Prototypical Networks [33], follows the line of metric learning. In particular, these approaches use deep neural networks to learn a function that maps the input space to the embedding space where examples belonging to the same category are close and those belonging to different categories are far apart. Recently, [8] proposes a graph neural networks based method which captures the information propagation from the labeled support set to the query set. [29] extends Prototypical Networks to leverage unlabeled examples while doing few-shot learning. Despite their simplicity, these methods are very effective and often competitive with the state-of-the-art. Another class of approaches aims to learn models which can adapt to the episodic tasks. In particular, [27] treats the long short-term memory (LSTM) as a meta learner such that it can learn to predict the parameter update of a base learner, e.g., a convolutional neural network (CNN). MAML [7] instead learns the hyperparameters or the initial parameters of the base learner by back-propagating through the gradient descent steps. [31] uses a read/write augmented memory, and [21] combines soft attention with temporal convolutions which enables retrieval of information from past episodes. Methods described above belong to the general class of meta-learning models. First proposed in [32, 23, 35], meta-learning is a machine learning paradigm where the meta-learner tries to improve the base learner using the learning experiences from multiple tasks. Meta-learning methods typically learn the update policy yet lack an overall learning objective in the few-shot episodes. Furthermore, they could potentially suffer from short-horizon bias [41], if at test time the model is trained for longer steps. To address this problem, [4] proposes to use fast convergent models like logistic regression (LR), which can be back-propagated via a closed form update rule. Compared to [4], our proposed method using recurrent back-propagation [18, 1, 25] is more general as it does not require a closed-form update, and the inner loop solver can employ any existing continuous optimizers. Our work is also related to incremental learning, a setting where information is arriving continuously while prior knowledge needs to be transferred. A key challenge is catastrophic forgetting [20, 19], i.e., the model forgets the learned knowledge. Various memory-based models have since been proposed, which store training examples explicitly [28, 34, 5, 24], regularize the parameter updates [15], or learn a generative model [13]. 
However, in these studies, incremental learning typically starts from scratch, and usually performs worse than a regular model that is trained with all available classes together since it needs to learned a good representation while dealing with catastrophic forgetting. Incremental few-shot learning is also known as low-shot learning. To leverage a good representation, [10, 37, 9] starts off with a pre-trained network on a set of base classes, and tries to augment the classifier with a batch of new classes that has not been seen during training. [10] proposes the squared gradient magnitude loss, which makes the learned classifier from the low-shot examples have a smaller gradient value when learning on all examples. [37] propose the prototypical matching networks, a combination of prototypical network and matching network. The paper also adds hallucination, which generates new examples. [9] proposes an attention based model which generates weights for novel categories. They also promote the use of cosine similarity between feature representations and weight vectors to classify images. In contrast, during each few-shot episode, we directly learn a classifier network that is randomly initialized and solved till convergence, unlike [9] which directly output the prediction. Since the model cannot see base class data within the support set of each few-shot learning episode, it is challenging to learn a classifier that jointly classifies both base and novel categories. Towards this end, we propose to add a learned regularizer, which is predicted by a meta-network, the “attention attractor network”. The network is learned by differentiating through few-shot learning optimization iterations. We found that using an iterative solver with the learned regularizer significantly improves the classifier model on the task of incremental few-shot learning. 3 Model In this section, we first define the setup of incremental few-shot learning, and then we introduce our new model, the Attention Attractor Network, which attends to the set of base classes according to the few-shot training data by using the attractor regularizing term. Figure 1 illustrates the high-level model diagram of our method. 3.1 Incremental Few-Shot Learning The outline of our meta-learning approach to incremental few-shot learning is: (1) We learn a fixed feature representation and a classifier on a set of base classes; (2) In each training and testing episode we train a novel-class classifier with our meta-learned regularizer; (3) We optimize our meta-learned regularizer on combined novel and base classes classification, adapting it to perform well in conjunction with the base classifier. Details of these stages follow. Pretraining Stage: We learn a base model for the regular supervised classification task on dataset {(xa,i, ya,i)}Nai=1 where xa,i is the i-th example from dataset Da and its labeled class ya,i ∈ {1, 2, ...,K}. The purpose of this stage is to learn both a good base classifier and a good representation. The parameters of the base classifier are learned in this stage and will be fixed after pretraining. We denote the parameters of the top fully connected layer of the base classifier Wa ∈ RD×K where D is the dimension of our learned representation. Incremental Few-Shot Episodes: A few-shot dataset Db is presented, from which we can sample few-shot learning episodes E . Note that this can be the same data source as the pretraining datasetDa, but sampled episodically. 
For each N -shot K ′-way episode, there are K ′ novel classes disjoint from the base classes. Each novel class has N and M images from the support set Sb and the query set Qb respectively. Therefore, we have E = (Sb, Qb), Sb = (xSb,i, ySb,i) N×K′ i=1 , Qb = (x Q b,i, y Q b,i) M×K′ i=1 where yb,i ∈ {K+1, ...,K+K ′}. Sb andQb can be regarded as this episodes training and validation sets. Each episode we learn a classifier on the support set Sb whose learnable parameters Wb are called the fast weights as they are only used during this episode. To evaluate the performance on a joint prediction of both base and novel classes, i.e., a (K +K ′)-way classification, a mini-batch Qa = {(xa,i, ya,i)}M×Ki=1 sampled from Da is also added to Qb to form Qa+b = Qa ∪ Qb. This means that the learning algorithm, which only has access to samples from the novel classes Sb, is evaluated on the joint query set Qa+b. Meta-Learning Stage: In meta-training, we iteratively sample few-shot episodes E and try to learn the meta-parameters in order to minimize the joint prediction loss on Qa+b. In particular, we design a regularizerR(·, θ) such that the fast weights are learned via minimizing the loss `(Wb, Sb)+R(Wb, θ) where `(Wb, Sb) is typically cross-entropy loss for few-shot classification. The meta-learner tries to learn meta-parameters θ such that the optimal fast weights W ∗b w.r.t. the above loss function performs well on Qa+b. In our model, meta-parameters θ are encapsulated in our attention attractor network, which produces regularizers for the fast weights in the few-shot learning objective. Joint Prediction on Base and Novel Classes: We now introduce the details of our joint prediction framework performed in each few-shot episode. First, we construct an episodic classifier, e.g., a logistic regression (LR) model or a multi-layer perceptron (MLP), which takes the learned image features as inputs and classifies them according to the few-shot classes. During training on the support set Sb, we learn the fast weights Wb via minimizing the following regularized cross-entropy objective, which we call the episodic objective: LS(Wb, θ) = − 1 NK ′ NK′∑ i=1 K+K′∑ c=K+1 ySb,i,c log ŷ S b,i,c +R(Wb, θ). (1) This is a general formulation and the specific functional form of the regularization termR(Wb, θ) will be specified later. The predicted output ŷSb,i is obtained via, ŷ S b,i = softmax( [ W>a xb,i, h(xb,i;W ∗ b ) ] ), where h(xb,i) is our classification network and Wb is the fast weights in the network. In the case of LR, h is a linear model: h(xb,i;Wb) = W>b xb,i. h can also be an MLP for more expressive power. During testing on the query set Qa+b, in order to predict both base and novel classes, we directly augment the softmax with the fixed base class weights Wa, ŷ Q i = softmax( [ W>a xi, h(xi;W ∗ b ) ] ), where W ∗b are the optimal parameters that minimize the regularized classification objective in Eq. (1). 3.2 Attention Attractor Networks Directly learning the few-shot episode, e.g., by setting R(Wb, θ) to be zero or simple weight decay, can cause catastrophic forgetting on the base classes. This is becauseWb which is trained to maximize the correct novel class probability can dominate the base classes in the joint prediction. In this section, we introduce the Attention Attractor Network to address this problem. 
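To make the joint prediction of Section 3.1 concrete, here is a minimal NumPy sketch of the augmented softmax over base and novel classes for the logistic-regression case h(x; Wb) = Wb^⊤x; the shapes and names are illustrative stand-ins rather than the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def joint_predict(x, W_a, W_b):
    """Joint prediction over K base and K' novel classes.

    x   : (D,)    feature vector from the frozen backbone
    W_a : (D, K)  fixed base-class weights (slow weights)
    W_b : (D, Kp) episodic novel-class weights (fast weights, LR case)
    Returns a distribution over K + K' classes, i.e.
    softmax([W_a^T x, h(x; W_b)]) with h(x; W_b) = W_b^T x.
    """
    logits = np.concatenate([W_a.T @ x, W_b.T @ x])
    return softmax(logits)

# Toy usage with D=64 features, K=5 base classes, K'=3 novel classes.
rng = np.random.default_rng(0)
probs = joint_predict(rng.normal(size=64),
                      rng.normal(size=(64, 5)),
                      rng.normal(size=(64, 3)))
print(probs.shape)  # (8,)
```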
The key feature of our attractor network is the regularization term R(Wb, θ): R(Wb, θ) = K′∑ k′=1 (Wb,k′ − uk′)>diag(exp(γ))(Wb,k′ − uk′), (2) where uk′ is the so-called attractor and Wb,k′ is the k′-th column of Wb. This sum of squared Mahalanobis distances from the attractors adds a bias to the learning signal arriving solely from novel classes. Note that for a classifier such as an MLP, one can extend this regularization term in a layer-wise manner. Specifically, one can have separate attractors per layer, and the number of attractors equals the number of output dimension of that layer. To ensure that the model performs well on base classes, the attractors uk′ must contain some information about examples from base classes. Since we can not directly access these base examples, we propose to use the slow weights to encode such information. Specifically, each base class has a learned attractor vector Uk stored in the memory matrix U = [U1, ..., UK ]. It is computed as, Uk = fφ(Wa,k), where f is a MLP of which the learnable parameters are φ. For each novel class k′ its classifier is regularized towards its attractor uk′ which is a weighted sum of Uk vectors. Intuitively the weighting is an attention mechanism where each novel class attends to the base classes according to the level of interference, i.e. how prediction of new class k′ causes the forgetting of base class k. For each class in the support set, we compute the cosine similarity between the average representation of the class and base weights Wa then normalize using a softmax function ak′,k = exp ( τA( 1N ∑ j hj1[yb,j = k ′],Wa,k) ) ∑ k exp ( τA( 1N ∑ j hj1[yb,j = k ′],Wa,k) ) , (3) where A is the cosine similarity function, hj are the representations of the inputs in the support set Sb and τ is a learnable temperature scalar. ak′,k encodes a normalized pairwise attention matrix between the novel classes and the base classes. The attention vector is then used to compute a linear weighted sum of entries in the memory matrix U , uk′ = ∑ k ak′,kUk + U0, where U0 is an embedding vector and serves as a bias for the attractor. Algorithm 1 Meta Learning for Incremental Few-Shot Learning Require: θ0, Da, Db, h Ensure: θ 1: θ ← θ0; 2: for t = 1 ... T do 3: {(xSb , ySb )}, {(xQb , y Q b )} ← GetEpisode(Db); 4: {xQa+b, y Q a+b} ← GetMiniBatch(Da) ∪ {(x Q b , y Q b )}; 5: 6: repeat 7: LS ← 1 NK′ ∑ i y S b,i log ŷ S b,i +R(Wb; θ); 8: Wb ← OptimizerStep(Wb,∇WbL S); 9: until Wb converges 10: ŷQa+b,j ← softmax([W > a x Q a+b,j , h(x Q a+b,j ;Wb)]); 11: LQ ← 1 2NK′ ∑ j y Q a+b,j log ŷ Q a+b,j ; 12: // Backprop through the above optimization via RBP // A dummy gradient descent step 13: W ′b ←Wb − α∇WbL S ; 14: J ← ∂W ′ b ∂Wb ; v ← ∂L Q ∂Wb ; g ← v; 15: repeat 16: v ← J>v − v; g ← g + v; 17: until g converges 18: 19: θ ← OptimizerStep(θ, g> ∂W ′ b ∂θ ) 20: end for Our design takes inspiration from attractor networks [22, 42], where for each base class one learns an “attractor” that stores the relevant memory regarding that class. We call our full model “dynamic attractors” as they may vary with each episode even after meta-learning. In contrast if we only have the bias term U0, i.e. a single attractor which is shared by all novel classes, it will not change after meta-learning from one episode to the other. We call this model variant the “static attractor”. In summary, our meta parameters θ include φ, U0, γ and τ , which is on the same scale as as the number of paramters in Wa. It is important to note that R(Wb, θ) is convex w.r.t. Wb. 
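For concreteness, here is a rough sketch of the attention mechanism of Eq. (3) and the attractor regularizer of Eq. (2); the shapes, the precomputed attractors U (standing in for fφ(Wa,k)), and the other names are simplified, hypothetical stand-ins for the components described above. Since R is a sum of squared Mahalanobis distances with a diagonal metric, its convexity in Wb is immediate, which is what the next paragraph relies on.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

def attractor_regularizer(W_b, W_a, U, U0, gamma, tau, class_means):
    """Sketch of R(W_b, theta) from Eq. (2)-(3).

    W_b         : (D, Kp)  fast weights for K' novel classes
    W_a         : (D, K)   fixed base-class weights
    U           : (K, D)   per-base-class attractors U_k = f_phi(W_a[:, k])
    U0          : (D,)     shared bias attractor
    gamma       : (D,)     log of the diagonal metric
    tau         : scalar   learnable temperature
    class_means : (Kp, D)  average support-set features per novel class
    """
    D, Kp = W_b.shape
    K = W_a.shape[1]
    reg = 0.0
    for kp in range(Kp):
        # Attention of novel class k' over the base classes (Eq. 3).
        scores = np.array([tau * cosine(class_means[kp], W_a[:, k]) for k in range(K)])
        a = np.exp(scores - scores.max())
        a = a / a.sum()
        # Attractor for novel class k': weighted sum of base attractors plus bias.
        u_kp = a @ U + U0
        diff = W_b[:, kp] - u_kp
        # Squared Mahalanobis distance with diagonal metric diag(exp(gamma)) (Eq. 2).
        reg += diff @ (np.exp(gamma) * diff)
    return reg
```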
In summary, our meta-parameters \theta include \phi, U_0, \gamma and \tau, whose number is on the same scale as the number of parameters in W_a. It is important to note that R(W_b, \theta) is convex w.r.t. W_b. Therefore, if we use the LR model as the classifier, the overall training objective on episodes in Eq. (1) is convex, which implies that the optimum W^*_b(\theta, S_b) is guaranteed to be unique and achievable. Here we emphasize that the optimal parameters W^*_b are functions of the parameters \theta and the few-shot samples S_b.

During meta-learning, \theta is updated to minimize an expected loss on the query set Q_{a+b}, which contains both base and novel classes, averaging over all few-shot learning episodes:

\min_\theta \; \mathbb{E}_E\big[L^Q(\theta, S_b)\big] = \mathbb{E}_E\Big[-\sum_{j=1}^{M(K+K')} \sum_{c=1}^{K+K'} y_{j,c} \log \hat{y}_{j,c}(\theta, S_b)\Big],   (4)

where the predicted class is \hat{y}_j(\theta, S_b) = \mathrm{softmax}\big(\big[\,W_a^\top x_j,\; h(x_j; W^*_b(\theta, S_b))\,\big]\big).

3.3 Learning via Recurrent Back-Propagation

As there is no closed-form solution to the episodic objective (the optimization problem in Eq. (1)), in each episode we need to minimize L^S to obtain W^*_b through an iterative optimizer. The question is how to efficiently compute \partial W^*_b / \partial \theta, i.e., how to back-propagate through the optimization. One option is to unroll the iterative optimization process in the computation graph and use back-propagation through time (BPTT) [38]. However, the number of iterations for a gradient-based optimizer to converge can be on the order of thousands, and BPTT can be computationally prohibitive. Another way is to use truncated BPTT [39] (T-BPTT), which optimizes for T steps of gradient-based optimization and is commonly used in meta-learning problems. However, when T is small the training objective can be significantly biased. Alternatively, the recurrent back-propagation (RBP) algorithm [1, 25, 18] allows us to back-propagate through the fixed point efficiently, without unrolling the computation graph and storing intermediate activations.

Consider a vanilla gradient descent process on W_b with step size \alpha. The difference between two steps, \Phi, can be written as \Phi(W^{(t)}_b) = W^{(t)}_b - F(W^{(t)}_b), where F(W^{(t)}_b) = W^{(t+1)}_b = W^{(t)}_b - \alpha \nabla L^S(W^{(t)}_b). Since \Phi(W^*_b(\theta)) is identically zero as a function of \theta, using the implicit function theorem we have

\frac{\partial W^*_b}{\partial \theta} = \big(I - J^\top_{F, W^*_b}\big)^{-1} \frac{\partial F}{\partial \theta},

where J_{F, W^*_b} denotes the Jacobian matrix of the mapping F evaluated at W^*_b. Algorithm 1 outlines the key steps for learning the episodic objective using RBP in the incremental few-shot learning setting. Note that the RBP algorithm implicitly inverts (I - J^\top) by computing a matrix-inverse vector product, and it has the same time complexity as truncated BPTT given the same number of unrolled steps, but RBP does not have to store intermediate activations.

Damped Neumann RBP: To compute the matrix-inverse vector product (I - J^\top)^{-1} v, [18] propose to use the Neumann series

(I - J^\top)^{-1} v = \sum_{n=0}^{\infty} (J^\top)^n v \equiv \sum_{n=0}^{\infty} v^{(n)}.

Note that J^\top v can be computed by standard back-propagation. However, directly applying the Neumann RBP algorithm sometimes leads to numerical instability. Therefore, we propose to add a damping term 0 < \epsilon < 1 to I - J^\top. This results in the following update: \tilde{v}^{(n)} = (J^\top - \epsilon I)^n v. In practice, we found that the damping term with \epsilon = 0.1 helps alleviate the issue significantly.
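The damped Neumann iteration can be written compactly with reverse-mode automatic differentiation. The following is a rough PyTorch-style sketch under our own naming and hyperparameter choices; it only illustrates the vector-Jacobian recursion from Algorithm 1, not the full chaining of the meta-gradient onto θ.

```python
import torch

def damped_neumann_rbp_grad(L_S, L_Q, W_b, alpha=0.1, eps=0.1, n_steps=20):
    """Sketch of damped Neumann-series RBP (Sec. 3.3 / Algorithm 1).

    L_S, L_Q: scalar support and query losses built on the same graph as W_b.
    Approximates g = sum_n ((J - eps*I)^T)^n v with v = dL^Q/dW_b, where J is
    the Jacobian of the dummy gradient step W_b' = W_b - alpha * dL^S/dW_b.
    """
    v = torch.autograd.grad(L_Q, W_b, retain_graph=True)[0].detach()
    grad_S = torch.autograd.grad(L_S, W_b, create_graph=True)[0]
    W_prime = W_b - alpha * grad_S          # dummy step, kept differentiable w.r.t. W_b and theta
    g = v.clone()
    for _ in range(n_steps):
        # J^T v via a vector-Jacobian product through the dummy step
        Jv = torch.autograd.grad(W_prime, W_b, grad_outputs=v, retain_graph=True)[0]
        v = Jv - eps * v                    # v <- (J - eps*I)^T v
        g = g + v
    return g
```

The returned vector g then stands in for the query-loss gradient at the fixed point when chaining the meta-gradient onto θ through the dummy step, as in the last line of Algorithm 1.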
4 Experiments

We experiment on two few-shot classification datasets, mini-ImageNet and tiered-ImageNet. Both are subsets of ImageNet [30], with image sizes reduced to 84 × 84 pixels. We also modified the datasets to accommodate the incremental few-shot learning settings.¹

4.1 Datasets
• mini-ImageNet: Proposed by [36], mini-ImageNet contains 100 object classes and 60,000 images. We used the splits proposed by [27], where training, validation, and testing have 64, 16 and 20 classes respectively.
• tiered-ImageNet: Proposed by [29], tiered-ImageNet is a larger subset of ILSVRC-12. It features a categorical split among the training, validation, and testing subsets. The categorical split means that classes belonging to the same high-level category, e.g. "working dog" and "terrier" or some other dog breed, are not split between training, validation and test. This is a harder task, but one that more strictly evaluates generalization to new classes. It is also an order of magnitude larger than mini-ImageNet.

¹ Code released at: https://github.com/renmengye/inc-few-shot-attractor-public

Table 2: mini-ImageNet 64+5-way results
Model          | 1-shot Acc. ↑ | 1-shot ∆ ↓ | 5-shot Acc. ↑ | 5-shot ∆ ↓
ProtoNet [33]  | 42.73 ± 0.15  | -20.21     | 57.05 ± 0.10  | -31.72
Imprint [26]   | 41.10 ± 0.20  | -22.49     | 44.68 ± 0.23  | -27.68
LwoF [9]       | 52.37 ± 0.20  | -13.65     | 59.90 ± 0.20  | -14.18
Ours           | 54.95 ± 0.30  | -11.84     | 63.04 ± 0.30  | -10.66

Table 3: tiered-ImageNet 200+5-way results
Model          | 1-shot Acc. ↑ | 1-shot ∆ ↓ | 5-shot Acc. ↑ | 5-shot ∆ ↓
ProtoNet [33]  | 30.04 ± 0.21  | -29.54     | 41.38 ± 0.28  | -26.39
Imprint [26]   | 39.13 ± 0.15  | -22.26     | 53.60 ± 0.18  | -16.35
LwoF [9]       | 52.40 ± 0.33  | -8.27      | 62.63 ± 0.31  | -6.72
Ours           | 56.11 ± 0.33  | -6.11      | 65.52 ± 0.31  | -4.48

∆ = average decrease in accuracy caused by joint prediction within base and novel classes (∆ = (∆a + ∆b)/2); ↑ (↓) indicates that higher (lower) is better.

4.2 Experiment setup

We use a standard ResNet backbone [11] to learn the feature representation through supervised training. For mini-ImageNet experiments, we follow [21] and use a modified version of ResNet-10. For tiered-ImageNet, we use the standard ResNet-18 [11], but replace all batch normalization [12] layers with group normalization [40], as there is a large distributional shift from training to testing in tiered-ImageNet due to the categorical splits. We used standard data augmentation, with random crops and horizontal flips. We use the same pretrained checkpoint as the starting point for meta-learning.

In the meta-learning stage as well as in the final evaluation, we sample a few-shot episode from D_b together with a regular mini-batch from D_a. The base class images are added to the query set of the few-shot episode, and the base and novel classes are maintained in equal proportion in our experiments. For all the experiments, we consider 5-way classification with 1 or 5 support examples (i.e. shots), and we use a query set of size 25 × 2 = 50. We use L-BFGS [43] to solve the inner loop of our models to make sure W_b converges. We use the ADAM [14] optimizer for meta-learning with a learning rate of 1e-3, which decays by a factor of 10 after 4,000 steps, for a total of 8,000 steps. We fix recurrent back-propagation to 20 iterations and \epsilon = 0.1. We study two variants of the classifier network. The first is a logistic regression model with a single weight matrix W_b. The second is a 2-layer fully connected MLP model with 40 hidden units in the middle and a tanh non-linearity. To make training more efficient, we also add a shortcut connection in our MLP which directly links the input to the output. In the second stage of training, we keep all backbone weights frozen and only train the meta-parameters \theta.
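For illustration, the inner-loop solve described above (running L-BFGS on the episodic objective until W_b converges) might look roughly as follows in a PyTorch-style setup. The regularizer argument stands for R(·, θ) with θ held fixed; all names and hyperparameter values here are placeholders rather than the released configuration.

```python
import torch
import torch.nn.functional as F

def solve_fast_weights(support_feats, support_labels, W_a, regularizer, num_novel,
                       max_iter=100):
    """Illustrative inner-loop solve of Eq. (1) with L-BFGS.

    support_labels use joint indices K..K+K'-1 so they match the concatenated logits.
    """
    D = W_a.shape[0]
    W_b = torch.zeros(D, num_novel, requires_grad=True)
    opt = torch.optim.LBFGS([W_b], max_iter=max_iter, line_search_fn="strong_wolfe")

    def closure():
        opt.zero_grad()
        logits = torch.cat([support_feats @ W_a, support_feats @ W_b], dim=1)
        loss = F.cross_entropy(logits, support_labels) + regularizer(W_b)
        loss.backward()
        return loss

    opt.step(closure)          # runs L-BFGS iterations until max_iter or convergence
    return W_b.detach()
```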
4.3 Evaluation metrics

We consider the following evaluation metrics: 1) overall accuracy on the individual query sets and on the joint query set ("Base", "Novel", and "Both"); and 2) the decrease in performance caused by joint prediction within the base and novel classes, considered separately ("∆a" and "∆b"). Finally, we take the average ∆ = (∆a + ∆b)/2 as a key measure of the overall decrease in accuracy.

4.4 Comparisons

We implemented and compared to three methods. First, we adapted Prototypical Networks [33] to the incremental few-shot setting. For each base class we store a base representation, which is the average representation (prototype) over all images belonging to that base class. During the few-shot learning stage, we again average the representations of the few-shot classes and add them to the bank of base representations. Finally, we retrieve the nearest neighbor by comparing the representation of a test image with the entries in the representation store. In summary, both W_a and W_b are stored as the average representation of all images seen so far that belong to a certain class (a short sketch of this baseline is given after the ablation variant list below). We also compare to the following methods:
• Weights Imprinting ("Imprint") [26]: the base weights W_a are learned regularly through supervised pre-training, and W_b are computed using prototypical averaging.
• Learning without Forgetting ("LwoF") [9]: similar to [26], W_b are computed using prototypical averaging. In addition, W_a is finetuned during episodic meta-learning. We implemented the most advanced variant proposed in the paper, which involves a class-wise attention mechanism. This model is the previous state-of-the-art method on incremental few-shot learning, and has better performance compared to other low-shot models [37, 10].

4.5 Results

We first evaluate our vanilla approach on the standard few-shot classification benchmark, where no base classes are present in the query set. Our vanilla model consists of a pretrained CNN and a single-layer logistic regression with weight decay learned from scratch; this model performs on par with other competitive meta-learning approaches (1-shot 55.40 ± 0.51, 5-shot 70.17 ± 0.46). Note that our model uses the same backbone architecture as [21] and [9], and is directly comparable with their results. Similar findings of strong results using simple logistic regression on few-shot classification benchmarks are also recently reported in [6]. Our full model has similar performance to the vanilla model on pure few-shot benchmarks, and the full table is available in the Supp. Materials. Next, we compare our models to other methods on incremental few-shot learning benchmarks in Tables 2 and 3. On both benchmarks, our best performing model shows a significant margin over the prior works that predict the prototype representation without using iterative optimization [33, 26, 9].

Table 4: Ablation studies on mini-ImageNet
Table 5: Ablation studies on tiered-ImageNet

4.6 Ablation studies

To understand the effectiveness of each part of the proposed model, we consider the following variants:
• Vanilla ("LR, MLP") optimizes a logistic regression or an MLP network at each few-shot episode, with a weight decay regularizer.
• Static attractor ("+S") learns a fixed attractor center u and attractor slope γ for all classes.
• Attention attractor ("+A") learns the full attention attractor model. For MLP models, the weights below the final layer are controlled by attractors predicted by the average representation across all the episodes. f_\phi is an MLP with one hidden layer of 50 units.
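As referenced in Sec. 4.4, here is a rough sketch of the adapted Prototypical Networks baseline, written in a PyTorch style with illustrative names. The use of squared Euclidean distance follows the original ProtoNet formulation and is our assumption for this adapted baseline.

```python
import torch

def protonet_incremental_predict(query_feats, base_prototypes, support_feats,
                                 support_labels, num_novel):
    """Sketch of the adapted Prototypical Networks baseline from Sec. 4.4.

    base_prototypes: [K, D] stored average features of the base classes.
    support_feats / support_labels: novel-class support set, labels in {0, ..., K'-1}.
    """
    novel_protos = torch.stack([support_feats[support_labels == c].mean(dim=0)
                                for c in range(num_novel)])          # [K', D] averaged novel prototypes
    prototypes = torch.cat([base_prototypes, novel_protos], dim=0)   # [K+K', D] joint prototype bank
    dists = torch.cdist(query_feats, prototypes) ** 2                # [Q, K+K'] squared distances
    return dists.argmin(dim=1)                                       # nearest-prototype joint prediction
```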
Tables 4 and 5 show the ablation experiment results. In all cases, the learned regularization function performs better than a manually set weight decay constant on the classifier network, both in terms of jointly predicting base and novel classes and in terms of less degradation relative to individual prediction. On mini-ImageNet, our attention attractors have a clear advantage over static attractors. Formulating the classifier as an MLP network is slightly better than the linear models in our experiments. Although the final performance is similar, our RBP-based algorithm has the flexibility to equip the fast episodic model with more capacity: unlike [4], we do not rely on an analytic form of the gradients of the optimization process.

Comparison to truncated BPTT (T-BPTT): An alternative way to learn the regularizer is to unroll the inner optimization for a fixed number of steps in a differentiable computation graph and then back-propagate through time. Truncated BPTT is a popular learning algorithm in many recent meta-learning approaches [2, 27, 7, 34, 3]. As shown in Figure 2, the performance of T-BPTT-learned models is comparable to ours; however, when solved to convergence at test time, the performance of T-BPTT models drops significantly. This is expected, as they are only guaranteed to work well for a certain number of steps and fail to learn a good regularizer. While an early-stopped T-BPTT model can do equally well, in practice it is hard to tell when to stop; whereas for the RBP model, doing the full episodic training is very fast since the number of support examples is small.

Visualization of attractor dynamics: We visualize the attractor dynamics in Figure 3. Our learned attractors pull the fast weights towards the base class weights. In comparison, [9] only modifies the prototypes slightly.

Varying the number of base classes: While the framework proposed in this paper cannot be directly applied to class-incremental continual learning, as there is no module for memory consolidation, we can simulate the continual learning process by varying the number of base classes, to see how the proposed models are affected by different stages of continual learning. Figure 4 shows that the learned regularizers consistently improve over baselines with weight decay only. The overall accuracy increases from 50 to 150 base classes due to better representations in the backbone network, and drops at 200 classes due to a more challenging classification task.

5 Conclusion

Incremental few-shot learning, the ability to jointly predict based on a set of pre-defined concepts as well as additional novel concepts, is an important step towards making machine learning models more flexible and usable in everyday life. In this work, we propose an attention attractor model which regulates a per-episode training objective by attending to the set of base classes. We show that our iterative model, which solves the few-shot objective until convergence, is better than baselines that do one-step inference, and that recurrent back-propagation is an effective and modular tool for learning in a general meta-learning setting, whereas truncated back-propagation through time fails to learn functions that converge well. Future directions of this work include sequential iterative learning of few-shot novel concepts, and hierarchical memory organization.
Acknowledgment Supported by NSERC and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
1. What are the contributions and architectural/algorithmic advancements presented in the paper? 2. What are the inconsistencies in softmaxes, and how does the proposed attractor-based regularization address them? 3. What are the computational considerations and overheads associated with the method, particularly regarding the inverse of the Jacobian? 4. How do the results on tiered-ImageNet compare to those on mini-ImageNet, and what might explain the difference? 5. Will the code for reproducing the results be made publicly available? 6. Is there a minor error in the paper, specifically regarding the definition of \theta_E?
Review
Review The paper is nicely written, motivates the incremental few-shot learning problem, and makes a couple of interesting architectural/algorithmic contributions. The results are interesting/convincing, and the ablation study helps to better understand the properties of the proposed method. I believe the paper would be of interest to the community, but more clarifications are necessary.
=== Comments and questions:
- Inconsistencies in softmaxes. From 3.1, it looks like the softmaxes used for training the fast and slow weights have different normalization constants (see line 135 and line 139), and hence the logits for b-classes might have entirely different scales than the logits for a-classes. I feel that the reason why you need the proposed attractor-based regularization in the first place is to compensate for this scaling issue. Alternatively, you can use the same softmax from line 139 when computing the loss in eq (1) to avoid this discrepancy (since the base classes are available, W_a is pretrained and fixed, this should be possible). Why not do that? I would like to see a comparison with the vanilla architecture (i.e., no attractor-based regularization) that uses a consistent softmax normalization.
- Computational considerations. RBP requires the inverse of the Jacobian, which scales cubically. What is the computational overhead for this method? How would it scale with an increased number of classes? Analysis and discussion of this are necessary.
- Results. The authors mention that tiered-ImageNet is a harder task (which intuitively makes sense), but somehow results on that dataset are better than on mini-ImageNet. How would you explain that?
- Will the code for reproducing results be released?
=== Minor:
- line 124: I believe \theta_E denotes parameters of the feature extractor, but it has not been properly defined.
1. What is the main contribution of the paper regarding incremental classification? 2. How does the proposed method optimize a regularizer to reduce catastrophic forgetting? 3. Can you provide more clarity on how the learned regularizer relates to memory of base classes adapted to new classes? 4. How does data augmentation ensure disjoint novel classes in the union of original data and distorted versions? 5. Can you explain why parameters \theta_E fit in figure 1 but never appear in the definition of equation 2? 6. Why do W_a weights have different notations such as slow weights and base class weights? 7. Do you have any questions regarding the paper's content or explanations?
Review
Review Overall a very nice and interesting read. In terms of originality I believe the proposed method to be sufficiently novel, and at no point did I feel this was merely an incremental improvement. In terms of significance it should be said that I feel this idea to be fairly specific to the incremental classification setting and wouldn't be general enough to be directly applicable in another domain (e.g. RL). However, I still believe this work should be accepted and would expect recognition within the domain. With regards to the clarity of the submission, I believe sections 3.1 and 3.2 could be improved. Detailed comments below:
Introduction: L36: "We optimize a regularizer that reduces catastrophic forgetting" - Perhaps it would be a good idea to delineate this from many of the other works on regularization-based methods to reduce catastrophic forgetting where the regularizer isn't learnt? Examples are [1] or [2]. L37: "can be thought of as a memory of the base classes, adapted to the new classes". This is very unclear and not particularly helpful in the introduction. I could only make sense of this sentence after reading Section 3.
Figure 1: Very helpful, thank you for providing this.
Section 2: Nice overview of related work. Perhaps some more discussion on work to tackle the catastrophic forgetting problem would be useful here.
Section 3: L112: "there are K' novel classes disjoint from the base classes" - When we use the same dataset D_a also used during pretraining, this seems only possible when data augmentation is introduced (as the authors explain in Section 4 (L239)). It would be good to already mention this here (possibly with a footnote). Also, I understand data augmentation as training on the union of original data and a distorted version thereof. In order to ensure that the K' classes are indeed disjoint, are the authors ensuring that during episodic sampling from D_a there is always some form of distortion applied? If, for instance, during sampling of random rotations we choose between {90, 180, 270, 360}, one could run the risk of training on identical data already used during pre-training. L120: "from from" -> "from". L126: Where do parameters \theta_E fit in Figure 1? They appear as arguments to R_(W_b, \theta_E) but never appear in the definition of equation (2). This is not made clear until line 172, which was rather confusing. Also, the subscript E seems like a strange letter to choose. L139: W_a are called "slow weights" (also in L154) whereas they have previously been referred to as "Base class Weights" (Figure 1). I found myself repeatedly looking at Figure 1 to keep track of what was happening in the model. Using consistent notation would have made this a lot easier.
Section 3.3: Very clear.
Section 4: L245: Missing whitespace after the full stop. L248: "RBP" -> Recurrent back-propagation, as readers might be unfamiliar with the abbreviation and might want to only briefly skim the experimental section.
Section 4.3: 2) I don't think I understand what \delta_a and \delta_b are supposed to be.
Section 4.6: Nice to see the ablation study I was hoping for when reading Section 3. Interesting also to see that in some cases a simple LR model for W_b works better or just as well. Also great to see the comparison between RBP and T-BPTT. Figure 3 is really nice.
[1] Kirkpatrick, James, et al. "Overcoming catastrophic forgetting in neural networks." Proceedings of the National Academy of Sciences 114.13 (2017): 3521-3526.
[2] Nguyen, Cuong V., et al. "Variational continual learning."
arXiv preprint arXiv:1710.10628 (2017).
NIPS
Title Incremental Few-Shot Learning with Attention Attractor Networks Abstract Machine learning classifiers are often trained to recognize a set of pre-defined classes. However, in many applications, it is often desirable to have the flexibility of learning additional concepts, with limited data and without re-training on the full training set. This paper addresses this problem, incremental few-shot learning, where a regular classification network has already been trained to recognize a set of base classes, and several extra novel classes are being considered, each with only a few labeled examples. After learning the novel classes, the model is then evaluated on the overall classification performance on both base and novel classes. To this end, we propose a meta-learning model, the Attention Attractor Network, which regularizes the learning of novel classes. In each episode, we train a set of new weights to recognize novel classes until they converge, and we show that the technique of recurrent back-propagation can back-propagate through the optimization process and facilitate the learning of these parameters. We demonstrate that the learned attractor network can help recognize novel classes while remembering old classes without the need to review the original training set, outperforming various baselines. 1 Introduction The availability of large scale datasets with detailed annotation, such as ImageNet [30], played a significant role in the recent success of deep learning. The need for such a large dataset is however a limitation, since its collection requires intensive human labor. This is also strikingly different from human learning, where new concepts can be learned from very few examples. One line of work that attempts to bridge this gap is few-shot learning [16, 36, 33], where a model learns to output a classifier given only a few labeled examples of the unseen classes. While this is a promising line of work, its practical usability is a concern, because few-shot models only focus on learning novel classes, ignoring the fact that many common classes are readily available in large datasets. An approach that aims to enjoy the best of both worlds, the ability to learn from large datasets for common classes with the flexibility of few-shot learning for others, is incremental few-shot learning [9]. This combines incremental learning where we want to add new classes without catastrophic forgetting [20], with few-shot learning when the new classes, unlike the base classes, only have a small amount of examples. One use case to illustrate the problem is a visual aid system. Most objects of interest are common to all users, e.g., cars, pedestrian signals; however, users would also like to augment the system with additional personalized items or important landmarks in their area. Such a system needs to be able to learn new classes from few examples, without harming the performance on the original classes and typically without access to the dataset used to train the original classes. In this work we present a novel method for incremental few-shot learning where during meta-learning we optimize a regularizer that reduces catastrophic forgetting from the incremental few-shot learning. Our proposed regularizer is inspired by attractor networks [42] and can be thought of as a memory of the base classes, adapted to the new classes. 
We also show how this regularizer can be optimized, using recurrent back-propagation [18, 1, 25] to back-propagate through the few-shot optimization 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. stage. Finally, we show empirically that our proposed method can produce state-of-the-art results in incremental few-shot learning on mini-ImageNet [36] and tiered-ImageNet [29] tasks. 2 Related Work Recently, there has been a surge in interest in few-shot learning [16, 36, 33, 17], where a model for novel classes is learned with only a few labeled examples. One family of approaches for fewshot learning, including Deep Siamese Networks [16], Matching Networks [36] and Prototypical Networks [33], follows the line of metric learning. In particular, these approaches use deep neural networks to learn a function that maps the input space to the embedding space where examples belonging to the same category are close and those belonging to different categories are far apart. Recently, [8] proposes a graph neural networks based method which captures the information propagation from the labeled support set to the query set. [29] extends Prototypical Networks to leverage unlabeled examples while doing few-shot learning. Despite their simplicity, these methods are very effective and often competitive with the state-of-the-art. Another class of approaches aims to learn models which can adapt to the episodic tasks. In particular, [27] treats the long short-term memory (LSTM) as a meta learner such that it can learn to predict the parameter update of a base learner, e.g., a convolutional neural network (CNN). MAML [7] instead learns the hyperparameters or the initial parameters of the base learner by back-propagating through the gradient descent steps. [31] uses a read/write augmented memory, and [21] combines soft attention with temporal convolutions which enables retrieval of information from past episodes. Methods described above belong to the general class of meta-learning models. First proposed in [32, 23, 35], meta-learning is a machine learning paradigm where the meta-learner tries to improve the base learner using the learning experiences from multiple tasks. Meta-learning methods typically learn the update policy yet lack an overall learning objective in the few-shot episodes. Furthermore, they could potentially suffer from short-horizon bias [41], if at test time the model is trained for longer steps. To address this problem, [4] proposes to use fast convergent models like logistic regression (LR), which can be back-propagated via a closed form update rule. Compared to [4], our proposed method using recurrent back-propagation [18, 1, 25] is more general as it does not require a closed-form update, and the inner loop solver can employ any existing continuous optimizers. Our work is also related to incremental learning, a setting where information is arriving continuously while prior knowledge needs to be transferred. A key challenge is catastrophic forgetting [20, 19], i.e., the model forgets the learned knowledge. Various memory-based models have since been proposed, which store training examples explicitly [28, 34, 5, 24], regularize the parameter updates [15], or learn a generative model [13]. 
However, in these studies, incremental learning typically starts from scratch, and usually performs worse than a regular model that is trained with all available classes together since it needs to learned a good representation while dealing with catastrophic forgetting. Incremental few-shot learning is also known as low-shot learning. To leverage a good representation, [10, 37, 9] starts off with a pre-trained network on a set of base classes, and tries to augment the classifier with a batch of new classes that has not been seen during training. [10] proposes the squared gradient magnitude loss, which makes the learned classifier from the low-shot examples have a smaller gradient value when learning on all examples. [37] propose the prototypical matching networks, a combination of prototypical network and matching network. The paper also adds hallucination, which generates new examples. [9] proposes an attention based model which generates weights for novel categories. They also promote the use of cosine similarity between feature representations and weight vectors to classify images. In contrast, during each few-shot episode, we directly learn a classifier network that is randomly initialized and solved till convergence, unlike [9] which directly output the prediction. Since the model cannot see base class data within the support set of each few-shot learning episode, it is challenging to learn a classifier that jointly classifies both base and novel categories. Towards this end, we propose to add a learned regularizer, which is predicted by a meta-network, the “attention attractor network”. The network is learned by differentiating through few-shot learning optimization iterations. We found that using an iterative solver with the learned regularizer significantly improves the classifier model on the task of incremental few-shot learning. 3 Model In this section, we first define the setup of incremental few-shot learning, and then we introduce our new model, the Attention Attractor Network, which attends to the set of base classes according to the few-shot training data by using the attractor regularizing term. Figure 1 illustrates the high-level model diagram of our method. 3.1 Incremental Few-Shot Learning The outline of our meta-learning approach to incremental few-shot learning is: (1) We learn a fixed feature representation and a classifier on a set of base classes; (2) In each training and testing episode we train a novel-class classifier with our meta-learned regularizer; (3) We optimize our meta-learned regularizer on combined novel and base classes classification, adapting it to perform well in conjunction with the base classifier. Details of these stages follow. Pretraining Stage: We learn a base model for the regular supervised classification task on dataset {(xa,i, ya,i)}Nai=1 where xa,i is the i-th example from dataset Da and its labeled class ya,i ∈ {1, 2, ...,K}. The purpose of this stage is to learn both a good base classifier and a good representation. The parameters of the base classifier are learned in this stage and will be fixed after pretraining. We denote the parameters of the top fully connected layer of the base classifier Wa ∈ RD×K where D is the dimension of our learned representation. Incremental Few-Shot Episodes: A few-shot dataset Db is presented, from which we can sample few-shot learning episodes E . Note that this can be the same data source as the pretraining datasetDa, but sampled episodically. 
For each N -shot K ′-way episode, there are K ′ novel classes disjoint from the base classes. Each novel class has N and M images from the support set Sb and the query set Qb respectively. Therefore, we have E = (Sb, Qb), Sb = (xSb,i, ySb,i) N×K′ i=1 , Qb = (x Q b,i, y Q b,i) M×K′ i=1 where yb,i ∈ {K+1, ...,K+K ′}. Sb andQb can be regarded as this episodes training and validation sets. Each episode we learn a classifier on the support set Sb whose learnable parameters Wb are called the fast weights as they are only used during this episode. To evaluate the performance on a joint prediction of both base and novel classes, i.e., a (K +K ′)-way classification, a mini-batch Qa = {(xa,i, ya,i)}M×Ki=1 sampled from Da is also added to Qb to form Qa+b = Qa ∪ Qb. This means that the learning algorithm, which only has access to samples from the novel classes Sb, is evaluated on the joint query set Qa+b. Meta-Learning Stage: In meta-training, we iteratively sample few-shot episodes E and try to learn the meta-parameters in order to minimize the joint prediction loss on Qa+b. In particular, we design a regularizerR(·, θ) such that the fast weights are learned via minimizing the loss `(Wb, Sb)+R(Wb, θ) where `(Wb, Sb) is typically cross-entropy loss for few-shot classification. The meta-learner tries to learn meta-parameters θ such that the optimal fast weights W ∗b w.r.t. the above loss function performs well on Qa+b. In our model, meta-parameters θ are encapsulated in our attention attractor network, which produces regularizers for the fast weights in the few-shot learning objective. Joint Prediction on Base and Novel Classes: We now introduce the details of our joint prediction framework performed in each few-shot episode. First, we construct an episodic classifier, e.g., a logistic regression (LR) model or a multi-layer perceptron (MLP), which takes the learned image features as inputs and classifies them according to the few-shot classes. During training on the support set Sb, we learn the fast weights Wb via minimizing the following regularized cross-entropy objective, which we call the episodic objective: LS(Wb, θ) = − 1 NK ′ NK′∑ i=1 K+K′∑ c=K+1 ySb,i,c log ŷ S b,i,c +R(Wb, θ). (1) This is a general formulation and the specific functional form of the regularization termR(Wb, θ) will be specified later. The predicted output ŷSb,i is obtained via, ŷ S b,i = softmax( [ W>a xb,i, h(xb,i;W ∗ b ) ] ), where h(xb,i) is our classification network and Wb is the fast weights in the network. In the case of LR, h is a linear model: h(xb,i;Wb) = W>b xb,i. h can also be an MLP for more expressive power. During testing on the query set Qa+b, in order to predict both base and novel classes, we directly augment the softmax with the fixed base class weights Wa, ŷ Q i = softmax( [ W>a xi, h(xi;W ∗ b ) ] ), where W ∗b are the optimal parameters that minimize the regularized classification objective in Eq. (1). 3.2 Attention Attractor Networks Directly learning the few-shot episode, e.g., by setting R(Wb, θ) to be zero or simple weight decay, can cause catastrophic forgetting on the base classes. This is becauseWb which is trained to maximize the correct novel class probability can dominate the base classes in the joint prediction. In this section, we introduce the Attention Attractor Network to address this problem. 
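Before specifying R(Wb, θ), the joint base-plus-novel prediction and the episodic objective of Eq. (1) can be made concrete with a minimal NumPy sketch, assuming a logistic-regression episodic classifier h(x; Wb) = Wb^T x. All shapes, the random data and the placeholder weight-decay regularizer below are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the joint (K + K')-way prediction and the episodic loss in Eq. (1).
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def joint_probs(X, W_a, W_b):
    """X: (n, D) features; W_a: (D, K) frozen base weights; W_b: (D, K') fast weights.
    Softmax over the concatenated (K + K') logits [W_a^T x, W_b^T x]."""
    logits = np.concatenate([X @ W_a, X @ W_b], axis=1)
    return softmax(logits)

def episodic_loss(W_b, X_s, y_s, W_a, regularizer):
    """Support-set cross-entropy on the novel classes plus R(W_b, theta).
    y_s holds novel-class indices in {0, ..., K'-1}."""
    K = W_a.shape[1]
    p = joint_probs(X_s, W_a, W_b)                      # (n, K + K')
    nll = -np.log(p[np.arange(len(y_s)), K + y_s] + 1e-12).mean()
    return nll + regularizer(W_b)

# Tiny usage example (D=8 features, K=5 base classes, K'=3 novel classes):
rng = np.random.default_rng(0)
D, K, Kp, n = 8, 5, 3, 15
W_a = rng.normal(size=(D, K))           # frozen after pretraining
W_b = rng.normal(size=(D, Kp)) * 0.01   # fast weights, learned per episode
X_s, y_s = rng.normal(size=(n, D)), rng.integers(0, Kp, size=n)
weight_decay = lambda W: 1e-2 * np.sum(W ** 2)          # stand-in for R(W_b, theta)
print(episodic_loss(W_b, X_s, y_s, W_a, weight_decay))
```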
The key feature of our attractor network is the regularization term R(Wb, θ): R(Wb, θ) = K′∑ k′=1 (Wb,k′ − uk′)>diag(exp(γ))(Wb,k′ − uk′), (2) where uk′ is the so-called attractor and Wb,k′ is the k′-th column of Wb. This sum of squared Mahalanobis distances from the attractors adds a bias to the learning signal arriving solely from novel classes. Note that for a classifier such as an MLP, one can extend this regularization term in a layer-wise manner. Specifically, one can have separate attractors per layer, and the number of attractors equals the number of output dimension of that layer. To ensure that the model performs well on base classes, the attractors uk′ must contain some information about examples from base classes. Since we can not directly access these base examples, we propose to use the slow weights to encode such information. Specifically, each base class has a learned attractor vector Uk stored in the memory matrix U = [U1, ..., UK ]. It is computed as, Uk = fφ(Wa,k), where f is a MLP of which the learnable parameters are φ. For each novel class k′ its classifier is regularized towards its attractor uk′ which is a weighted sum of Uk vectors. Intuitively the weighting is an attention mechanism where each novel class attends to the base classes according to the level of interference, i.e. how prediction of new class k′ causes the forgetting of base class k. For each class in the support set, we compute the cosine similarity between the average representation of the class and base weights Wa then normalize using a softmax function ak′,k = exp ( τA( 1N ∑ j hj1[yb,j = k ′],Wa,k) ) ∑ k exp ( τA( 1N ∑ j hj1[yb,j = k ′],Wa,k) ) , (3) where A is the cosine similarity function, hj are the representations of the inputs in the support set Sb and τ is a learnable temperature scalar. ak′,k encodes a normalized pairwise attention matrix between the novel classes and the base classes. The attention vector is then used to compute a linear weighted sum of entries in the memory matrix U , uk′ = ∑ k ak′,kUk + U0, where U0 is an embedding vector and serves as a bias for the attractor. Algorithm 1 Meta Learning for Incremental Few-Shot Learning Require: θ0, Da, Db, h Ensure: θ 1: θ ← θ0; 2: for t = 1 ... T do 3: {(xSb , ySb )}, {(xQb , y Q b )} ← GetEpisode(Db); 4: {xQa+b, y Q a+b} ← GetMiniBatch(Da) ∪ {(x Q b , y Q b )}; 5: 6: repeat 7: LS ← 1 NK′ ∑ i y S b,i log ŷ S b,i +R(Wb; θ); 8: Wb ← OptimizerStep(Wb,∇WbL S); 9: until Wb converges 10: ŷQa+b,j ← softmax([W > a x Q a+b,j , h(x Q a+b,j ;Wb)]); 11: LQ ← 1 2NK′ ∑ j y Q a+b,j log ŷ Q a+b,j ; 12: // Backprop through the above optimization via RBP // A dummy gradient descent step 13: W ′b ←Wb − α∇WbL S ; 14: J ← ∂W ′ b ∂Wb ; v ← ∂L Q ∂Wb ; g ← v; 15: repeat 16: v ← J>v − v; g ← g + v; 17: until g converges 18: 19: θ ← OptimizerStep(θ, g> ∂W ′ b ∂θ ) 20: end for Our design takes inspiration from attractor networks [22, 42], where for each base class one learns an “attractor” that stores the relevant memory regarding that class. We call our full model “dynamic attractors” as they may vary with each episode even after meta-learning. In contrast if we only have the bias term U0, i.e. a single attractor which is shared by all novel classes, it will not change after meta-learning from one episode to the other. We call this model variant the “static attractor”. In summary, our meta parameters θ include φ, U0, γ and τ , which is on the same scale as as the number of paramters in Wa. It is important to note that R(Wb, θ) is convex w.r.t. Wb. 
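The attractor computation of Eqs. (2) and (3) can be sketched as follows. The meta-parameters (the MLP f_φ, the bias U0, the log-scales γ and the temperature τ) are placeholders here, since in the paper they are obtained by meta-training; every concrete value below is an illustrative assumption.

```python
# Minimal sketch of the attention attractor regularizer (Eqs. 2-3).
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def attention_attractors(H_s, y_s, W_a, f_phi, U0, tau):
    """H_s: (n, D) support representations, y_s: novel labels in {0..K'-1},
    W_a: (D, K) base weights. Returns one attractor per novel class, shape (K', D)."""
    D, K = W_a.shape
    Kp = int(y_s.max()) + 1
    U = np.stack([f_phi(W_a[:, k]) for k in range(K)])      # (K, D) memory matrix
    u = np.zeros((Kp, D))
    for kp in range(Kp):
        proto = H_s[y_s == kp].mean(axis=0)                 # class-average representation
        scores = np.array([tau * cosine(proto, W_a[:, k]) for k in range(K)])
        a = np.exp(scores - scores.max()); a /= a.sum()     # Eq. (3): softmax attention
        u[kp] = a @ U + U0                                  # weighted memory + bias attractor
    return u

def attractor_regularizer(W_b, u, log_gamma):
    """Eq. (2): sum of squared Mahalanobis distances of the fast weights to the attractors.
    W_b: (D, K'), u: (K', D), log_gamma: (D,)."""
    diff = W_b.T - u
    return np.sum(np.exp(log_gamma) * diff ** 2)

# Tiny usage example with placeholder meta-parameters:
rng = np.random.default_rng(1)
D, K, Kp = 8, 5, 3
W_a = rng.normal(size=(D, K))
f_phi = lambda w: np.tanh(w)                                # stand-in for the learned MLP
U0, log_gamma, tau = np.zeros(D), np.zeros(D), 1.0
y_s = np.repeat(np.arange(Kp), 5)                           # 5 shots per novel class
H_s = rng.normal(size=(len(y_s), D))
u = attention_attractors(H_s, y_s, W_a, f_phi, U0, tau)
print(attractor_regularizer(rng.normal(size=(D, Kp)), u, log_gamma))
```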
Therefore, if we use the LR model as the classifier, the overall training objective on episodes in Eq. (1) is convex which implies that the optimum W ∗b (θ, Sb) is guaranteed to be unique and achievable. Here we emphasize that the optimal parameters W ∗b are functions of parameters θ and few-shot samples Sb. During meta-learning, θ are updated to minimize an expected loss of the query set Qa+b which contains both base and novel classes, averaging over all few-shot learning episodes, min θ E E [ LQ(θ, Sb) ] = E E M(K+K′)∑ j=1 K+K′∑ c=1 yj,c log ŷj,c(θ, Sb) , (4) where the predicted class is ŷj(θ, Sb) = softmax ([ W>a xj , h (xj ;W ∗ b (θ, Sb)) ]) . 3.3 Learning via Recurrent Back-Propagation As there is no closed-form solution to the episodic objective (the optimization problem in Eq. 1), in each episode we need to minimize LS to obtain W ∗b through an iterative optimizer. The question is how to efficiently compute ∂W ∗ b ∂θ , i.e., back-propagating through the optimization. One option is to unroll the iterative optimization process in the computation graph and use back-propagation through time (BPTT) [38]. However, the number of iterations for a gradient-based optimizer to converge can be on the order of thousands, and BPTT can be computationally prohibitive. Another way is to use the truncated BPTT [39] (T-BPTT) which optimizes for T steps of gradient-based optimization, and is commonly used in meta-learning problems. However, when T is small the training objective could be significantly biased. Alternatively, the recurrent back-propagation (RBP) algorithm [1, 25, 18] allows us to back-propagate through the fixed point efficiently without unrolling the computation graph and storing intermediate activations. Consider a vanilla gradient descent process on Wb with step size α. The difference between two steps Φ can be written as Φ(W (t)b ) = W (t) b − F (W (t) b ), where F (W (t) b ) = W (t+1) b = W (t) b − α∇LS(W (t) b ). Since Φ(W ∗ b (θ)) is identically zero as a function of θ, using the implicit function theorem we have ∂W ∗ b ∂θ = (I − J > F,W∗b )−1 ∂F∂θ , where JF,W∗b denotes the Jacobian matrix of the mapping F evaluated at W ∗b . Algorithm 1 outlines the key steps for learning the episodic objective using RBP in the incremental few-shot learning setting. Note that the RBP algorithm implicitly inverts (I − J>) by computing the matrix inverse vector product, and has the same time complexity compared to truncated BPTT given the same number of unrolled steps, but meanwhile RBP does not have to store intermediate activations. Damped Neumann RBP To compute the matrix-inverse vector product (I−J>)−1v, [18] propose to use the Neumann series: (I − J>)−1v = ∑∞ n=0(J >)nv ≡ ∑∞ n=0 v (n). Note that J>v can be computed by standard back-propagation. However, directly applying the Neumann RBP algorithm sometimes leads to numerical instability. Therefore, we propose to add a damping term 0 < < 1 to I − J>. This results in the following update: ṽ(n) = (J> − I)nv. In practice, we found the damping term with = 0.1 helps alleviate the issue significantly. 4 Experiments We experiment on two few-shot classification datasets, mini-ImageNet and tiered-ImageNet. Both are subsets of ImageNet [30], with images sizes reduced to 84 × 84 pixels. We also modified the datasets to accommodate the incremental few-shot learning settings. 1 4.1 Datasets • mini-ImageNet Proposed by [36], mini-ImageNet contains 100 object classes and 60,000 images. 
We used the splits proposed by [27], where training, validation, and testing have 64, 16 and 20 classes respectively. • tiered-ImageNet Proposed by [29], tiered-ImageNet is a larger subset of ILSVRC-12. It features a categorical split among training, validation, and testing subsets. The categorical split means that classes that belong to the same high-level category, e.g. “working dog” and ”terrier” or some other dog breed, are not split between training, validation and test. This is a harder task, but one that more strictly evaluates generalization to new classes. It is also an order of magnitude larger than mini-ImageNet. 4.2 Experiment setup We use a standard ResNet backbone [11] to learn the feature representation through supervised training. For mini-ImageNet experiments, we follow [21] and use a modified version of ResNet-10. 1Code released at: https://github.com/renmengye/inc-few-shot-attractor-public Table 2: mini-ImageNet 64+5-way results Model 1-shot 5-shotAcc. ↑ ∆ ↓ Acc. ↑ ∆ ↓ ProtoNet [33] 42.73 ± 0.15 -20.21 57.05 ± 0.10 -31.72 Imprint [26] 41.10 ± 0.20 -22.49 44.68 ± 0.23 -27.68 LwoF [9] 52.37 ± 0.20 -13.65 59.90 ± 0.20 -14.18 Ours 54.95 ± 0.30 -11.84 63.04 ± 0.30 -10.66 Table 3: tiered-ImageNet 200+5-way results Model 1-shot 5-shotAcc. ↑ ∆ ↓ Acc. ↑ ∆ ↓ ProtoNet [33] 30.04 ± 0.21 -29.54 41.38 ± 0.28 -26.39 Imprint [26] 39.13 ± 0.15 -22.26 53.60 ± 0.18 -16.35 LwoF [9] 52.40 ± 0.33 -8.27 62.63 ± 0.31 -6.72 Ours 56.11 ± 0.33 -6.11 65.52 ± 0.31 -4.48 ∆ = average decrease in acc. caused by joint prediction within base and novel classes (∆ = 1 2 (∆a + ∆b)) ↑ (↓) represents higher (lower) is better. For tiered-ImageNet, we use the standard ResNet-18 [11], but replace all batch normalization [12] layers with group normalization [40], as there is a large distributional shift from training to testing in tiered-ImageNet due to categorical splits. We used standard data augmentation, with random crops and horizonal flips. We use the same pretrained checkpoint as the starting point for meta-learning. In the meta-learning stage as well as the final evaluation, we sample a few-shot episode from the Db, together with a regular mini-batch from the Da. The base class images are added to the query set of the few-shot episode. The base and novel classes are maintained in equal proportion in our experiments. For all the experiments, we consider 5-way classification with 1 or 5 support examples (i.e. shots). In the experiments, we use a query set of size 25×2 =50. We use L-BFGS [43] to solve the inner loop of our models to make sure Wb converges. We use the ADAM [14] optimizer for meta-learning with a learning rate of 1e-3, which decays by a factor of 10 after 4,000 steps, for a total of 8,000 steps. We fix recurrent backpropagation to 20 iterations and = 0.1. We study two variants of the classifier network. The first is a logistic regression model with a single weight matrix Wb. The second is a 2-layer fully connected MLP model with 40 hidden units in the middle and tanh non-linearity. To make training more efficient, we also add a shortcut connection in our MLP, which directly links the input to the output. In the second stage of training, we keep all backbone weights frozen and only train the meta-parameters θ. 
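The inner-loop solve and the damped Neumann RBP step used above (L-BFGS until convergence, 20 RBP iterations, damping ε = 0.1, Section 3.3) can be sketched on a toy convex loss as follows. The quadratic stand-in for the episodic objective, the step size α of the dummy gradient step and the finite-difference Hessian-vector product are all illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch: solve the inner loop with L-BFGS, then run damped Neumann RBP.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
dim = 10
A = rng.normal(size=(dim, dim)); A = A @ A.T / dim + np.eye(dim)  # SPD -> convex toy loss
b = rng.normal(size=dim)

def loss(w):            # stand-in for the regularized episodic objective L^S(W_b, theta)
    return 0.5 * w @ A @ w - b @ w

def grad(w):
    return A @ w - b

# (1) Inner loop: solve to convergence with L-BFGS, as in the experiments.
w_star = minimize(loss, np.zeros(dim), jac=grad, method="L-BFGS-B").x

# (2) Damped Neumann RBP: approximate (I - J^T)^{-1} v, where J is the Jacobian of the
# dummy step F(w) = w - alpha * grad(w), so J^T v = v - alpha * H v with H the Hessian.
alpha, eps, n_steps, delta = 0.1, 0.1, 20, 1e-5

def hessian_vec(w, v):                       # finite-difference Hessian-vector product
    return (grad(w + delta * v) - grad(w - delta * v)) / (2 * delta)

def JT_matvec(v):
    return v - alpha * hessian_vec(w_star, v)

v = rng.normal(size=dim)                     # stands for dL^Q / dW_b at the fixed point
g, cur = v.copy(), v.copy()
for _ in range(n_steps):
    cur = JT_matvec(cur) - eps * cur         # apply (J^T - eps * I) to the running term
    g += cur
# g approximates (I - J^T + eps * I)^{-1} v; back-propagating g through the dummy step's
# dependence on theta gives the meta-gradient (Algorithm 1, line 19).
print(np.round(g[:4], 3))
```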
4.3 Evaluation metrics We consider the following evaluation metrics: 1) overall accuracy on individual query sets and the joint query set (“Base”, “Novel”, and “Both”); and 2) decrease in performance caused by joint prediction within the base and novel classes, considered separately (“∆a” and “∆b”). Finally we take the average ∆ = 12 (∆a + ∆b) as a key measure of the overall decrease in accuracy. 4.4 Comparisons We implemented and compared to three methods. First, we adapted Prototypical Networks [33] to incremental few-shot settings. For each base class we store a base representation, which is the average representation (prototype) over all images belonging to the base class. During the few-shot learning stage, we again average the representation of the few-shot classes and add them to the bank of base representations. Finally, we retrieve the nearest neighbor by comparing the representation of a test image with entries in the representation store. In summary, both Wa and Wb are stored as the average representation of all images seen so far that belong to a certain class. We also compare to the following methods: • Weights Imprinting (“Imprint”) [26]: the base weights Wa are learned regularly through supervised pre-training, and Wb are computed using prototypical averaging. • Learning without Forgetting (“LwoF”) [9]: Similar to [26],Wb are computed using prototypical averaging. In addition, Wa is finetuned during episodic meta-learning. We implemented the most advanced variants proposed in the paper, which involves a class-wise attention mechanism. This model is the previous state-of-the-art method on incremental few-shot learning, and has better performance compared to other low-shot models [37, 10]. 4.5 Results We first evaluate our vanilla approach on the standard few-shot classification benchmark where no base classes are present in the query set. Our vanilla model consists of a pretrained CNN and a single-layer logistic regression with weight decay learned from scratch; this model performs on-par Table 4: Ablation studies on mini-ImageNet Table 5: Ablation studies on tiered-ImageNet with other competitive meta-learning approaches (1-shot 55.40 ± 0.51, 5-shot 70.17 ± 0.46). Note that our model uses the same backbone architecture as [21] and [9], and is directly comparable with their results. Similar findings of strong results using simple logistic regression on few-shot classification benchmarks are also recently reported in [6]. Our full model has similar performance as the vanilla model on pure few-shot benchmarks, and the full table is available in Supp. Materials. Next, we compare our models to other methods on incremental few-shot learning benchmarks in Tables 2 and 3. On both benchmarks, our best performing model shows a significant margin over the prior works that predict the prototype representation without using an iterative optimization [33, 26, 9]. 4.6 Ablation studies To understand the effectiveness of each part of the proposed model, we consider the following variants: • Vanilla (“LR, MLP”) optimizes a logistic regression or an MLP network at each few-shot episode, with a weight decay regularizer. • Static attractor (“+S”) learns a fixed attractor center u and attractor slope γ for all classes. • Attention attractor (“+A”) learns the full attention attractor model. For MLP models, the weights below the final layer are controlled by attractors predicted by the average representation across all the episodes. fφ is an MLP with one hidden layer of 50 units. 
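A minimal sketch of the metrics of Section 4.3, the joint accuracies and the average decrease ∆ reported in the tables, computed from placeholder scores; the prediction logits and labels here are random and purely illustrative.

```python
# Joint vs. within-partition accuracy and the average decrease Delta = (Delta_a + Delta_b) / 2.
import numpy as np

def incremental_metrics(logits, labels, K):
    """logits: (n, K+K') joint scores; labels: true ids (base classes < K, novel >= K)."""
    base_mask, novel_mask = labels < K, labels >= K
    joint_pred = logits.argmax(axis=1)
    acc_joint_base = (joint_pred[base_mask] == labels[base_mask]).mean()
    acc_joint_novel = (joint_pred[novel_mask] == labels[novel_mask]).mean()
    # "Individual" predictions restrict the argmax to the ground-truth partition.
    acc_base_only = (logits[base_mask, :K].argmax(axis=1) == labels[base_mask]).mean()
    acc_novel_only = ((K + logits[novel_mask, K:].argmax(axis=1)) == labels[novel_mask]).mean()
    delta_a = acc_joint_base - acc_base_only      # drop on base classes (negative = worse)
    delta_b = acc_joint_novel - acc_novel_only    # drop on novel classes
    return {"base": acc_joint_base, "novel": acc_joint_novel, "delta": 0.5 * (delta_a + delta_b)}

rng = np.random.default_rng(3)
K, Kp, n = 5, 3, 100
logits = rng.normal(size=(n, K + Kp))
labels = rng.integers(0, K + Kp, size=n)
print(incremental_metrics(logits, labels, K))
```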
Tables 4 and 5 shows the ablation experiment results. In all cases, the learned regularization function shows better performance than a manually set weight decay constant on the classifier network, in terms of both jointly predicting base and novel classes, as well as less degradation from individual prediction. On mini-ImageNet, our attention attractors have a clear advantage over static attractors. Formulating the classifier as an MLP network is slightly better than the linear models in our experiments. Although the final performance is similar, our RBP-based algorithm have the flexibility of adding the fast episodic model with more capacity. Unlike [4], we do not rely on an analytic form of the gradients of the optimization process. Comparison to truncated BPTT (T-BPTT) An alternative way to learn the regularizer is to unroll the inner optimization for a fixed number of steps in a differentiable computation graph, and then back-propagate through time. Truncated BPTT is a popular learning algorithm in many recent meta-learning approaches [2, 27, 7, 34, 3]. Shown in Figure 2, the performance of T-BPTT learned models are comparable to ours; however, when solved to convergence at test time, the performance of T-BPTT models drops significantly. This is expected as they are only guaranteed to work well for a certain number of steps, and failed to learn a good regularizer. While an early-stopped T-BPTT model can do equally well, in practice it is hard to tell when to stop; whereas for the RBP model, doing the full episodic training is very fast since the number of support examples is small. Visualization of attractor dynamics We visualize attractor dynamics in Figure 3. Our learned attractors pulled the fast weights close towards the base class weights. In comparison, [9] only modifies the prototypes slightly. Varying the number of base classes While the framework proposed in this paper cannot be directly applied on class-incremental continual learning, as there is no module for memory consolidation, we can simulate the continual learning process by varying the number of base classes, to see how the proposed models are affected by different stages of continual learning. Figure 4 shows that the learned regularizers consistently improve over baselines with weight decay only. The overall accuracy increases from 50 to 150 classes due to better representations on the backbone network, and drops at 200 classes due to a more challenging classification task. 5 Conclusion Incremental few-shot learning, the ability to jointly predict based on a set of pre-defined concepts as well as additional novel concepts, is an important step towards making machine learning models more flexible and usable in everyday life. In this work, we propose an attention attractor model, which regulates a per-episode training objective by attending to the set of base classes. We show that our iterative model that solves the few-shot objective till convergence is better than baselines that do one-step inference, and that recurrent back-propagation is an effective and modular tool for learning in a general meta-learning setting, whereas truncated back-propagation through time fails to learn functions that converge well. Future directions of this work include sequential iterative learning of few-shot novel concepts, and hierarchical memory organization. 

Acknowledgment Supported by NSERC and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
1. What is the novelty of the proposed Attention Attractor Network? 2. How sound is the intuition behind the Attention Attractor Network's motivation? 3. Are there any unclear parts in the paper? 4. How significant are the empirical results presented in the paper?
Review
Review Originality: The proposed Attention Attractor Network is novel. Quality: The intuition of using the attention attractor network is not that sound. The motivation of the model is to prevent forgetting base classes. But the method (line 160-166) takes the cosine similarity between the average representation of novel classes and the base class weights. This is purely intuitive. Clarity: There are some places that are not very clear. Significance: The empirical results seem significant.
NIPS
Title Online learning in MDPs with linear function approximation and bandit feedback. Abstract We consider the problem of online learning in an episodic Markov decision process, where the reward function is allowed to change between episodes in an adversarial manner and the learner only observes the rewards associated with its actions. We assume that rewards and the transition function can be represented as linear functions in terms of a known low-dimensional feature map, which allows us to consider the setting where the state space is arbitrarily large. We also assume that the learner has a perfect knowledge of the MDP dynamics. Our main contribution is developing an algorithm whose expected regret after T episodes is bounded by Õ(√(dHT)), where H is the number of steps in each episode and d is the dimensionality of the feature map. 1 Introduction We study the problem of online learning in episodic Markov Decision Processes (MDP), modelling a sequential decision making problem where the interaction between a learner and its environment is divided into T episodes of fixed length H. At each time step of the episode, the learner observes the current state of the environment, chooses one of the K available actions, and earns a reward. Consequently, the state of the environment changes according to the transition function of the underlying MDP, as a function of the previous state and the action taken by the learner. A key distinguishing feature of our setting is that we assume that the reward function can change arbitrarily between episodes, and the learner only has access to bandit feedback: instead of being able to observe the reward function at the end of the episode, the learner only gets to observe the rewards that it actually received. As traditional in this line of work, we aim to design algorithms for the learner with theoretical guarantees on her regret, which is the difference between the total reward accumulated by the learner and the total reward of the best stationary policy fixed in hindsight. Unlike most previous work on this problem, we allow the state space to be very large and aim to prove performance guarantees that do not depend on the size of the state space, bringing theory one step closer to practical scenarios where assuming finite state spaces is unrealistic. To address the challenge of learning in large state spaces, we adopt the classic RL technique of using linear function approximation and suppose that we have access to a relatively low-dimensional feature map that can be used to represent policies and value functions. We will assume that the feature map is expressive enough so that all action-value functions can be expressed as linear functions of the features, and that the learner has full knowledge of the transition function of the MDP. Our main contribution is designing a computationally efficient algorithm called ONLINE Q-REPS, and we prove that in the setting described above, its regret is at most O(√(dHT · D(µ∗‖µ0))), where d is the dimensionality of the feature map and D(µ∗‖µ0) is the relative entropy between the state-action distribution µ∗ induced by the optimal policy and an initial distribution µ0 given as input to the algorithm.
Notably, our results do not require the likelihood ratio between these distributions to be uniformly bounded, and the bound shows no dependence on the eigenvalues of the feature covariance matrices. Our algorithm itself requires solving a d2-dimensional convex optimization problem at the beginning of each episode, which can be solved to arbitrary precision ε in time polynomial in d and 1/ε, independently of the size of the state-action space. Our work fits into a long line of research considering online learning in Markov decision processes. The problem of regret minimization in stationary MDPs with a fixed reward function has been studied extensively since the work of Burnetas and Katehakis [6], Auer and Ortner [2], Tewari and Bartlett [31], Jaksch et al. [14], with several important advances made in the past decade [9, 10, 4, 13, 15]. While most of these works considered small finite state spaces, the same techniques have been very recently extended to accommodate infinite state spaces under the assumption of realizable function approximation by Jin et al. [17] and Yang and Wang [33]. In particular, the notion of linear MDPs introduced by Jin et al. [17] has become a standard model for linear function approximation and has been used in several recent works (e.g., 22, 32, 1). Even more relevant is the line of work considering adversarial rewards, initiated by Even-Dar et al. [12], who consider online learning in continuing MDPs with full feedback about the rewards. They proposed a MDP-E algorithm, that achieves O(τ2 √ T logK) regret, where τ is an upper bound on the mixing time of the MDP. Later, Neu et al. [25] proposed an algorithm which guarantees Õ (√ τ3KT/α ) regret with bandit feedback, essentially assuming that all states are reachable with probability α > 0 under all policies. In our work, we focus on episodic MDPs with a fixed episode length H . The setting was first considered in the bandit setting by Neu et al. [23], who proposed an algorithm with a regret bound of O(H2 √ TK/α). Although the number of states does not appear explicitly in the bound, the regret scales at least linearly with the size of the state space X , since |X | ≤ H/α. Later work by Zimin and Neu [35], Dick et al. [11] eliminated the dependence on α and proposed an algorithm achieving Õ( √ TH|X |K) regret. Regret bounds for the full-information case without prior knowledge of the MDP were achieved by Neu et al. [24] and Rosenberg and Mansour [30], of order Õ(H|X |K √ T ) and Õ(H|X | √ KT ), respectively. These results were recently extended to handle bandit feedback about the rewards by Jin et al. [16], ultimately resulting in a regret bound of Õ(H|X | √ KT ). As apparent from the above discussion, all work on online learning in MDPs with adversarial rewards considers finite state spaces. The only exception we are aware of is the recent work of Cai et al. [7], whose algorithm OPPO is guaranteed to achieve Õ (√ d3H3T ) , assuming that the learner has access to d-dimensional features that can perfectly represent all action-value functions. While Cai, Yang, Jin, and Wang [7] remarkably assumed no prior knowledge of the MDP parameters, their guarantees are only achieved in the full-information case. This is to be contrasted with our results that are achieved for the much more restrictive bandit setting, albeit with the stronger assumption of having full knowledge of the underlying MDP, as required by virtually all prior work in the bandit setting, with the exception of Jin et al. [16]. 
Our results are made possible by a careful combination of recently proposed techniques for contextual bandit problems and optimal control in Markov decision processes. In particular, a core component of our algorithm is a regularized linear programming formulation of optimal control in MDPs due to Bas-Serrano et al. [5], which allows us to reduce the task of computing near-optimal policies in linear MDPs to a low-dimensional convex optimization problem. A similar algorithm design has been previously used for tabular MDPs by Zimin and Neu [35], Dick et al. [11], with the purpose of removing factors of 1/α from the previous state-of-the-art bounds of Neu et al. [23]. Analogously to this improvement, our methodology enables us to make strong assumptions on problem-dependent constants like likelihood ratios between µ∗ and µ0 or eigenvalues of the feature covariance matrices. Another important building block of our method is a version of the recently proposed Matrix Geometric Resampling procedure of Neu and Olkhovskaya [21] that enables us to efficiently estimate the reward functions. Incorporating these estimators in the algorithmic template of Bas-Serrano et al. [5] is far from straightforward and requires several subtle adjustments. Notation. We use 〈·, ·〉 to denote inner products in Euclidean space and by ‖·‖ we denote the Euclidean norm for vectors and the operator norm for matrices. For a symmetric positive definite matrix A, we use λmin(A) to denote its smallest eigenvalue. We write tr (A) for the trace of a matrix A and use A < 0 to denote that an operator A is positive semi-definite, and we use A < B to denote A−B < 0. For a d-dimensional vector v, we denote the corresponding d× d diagonal matrix by diag(v). For a positive integer N , we use [N ] to denote the set of positive integers {1, 2, . . . , N}. Finally, we will denote the set of all probability distributions over any set X by ∆X . 2 Preliminaries An episodic Markovian Decision Process (MDP), denoted by M = (X ,A, H, P, r) is defined by a state space X , action space A, episode length H ∈ Z+, transition function P : X ×A → ∆X and a reward function r : X ×A → [0, 1]. For convenience, we will assume that both X and A are finite sets, although we allow the state space X to be arbitrarily large. Without significant loss of generality, we will assume that the set of available actions is the same A in each state, with cardinality |A| = K. Furthermore, without any loss of generality, we will assume that the MDP has a layered structure, satisfying the following conditions: • The state set X can be decomposed into H disjoint sets: X = ∪Hh=1Xh, • X1 = {x1} and XH = {xH} are singletons, • transitions are only possible between consecutive layers, that is, for any xh ∈ Xh, the distribution P (·|x, a) is supported on Xh+1 for all a and h ∈ [H − 1]. These assumptions are common in the related literature (e.g., 23, 35, 30) and are not essential to our analysis; their primary role is simplifying our notation. In the present paper, we consider an online learning problem where the learner interacts with its environment in a sequence of episodes t = 1, 2, . . . , T , facing a different reward functions rt,1, . . . rt,H+1 selected by a (possibly adaptive) adversary at the beginning of each episode t. Oblivious to the reward function chosen by the adversary, the learner starts interacting with the MDP in each episode from the initial state Xt,1 = x1. 
At each consecutive step h ∈ [H − 1] within the episode, the learner observes the state Xt,h, picks an action At,h and observes the reward rt,h(Xt,h, At,h). Then, unless h = H , the learner moves to the next state Xt,h+1, which is generated from the distribution P (·|Xt,h, At,h). At the end of step H , the episode terminates and a new one begins. The aim of the learner is to select its actions so that the cumulative sum of rewards is as large as possible. Our algorithm and analysis will make use of the concept of (stationary stochastic) policies π : X → ∆A. A policy π prescribes a behaviour rule to the learner by assigning probability π(a|x) to taking action a at state x. Let τπ = ((X1, A1), (X2, A2), . . . , (XH , AH)) be a trajectory generated by following the policy π through the MDP. Then, for any xh ∈ Xh, ah ∈ A we define the occupancy measure µπh(x, a) = Pπ [(x, a) ∈ τπ]. We will refer to the collection of these distributions across all layers h as the occupancy measure induced by π and denote it as µπ = (µπ1 , µ π 2 , . . . , µ π H). We will denote the set of all valid occupancy measures by U and note that this is a convex set, such that for every element µ ∈ U the following set of linear constraints is satisfied:∑ a∈A µh+1(x, a) = ∑ x′,a′∈Xh×A P (x|x′, a′)µh(x′, a′), ∀x ∈ Xh+1, h ∈ [H − 1], (1) as well as ∑ a µ1(x1, a) = 1. From every valid occupancy measure µ, a stationary stochastic policy π = π1, . . . , πH−1 can be derived as πµ,h(a|x) = µh(x, a)/ ∑ a′ µh(x, a ′). For each h, introducing the linear operators E and P through their action on a set state-action distribution uh as (ETuh)(x) = ∑ a∈A uh(x, a) and (P T huh)(x) = ∑ x′,a′∈Xh,A P (x|x ′, a′)uh(x ′, a′), the constraints can be simply written as ETµh+1 = P Thµh for each h. We will use the inner product notation for the sum over the set of states and actions: 〈µh, rh〉 = ∑ (x,a)∈(Xh×A) µh(x, a)rt,h(x, a). Using this notation, we formulate our objective as selecting a sequence of policies πt for each episode t in a way that it minimizes the total expected regret defined as RT = sup π∗ T∑ t=1 H∑ h=1 (Eπ∗ [rt,h(X∗h, A∗h)]− Eπt [rt(Xt,h, At,h)]) = sup µ∗∈U T∑ t=1 H∑ h=1 〈µ∗h − µ πt h , rt,h〉 , where the notations Eπ∗ [·] and Eπt [·] emphasize that the state-action trajectories are generated by following policies π∗ and πt, respectively. As the above expression suggests, we can reformulate our online learning problem as an instance of online linear optimization where in each episode t, the learner selects an occupancy measure µt ∈ U (with µt = µπt) and gains reward ∑H h=1〈µt,h, rt,h〉. Intuitively, the regret measures the gap between the total reward gained by the learner and that of the best stationary policy fixed in hindsight, with full knowledge of the sequence of rewards chosen by the adversary. This performance measure is standard in the related literature on online learning in MDPs, see, for example Neu et al. [23], Zimin and Neu [35], Neu et al. [24], Rosenberg and Mansour [30], Cai et al. [7]. In this paper, we focus on MDPs with potentially enormous state spaces, which makes it difficult to design computationally tractable algorithms with nontrivial guarantees, unless we make some assumptions. We particularly focus on the classic technique of relying on linear function approximation and assuming that the reward functions occurring during the learning process can be written as a linear function of a low-dimensional feature map. 
We specify the form of function approximation and the conditions our analysis requires as follows: Assumption 1 (Linear MDP with adversarial rewards). There exists a feature map ϕ : X ×A → Rd and a collection of d signed measures m = (m1, . . . ,md) on X , such that for any (x, a) ∈ X ×A the transition function can be written as P (·|x, a) = 〈m(·), ϕ(x, a)〉 . Furthermore, the reward function chosen by the adversary in each episode t can be written as rt,h(x, a) = 〈θt,h, ϕ(x, a)〉 for some θt,h ∈ Rd. We assume that the features and the parameter vectors satisfy ‖ϕ(x, a)‖ ≤ σ and that the first coordinate ϕ1(x, a) = 1 for all (x, a) ∈ X ×A. Also we assume that ‖θt,h‖ ≤ R. Online learning under this assumption, but with a fixed reward function, has received substantial attention in the recent literature, particularly since the work of Jin et al. [17] who popularized the term “Linear MDP” to refer to this class of MDPs. This has quickly become a common assumption for studying reinforcement learning algorithms (Cai et al. [7], Jin et al. [17], Neu and Pike-Burke [22], Agarwal et al. [1]). This is also a special case of factored linear models (Yao et al. [34], Pires and Szepesvári [29]). Linear MDPs come with several attractive properties that allow efficient optimization and learning. In this work, we will exploit the useful property shown by Neu and Pike-Burke [22] and Bas-Serrano et al. [5] that all occupancy measures in a linear MDP can be seen to satisfy a relaxed version of the constraints in Equation (1). Specifically, for all h, defining the feature matrix Φh ∈ R(Xh×A)×d with its action on the distribution u as ΦThu = ∑ x,a∈Xh,A uh(x, a)ϕ(x, a), we define UΦ as the set of state-action distributions (µ, u) = ((µ1, . . . , µH), (u1, . . . , uH)) satisfying the following constraints: ETuh+1 = P T hµh (∀h), ΦThuh = ΦThµh (∀h), ETu1 = 1. (2) It is easy to see that for all feasible (µ, u) pairs, u satisfies the original constraints (1) if the MDP satisfies Assumption 1: since the transition operator can be written as Ph = ΦhMh for some matrix Mh. In this case, we clearly have ETuh+1 = P T hµh = M T hΦ T hµh = M T hΦ T huh = P T huh, (3) showing that any feasible u is indeed a valid occupancy measure. Furthermore, due to linearity of the rewards in Φ, we also have 〈uh, rt,h〉 = 〈µh, rt,h〉 for all feasible (µ, u) ∈ UΦ. While the number of variables and constraints in Equation (2) is still very large, it has been recently shown that approximate linear optimization over this set can be performed tractably [22, 5]. Our own algorithm design described in the next section will heavily build on these recent results. 3 Algorithm and main results This section presents our main contributions: a new efficient algorithm for the setting described above, along with its performance guarantees. Our algorithm design is based on a reduction to online linear optimization, exploiting the structural results established in the previous section. In particular, we will heavily rely on the algorithmic ideas established by Bas-Serrano et al. [5], who proposed an efficient reduction of approximate linear optimization over the high-dimensional set UΦ to a low-dimensional convex optimization problem. Another key component of our algorithm is an efficient estimator of the reward vectors θt,h based on the work of Neu and Olkhovskaya [21]. For reasons that we will clarify in Section 4, accommodating these reward estimators into the framework of Bas-Serrano et al. [5] is not straightforward and necessitates some subtle changes. 
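To illustrate the objects used throughout this section, the following NumPy sketch builds a tiny layered MDP, computes the occupancy measure of a policy by the forward recursion, checks the flow constraints of Eq. (1), and recovers the policy back from the occupancy measure. The toy MDP (random transitions, several states per layer) is an illustrative assumption and is not tied to the linear structure of Assumption 1.

```python
# Occupancy measures in a small layered MDP: forward recursion, Eq. (1) check, policy recovery.
import numpy as np

rng = np.random.default_rng(4)
H, S, K = 4, 3, 2
# P[h] has shape (S, K, S): P[h][x, a, x'] = probability of moving to x' in layer h+1.
P = [rng.dirichlet(np.ones(S), size=(S, K)) for _ in range(H - 1)]
pi = [rng.dirichlet(np.ones(K), size=S) for _ in range(H)]          # pi[h][x, a]

def occupancy(P, pi):
    mus = []
    state_dist = np.zeros(S); state_dist[0] = 1.0                   # episodes start at x_1
    for h in range(H):
        mu_h = state_dist[:, None] * pi[h]                          # mu_h[x, a]
        mus.append(mu_h)
        if h < H - 1:
            state_dist = np.einsum("xa,xay->y", mu_h, P[h])         # push forward one layer
    return mus

mu = occupancy(P, pi)
# Flow constraint of Eq. (1): sum_a mu_{h+1}(x, a) = sum_{x', a'} P(x | x', a') mu_h(x', a').
h = 1
lhs = mu[h + 1].sum(axis=1)
rhs = np.einsum("xa,xay->y", mu[h], P[h])
assert np.allclose(lhs, rhs)
# Recover the policy from the occupancy measure: pi(a|x) = mu_h(x, a) / sum_a' mu_h(x, a').
pi_back = mu[h] / np.clip(mu[h].sum(axis=1, keepdims=True), 1e-12, None)
assert np.allclose(pi_back, pi[h])
```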
3.1 The policy update rule Our algorithm is an instantiation of the well-known “Follow the Regularized Leader” (FTRL) template commonly used in the design of modern online learning methods (see, e.g., 26). We will make the following design choices: • The decision variables will be the vector (µ, u) ∈ R2(X×A), with the feasible set U2Φ defined through the constraints ETuh = P T hµh (∀h), ΦThdiag(uh)Φh = ΦThdiag(µh)Φh (∀h). (4) These latter constraints ensure that the feature covariance matrices under u and µ will be identical, which is necessary for technical reasons that will be clarified in Section 4. Notice that, due to our assumption that ϕ1(x, a) = 1, we have U2Φ ⊆ UΦ, so all feasible u’s continue to be feasible for the original constraints (1). • The regularization function will be chosen as 1ηD(µ‖µ0) + 1 αDC(u‖µ0) for some positive regularization parameters η and α, where µ0 is the occupancy measure induced by the uniform π0 with π0(a|x) = 1K for all x, a, and D and DC are the marginal and conditional relative entropy functions respectively defined as D(µ‖µ0) = ∑H h=1D(µh‖µ0,h) and DC(µ‖µ0) = ∑H h=1DC(µh‖µ0,h) with D(µh‖µ0,h) = ∑ (x,a)∈(Xh×A) µh(x, a) log µh(x, a) µ0,h(x, a) , and DC(µh‖µ0,h) = ∑ (x,a)∈(Xh×A) µh(x, a) log πµ,h(a|x) π0,h(a|x) . With these choices, the updates of our algorithm in each episode will be given by (µt, ut) = arg max (µ,u)∈U2Φ { t−1∑ s=1 H−1∑ h=1 〈µh, r̂s,h〉 − 1 η D(µ‖µ0)− 1 α DC(u‖µ0) } (5) where r̂t,h ∈ RX×A is an estimator of the reward function rt,h that will be defined shortly. As written above, it is far from obvious if these updates can be calculated efficiently. The following result shows that, despite the apparent intractability of the maximization problem, it is possible to reduce the above problem into a d2-dimensional unconstrained convex optimization problem: Proposition 1. Define for each h ∈ [H − 1], a matrix Zh ∈ Rd×d and let matrix Z ∈ Rd×d(H−1) be defined as Z = (Z1, . . . , ZH−1). We will write h(x) = h, if x ∈ Xh. Define the Q-function taking values QZ(x, a) = ϕ(x, a)TZh(x)ϕ(x, a) and define the value function VZ(x) = 1 α log ∑ a∈A(x) π0(a|x)eαQZ(x,a) For any h ∈ [H−1] and for any x ∈ Xh, a ∈ A(x), denotePx,aVZ = ∑ x′∈Xh(x)+1 P (x ′|x, a)VZ(x′) and ∆t,Z(x, a) = ∑t−1 s=1 r̂s,h(x)(x, a) + Px,aVZ − QZ(x, a). Then, the optimal solution of the optimization problem (5) is given as π̂t,h(a|x) = π0(a|x)e α ( QZ∗t (x,a)−VZ∗t (x) ) , µ̂t,h(x, a) ∝ µ0(x, a)eη∆t,Z∗t (x,a), where Z∗t = (Z ∗ t,1, . . . , Z ∗ t,H−1) is the minimizer of the convex function Gt(Z) = 1 η H−1∑ h=1 log ∑ x∈Xh,a∈A(x) µ0(x, a)e η∆t,Z(x,a) + VZ(x1). (6) A particular merit of this result is that it gives an explicit formula for the policy πt that induces the optimal occupancy measure ut, and that πt(a|x) can be evaluated straightforwardly as a function of the features ϕ(x, a) and the parameters Z∗t . The proof of the result is based on Lagrangian duality, and mainly follows the proof of Proposition 1 in Bas-Serrano et al. [5], with some subtle differences due to the episodic setting we consider and the appearance of the constraints ΦThdiag(uh)Φh = ΦThdiag(µh)Φh. The proof is presented in Appendix A.1. The proposition above inspires a very straightforward implementation that is presented as Algorithm 1. Due to the direct relation with the algorithm of Bas-Serrano et al. [5], we refer to this method as ONLINE Q-REPS, where Q-REPS stands for “Relative Entropy Policy Search with Q-functions”. 
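The action-selection rule of Proposition 1 can be sketched directly: given the matrix Z_h for the current layer, compute Q_Z(x, a) = φ(x, a)^T Z_h φ(x, a), the log-partition value V_Z(x), and the policy π_0(a|x) exp(α(Q_Z − V_Z)), then mix in uniform exploration with probability γ. The feature values and Z below are random placeholders; in the algorithm, Z is the (approximate) minimizer of G_t.

```python
# Minimal sketch of the ONLINE Q-REPS action-selection step at one state.
import numpy as np

rng = np.random.default_rng(5)
d, K, alpha, gamma = 6, 4, 0.5, 0.05
Z_h = rng.normal(size=(d, d)) * 0.1                 # placeholder dual parameters for layer h
phi_x = rng.normal(size=(K, d))                     # phi(x, a) for the K actions at state x
phi_x[:, 0] = 1.0                                   # first coordinate is 1 (Assumption 1)
pi0 = np.full(K, 1.0 / K)                           # uniform reference policy

def policy_at_state(phi_x, Z_h, pi0, alpha):
    q = np.einsum("ad,de,ae->a", phi_x, Z_h, phi_x)  # Q_Z(x, a) = phi(x,a)^T Z_h phi(x,a)
    logits = np.log(pi0) + alpha * q
    logits -= logits.max()                           # stability; cancels in the ratio
    pi = np.exp(logits)
    return pi / pi.sum()                             # equals pi0 * exp(alpha (Q_Z - V_Z))

pi_t = policy_at_state(phi_x, Z_h, pi0, alpha)
# Draw the action: with probability gamma explore with pi0, otherwise follow pi_t.
explore = rng.random() < gamma
action = rng.choice(K, p=pi0 if explore else pi_t)
print(np.round(pi_t, 3), action)
```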
ONLINE Q-REPS adapts the general idea of Q-REPS to the online setting in a similar way as the O-REPS algorithm of Zimin and Neu [35] adapted the Relative Entropy Policy Search method of Peters et al. [28] to regret minimization in tabular MDPs with adversarial rewards. While O-REPS would in principle be still applicable to the large-scale setting we study in this paper and would plausibly achieve similar regret guarantees, its implementation would be nearly impossible due to the lack of the structural properties enjoyed by ONLINE Q-REPS, as established in Proposition 1. Algorithm 1 ONLINE Q-REPS Parameters: η, α > 0, exploration parameter γ ∈ (0, 1), Initialization: Set θ̂1,h = 0 for all h, compute Z1. For t = 1, . . . , T , repeat: • Draw Yt ∼ Ber(γ), • For h = 1, . . . ,H , do: – Observe Xt,h and, for all a ∈ A(Xt,h), set πt,h(a|Xt,h) = π0,h(a|Xt,h)eα(QZt (Xt,h,a)−VZt (Xt,h)), – if Y = 0, draw At,h ∼ πt,h(·|Xt,h), otherwise draw At,h ∼ π0,h(·|Xt,h), – observe the reward rt,h(Xt,h, At,h). • Compute θ̂t,1, . . . , θ̂t,H−1, Zt+1. 3.2 The reward estimator We now turn to describing the reward estimators r̂t,h, which will require several further definitions. Specifically, a concept of key importance will be the following feature covariance matrix: Σt,h = Eπt [ϕ(Xt,h, At,h)ϕ(Xt,h, At,h)T] . Making sure that Σt,h is invertible, we can define the estimator θ̃t,h = Σ −1 t,hϕ(Xt,h, At,h)rt,h(Xt,h, At,h). (7) This estimate shares many similarities with the estimates that are broadly used in the literature on adversarial linear bandits [18, 3, 8]. It is easy to see that θ̃t,h is an unbiased estimate of θt,h: Et [ θ̃t,h ] = Et [ Σ−1t,hϕ(Xt,h, At,h)ϕ(Xt,h, , At,h) Tθt,h ] = Σ−1t,hΣt,hθt,h = θt,h. Unfortunately, exact computation of Σt,h is intractable. To address this issue, we propose a method to directly estimate the inverse of the covariance matrix Σt,h by adapting the Matrix Geometric Resampling method of Neu and Olkhovskaya [21] (which itself is originally inspired by the Geometric Resampling method of 19, 20). Our adaptation has two parameters β > 0 andM ∈ Z+, and generates an estimate of the inverse covariance matrix through the following procedure1: 1The version we present here is a naïve implementation, optimized for readability. We present a more practical variant in Appendix B Matrix Geometric Resampling Input: simulator of P , policy π̃t = (π̃t,1, . . . , π̃t,H−1). For i = 1, . . . ,M , repeat: 1. Simulate a trajectory τ(i) = {(X1(i), A1(i)), . . . , (XH−1(i), AH−1(i))}, following the policy π̃t in P , 2. For h = 1, . . . ,H − 1, repeat: Compute (a) Bi,h = ϕ(Xh(i), Ah(i))ϕ(Xh(i), Ah(i))T, (b) Ci,h = ∏i j=1(I − βBj,h). Return Σ̂+t,h = βI + β ∑M i=1 Ci,h for all h ∈ [H − 1]. Based on the above procedure, we finally define our estimator as θ̂t,h = Σ̂ + t,hϕ(Xt,h, At,h)rt,h(Xt,h, At,h). The idea of the estimate is based on the truncation of the Neumann-series expansion of the matrix Σ−1t,h at the M th order term. Then, for large enough M , the matrix Σ + t,h is a good estimator of the inverse covariance matrix, which will be quantified formally in the analysis. For more intuition on the estimate, see section 3.2. in Neu and Olkhovskaya [21]. With a careful implementation explained in Appendix B, θ̂t,h can be computed in O(MHKd) time, using M calls to the simulator. 3.3 The regret bound We are now ready to state our main result: a bound on the expected regret of ONLINE Q-REPS. 
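Before stating the bound, the Matrix Geometric Resampling procedure of Section 3.2 and the resulting estimate θ̂_{t,h} can be sketched as follows. The trajectory simulator is replaced here by an i.i.d. feature generator, so the sampling distribution, the dimensions and the value of β are illustrative assumptions rather than the algorithm's actual simulator calls.

```python
# Minimal sketch of Matrix Geometric Resampling and the reward-parameter estimate.
import numpy as np

rng = np.random.default_rng(6)
d, H, M, beta = 5, 3, 200, 0.05

def simulate_features():
    """Stand-in simulator: returns phi(X_h(i), A_h(i)) for h = 1..H-1 of one trajectory."""
    phis = rng.normal(size=(H - 1, d)) / np.sqrt(d)
    phis[:, 0] = 1.0
    return phis

def mgr_inverse_covariance(simulate, M, beta):
    sigma_plus = [beta * np.eye(d) for _ in range(H - 1)]
    C = [np.eye(d) for _ in range(H - 1)]               # running products prod_j (I - beta B_j)
    for _ in range(M):
        phis = simulate()
        for h in range(H - 1):
            B = np.outer(phis[h], phis[h])
            C[h] = C[h] @ (np.eye(d) - beta * B)
            sigma_plus[h] += beta * C[h]
    return sigma_plus                                   # Sigma_hat^+ = beta I + beta sum_i C_i

sigma_plus = mgr_inverse_covariance(simulate_features, M, beta)
# Reward-parameter estimate for the step actually played in the episode:
phi_played, reward = simulate_features()[0], 0.7
theta_hat = sigma_plus[0] @ phi_played * reward         # theta_hat_{t,h} = Sigma^+ phi r
print(np.round(theta_hat, 3))
```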
During the analysis, we will suppose that all the optimization problems solved by the algorithm are solved up to an additive error of ε ≥ 0. Furthermore, we will denote the covariance matrix generated by the uniform policy at layer h as Σ0,h, and make the following assumption: Assumption 2. The eigenvalues of Σ0,h for all h are lower bounded by λmin > 0. Our main result is the following guarantee regarding the performance of ONLINE Q-REPS: Theorem 1. Suppose that the MDP satisfies Assumptions 1 and 2 and λmin > 0. Furthermore, suppose that, for all t, Zt satisfies Gt(Zt) ≤ minZ Gt(Z) + ε for some ε ≥ 0. Then, for γ ∈ (0, 1), M ≥ 0, positive η ≤ 1σ2β(M+1)H and any positive β ≤ 1 2σ2 √ d(M+1) , the expected regret of ONLINE Q-REPS over T episodes satisfies RT ≤2TσRH · exp (−γβλminM) + γHT + ηTH(3 + 5d) + 1 η D(µ∗‖µ0) + 1 α DC(u ∗‖µ0) + √ αεTH(1 + η(M + 1)2). Furthermore, letting β = 1 2σ2 √ d(M+1) , M = ⌈ σ4d log2( √ THσR) γ2λ2min ⌉ , η = 1√ TdH , α = 1√ TdH and γ = 1√ TH and supposing that T is large enough so that the above constraints on M,γ, η and β are satisfied, we also have RT =Õ (√ dHT (1 +D(µ∗‖µ0) +DC(u∗‖µ0)) + √ ε(TH)9/4d5/4 1 λ4min ) . Thus, when all optimization problems are solved up to precision ε = (TH)−7/2d−3/2λ8min, the regret of ONLINE Q-REPS is guaranteed to be of O (√ dHTD(µ∗‖µ0) ) . 3.4 Implementation While Proposition 1 establishes the form of the ideal policy updates πt through the solution of an unconstrained convex optimization problem, it is not obvious that this optimization problem can be solved efficiently. Indeed, one immediate challenge in optimizing Gt is that its gradient takes the form ∇Gt(Z) = ∑ x,a µ̃Z(x, a) ϕ(x, a)ϕ(x, a)T −∑ x′,a′ P (x′|x, a)πZ(a′|x′)ϕ(x′, a′)ϕ(x′, a′)T , where µ̃Z(x, a) = µ0(x,a) exp(η∆Z(x,a))∑ x′,a′ µ0(x ′,a′) exp(η∆Z(x′,a′)) . Sampling from this latter distribution (and thus ob- taining unbiased estimators of∇Gt(Z)) is problematic due to the intractable normalization constant. This challenge can be addressed in a variety of ways. First, one can estimate the gradients via weighted importance sampling from the distribution µ̃Z and using these in a stochastic optimization procedure. This approach has been recently proposed and analyzed for an approximate implementation of REPS by Pacchiano et al. [27], who showed that it results in ε-optimal policy updates given polynomially many samples in 1/ε. Alternatively, one can consider an empirical counterpart of the loss function replacing the expectation with respect to µ0 with an empirical average over a number of i.i.d. samples drawn from the same distribution. The resulting loss function can then be optimized via standard stochastic optimization methods. This approach has been proposed and analyzed by Bas-Serrano et al. [5]. We describe the specifics of this latter approach in Appendix C. 4 Analysis This section gives the proof of Theorem 1 by stating the main technical results as lemmas and putting them together to obtain the final bound. In the first part of the proof, we show the upper bound on the auxiliary regret minimization game with general reward inputs and ideal updates. Then, we relate this quantity to the true expected regret by taking into account the properties of our reward estimates and the optimization errors incurred when calculating the updates. The proofs of all the lemmas are deferred to Appendix A. 
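Before turning to the lemmas, the tuning prescribed by Theorem 1 can be transcribed as a small helper mapping the problem constants (T, H, d, σ, R, λ_min) to (γ, η, α, M, β). This is only a restatement of the stated choices; in practice λ_min is typically unknown, as discussed around Assumption 2.

```python
# Parameter choices of Theorem 1 (a direct transcription, assuming lambda_min is known).
import numpy as np

def qreps_parameters(T, H, d, sigma, R, lam_min):
    gamma = 1.0 / np.sqrt(T * H)
    eta = 1.0 / np.sqrt(T * d * H)
    alpha = 1.0 / np.sqrt(T * d * H)
    M = int(np.ceil(sigma ** 4 * d * np.log(np.sqrt(T * H) * sigma * R) ** 2
                    / (gamma ** 2 * lam_min ** 2)))
    beta = 1.0 / (2 * sigma ** 2 * np.sqrt(d * (M + 1)))
    # Theorem 1 additionally needs T large enough that this constraint holds:
    eta_ok = eta <= 1.0 / (sigma ** 2 * beta * (M + 1) * H)
    return dict(gamma=gamma, eta=eta, alpha=alpha, M=M, beta=beta, eta_constraint_ok=eta_ok)

print(qreps_parameters(T=10_000, H=10, d=20, sigma=1.0, R=1.0, lam_min=0.1))
```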
We start by defining the idealized updates (µ̂t, ût) obtained by solving the update steps in Equation (5) exactly, and we let ut be the occupancy measure induced by policy πt that is based on the near-optimal parameters Zt satisfying Gt(Zt) ≤ minZ Gt(Z) + ε. We will also let µt be the occupancy measure resulting from mixing ut with the exploratory distribution µ0 and note that µt,h = (1−γ)ut,h+γµt,h. Using this notation, we will consider an auxiliary online learning problem with the sequence of reward functions given as r̂t,h(x, a) = 〈ϕ(x, a), θ̂t,h〉, and study the performance of the idealized sequence (µ̂t, ût) therein: R̂T = T∑ t=1 H−1∑ h=1 〈µ∗h − ût,h, r̂t,h〉. Our first lemma bounds the above quantity: Lemma 1. Suppose that θ̂t,h is such that ∣∣η · 〈ϕ(x, a), θ̂t,h〉∣∣ < 1 holds for all x, a. Then, the auxiliary regret satisfies R̂T ≤ η T∑ t=1 H−1∑ h=1 〈µ̂t,h, r̂2t,h〉+ 1 η D(µ∗‖µ0) + 1 α DC(u ∗‖µ0). While the proof makes use of a general potential-based argument commonly used for analyzing FTRLstyle algorithms, it involves several nontrivial elements exploiting the structural results concerning ONLINE Q-REPS proved in Proposition 1. In particular, these properties enable us to upper bound the potential differences in a particularly simple way. The main term on contributing to the regret R̂T can be bounded as follows: Lemma 2. Suppose that ϕ(Xt,h, a) is satisfying ‖ϕ(Xt,h, a)‖2 ≤ σ for any a, 0 < β ≤ 1 2σ2 √ d(M+1) and M > 0. Then for each t and h, Et [ 〈µ̂t,h, r̂2t,h〉 ] ≤ 3 + 5d+ (M + 1)2 ‖ût,h − ut,h‖1 . The proof of this claim makes heavy use of the fact that 〈µ̂t,h, r̂2t,h〉 = 〈ût,h, r̂2t,h〉, which is ensured by the construction of the reward estimator r̂t,h and the constraints on the feature covariance matrices in Equation (4). This property is not guaranteed to hold under the first-order constraints (2) used in the previous works of Neu and Pike-Burke [22] and Bas-Serrano et al. [5], which eventually justifies the higher complexity of our algorithm. It remains to relate the auxiliary regret to the actual regret. The main challenge is accounting for the mismatch between µt and ut, and the bias of r̂t, denoted as bt,h(x, a) = Et [r̂t,h(x, a)]− rt,h(x, a). To address these issues, we observe that for any t, h, we have 〈µt,h, rt,h〉 = 〈(1− γ)ut,h + γµ0,h, rt,h〉 = 〈(1− γ)ût,h + γµ0,h, rt,h〉+ (1− γ) 〈ut,h − ût,h, rt,h〉 ≥ Et [〈(1− γ)ût,h + γµ0,h, r̂t,h〉] + ‖bt,h‖∞ + (1− γ) ‖ut,h − ût,h‖1 , where in the last step we used the fact that ‖rt,h‖∞ ≤ 1. After straightforward algebraic manipulations, this implies that the regret can be bounded as RT ≤ (1− γ)E [ R̂T ] + T∑ t=1 H∑ h=1 E [ γ 〈µ0,h − µ∗h, rt,h〉+ ‖ût,h − ut,h‖1 + ‖bt,h‖∞ ] . (8) In order to proceed, we need to verify the condition ∣∣η · 〈ϕ(x, a), θ̂t,h〉∣∣ < 1 so that we can apply Lemma 1 to bound R̂T . This is done in the following lemma: Lemma 3. Suppose that η ≤ 1σ2β(M+1)H . Then, for all, t, h, the reward estimates satisfy η ‖r̂t,h‖∞ < 1. Proceeding under the condition η(M + 1), we can apply Lemma 1 to bound the first term on the right-hand side of Equation (8), giving RT ≤ D(µ∗‖µ0) η + DC(u ∗‖µ0) α + (3 + 5d)ηHT + γHT + ∑ t,h E [ (η(M + 1)2 + 1) ‖ût,h − ut,h‖1 + ‖bt,h‖∞ ] . It remains to bound the bias of the reward estimators and the effect of the optimization errors that result in the mismatch between ut and ût. The following lemma shows that this mismatch can be directly controlled as a function of the optimization error: Lemma 4. The following bound is satisfied for all t and h: ‖ût,h − ut,h‖1 ≤ √ 2αε. 
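For readability, the two displays above can be restated in standard notation; this is only a typeset rendering of Equation (8) and of the bound obtained after applying Lemmas 1 and 2, not new analysis.

```latex
\begin{align*}
  R_T \;\le\; (1-\gamma)\,\mathbb{E}\bigl[\widehat{R}_T\bigr]
      + \sum_{t=1}^{T}\sum_{h=1}^{H}
        \mathbb{E}\Bigl[\gamma\,\langle \mu_{0,h}-\mu^*_h,\, r_{t,h}\rangle
        + \lVert \hat{u}_{t,h}-u_{t,h}\rVert_1
        + \lVert b_{t,h}\rVert_\infty\Bigr],
\end{align*}
and, after applying Lemmas 1 and 2,
\begin{align*}
  R_T \;\le\; \frac{D(\mu^*\Vert\mu_0)}{\eta} + \frac{D_C(u^*\Vert\mu_0)}{\alpha}
      + (3+5d)\,\eta H T + \gamma H T
      + \sum_{t,h}\mathbb{E}\Bigl[\bigl(\eta(M+1)^2+1\bigr)
        \lVert \hat{u}_{t,h}-u_{t,h}\rVert_1 + \lVert b_{t,h}\rVert_\infty\Bigr].
\end{align*}
```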
The final element in the proof is the following lemma that bounds the bias of the estimator: Lemma 5. For M ≥ 0, β = 1σ2β(M+1)H , we have ‖bt,h‖∞ ≤ σR exp (−γβλminM). Putting these bounds together with the above derivations concludes the proof of Theorem 1. 5 Discussion This paper studies the problem of online learning in MDPs, merging two important lines of work on this problem concerned with linear function approximation [17, 7] and bandit feedback with adversarial rewards [23, 25, 35]. Our results are the first in this setting and not directly comparable with any previous work, although some favorable comparisons can be made with previous results in related settings. In the tabular setting where d = |X ||A|, our bounds exactly recover the minimax optimal guarantees first achieved by the O-REPS algorithm of Zimin and Neu [35]. For realizable linear function approximation, the work closest to ours is that of Cai et al. [7], who prove bounds of order √ d2H3T , which is worse by a factor of √ dH than our result. Their setting, however, is not exactly comparable to ours due to the different assumptions about the feedback about the rewards and the knowledge of the transition function. One particular strength of our work is providing a complete analysis of the propagation of optimization errors incurred while performing the updates. This is indeed a unique contribution in the related literature, where the effect of such errors typically go unaddressed. Specifically, the algorithms of Zimin and Neu [35], Rosenberg and Mansour [30], and Jin et al. [16] are all based on solving convex optimization problems similar to ours, the effect of optimization errors or potential methods for solving the optimization problems are not discussed at all. That said, we believe that the methods for calculating the updates discussed in Section 3.4 are far from perfect, and more research will be necessary to find truly practical optimization methods to solve this problem. The most important open question we leave behind concerns the requirement to have full prior knowledge of P . In the tabular case, this challenge has been successfully addressed in the adversarial MDP problem recently by Jin et al. [16], whose technique is based on adjusting the constraints (1) with a confidence set over the transition functions, to account for the uncertainty about the dynamics. We find it plausible that a similar extension of ONLINE Q-REPS is possible by incorporating a confidence set for linear MDPs, as has been done in the case of i.i.d. rewards by Neu and Pike-Burke [22]. Nevertheless, the details of such an extension remain highly non-trivial, and we leave the challenge of working them out open for future work.
1. What is the focus of the paper regarding online learning in linear MDPs? 2. What are the strengths of the proposed algorithm, particularly its computational advantages? 3. What are the weaknesses or misrepresentations in the main result, as pointed out by the reviewer? 4. How does the reviewer assess the regret bound and its relation to λmin? 5. What are the implications of the algorithm's sensitivity to λmin, and how could it be addressed in future work?
Summary Of The Paper Review
Summary Of The Paper

This paper studies online learning in linear MDPs with bandit feedback, adversarial losses, and a known transition kernel. The regret bound against the best policy is of order √(dHT). The proposed algorithm is an extension of Q-REPS of Bas-Serrano et al., which previously only worked in the standard tabular setting. The authors claim that Q-REPS has computational advantages over O-REPS: even though the state and action spaces can be arbitrarily large, solving the optimization problem in each step only requires poly(d) computation.

Review

Although the strong assumption of a known transition kernel simplifies the problem a lot, the setting of adversarial bandit linear MDPs has not been studied before, and the authors propose an interesting algorithm to tackle it. I like the fact that the authors make the algorithm as transparent as possible by detailing every sub-procedure, e.g., Matrix Geometric Resampling and how to solve the convex program using gradient descent.

However, I suspect that there is some misrepresentation in the main result, as I point out below. I would like to see the authors address it in the rebuttal (either by modifying the algorithm or by weakening the claimed bounds). As far as I can see, the regret bound should actually scale with √(T/λmin), not the √T given in Theorem 1. The reason is that Lemma 1 requires η to be smaller than the order of 1/(βM) (because the magnitude of θ̂ can be as large as βM). A similar condition on η is also specified in Theorem 1. Then the (1/η)D(µ*‖µ0) term will scale at least with βM. Now, in order to make the first regret term 2TσRH exp(−γβλminM) sublinear, one must pick M = Ω(1/(γβλmin)). So the regret term βM is at least Ω(1/(γλmin)). Further accounting for the regret term γHT, the regret will scale at least with √(T/λmin). In practice the learner has no way to know λmin and to set M and ε properly. The authors should discuss how the algorithm works when λmin is unknown.

========== after the discussion phase ==========

The authors agreed to write their bound in the clearer form √(T/λmin) and to discuss the difficulties of removing the λmin assumption in the MDP setting. These make the paper more complete, so I slightly raised my score.
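A compact version of the reviewer's calculation (our paraphrase of the argument above, using the quantities and conditions of Theorem 1):
\[
\eta \lesssim \frac{1}{\beta M}
\;\Longrightarrow\;
\frac{D(\mu^*\|\mu_0)}{\eta} \gtrsim \beta M ,
\qquad
M = \Omega\!\Bigl(\frac{1}{\gamma\beta\lambda_{\min}}\Bigr)
\;\Longrightarrow\;
\beta M \gtrsim \frac{1}{\gamma\lambda_{\min}} ,
\qquad
\min_{\gamma\in(0,1)}\Bigl\{\frac{1}{\gamma\lambda_{\min}} + \gamma HT\Bigr\} = 2\sqrt{\frac{HT}{\lambda_{\min}}} .
\]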
NIPS
Title Online learning in MDPs with linear function approximation and bandit feedback.

Abstract We consider the problem of online learning in an episodic Markov decision process, where the reward function is allowed to change between episodes in an adversarial manner and the learner only observes the rewards associated with its actions. We assume that rewards and the transition function can be represented as linear functions in terms of a known low-dimensional feature map, which allows us to consider the setting where the state space is arbitrarily large. We also assume that the learner has perfect knowledge of the MDP dynamics. Our main contribution is developing an algorithm whose expected regret after T episodes is bounded by Õ(√(dHT)), where H is the number of steps in each episode and d is the dimensionality of the feature map.

1 Introduction

We study the problem of online learning in episodic Markov Decision Processes (MDPs), modelling a sequential decision-making problem where the interaction between a learner and its environment is divided into T episodes of fixed length H. At each time step of the episode, the learner observes the current state of the environment, chooses one of the K available actions, and earns a reward. Consequently, the state of the environment changes according to the transition function of the underlying MDP, as a function of the previous state and the action taken by the learner. A key distinguishing feature of our setting is that we assume that the reward function can change arbitrarily between episodes, and the learner only has access to bandit feedback: instead of being able to observe the reward function at the end of the episode, the learner only gets to observe the rewards that it actually received. As is traditional in this line of work, we aim to design algorithms for the learner with theoretical guarantees on her regret, which is the difference between the total reward accumulated by the learner and the total reward of the best stationary policy fixed in hindsight. Unlike most previous work on this problem, we allow the state space to be very large and aim to prove performance guarantees that do not depend on the size of the state space, bringing theory one step closer to practical scenarios where assuming finite state spaces is unrealistic. To address the challenge of learning in large state spaces, we adopt the classic RL technique of using linear function approximation and suppose that we have access to a relatively low-dimensional feature map that can be used to represent policies and value functions. We will assume that the feature map is expressive enough so that all action-value functions can be expressed as linear functions of the features, and that the learner has full knowledge of the transition function of the MDP. Our main contribution is designing a computationally efficient algorithm called ONLINE Q-REPS, and proving that in the setting described above, its regret is at most O(√(dHT·D(µ*‖µ0))), where d is the dimensionality of the feature map and D(µ*‖µ0) is the relative entropy between the state-action distribution µ* induced by the optimal policy and an initial distribution µ0 given as input to the
algorithm. Notably, our results do not require the likelihood ratio between these distributions to be uniformly bounded, and the bound shows no dependence on the eigenvalues of the feature covariance matrices. Our algorithm itself requires solving a d²-dimensional convex optimization problem at the beginning of each episode, which can be solved to arbitrary precision ε in time polynomial in d and 1/ε, independently of the size of the state-action space.

Our work fits into a long line of research considering online learning in Markov decision processes. The problem of regret minimization in stationary MDPs with a fixed reward function has been studied extensively since the work of Burnetas and Katehakis [6], Auer and Ortner [2], Tewari and Bartlett [31], and Jaksch et al. [14], with several important advances made in the past decade [9, 10, 4, 13, 15]. While most of these works considered small finite state spaces, the same techniques have very recently been extended to accommodate infinite state spaces under the assumption of realizable function approximation by Jin et al. [17] and Yang and Wang [33]. In particular, the notion of linear MDPs introduced by Jin et al. [17] has become a standard model for linear function approximation and has been used in several recent works (e.g., 22, 32, 1). Even more relevant is the line of work considering adversarial rewards, initiated by Even-Dar et al. [12], who consider online learning in continuing MDPs with full feedback about the rewards. They proposed the MDP-E algorithm, which achieves O(τ²√(T log K)) regret, where τ is an upper bound on the mixing time of the MDP. Later, Neu et al. [25] proposed an algorithm which guarantees Õ(√(τ³KT/α)) regret with bandit feedback, essentially assuming that all states are reachable with probability α > 0 under all policies.

In our work, we focus on episodic MDPs with a fixed episode length H. The setting was first considered in the bandit setting by Neu et al. [23], who proposed an algorithm with a regret bound of O(H²√(TK)/α). Although the number of states does not appear explicitly in the bound, the regret scales at least linearly with the size of the state space X, since |X| ≤ H/α. Later work by Zimin and Neu [35] and Dick et al. [11] eliminated the dependence on α and proposed an algorithm achieving Õ(√(TH|X|K)) regret. Regret bounds for the full-information case without prior knowledge of the MDP were achieved by Neu et al. [24] and Rosenberg and Mansour [30], of order Õ(H|X|K√T) and Õ(H|X|√(KT)), respectively. These results were recently extended to handle bandit feedback about the rewards by Jin et al. [16], ultimately resulting in a regret bound of Õ(H|X|√(KT)).

As is apparent from the above discussion, all previous work on online learning in MDPs with adversarial rewards considers finite state spaces. The only exception we are aware of is the recent work of Cai et al. [7], whose algorithm OPPO is guaranteed to achieve Õ(√(d³H³T)) regret, assuming that the learner has access to d-dimensional features that can perfectly represent all action-value functions. While Cai, Yang, Jin, and Wang [7] remarkably assumed no prior knowledge of the MDP parameters, their guarantees are only achieved in the full-information case. This is to be contrasted with our results, which are achieved in the much more restrictive bandit setting, albeit with the stronger assumption of full knowledge of the underlying MDP, as required by virtually all prior work in the bandit setting with the exception of Jin et al. [16].
Our results are made possible by a careful combination of recently proposed techniques for contextual bandit problems and optimal control in Markov decision processes. In particular, a core component of our algorithm is a regularized linear programming formulation of optimal control in MDPs due to Bas-Serrano et al. [5], which allows us to reduce the task of computing near-optimal policies in linear MDPs to a low-dimensional convex optimization problem. A similar algorithm design has been previously used for tabular MDPs by Zimin and Neu [35] and Dick et al. [11], with the purpose of removing factors of 1/α from the previous state-of-the-art bounds of Neu et al. [23]. Analogously to this improvement, our methodology enables us to avoid making strong assumptions on problem-dependent constants such as the likelihood ratios between µ* and µ0 or the eigenvalues of the feature covariance matrices. Another important building block of our method is a version of the recently proposed Matrix Geometric Resampling procedure of Neu and Olkhovskaya [21] that enables us to efficiently estimate the reward functions. Incorporating these estimators into the algorithmic template of Bas-Serrano et al. [5] is far from straightforward and requires several subtle adjustments.

Notation. We use ⟨·, ·⟩ to denote inner products in Euclidean space, and by ‖·‖ we denote the Euclidean norm for vectors and the operator norm for matrices. For a symmetric positive definite matrix A, we use λmin(A) to denote its smallest eigenvalue. We write tr(A) for the trace of a matrix A, use A ⪰ 0 to denote that an operator A is positive semi-definite, and use A ⪰ B to denote A − B ⪰ 0. For a d-dimensional vector v, we denote the corresponding d × d diagonal matrix by diag(v). For a positive integer N, we use [N] to denote the set of positive integers {1, 2, . . . , N}. Finally, we denote the set of all probability distributions over any set X by ∆X.

2 Preliminaries

An episodic Markov decision process (MDP), denoted by M = (X, A, H, P, r), is defined by a state space X, action space A, episode length H ∈ Z₊, transition function P : X × A → ∆X, and reward function r : X × A → [0, 1]. For convenience, we will assume that both X and A are finite sets, although we allow the state space X to be arbitrarily large. Without significant loss of generality, we will assume that the set of available actions is the same A in each state, with cardinality |A| = K. Furthermore, without any loss of generality, we will assume that the MDP has a layered structure, satisfying the following conditions:
• the state set X can be decomposed into H disjoint sets: X = ∪Hh=1 Xh,
• X1 = {x1} and XH = {xH} are singletons,
• transitions are only possible between consecutive layers, that is, for any xh ∈ Xh, the distribution P(·|x, a) is supported on Xh+1 for all a and h ∈ [H − 1].
These assumptions are common in the related literature (e.g., 23, 35, 30) and are not essential to our analysis; their primary role is to simplify our notation. In the present paper, we consider an online learning problem where the learner interacts with its environment in a sequence of episodes t = 1, 2, . . . , T, facing a different sequence of reward functions rt,1, . . . , rt,H−1 selected by a (possibly adaptive) adversary at the beginning of each episode t. Oblivious to the reward function chosen by the adversary, the learner starts interacting with the MDP in each episode from the initial state Xt,1 = x1.
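To make the layered structure concrete, the following minimal sketch (our illustration, not from the paper; all variable names are ours) represents such an episodic MDP with per-layer transition tensors and checks the layering conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, K = 4, 3                       # episode length and number of actions
layer_sizes = [1, 5, 5, 1]        # |X_1| = |X_H| = 1, as required by the layering

# P[h][x, a, y]: probability of moving from state x in layer h to state y in
# layer h+1 (layers are 0-indexed here), so transitions only connect consecutive layers.
P = []
for h in range(H - 1):
    w = rng.random((layer_sizes[h], K, layer_sizes[h + 1]))
    P.append(w / w.sum(axis=2, keepdims=True))

# sanity check: every (x, a) pair induces a probability distribution over the next layer
for P_h in P:
    assert np.allclose(P_h.sum(axis=2), 1.0)
```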
At each consecutive step h ∈ [H − 1] within the episode, the learner observes the state Xt,h, picks an action At,h, and observes the reward rt,h(Xt,h, At,h). Then, unless h = H, the learner moves to the next state Xt,h+1, which is generated from the distribution P(·|Xt,h, At,h). At the end of step H, the episode terminates and a new one begins. The aim of the learner is to select its actions so that the cumulative sum of rewards is as large as possible.

Our algorithm and analysis make use of the concept of (stationary stochastic) policies π : X → ∆A. A policy π prescribes a behaviour rule to the learner by assigning probability π(a|x) to taking action a at state x. Let τπ = ((X1, A1), (X2, A2), . . . , (XH, AH)) be a trajectory generated by following the policy π through the MDP. Then, for any xh ∈ Xh and ah ∈ A, we define the occupancy measure µπh(x, a) = Pπ[(x, a) ∈ τπ]. We will refer to the collection of these distributions across all layers h as the occupancy measure induced by π and denote it by µπ = (µπ1, µπ2, . . . , µπH). We will denote the set of all valid occupancy measures by U and note that this is a convex set: every element µ ∈ U satisfies the set of linear constraints
\[
\sum_{a\in\mathcal{A}} \mu_{h+1}(x,a) \;=\; \sum_{x',a'\in\mathcal{X}_h\times\mathcal{A}} P(x|x',a')\,\mu_h(x',a'), \qquad \forall x\in\mathcal{X}_{h+1},\; h\in[H-1], \tag{1}
\]
as well as Σa µ1(x1, a) = 1. From every valid occupancy measure µ, a stationary stochastic policy π = (π1, . . . , πH−1) can be derived as πµ,h(a|x) = µh(x, a)/Σa′ µh(x, a′). For each h, introducing the linear operators E and Ph through their action on a state-action distribution uh as (ETuh)(x) = Σa∈A uh(x, a) and (PThuh)(x) = Σx′,a′∈Xh×A P(x|x′, a′)uh(x′, a′), the constraints can be written simply as ETµh+1 = PThµh for each h. We will use inner-product notation for sums over the set of states and actions: ⟨µh, rt,h⟩ = Σ(x,a)∈Xh×A µh(x, a)rt,h(x, a). Using this notation, we formulate our objective as selecting a sequence of policies πt for each episode t in a way that minimizes the total expected regret, defined as
\[
R_T \;=\; \sup_{\pi^*} \sum_{t=1}^{T}\sum_{h=1}^{H} \Bigl(\mathbb{E}_{\pi^*}\bigl[r_{t,h}(X^*_h, A^*_h)\bigr] - \mathbb{E}_{\pi_t}\bigl[r_{t,h}(X_{t,h}, A_{t,h})\bigr]\Bigr)
\;=\; \sup_{\mu^*\in\mathcal{U}} \sum_{t=1}^{T}\sum_{h=1}^{H} \langle \mu^*_h - \mu^{\pi_t}_h,\, r_{t,h}\rangle ,
\]
where the notations Eπ*[·] and Eπt[·] emphasize that the state-action trajectories are generated by following policies π* and πt, respectively. As the above expression suggests, we can reformulate our online learning problem as an instance of online linear optimization where, in each episode t, the learner selects an occupancy measure µt ∈ U (with µt = µπt) and gains reward ΣHh=1⟨µt,h, rt,h⟩. Intuitively, the regret measures the gap between the total reward gained by the learner and that of the best stationary policy fixed in hindsight, with full knowledge of the sequence of rewards chosen by the adversary. This performance measure is standard in the related literature on online learning in MDPs; see, for example, Neu et al. [23], Zimin and Neu [35], Neu et al. [24], Rosenberg and Mansour [30], and Cai et al. [7].

In this paper, we focus on MDPs with potentially enormous state spaces, which makes it difficult to design computationally tractable algorithms with nontrivial guarantees unless we make some assumptions. We particularly focus on the classic technique of relying on linear function approximation, assuming that the reward functions occurring during the learning process can be written as linear functions of a low-dimensional feature map.
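As an illustration of the occupancy-measure formalism above (our own sketch, not code from the paper; it reuses the layered representation from the previous sketch), µπh can be computed by propagating the state distribution forward through the layers, and the policy πµ can be recovered from µ by normalization.

```python
import numpy as np

def occupancy_measure(P, pi):
    """mu[h][x, a] = probability that (x, a) is visited at layer h under policy pi.

    P[h] has shape (|X_h|, K, |X_{h+1}|); pi[h] has shape (|X_h|, K) with rows
    summing to one. Layers are 0-indexed; layer 0 contains the single initial state.
    """
    state_dist = np.ones(P[0].shape[0])            # |X_1| = 1, so this is just [1.0]
    mu = []
    for P_h, pi_h in zip(P, pi):
        mu_h = state_dist[:, None] * pi_h          # joint law of (X_h, A_h)
        mu.append(mu_h)
        state_dist = np.einsum('xa,xay->y', mu_h, P_h)   # the flow constraint (1)
    return mu

def policy_from_occupancy(mu):
    """Recover pi_mu,h(a|x) = mu_h(x, a) / sum_a' mu_h(x, a')."""
    return [m / np.clip(m.sum(axis=1, keepdims=True), 1e-12, None) for m in mu]

# example usage with the layered MDP sketch above and a uniform policy:
# pi = [np.full(P_h.shape[:2], 1.0 / P_h.shape[1]) for P_h in P]
# mu = occupancy_measure(P, pi)
```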
We specify the form of function approximation and the conditions our analysis requires as follows:

Assumption 1 (Linear MDP with adversarial rewards). There exists a feature map ϕ : X × A → Rd and a collection of d signed measures m = (m1, . . . , md) on X such that for any (x, a) ∈ X × A the transition function can be written as P(·|x, a) = ⟨m(·), ϕ(x, a)⟩. Furthermore, the reward function chosen by the adversary in each episode t can be written as rt,h(x, a) = ⟨θt,h, ϕ(x, a)⟩ for some θt,h ∈ Rd. We assume that the features satisfy ‖ϕ(x, a)‖ ≤ σ and that the first coordinate satisfies ϕ1(x, a) = 1 for all (x, a) ∈ X × A. We also assume that ‖θt,h‖ ≤ R.

Online learning under this assumption, but with a fixed reward function, has received substantial attention in the recent literature, particularly since the work of Jin et al. [17], who popularized the term “linear MDP” to refer to this class of MDPs. This has quickly become a common assumption for studying reinforcement learning algorithms (Cai et al. [7], Jin et al. [17], Neu and Pike-Burke [22], Agarwal et al. [1]). It is also a special case of factored linear models (Yao et al. [34], Pires and Szepesvári [29]). Linear MDPs come with several attractive properties that allow efficient optimization and learning. In this work, we exploit the useful property, shown by Neu and Pike-Burke [22] and Bas-Serrano et al. [5], that all occupancy measures in a linear MDP can be seen to satisfy a relaxed version of the constraints in Equation (1). Specifically, for each h, defining the feature matrix Φh ∈ R(Xh×A)×d through its action on a distribution uh as ΦThuh = Σx,a∈Xh×A uh(x, a)ϕ(x, a), we define UΦ as the set of state-action distribution pairs (µ, u) = ((µ1, . . . , µH), (u1, . . . , uH)) satisfying the following constraints:
\[
E^{\top}u_{h+1} = P_h^{\top}\mu_h \;\;(\forall h), \qquad \Phi_h^{\top}u_h = \Phi_h^{\top}\mu_h \;\;(\forall h), \qquad E^{\top}u_1 = 1. \tag{2}
\]
It is easy to see that, if the MDP satisfies Assumption 1, then for all feasible (µ, u) pairs, u satisfies the original constraints (1): since the transition operator can be written as Ph = ΦhMh for some matrix Mh, we clearly have
\[
E^{\top}u_{h+1} = P_h^{\top}\mu_h = M_h^{\top}\Phi_h^{\top}\mu_h = M_h^{\top}\Phi_h^{\top}u_h = P_h^{\top}u_h , \tag{3}
\]
showing that any feasible u is indeed a valid occupancy measure. Furthermore, due to linearity of the rewards in Φ, we also have ⟨uh, rt,h⟩ = ⟨µh, rt,h⟩ for all feasible (µ, u) ∈ UΦ. While the number of variables and constraints in Equation (2) is still very large, it has recently been shown that approximate linear optimization over this set can be performed tractably [22, 5]. Our own algorithm design, described in the next section, builds heavily on these recent results.

3 Algorithm and main results

This section presents our main contributions: a new efficient algorithm for the setting described above, along with its performance guarantees. Our algorithm design is based on a reduction to online linear optimization, exploiting the structural results established in the previous section. In particular, we rely heavily on the algorithmic ideas established by Bas-Serrano et al. [5], who proposed an efficient reduction of approximate linear optimization over the high-dimensional set UΦ to a low-dimensional convex optimization problem. Another key component of our algorithm is an efficient estimator of the reward vectors θt,h, based on the work of Neu and Olkhovskaya [21]. For reasons that we will clarify in Section 4, accommodating these reward estimators within the framework of Bas-Serrano et al. [5] is not straightforward and necessitates some subtle changes.
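Before moving on, here is a small numerical check of the argument in Equation (3) (our illustration, on a synthetic linear structure with hypothetical dimensions): if Φᵀu = Φᵀµ and P = ΦM, then Pᵀu = Pᵀµ, so the relaxed feature constraint propagates exactly like the original flow constraint.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, n_next = 12, 3, 7             # |X_h x A|, feature dimension, |X_{h+1}|

Phi = rng.normal(size=(n, d))       # feature matrix Phi_h
M = rng.normal(size=(d, n_next))    # the measures m stacked into a matrix, so P_h = Phi_h M_h
P = Phi @ M                         # purely a linear-algebra check; rows need not be distributions

mu = rng.random(n)
mu /= mu.sum()
delta = rng.normal(size=n)
delta -= Phi @ np.linalg.lstsq(Phi, delta, rcond=None)[0]   # remove the span(Phi) component
u = mu + 1e-3 * delta               # now Phi^T u = Phi^T mu even though u != mu

assert np.allclose(Phi.T @ u, Phi.T @ mu)   # the relaxed feature constraint from (2)
assert np.allclose(P.T @ u, P.T @ mu)       # the implied original constraint, as in Eq. (3)
```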
3.1 The policy update rule

Our algorithm is an instantiation of the well-known “Follow the Regularized Leader” (FTRL) template commonly used in the design of modern online learning methods (see, e.g., 26). We will make the following design choices:

• The decision variables will be the vector (µ, u) ∈ R2(X×A), with the feasible set U²Φ defined through the constraints
\[
E^{\top}u_{h} = P_h^{\top}\mu_h \;\;(\forall h), \qquad \Phi_h^{\top}\mathrm{diag}(u_h)\Phi_h = \Phi_h^{\top}\mathrm{diag}(\mu_h)\Phi_h \;\;(\forall h). \tag{4}
\]
These latter constraints ensure that the feature covariance matrices under u and µ will be identical, which is necessary for technical reasons that will be clarified in Section 4. Notice that, due to our assumption that ϕ1(x, a) = 1, we have U²Φ ⊆ UΦ, so all feasible u’s continue to be feasible for the original constraints (1).

• The regularization function will be chosen as (1/η)D(µ‖µ0) + (1/α)DC(u‖µ0) for some positive regularization parameters η and α, where µ0 is the occupancy measure induced by the uniform π0 with π0(a|x) = 1/K for all x, a, and D and DC are the marginal and conditional relative entropy functions respectively defined as D(µ‖µ0) = ΣHh=1 D(µh‖µ0,h) and DC(µ‖µ0) = ΣHh=1 DC(µh‖µ0,h) with
\[
D(\mu_h\|\mu_{0,h}) = \sum_{(x,a)\in\mathcal{X}_h\times\mathcal{A}} \mu_h(x,a)\log\frac{\mu_h(x,a)}{\mu_{0,h}(x,a)},
\qquad
D_C(\mu_h\|\mu_{0,h}) = \sum_{(x,a)\in\mathcal{X}_h\times\mathcal{A}} \mu_h(x,a)\log\frac{\pi_{\mu,h}(a|x)}{\pi_{0,h}(a|x)}.
\]

With these choices, the updates of our algorithm in each episode will be given by
\[
(\mu_t, u_t) = \mathop{\arg\max}_{(\mu,u)\in\mathcal{U}^2_\Phi} \Bigl\{ \sum_{s=1}^{t-1}\sum_{h=1}^{H-1} \langle \mu_h, \widehat{r}_{s,h}\rangle - \frac{1}{\eta}D(\mu\|\mu_0) - \frac{1}{\alpha}D_C(u\|\mu_0) \Bigr\}, \tag{5}
\]
where r̂t,h ∈ RX×A is an estimator of the reward function rt,h that will be defined shortly. As written above, it is far from obvious if these updates can be calculated efficiently. The following result shows that, despite the apparent intractability of the maximization problem, it is possible to reduce the above problem into a d²-dimensional unconstrained convex optimization problem:

Proposition 1. Define for each h ∈ [H − 1] a matrix Zh ∈ Rd×d and let the matrix Z ∈ Rd×d(H−1) be defined as Z = (Z1, . . . , ZH−1). We will write h(x) = h if x ∈ Xh. Define the Q-function taking values QZ(x, a) = ϕ(x, a)TZh(x)ϕ(x, a) and define the value function
\[
V_Z(x) = \frac{1}{\alpha}\log\sum_{a\in\mathcal{A}(x)} \pi_0(a|x)\, e^{\alpha Q_Z(x,a)} .
\]
For any h ∈ [H−1] and for any x ∈ Xh, a ∈ A(x), denote Px,aVZ = Σx′∈Xh(x)+1 P(x′|x, a)VZ(x′) and ∆t,Z(x, a) = Σt−1s=1 r̂s,h(x)(x, a) + Px,aVZ − QZ(x, a). Then, the optimal solution of the optimization problem (5) is given as
\[
\widehat{\pi}_{t,h}(a|x) = \pi_0(a|x)\, e^{\alpha\left(Q_{Z^*_t}(x,a) - V_{Z^*_t}(x)\right)},
\qquad
\widehat{\mu}_{t,h}(x,a) \propto \mu_0(x,a)\, e^{\eta\,\Delta_{t,Z^*_t}(x,a)},
\]
where Z*t = (Z*t,1, . . . , Z*t,H−1) is the minimizer of the convex function
\[
G_t(Z) = \frac{1}{\eta}\sum_{h=1}^{H-1}\log\sum_{x\in\mathcal{X}_h,\,a\in\mathcal{A}(x)} \mu_0(x,a)\, e^{\eta\,\Delta_{t,Z}(x,a)} + V_Z(x_1). \tag{6}
\]

A particular merit of this result is that it gives an explicit formula for the policy πt that induces the optimal occupancy measure ut, and that πt(a|x) can be evaluated straightforwardly as a function of the features ϕ(x, a) and the parameters Z*t. The proof of the result is based on Lagrangian duality, and mainly follows the proof of Proposition 1 in Bas-Serrano et al. [5], with some subtle differences due to the episodic setting we consider and the appearance of the constraints ΦThdiag(uh)Φh = ΦThdiag(µh)Φh. The proof is presented in Appendix A.1. The proposition above inspires a very straightforward implementation that is presented as Algorithm 1. Due to the direct relation with the algorithm of Bas-Serrano et al. [5], we refer to this method as ONLINE Q-REPS, where Q-REPS stands for “Relative Entropy Policy Search with Q-functions”.
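The explicit form of the policy in Proposition 1 can be evaluated from the features and Z alone; the following minimal sketch (ours, with hypothetical inputs) does so with a numerically stable log-sum-exp for VZ.

```python
import numpy as np

def policy_from_Z(phi_x, Z_h, pi0_x, alpha):
    """pi(a|x) = pi0(a|x) * exp(alpha * (Q_Z(x, a) - V_Z(x))) as in Proposition 1.

    phi_x:  (K, d) features phi(x, a) for the K actions available at state x,
    Z_h:    (d, d) parameter block of the current layer,
    pi0_x:  (K,)   baseline policy at x (e.g. uniform, 1/K).
    """
    q = np.einsum('ad,de,ae->a', phi_x, Z_h, phi_x)      # Q_Z(x, a) = phi^T Z_h phi
    m = np.max(alpha * q)                                # stabilised log-sum-exp
    v = (m + np.log(np.sum(pi0_x * np.exp(alpha * q - m)))) / alpha   # V_Z(x)
    pi = pi0_x * np.exp(alpha * (q - v))
    return pi / pi.sum()                                 # sums to one up to rounding

# example with d = 4 features and K = 3 actions, uniform baseline policy
rng = np.random.default_rng(2)
print(policy_from_Z(rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), np.ones(3) / 3, alpha=0.5))
```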
ONLINE Q-REPS adapts the general idea of Q-REPS to the online setting in a similar way as the O-REPS algorithm of Zimin and Neu [35] adapted the Relative Entropy Policy Search method of Peters et al. [28] to regret minimization in tabular MDPs with adversarial rewards. While O-REPS would in principle still be applicable to the large-scale setting we study in this paper and would plausibly achieve similar regret guarantees, its implementation would be nearly impossible due to the lack of the structural properties enjoyed by ONLINE Q-REPS, as established in Proposition 1.

Algorithm 1 ONLINE Q-REPS
Parameters: η, α > 0, exploration parameter γ ∈ (0, 1).
Initialization: Set θ̂1,h = 0 for all h, compute Z1.
For t = 1, . . . , T, repeat:
• Draw Yt ∼ Ber(γ).
• For h = 1, . . . , H, do:
 – Observe Xt,h and, for all a ∈ A(Xt,h), set πt,h(a|Xt,h) = π0,h(a|Xt,h) e^{α(QZt(Xt,h, a) − VZt(Xt,h))},
 – if Yt = 0, draw At,h ∼ πt,h(·|Xt,h); otherwise draw At,h ∼ π0,h(·|Xt,h),
 – observe the reward rt,h(Xt,h, At,h).
• Compute θ̂t,1, . . . , θ̂t,H−1 and Zt+1.

3.2 The reward estimator

We now turn to describing the reward estimators r̂t,h, which will require several further definitions. Specifically, a concept of key importance will be the following feature covariance matrix: Σt,h = Eπt[ϕ(Xt,h, At,h)ϕ(Xt,h, At,h)T]. Provided that Σt,h is invertible, we can define the estimator
\[
\widetilde{\theta}_{t,h} = \Sigma_{t,h}^{-1}\,\varphi(X_{t,h}, A_{t,h})\, r_{t,h}(X_{t,h}, A_{t,h}). \tag{7}
\]
This estimate shares many similarities with the estimates broadly used in the literature on adversarial linear bandits [18, 3, 8]. It is easy to see that θ̃t,h is an unbiased estimate of θt,h:
\[
\mathbb{E}_t\bigl[\widetilde{\theta}_{t,h}\bigr] = \mathbb{E}_t\bigl[\Sigma_{t,h}^{-1}\varphi(X_{t,h}, A_{t,h})\varphi(X_{t,h}, A_{t,h})^{\top}\theta_{t,h}\bigr] = \Sigma_{t,h}^{-1}\Sigma_{t,h}\theta_{t,h} = \theta_{t,h}.
\]
Unfortunately, exact computation of Σt,h is intractable. To address this issue, we propose a method to directly estimate the inverse of the covariance matrix Σt,h by adapting the Matrix Geometric Resampling method of Neu and Olkhovskaya [21] (which itself is originally inspired by the Geometric Resampling method of 19, 20). Our adaptation has two parameters β > 0 and M ∈ Z₊, and generates an estimate of the inverse covariance matrix through the following procedure¹:

Matrix Geometric Resampling
Input: simulator of P, policy π̃t = (π̃t,1, . . . , π̃t,H−1).
For i = 1, . . . , M, repeat:
1. Simulate a trajectory τ(i) = {(X1(i), A1(i)), . . . , (XH−1(i), AH−1(i))} by following the policy π̃t in P.
2. For h = 1, . . . , H − 1, compute
 (a) Bi,h = ϕ(Xh(i), Ah(i))ϕ(Xh(i), Ah(i))T,
 (b) Ci,h = ∏ij=1 (I − βBj,h).
Return Σ̂+t,h = βI + β ΣMi=1 Ci,h for all h ∈ [H − 1].

¹The version we present here is a naïve implementation, optimized for readability. We present a more practical variant in Appendix B.

Based on the above procedure, we finally define our estimator as θ̂t,h = Σ̂+t,h ϕ(Xt,h, At,h) rt,h(Xt,h, At,h). The idea of the estimate is based on truncating the Neumann series expansion of the matrix Σ−1t,h at the Mth-order term. For large enough M, the matrix Σ̂+t,h is then a good estimator of the inverse covariance matrix, which will be quantified formally in the analysis. For more intuition on the estimate, see Section 3.2 of Neu and Olkhovskaya [21]. With a careful implementation explained in Appendix B, θ̂t,h can be computed in O(MHKd) time, using M calls to the simulator.

3.3 The regret bound

We are now ready to state our main result: a bound on the expected regret of ONLINE Q-REPS.
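Before the statement of the theorem, here is a direct transcription of the Matrix Geometric Resampling procedure from Section 3.2 into code (our naïve sketch following the displayed procedure; `simulate_trajectory` and `feature_map` are hypothetical handles to the simulator of P and to the feature map).

```python
import numpy as np

def mgr_inverse_covariance(simulate_trajectory, feature_map, policy, H, M, beta, d):
    """Sigma_hat_plus[h] = beta*I + beta * sum_{i<=M} prod_{j<=i} (I - beta * B_{j,h}).

    simulate_trajectory(policy) -> list of (x_h, a_h) pairs of length H-1 (hypothetical),
    feature_map(x, a) -> phi(x, a) in R^d (hypothetical).
    """
    C_sum = [np.zeros((d, d)) for _ in range(H - 1)]
    C_prod = [np.eye(d) for _ in range(H - 1)]
    for _ in range(M):
        trajectory = simulate_trajectory(policy)
        for h, (x, a) in enumerate(trajectory):
            phi = feature_map(x, a)
            B = np.outer(phi, phi)                          # B_{i,h}
            C_prod[h] = C_prod[h] @ (np.eye(d) - beta * B)  # C_{i,h} = prod_j (I - beta B_{j,h})
            C_sum[h] += C_prod[h]
    return [beta * np.eye(d) + beta * C for C in C_sum]

# the reward-parameter estimate for the observed (x, a) and reward r at layer h then reads:
# theta_hat = Sigma_hat_plus[h] @ feature_map(x, a) * r
```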
During the analysis, we will suppose that all the optimization problems solved by the algorithm are solved up to an additive error of ε ≥ 0. Furthermore, we will denote the covariance matrix generated by the uniform policy at layer h by Σ0,h, and make the following assumption:

Assumption 2. The eigenvalues of Σ0,h are lower bounded by λmin > 0 for all h.

Our main result is the following guarantee regarding the performance of ONLINE Q-REPS:

Theorem 1. Suppose that the MDP satisfies Assumptions 1 and 2 with λmin > 0. Furthermore, suppose that, for all t, Zt satisfies Gt(Zt) ≤ minZ Gt(Z) + ε for some ε ≥ 0. Then, for γ ∈ (0, 1), M ≥ 0, any positive η ≤ 1/(σ²β(M+1)H), and any positive β ≤ 1/(2σ²√(d(M+1))), the expected regret of ONLINE Q-REPS over T episodes satisfies
\[
R_T \le 2T\sigma R H\, e^{-\gamma\beta\lambda_{\min}M} + \gamma HT + \eta TH(3+5d) + \frac{1}{\eta}D(\mu^*\|\mu_0) + \frac{1}{\alpha}D_C(u^*\|\mu_0) + \sqrt{\alpha\varepsilon}\, TH\bigl(1 + \eta(M+1)^2\bigr).
\]
Furthermore, letting
\[
\beta = \frac{1}{2\sigma^2\sqrt{d(M+1)}}, \qquad M = \Bigl\lceil \frac{\sigma^4 d \log^2(\sqrt{TH}\,\sigma R)}{\gamma^2\lambda_{\min}^2} \Bigr\rceil, \qquad \eta = \frac{1}{\sqrt{TdH}}, \qquad \alpha = \frac{1}{\sqrt{TdH}}, \qquad \gamma = \frac{1}{\sqrt{TH}},
\]
and supposing that T is large enough so that the above constraints on M, γ, η and β are satisfied, we also have
\[
R_T = \widetilde{O}\Bigl(\sqrt{dHT}\,\bigl(1 + D(\mu^*\|\mu_0) + D_C(u^*\|\mu_0)\bigr) + \sqrt{\varepsilon}\,(TH)^{9/4} d^{5/4}\,\frac{1}{\lambda_{\min}^4}\Bigr).
\]
Thus, when all optimization problems are solved up to precision ε = (TH)^{−7/2} d^{−3/2} λ⁸min, the regret of ONLINE Q-REPS is guaranteed to be of O(√(dHT·D(µ*‖µ0))).

3.4 Implementation

While Proposition 1 establishes the form of the ideal policy updates πt through the solution of an unconstrained convex optimization problem, it is not obvious that this optimization problem can be solved efficiently. Indeed, one immediate challenge in optimizing Gt is that its gradient takes the form
\[
\nabla G_t(Z) = \sum_{x,a} \widetilde{\mu}_Z(x,a)\Bigl(\varphi(x,a)\varphi(x,a)^{\top} - \sum_{x',a'} P(x'|x,a)\,\pi_Z(a'|x')\,\varphi(x',a')\varphi(x',a')^{\top}\Bigr),
\]
where µ̃Z(x, a) = µ0(x, a) exp(η∆Z(x, a)) / Σx′,a′ µ0(x′, a′) exp(η∆Z(x′, a′)). Sampling from this latter distribution (and thus obtaining unbiased estimators of ∇Gt(Z)) is problematic due to the intractable normalization constant. This challenge can be addressed in a variety of ways. First, one can estimate the gradients via weighted importance sampling from the distribution µ̃Z and use these estimates in a stochastic optimization procedure. This approach has recently been proposed and analyzed for an approximate implementation of REPS by Pacchiano et al. [27], who showed that it results in ε-optimal policy updates given polynomially many samples in 1/ε. Alternatively, one can consider an empirical counterpart of the loss function, replacing the expectation with respect to µ0 by an empirical average over a number of i.i.d. samples drawn from the same distribution. The resulting loss function can then be optimized via standard stochastic optimization methods. This approach has been proposed and analyzed by Bas-Serrano et al. [5]. We describe the specifics of this latter approach in Appendix C.

4 Analysis

This section gives the proof of Theorem 1 by stating the main technical results as lemmas and putting them together to obtain the final bound. In the first part of the proof, we show an upper bound on an auxiliary regret minimization game with general reward inputs and ideal updates. Then, we relate this quantity to the true expected regret by taking into account the properties of our reward estimates and the optimization errors incurred when calculating the updates. The proofs of all the lemmas are deferred to Appendix A.
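As a concrete footnote to Theorem 1, the parameter choices in its second part can be collected in one helper (our sketch; it simply transcribes the displayed formulas and assumes λmin is known, which, as discussed, may not hold in practice).

```python
import numpy as np

def qreps_parameters(T, H, d, sigma, R, lam_min):
    """Transcribes the parameter choices from the second part of Theorem 1."""
    gamma = 1.0 / np.sqrt(T * H)
    M = int(np.ceil(sigma**4 * d * np.log(np.sqrt(T * H) * sigma * R) ** 2
                    / (gamma**2 * lam_min**2)))
    beta = 1.0 / (2.0 * sigma**2 * np.sqrt(d * (M + 1)))
    eta = 1.0 / np.sqrt(T * d * H)
    alpha = 1.0 / np.sqrt(T * d * H)
    return {"gamma": gamma, "M": M, "beta": beta, "eta": eta, "alpha": alpha}

print(qreps_parameters(T=10_000, H=10, d=8, sigma=1.0, R=1.0, lam_min=0.5))
```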
1. What is the focus of the paper regarding online learning in MDPs? 2. What are the strengths of the proposed method, particularly in its regret bound? 3. What are the limitations of the paper, especially regarding its assumptions and applicability? 4. How does the reviewer assess the quality of the result and its contribution to the field? 5. Are there any suggestions or questions regarding the presentation of the result, such as the effect of optimization error or the use of off-the-shelf solvers?
Summary Of The Paper Review
Summary Of The Paper

This paper studies the problem of online learning in a finite-horizon MDP with a known transition function, adversarially changing rewards at each episode, and only bandit feedback from those rewards. In particular, the authors consider the case when the state space is large, warranting linear function approximation, with the assumption that the MDP is a linear MDP with a known feature map. The proposed method is essentially a follow-the-leader algorithm using estimated rewards, which are found by estimating the parameter of the reward function.

Review

Overall, I think this is a good paper. It is very well-written and tackles an interesting open problem in the field. The related work and introduction are detailed and comprehensive and do a good job of making the contributions of the paper clear. The assumptions are repeatedly described. The regret bound is interesting and significant in that the authors show that √(dHT) regret is possible in this difficult setting. Although the proposed method combines ideas from several prior works, I do not think that this is necessarily bad, and it does not seem to detract from the quality of the result. As the authors point out, the main limitation is that the results apply only in the case where the transition function is exactly known, and it is not clear how this might extend beyond that. That being said, I still think this setting is important to understand. The other limitation is that the regret bound depends on the divergence between the initial and optimal distributions, which I do not think is always needed in tabular cases, so this would be interesting to address in future work. It is pointed out that the bound has no dependence on the minimum eigenvalue, but I actually think this would be preferable to the divergence, since the latter is asking for something akin to “good coverage”, which is immutable in the problem, while the former is asking only for a good model, which could be chosen by a practitioner even without good coverage.

Other questions & suggestions:

While it is not strictly necessary, I think that the result of Theorem 1 would feel more complete if one could see exactly the effect of the optimization error of solving G, for example, in terms of samples as described in Appendix C. Right now there is an epsilon that represents this in the main theorem. However, since a great deal of the motivation is tractability, perhaps a corollary would be helpful even though this would just be a simple calculation.

In Lemma 8 in the appendix, doesn’t one need a uniform convergence guarantee since the goal is to minimize over \hat{G}? The statement looks like it holds only for a single Z, but the proof seems to handle it more generally. Also, when can one expect that Delta is bounded?

Is there anything wrong with simply plugging the optimization problem (5) into an off-the-shelf solver if one were to implement this practically, instead of solving the unconstrained optimization problem? It is my understanding that many solvers can handle relative entropy objectives, and the constraints seem tame enough. There is some discussion already about it potentially being intractable, but in what way is this the case? Too many states/actions, or something inherent about the objective/constraints?
NIPS
Title Online learning in MDPs with linear function approximation and bandit feedback. Abstract We consider the problem of online learning in an episodic Markov decision process, where the reward function is allowed to change between episodes in an adversarial manner and the learner only observes the rewards associated with its actions. We assume that rewards and the transition function can be represented as linear functions in terms of a known low-dimensional feature map, which allows us to consider the setting where the state space is arbitrarily large. We also assume that the learner has a perfect knowledge of the MDP dynamics. Our main contribution is developing an algorithm whose expected regret after T episodes is bounded by Õ (√ dHT ) , where H is the number of steps in each episode and d is the dimensionality of the feature map. N/A (√ dHT ) , where H is the number of steps in each episode and d is the dimensionality of the feature map. 1 Introduction We study the problem of online learning in episodic Markov Decision Processes (MDP), modelling a sequential decision making problem where the interaction between a learner and its environment is divided into T episodes of fixed length H . At each time step of the episode, the learner observes the current state of the environment, chooses one of the K available actions, and earns a reward. Consequently, the state of the environment changes according to the transition function of the underlying MDP, as a function of the previous state and the action taken by the learner. A key distinguishing feature of our setting is that we assume that the reward function can change arbitrarily between episodes, and the learner only has access to bandit feedback: instead of being able to observe the reward function at the end of the episode, the learner only gets to observe the rewards that it actually received. As traditional in this line of work, we aim to design algorithms for the learner with theoretical guarantees on her regret, which is the difference between the total reward accumulated by the learner and the total reward of the best stationary policy fixed in hindsight. Unlike most previous work on this problem, we allow the state space to be very large and aim to prove performance guarantees that do not depend on the size of the state space, bringing theory one step closer to practical scenarios where assuming finite state spaces is unrealistic. To address the challenge of learning in large state spaces, we adopt the classic RL technique of using linear function approximation and suppose that we have access to a relatively low-dimensional feature map that can be used to represent policies and value functions. We will assume that the feature map is expressive enough so that all action-value functions can be expressed as linear functions of the features, and that the learner has full knowledge of the transition function of the MDP. Our main contribution is designing a computationally efficient algorithm called ONLINE Q-REPS, and prove that in the setting described above, its regret is at most O (√ dHTD (µ∗‖µ0) ) , where d is the dimensionality of the feature map and D (µ∗‖µ0) is the relative entropy between the state-action distribution µ∗ induced by the optimal policy and an initial distribution µ0 given as input to the 35th Conference on Neural Information Processing Systems (NeurIPS 2021). algorithm. 
Notably, our results do not require the likelihood ratio between these distributions to be uniformly bounded, and the bound shows no dependence on the eigenvalues of the feature covariance matrices. Our algorithm itself requires solving a d2-dimensional convex optimization problem at the beginning of each episode, which can be solved to arbitrary precision ε in time polynomial in d and 1/ε, independently of the size of the state-action space. Our work fits into a long line of research considering online learning in Markov decision processes. The problem of regret minimization in stationary MDPs with a fixed reward function has been studied extensively since the work of Burnetas and Katehakis [6], Auer and Ortner [2], Tewari and Bartlett [31], Jaksch et al. [14], with several important advances made in the past decade [9, 10, 4, 13, 15]. While most of these works considered small finite state spaces, the same techniques have been very recently extended to accommodate infinite state spaces under the assumption of realizable function approximation by Jin et al. [17] and Yang and Wang [33]. In particular, the notion of linear MDPs introduced by Jin et al. [17] has become a standard model for linear function approximation and has been used in several recent works (e.g., 22, 32, 1). Even more relevant is the line of work considering adversarial rewards, initiated by Even-Dar et al. [12], who consider online learning in continuing MDPs with full feedback about the rewards. They proposed a MDP-E algorithm, that achieves O(τ2 √ T logK) regret, where τ is an upper bound on the mixing time of the MDP. Later, Neu et al. [25] proposed an algorithm which guarantees Õ (√ τ3KT/α ) regret with bandit feedback, essentially assuming that all states are reachable with probability α > 0 under all policies. In our work, we focus on episodic MDPs with a fixed episode length H . The setting was first considered in the bandit setting by Neu et al. [23], who proposed an algorithm with a regret bound of O(H2 √ TK/α). Although the number of states does not appear explicitly in the bound, the regret scales at least linearly with the size of the state space X , since |X | ≤ H/α. Later work by Zimin and Neu [35], Dick et al. [11] eliminated the dependence on α and proposed an algorithm achieving Õ( √ TH|X |K) regret. Regret bounds for the full-information case without prior knowledge of the MDP were achieved by Neu et al. [24] and Rosenberg and Mansour [30], of order Õ(H|X |K √ T ) and Õ(H|X | √ KT ), respectively. These results were recently extended to handle bandit feedback about the rewards by Jin et al. [16], ultimately resulting in a regret bound of Õ(H|X | √ KT ). As apparent from the above discussion, all work on online learning in MDPs with adversarial rewards considers finite state spaces. The only exception we are aware of is the recent work of Cai et al. [7], whose algorithm OPPO is guaranteed to achieve Õ (√ d3H3T ) , assuming that the learner has access to d-dimensional features that can perfectly represent all action-value functions. While Cai, Yang, Jin, and Wang [7] remarkably assumed no prior knowledge of the MDP parameters, their guarantees are only achieved in the full-information case. This is to be contrasted with our results that are achieved for the much more restrictive bandit setting, albeit with the stronger assumption of having full knowledge of the underlying MDP, as required by virtually all prior work in the bandit setting, with the exception of Jin et al. [16]. 
Our results are made possible by a careful combination of recently proposed techniques for contextual bandit problems and optimal control in Markov decision processes. In particular, a core component of our algorithm is a regularized linear programming formulation of optimal control in MDPs due to Bas-Serrano et al. [5], which allows us to reduce the task of computing near-optimal policies in linear MDPs to a low-dimensional convex optimization problem. A similar algorithm design has been previously used for tabular MDPs by Zimin and Neu [35], Dick et al. [11], with the purpose of removing factors of 1/α from the previous state-of-the-art bounds of Neu et al. [23]. Analogously to this improvement, our methodology enables us to make strong assumptions on problem-dependent constants like likelihood ratios between µ∗ and µ0 or eigenvalues of the feature covariance matrices. Another important building block of our method is a version of the recently proposed Matrix Geometric Resampling procedure of Neu and Olkhovskaya [21] that enables us to efficiently estimate the reward functions. Incorporating these estimators in the algorithmic template of Bas-Serrano et al. [5] is far from straightforward and requires several subtle adjustments. Notation. We use 〈·, ·〉 to denote inner products in Euclidean space and by ‖·‖ we denote the Euclidean norm for vectors and the operator norm for matrices. For a symmetric positive definite matrix A, we use λmin(A) to denote its smallest eigenvalue. We write tr (A) for the trace of a matrix A and use A < 0 to denote that an operator A is positive semi-definite, and we use A < B to denote A−B < 0. For a d-dimensional vector v, we denote the corresponding d× d diagonal matrix by diag(v). For a positive integer N , we use [N ] to denote the set of positive integers {1, 2, . . . , N}. Finally, we will denote the set of all probability distributions over any set X by ∆X . 2 Preliminaries An episodic Markovian Decision Process (MDP), denoted by M = (X ,A, H, P, r) is defined by a state space X , action space A, episode length H ∈ Z+, transition function P : X ×A → ∆X and a reward function r : X ×A → [0, 1]. For convenience, we will assume that both X and A are finite sets, although we allow the state space X to be arbitrarily large. Without significant loss of generality, we will assume that the set of available actions is the same A in each state, with cardinality |A| = K. Furthermore, without any loss of generality, we will assume that the MDP has a layered structure, satisfying the following conditions: • The state set X can be decomposed into H disjoint sets: X = ∪Hh=1Xh, • X1 = {x1} and XH = {xH} are singletons, • transitions are only possible between consecutive layers, that is, for any xh ∈ Xh, the distribution P (·|x, a) is supported on Xh+1 for all a and h ∈ [H − 1]. These assumptions are common in the related literature (e.g., 23, 35, 30) and are not essential to our analysis; their primary role is simplifying our notation. In the present paper, we consider an online learning problem where the learner interacts with its environment in a sequence of episodes t = 1, 2, . . . , T , facing a different reward functions rt,1, . . . rt,H+1 selected by a (possibly adaptive) adversary at the beginning of each episode t. Oblivious to the reward function chosen by the adversary, the learner starts interacting with the MDP in each episode from the initial state Xt,1 = x1. 
At each consecutive step h ∈ [H − 1] within the episode, the learner observes the state Xt,h, picks an action At,h and observes the reward rt,h(Xt,h, At,h). Then, unless h = H , the learner moves to the next state Xt,h+1, which is generated from the distribution P (·|Xt,h, At,h). At the end of step H , the episode terminates and a new one begins. The aim of the learner is to select its actions so that the cumulative sum of rewards is as large as possible. Our algorithm and analysis will make use of the concept of (stationary stochastic) policies π : X → ∆A. A policy π prescribes a behaviour rule to the learner by assigning probability π(a|x) to taking action a at state x. Let τπ = ((X1, A1), (X2, A2), . . . , (XH , AH)) be a trajectory generated by following the policy π through the MDP. Then, for any xh ∈ Xh, ah ∈ A we define the occupancy measure µπh(x, a) = Pπ [(x, a) ∈ τπ]. We will refer to the collection of these distributions across all layers h as the occupancy measure induced by π and denote it as µπ = (µπ1 , µ π 2 , . . . , µ π H). We will denote the set of all valid occupancy measures by U and note that this is a convex set, such that for every element µ ∈ U the following set of linear constraints is satisfied:∑ a∈A µh+1(x, a) = ∑ x′,a′∈Xh×A P (x|x′, a′)µh(x′, a′), ∀x ∈ Xh+1, h ∈ [H − 1], (1) as well as ∑ a µ1(x1, a) = 1. From every valid occupancy measure µ, a stationary stochastic policy π = π1, . . . , πH−1 can be derived as πµ,h(a|x) = µh(x, a)/ ∑ a′ µh(x, a ′). For each h, introducing the linear operators E and P through their action on a set state-action distribution uh as (ETuh)(x) = ∑ a∈A uh(x, a) and (P T huh)(x) = ∑ x′,a′∈Xh,A P (x|x ′, a′)uh(x ′, a′), the constraints can be simply written as ETµh+1 = P Thµh for each h. We will use the inner product notation for the sum over the set of states and actions: 〈µh, rh〉 = ∑ (x,a)∈(Xh×A) µh(x, a)rt,h(x, a). Using this notation, we formulate our objective as selecting a sequence of policies πt for each episode t in a way that it minimizes the total expected regret defined as RT = sup π∗ T∑ t=1 H∑ h=1 (Eπ∗ [rt,h(X∗h, A∗h)]− Eπt [rt(Xt,h, At,h)]) = sup µ∗∈U T∑ t=1 H∑ h=1 〈µ∗h − µ πt h , rt,h〉 , where the notations Eπ∗ [·] and Eπt [·] emphasize that the state-action trajectories are generated by following policies π∗ and πt, respectively. As the above expression suggests, we can reformulate our online learning problem as an instance of online linear optimization where in each episode t, the learner selects an occupancy measure µt ∈ U (with µt = µπt) and gains reward ∑H h=1〈µt,h, rt,h〉. Intuitively, the regret measures the gap between the total reward gained by the learner and that of the best stationary policy fixed in hindsight, with full knowledge of the sequence of rewards chosen by the adversary. This performance measure is standard in the related literature on online learning in MDPs, see, for example Neu et al. [23], Zimin and Neu [35], Neu et al. [24], Rosenberg and Mansour [30], Cai et al. [7]. In this paper, we focus on MDPs with potentially enormous state spaces, which makes it difficult to design computationally tractable algorithms with nontrivial guarantees, unless we make some assumptions. We particularly focus on the classic technique of relying on linear function approximation and assuming that the reward functions occurring during the learning process can be written as a linear function of a low-dimensional feature map. 
We specify the form of function approximation and the conditions our analysis requires as follows: Assumption 1 (Linear MDP with adversarial rewards). There exists a feature map ϕ : X ×A → Rd and a collection of d signed measures m = (m1, . . . ,md) on X , such that for any (x, a) ∈ X ×A the transition function can be written as P (·|x, a) = 〈m(·), ϕ(x, a)〉 . Furthermore, the reward function chosen by the adversary in each episode t can be written as rt,h(x, a) = 〈θt,h, ϕ(x, a)〉 for some θt,h ∈ Rd. We assume that the features and the parameter vectors satisfy ‖ϕ(x, a)‖ ≤ σ and that the first coordinate ϕ1(x, a) = 1 for all (x, a) ∈ X ×A. Also we assume that ‖θt,h‖ ≤ R. Online learning under this assumption, but with a fixed reward function, has received substantial attention in the recent literature, particularly since the work of Jin et al. [17] who popularized the term “Linear MDP” to refer to this class of MDPs. This has quickly become a common assumption for studying reinforcement learning algorithms (Cai et al. [7], Jin et al. [17], Neu and Pike-Burke [22], Agarwal et al. [1]). This is also a special case of factored linear models (Yao et al. [34], Pires and Szepesvári [29]). Linear MDPs come with several attractive properties that allow efficient optimization and learning. In this work, we will exploit the useful property shown by Neu and Pike-Burke [22] and Bas-Serrano et al. [5] that all occupancy measures in a linear MDP can be seen to satisfy a relaxed version of the constraints in Equation (1). Specifically, for all h, defining the feature matrix Φh ∈ R(Xh×A)×d with its action on the distribution u as ΦThu = ∑ x,a∈Xh,A uh(x, a)ϕ(x, a), we define UΦ as the set of state-action distributions (µ, u) = ((µ1, . . . , µH), (u1, . . . , uH)) satisfying the following constraints: ETuh+1 = P T hµh (∀h), ΦThuh = ΦThµh (∀h), ETu1 = 1. (2) It is easy to see that for all feasible (µ, u) pairs, u satisfies the original constraints (1) if the MDP satisfies Assumption 1: since the transition operator can be written as Ph = ΦhMh for some matrix Mh. In this case, we clearly have ETuh+1 = P T hµh = M T hΦ T hµh = M T hΦ T huh = P T huh, (3) showing that any feasible u is indeed a valid occupancy measure. Furthermore, due to linearity of the rewards in Φ, we also have 〈uh, rt,h〉 = 〈µh, rt,h〉 for all feasible (µ, u) ∈ UΦ. While the number of variables and constraints in Equation (2) is still very large, it has been recently shown that approximate linear optimization over this set can be performed tractably [22, 5]. Our own algorithm design described in the next section will heavily build on these recent results. 3 Algorithm and main results This section presents our main contributions: a new efficient algorithm for the setting described above, along with its performance guarantees. Our algorithm design is based on a reduction to online linear optimization, exploiting the structural results established in the previous section. In particular, we will heavily rely on the algorithmic ideas established by Bas-Serrano et al. [5], who proposed an efficient reduction of approximate linear optimization over the high-dimensional set UΦ to a low-dimensional convex optimization problem. Another key component of our algorithm is an efficient estimator of the reward vectors θt,h based on the work of Neu and Olkhovskaya [21]. For reasons that we will clarify in Section 4, accommodating these reward estimators into the framework of Bas-Serrano et al. [5] is not straightforward and necessitates some subtle changes. 
3.1 The policy update rule Our algorithm is an instantiation of the well-known “Follow the Regularized Leader” (FTRL) template commonly used in the design of modern online learning methods (see, e.g., 26). We will make the following design choices: • The decision variables will be the vector (µ, u) ∈ R2(X×A), with the feasible set U2Φ defined through the constraints ETuh = P T hµh (∀h), ΦThdiag(uh)Φh = ΦThdiag(µh)Φh (∀h). (4) These latter constraints ensure that the feature covariance matrices under u and µ will be identical, which is necessary for technical reasons that will be clarified in Section 4. Notice that, due to our assumption that ϕ1(x, a) = 1, we have U2Φ ⊆ UΦ, so all feasible u’s continue to be feasible for the original constraints (1). • The regularization function will be chosen as 1ηD(µ‖µ0) + 1 αDC(u‖µ0) for some positive regularization parameters η and α, where µ0 is the occupancy measure induced by the uniform π0 with π0(a|x) = 1K for all x, a, and D and DC are the marginal and conditional relative entropy functions respectively defined as D(µ‖µ0) = ∑H h=1D(µh‖µ0,h) and DC(µ‖µ0) = ∑H h=1DC(µh‖µ0,h) with D(µh‖µ0,h) = ∑ (x,a)∈(Xh×A) µh(x, a) log µh(x, a) µ0,h(x, a) , and DC(µh‖µ0,h) = ∑ (x,a)∈(Xh×A) µh(x, a) log πµ,h(a|x) π0,h(a|x) . With these choices, the updates of our algorithm in each episode will be given by (µt, ut) = arg max (µ,u)∈U2Φ { t−1∑ s=1 H−1∑ h=1 〈µh, r̂s,h〉 − 1 η D(µ‖µ0)− 1 α DC(u‖µ0) } (5) where r̂t,h ∈ RX×A is an estimator of the reward function rt,h that will be defined shortly. As written above, it is far from obvious if these updates can be calculated efficiently. The following result shows that, despite the apparent intractability of the maximization problem, it is possible to reduce the above problem into a d2-dimensional unconstrained convex optimization problem: Proposition 1. Define for each h ∈ [H − 1], a matrix Zh ∈ Rd×d and let matrix Z ∈ Rd×d(H−1) be defined as Z = (Z1, . . . , ZH−1). We will write h(x) = h, if x ∈ Xh. Define the Q-function taking values QZ(x, a) = ϕ(x, a)TZh(x)ϕ(x, a) and define the value function VZ(x) = 1 α log ∑ a∈A(x) π0(a|x)eαQZ(x,a) For any h ∈ [H−1] and for any x ∈ Xh, a ∈ A(x), denotePx,aVZ = ∑ x′∈Xh(x)+1 P (x ′|x, a)VZ(x′) and ∆t,Z(x, a) = ∑t−1 s=1 r̂s,h(x)(x, a) + Px,aVZ − QZ(x, a). Then, the optimal solution of the optimization problem (5) is given as π̂t,h(a|x) = π0(a|x)e α ( QZ∗t (x,a)−VZ∗t (x) ) , µ̂t,h(x, a) ∝ µ0(x, a)eη∆t,Z∗t (x,a), where Z∗t = (Z ∗ t,1, . . . , Z ∗ t,H−1) is the minimizer of the convex function Gt(Z) = 1 η H−1∑ h=1 log ∑ x∈Xh,a∈A(x) µ0(x, a)e η∆t,Z(x,a) + VZ(x1). (6) A particular merit of this result is that it gives an explicit formula for the policy πt that induces the optimal occupancy measure ut, and that πt(a|x) can be evaluated straightforwardly as a function of the features ϕ(x, a) and the parameters Z∗t . The proof of the result is based on Lagrangian duality, and mainly follows the proof of Proposition 1 in Bas-Serrano et al. [5], with some subtle differences due to the episodic setting we consider and the appearance of the constraints ΦThdiag(uh)Φh = ΦThdiag(µh)Φh. The proof is presented in Appendix A.1. The proposition above inspires a very straightforward implementation that is presented as Algorithm 1. Due to the direct relation with the algorithm of Bas-Serrano et al. [5], we refer to this method as ONLINE Q-REPS, where Q-REPS stands for “Relative Entropy Policy Search with Q-functions”. 
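A practical consequence of Proposition 1 is that π̂t,h(·|x) only requires the features at the current state and the parameter matrix Z of that layer. The sketch below (ours; the function name and toy sizes are assumptions) evaluates the policy with the usual log-sum-exp stabilization; the final normalisation plays the role of the e^{−αVZ(x)} factor.

```python
import numpy as np

def qreps_policy(phi_xa, Z_h, alpha, pi0=None):
    """Policy of Proposition 1 at one state: pi(a|x) proportional to pi0(a|x) * exp(alpha * Q_Z(x, a)).

    phi_xa: (K, d) array holding phi(x, a) for every action a at the current state x.
    Z_h:    (d, d) parameter matrix of the layer containing x.
    """
    K = phi_xa.shape[0]
    if pi0 is None:
        pi0 = np.full(K, 1.0 / K)                           # uniform reference policy pi_0
    q = np.einsum('ad,de,ae->a', phi_xa, Z_h, phi_xa)       # Q_Z(x, a) = phi(x, a)^T Z_h phi(x, a)
    logits = np.log(pi0) + alpha * q
    logits -= logits.max()                                  # numerical stability only
    p = np.exp(logits)
    return p / p.sum()                                      # normalisation implements exp(-alpha * V_Z(x))

# Toy usage
rng = np.random.default_rng(2)
d, K = 5, 4
print(qreps_policy(rng.standard_normal((K, d)), rng.standard_normal((d, d)), alpha=0.5))
```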
ONLINE Q-REPS adapts the general idea of Q-REPS to the online setting in a similar way as the O-REPS algorithm of Zimin and Neu [35] adapted the Relative Entropy Policy Search method of Peters et al. [28] to regret minimization in tabular MDPs with adversarial rewards. While O-REPS would in principle be still applicable to the large-scale setting we study in this paper and would plausibly achieve similar regret guarantees, its implementation would be nearly impossible due to the lack of the structural properties enjoyed by ONLINE Q-REPS, as established in Proposition 1. Algorithm 1 ONLINE Q-REPS Parameters: η, α > 0, exploration parameter γ ∈ (0, 1), Initialization: Set θ̂1,h = 0 for all h, compute Z1. For t = 1, . . . , T , repeat: • Draw Yt ∼ Ber(γ), • For h = 1, . . . ,H , do: – Observe Xt,h and, for all a ∈ A(Xt,h), set πt,h(a|Xt,h) = π0,h(a|Xt,h)eα(QZt (Xt,h,a)−VZt (Xt,h)), – if Y = 0, draw At,h ∼ πt,h(·|Xt,h), otherwise draw At,h ∼ π0,h(·|Xt,h), – observe the reward rt,h(Xt,h, At,h). • Compute θ̂t,1, . . . , θ̂t,H−1, Zt+1. 3.2 The reward estimator We now turn to describing the reward estimators r̂t,h, which will require several further definitions. Specifically, a concept of key importance will be the following feature covariance matrix: Σt,h = Eπt [ϕ(Xt,h, At,h)ϕ(Xt,h, At,h)T] . Making sure that Σt,h is invertible, we can define the estimator θ̃t,h = Σ −1 t,hϕ(Xt,h, At,h)rt,h(Xt,h, At,h). (7) This estimate shares many similarities with the estimates that are broadly used in the literature on adversarial linear bandits [18, 3, 8]. It is easy to see that θ̃t,h is an unbiased estimate of θt,h: Et [ θ̃t,h ] = Et [ Σ−1t,hϕ(Xt,h, At,h)ϕ(Xt,h, , At,h) Tθt,h ] = Σ−1t,hΣt,hθt,h = θt,h. Unfortunately, exact computation of Σt,h is intractable. To address this issue, we propose a method to directly estimate the inverse of the covariance matrix Σt,h by adapting the Matrix Geometric Resampling method of Neu and Olkhovskaya [21] (which itself is originally inspired by the Geometric Resampling method of 19, 20). Our adaptation has two parameters β > 0 andM ∈ Z+, and generates an estimate of the inverse covariance matrix through the following procedure1: 1The version we present here is a naïve implementation, optimized for readability. We present a more practical variant in Appendix B Matrix Geometric Resampling Input: simulator of P , policy π̃t = (π̃t,1, . . . , π̃t,H−1). For i = 1, . . . ,M , repeat: 1. Simulate a trajectory τ(i) = {(X1(i), A1(i)), . . . , (XH−1(i), AH−1(i))}, following the policy π̃t in P , 2. For h = 1, . . . ,H − 1, repeat: Compute (a) Bi,h = ϕ(Xh(i), Ah(i))ϕ(Xh(i), Ah(i))T, (b) Ci,h = ∏i j=1(I − βBj,h). Return Σ̂+t,h = βI + β ∑M i=1 Ci,h for all h ∈ [H − 1]. Based on the above procedure, we finally define our estimator as θ̂t,h = Σ̂ + t,hϕ(Xt,h, At,h)rt,h(Xt,h, At,h). The idea of the estimate is based on the truncation of the Neumann-series expansion of the matrix Σ−1t,h at the M th order term. Then, for large enough M , the matrix Σ + t,h is a good estimator of the inverse covariance matrix, which will be quantified formally in the analysis. For more intuition on the estimate, see section 3.2. in Neu and Olkhovskaya [21]. With a careful implementation explained in Appendix B, θ̂t,h can be computed in O(MHKd) time, using M calls to the simulator. 3.3 The regret bound We are now ready to state our main result: a bound on the expected regret of ONLINE Q-REPS. 
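Before turning to the bound, here is a compact sketch of the Matrix Geometric Resampling estimator and of the resulting reward-parameter estimate described above (ours, not from the paper; the i.i.d. feature draws merely stand in for trajectories generated by the simulator, and all names are assumptions).

```python
import numpy as np

def mgr_inverse_estimate(sample_feature, beta, M, d):
    """Matrix Geometric Resampling: Sigma_plus = beta*I + beta * sum_i C_i, with C_i = prod_j (I - beta*B_j)."""
    total = np.eye(d)                    # accumulates I + sum_i C_i
    C = np.eye(d)                        # running product prod_j (I - beta * B_j)
    for i in range(M):
        phi = sample_feature(i)          # phi(X_h(i), A_h(i)) from an independent simulated trajectory
        B = np.outer(phi, phi)           # B_{i,h} = phi phi^T
        C = C @ (np.eye(d) - beta * B)
        total += C
    return beta * total

def reward_parameter_estimate(Sigma_plus, phi_obs, reward_obs):
    """theta_hat = Sigma_plus * phi(X_{t,h}, A_{t,h}) * r_{t,h}(X_{t,h}, A_{t,h}), as in Section 3.2."""
    return Sigma_plus @ phi_obs * reward_obs

# Toy usage with synthetic feature draws standing in for simulator roll-outs.
rng = np.random.default_rng(3)
d, M, beta = 4, 200, 0.05
draw = lambda i: rng.standard_normal(d) / np.sqrt(d)
Sigma_plus = mgr_inverse_estimate(draw, beta, M, d)
print(reward_parameter_estimate(Sigma_plus, draw(0), reward_obs=0.7))
```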
During the analysis, we will suppose that all the optimization problems solved by the algorithm are solved up to an additive error of ε ≥ 0. Furthermore, we will denote the covariance matrix generated by the uniform policy at layer h as Σ0,h, and make the following assumption:
Assumption 2. The eigenvalues of Σ0,h for all h are lower bounded by λmin > 0.
Our main result is the following guarantee regarding the performance of ONLINE Q-REPS:
Theorem 1. Suppose that the MDP satisfies Assumptions 1 and 2 with λmin > 0. Furthermore, suppose that, for all t, Zt satisfies Gt(Zt) ≤ min_Z Gt(Z) + ε for some ε ≥ 0. Then, for γ ∈ (0, 1), M ≥ 0, any positive η ≤ 1/(σ²β(M + 1)H) and any positive β ≤ 1/(2σ²√d (M + 1)), the expected regret of ONLINE Q-REPS over T episodes satisfies
RT ≤ 2TσRH · exp(−γβλminM) + γHT + ηTH(3 + 5d) + (1/η)D(µ∗‖µ0) + (1/α)DC(u∗‖µ0) + √(αε) · TH(1 + η(M + 1)²).
Furthermore, letting β = 1/(2σ²√d (M + 1)), M = ⌈σ⁴d log²(√(TH)σR)/(γ²λ²min)⌉, η = 1/√(TdH), α = 1/√(TdH) and γ = 1/√(TH), and supposing that T is large enough so that the above constraints on M, γ, η and β are satisfied, we also have
RT = Õ(√(dHT) · (1 + D(µ∗‖µ0) + DC(u∗‖µ0)) + √ε · (TH)^{9/4} d^{5/4} λ^{-4}_min).
Thus, when all optimization problems are solved up to precision ε = (TH)^{-7/2} d^{-3/2} λ⁸min, the regret of ONLINE Q-REPS is guaranteed to be of order O(√(dHT) · D(µ∗‖µ0)).
3.4 Implementation
While Proposition 1 establishes the form of the ideal policy updates πt through the solution of an unconstrained convex optimization problem, it is not obvious that this optimization problem can be solved efficiently. Indeed, one immediate challenge in optimizing Gt is that its gradient takes the form
∇Gt(Z) = ∑_{x,a} µ̃Z(x, a) [ϕ(x, a)ϕ(x, a)ᵀ − ∑_{x′,a′} P(x′|x, a)πZ(a′|x′)ϕ(x′, a′)ϕ(x′, a′)ᵀ],
where µ̃Z(x, a) = µ0(x, a) exp(η∆Z(x, a)) / ∑_{x′,a′} µ0(x′, a′) exp(η∆Z(x′, a′)). Sampling from this latter distribution (and thus obtaining unbiased estimators of ∇Gt(Z)) is problematic due to the intractable normalization constant.
This challenge can be addressed in a variety of ways. First, one can estimate the gradients via weighted importance sampling from the distribution µ̃Z and using these in a stochastic optimization procedure. This approach has been recently proposed and analyzed for an approximate implementation of REPS by Pacchiano et al. [27], who showed that it results in ε-optimal policy updates given polynomially many samples in 1/ε. Alternatively, one can consider an empirical counterpart of the loss function replacing the expectation with respect to µ0 with an empirical average over a number of i.i.d. samples drawn from the same distribution. The resulting loss function can then be optimized via standard stochastic optimization methods. This approach has been proposed and analyzed by Bas-Serrano et al. [5]. We describe the specifics of this latter approach in Appendix C.
4 Analysis
This section gives the proof of Theorem 1 by stating the main technical results as lemmas and putting them together to obtain the final bound. In the first part of the proof, we show the upper bound on the auxiliary regret minimization game with general reward inputs and ideal updates. Then, we relate this quantity to the true expected regret by taking into account the properties of our reward estimates and the optimization errors incurred when calculating the updates. The proofs of all the lemmas are deferred to Appendix A.
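Before going into the proof, we record the parameter choices prescribed by Theorem 1 in a small helper (a sketch only; the function and argument names are ours, and the constants follow the theorem statement).

```python
import numpy as np

def online_qreps_parameters(T, H, d, sigma, R, lam_min):
    """Parameter settings of Theorem 1; note that the theoretical M grows linearly in T*H."""
    gamma = 1.0 / np.sqrt(T * H)
    eta = 1.0 / np.sqrt(T * d * H)
    alpha = 1.0 / np.sqrt(T * d * H)
    M = int(np.ceil(sigma**4 * d * np.log(np.sqrt(T * H) * sigma * R) ** 2
                    / (gamma**2 * lam_min**2)))
    beta = 1.0 / (2 * sigma**2 * np.sqrt(d) * (M + 1))
    return dict(gamma=gamma, eta=eta, alpha=alpha, M=M, beta=beta)

print(online_qreps_parameters(T=10_000, H=10, d=20, sigma=1.0, R=1.0, lam_min=0.1))
```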
We start by defining the idealized updates (µ̂t, ût) obtained by solving the update steps in Equation (5) exactly, and we let ut be the occupancy measure induced by the policy πt that is based on the near-optimal parameters Zt satisfying Gt(Zt) ≤ min_Z Gt(Z) + ε. We will also let µt be the occupancy measure resulting from mixing ut with the exploratory distribution µ0, and note that µt,h = (1 − γ)ut,h + γµ0,h. Using this notation, we will consider an auxiliary online learning problem with the sequence of reward functions given as r̂t,h(x, a) = 〈ϕ(x, a), θ̂t,h〉, and study the performance of the idealized sequence (µ̂t, ût) therein:
R̂T = ∑_{t=1}^{T} ∑_{h=1}^{H−1} 〈µ∗h − ût,h, r̂t,h〉.
Our first lemma bounds the above quantity:
Lemma 1. Suppose that θ̂t,h is such that |η · 〈ϕ(x, a), θ̂t,h〉| < 1 holds for all x, a. Then, the auxiliary regret satisfies
R̂T ≤ η ∑_{t=1}^{T} ∑_{h=1}^{H−1} 〈µ̂t,h, r̂t,h²〉 + (1/η)D(µ∗‖µ0) + (1/α)DC(u∗‖µ0).
While the proof makes use of a general potential-based argument commonly used for analyzing FTRL-style algorithms, it involves several nontrivial elements exploiting the structural results concerning ONLINE Q-REPS proved in Proposition 1. In particular, these properties enable us to upper bound the potential differences in a particularly simple way. The main term contributing to the regret R̂T can be bounded as follows:
Lemma 2. Suppose that ϕ(Xt,h, a) satisfies ‖ϕ(Xt,h, a)‖2 ≤ σ for any a, 0 < β ≤ 1/(2σ²√d (M + 1)) and M > 0. Then, for each t and h,
Et[〈µ̂t,h, r̂t,h²〉] ≤ 3 + 5d + (M + 1)² ‖ût,h − ut,h‖1.
The proof of this claim makes heavy use of the fact that 〈µ̂t,h, r̂t,h²〉 = 〈ût,h, r̂t,h²〉, which is ensured by the construction of the reward estimator r̂t,h and the constraints on the feature covariance matrices in Equation (4). This property is not guaranteed to hold under the first-order constraints (2) used in the previous works of Neu and Pike-Burke [22] and Bas-Serrano et al. [5], which eventually justifies the higher complexity of our algorithm.
It remains to relate the auxiliary regret to the actual regret. The main challenge is accounting for the mismatch between µt and ut, and for the bias of r̂t, denoted as bt,h(x, a) = Et[r̂t,h(x, a)] − rt,h(x, a). To address these issues, we observe that for any t, h, we have
〈µt,h, rt,h〉 = 〈(1 − γ)ut,h + γµ0,h, rt,h〉 = 〈(1 − γ)ût,h + γµ0,h, rt,h〉 + (1 − γ)〈ut,h − ût,h, rt,h〉
≥ Et[〈(1 − γ)ût,h + γµ0,h, r̂t,h〉] − ‖bt,h‖∞ − (1 − γ)‖ut,h − ût,h‖1,
where in the last step we used the definition of the bias and the fact that ‖rt,h‖∞ ≤ 1. After straightforward algebraic manipulations, this implies that the regret can be bounded as
RT ≤ (1 − γ)E[R̂T] + ∑_{t=1}^{T} ∑_{h=1}^{H} E[γ〈µ0,h − µ∗h, rt,h〉 + ‖ût,h − ut,h‖1 + ‖bt,h‖∞]. (8)
In order to proceed, we need to verify the condition |η · 〈ϕ(x, a), θ̂t,h〉| < 1 so that we can apply Lemma 1 to bound R̂T. This is done in the following lemma:
Lemma 3. Suppose that η ≤ 1/(σ²β(M + 1)H). Then, for all t, h, the reward estimates satisfy η‖r̂t,h‖∞ < 1.
Proceeding under this condition on η, we can apply Lemma 1 to bound the first term on the right-hand side of Equation (8), giving
RT ≤ D(µ∗‖µ0)/η + DC(u∗‖µ0)/α + (3 + 5d)ηHT + γHT + ∑_{t,h} E[(η(M + 1)² + 1)‖ût,h − ut,h‖1 + ‖bt,h‖∞].
It remains to bound the bias of the reward estimators and the effect of the optimization errors that result in the mismatch between ut and ût. The following lemma shows that this mismatch can be directly controlled as a function of the optimization error:
Lemma 4. The following bound is satisfied for all t and h: ‖ût,h − ut,h‖1 ≤ √(2αε).
The final element in the proof is the following lemma that bounds the bias of the estimator:
Lemma 5. For M ≥ 0 and β chosen as in Theorem 1, we have ‖bt,h‖∞ ≤ σR exp(−γβλminM).
Putting these bounds together with the above derivations concludes the proof of Theorem 1.
5 Discussion
This paper studies the problem of online learning in MDPs, merging two important lines of work on this problem concerned with linear function approximation [17, 7] and bandit feedback with adversarial rewards [23, 25, 35]. Our results are the first in this setting and are not directly comparable with any previous work, although some favorable comparisons can be made with previous results in related settings. In the tabular setting where d = |X||A|, our bounds exactly recover the minimax optimal guarantees first achieved by the O-REPS algorithm of Zimin and Neu [35]. For realizable linear function approximation, the work closest to ours is that of Cai et al. [7], who prove bounds of order √(d²H³T), which is worse by a factor of √(dH) than our result. Their setting, however, is not exactly comparable to ours due to the different assumptions about the feedback on the rewards and the knowledge of the transition function.
One particular strength of our work is providing a complete analysis of the propagation of optimization errors incurred while performing the updates. This is indeed a unique contribution in the related literature, where the effect of such errors typically goes unaddressed. Specifically, the algorithms of Zimin and Neu [35], Rosenberg and Mansour [30], and Jin et al. [16] are all based on solving convex optimization problems similar to ours, yet neither the effect of optimization errors nor potential methods for solving these optimization problems are discussed at all. That said, we believe that the methods for calculating the updates discussed in Section 3.4 are far from perfect, and more research will be necessary to find truly practical optimization methods for this problem.
The most important open question we leave behind concerns the requirement of full prior knowledge of P. In the tabular case, this challenge has been successfully addressed in the adversarial MDP problem recently by Jin et al. [16], whose technique is based on adjusting the constraints (1) with a confidence set over the transition functions, to account for the uncertainty about the dynamics. We find it plausible that a similar extension of ONLINE Q-REPS is possible by incorporating a confidence set for linear MDPs, as has been done in the case of i.i.d. rewards by Neu and Pike-Burke [22]. Nevertheless, the details of such an extension remain highly non-trivial, and we leave the challenge of working them out open for future work.
1. What is the focus and contribution of the paper regarding online learning in Markov decision processes? 2. What are the strengths of the proposed approach, particularly in its theoretical foundations? 3. What are the weaknesses or unclear aspects of the paper, especially in its presentation and formulation? 4. Do you have any concerns about the algorithm's ability to achieve a vanishing regret? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper presents a theoretical approach for online learning in Markov decision processes (MDPs) with time-varying adversarial rewards and infinite, and possibly continuous, state spaces. The proposed approach is based on linear assumptions on the structure of the MDP and approximate solutions of a linear programming problem. Theoretical regret bounds and discussions on the implementation of a tractable algorithmic solution are provided. Review The paper is mostly well written and presents the solution to a problem relatively unexplored in the literature: online learning in MDPs with adversarial rewards over infinite/continuous state spaces. The presentation of the method is mostly rigorous in its mathematical formulations with clear theoretical justifications. However, the paper still has a few remaining clarity issues, which I elaborate below. Issues: The first half of the introduction lacks citations/references to back up some of the claims (e.g., lines 21 and 24). The second half of the introduction (line 41 onwards) could be labelled as a "related work" section to better structure the paper. K is introduced without prior definition in line 52. The notation O ~ is not defined, though it's commonly applied to indicate suppression of log-factors in the big- O notation. In the preliminaries section, the reward function is first introduced as if it was stationary (see line 97), but it is later on considered to be varying across time step and episode. It is not totally clear whether the algorithm can achieve a vanishing regret as T → ∞ , since this would imply that both η → 0 and α → 0 , i.e., the optimisation problem in Theorem 1 would have to be solved up to arbitrary precision. Could the authors comment on that?
NIPS
Title Online learning in MDPs with linear function approximation and bandit feedback. Abstract We consider the problem of online learning in an episodic Markov decision process, where the reward function is allowed to change between episodes in an adversarial manner and the learner only observes the rewards associated with its actions. We assume that rewards and the transition function can be represented as linear functions in terms of a known low-dimensional feature map, which allows us to consider the setting where the state space is arbitrarily large. We also assume that the learner has a perfect knowledge of the MDP dynamics. Our main contribution is developing an algorithm whose expected regret after T episodes is bounded by Õ (√ dHT ) , where H is the number of steps in each episode and d is the dimensionality of the feature map. N/A (√ dHT ) , where H is the number of steps in each episode and d is the dimensionality of the feature map. 1 Introduction We study the problem of online learning in episodic Markov Decision Processes (MDP), modelling a sequential decision making problem where the interaction between a learner and its environment is divided into T episodes of fixed length H . At each time step of the episode, the learner observes the current state of the environment, chooses one of the K available actions, and earns a reward. Consequently, the state of the environment changes according to the transition function of the underlying MDP, as a function of the previous state and the action taken by the learner. A key distinguishing feature of our setting is that we assume that the reward function can change arbitrarily between episodes, and the learner only has access to bandit feedback: instead of being able to observe the reward function at the end of the episode, the learner only gets to observe the rewards that it actually received. As traditional in this line of work, we aim to design algorithms for the learner with theoretical guarantees on her regret, which is the difference between the total reward accumulated by the learner and the total reward of the best stationary policy fixed in hindsight. Unlike most previous work on this problem, we allow the state space to be very large and aim to prove performance guarantees that do not depend on the size of the state space, bringing theory one step closer to practical scenarios where assuming finite state spaces is unrealistic. To address the challenge of learning in large state spaces, we adopt the classic RL technique of using linear function approximation and suppose that we have access to a relatively low-dimensional feature map that can be used to represent policies and value functions. We will assume that the feature map is expressive enough so that all action-value functions can be expressed as linear functions of the features, and that the learner has full knowledge of the transition function of the MDP. Our main contribution is designing a computationally efficient algorithm called ONLINE Q-REPS, and prove that in the setting described above, its regret is at most O (√ dHTD (µ∗‖µ0) ) , where d is the dimensionality of the feature map and D (µ∗‖µ0) is the relative entropy between the state-action distribution µ∗ induced by the optimal policy and an initial distribution µ0 given as input to the 35th Conference on Neural Information Processing Systems (NeurIPS 2021). algorithm. 
Notably, our results do not require the likelihood ratio between these distributions to be uniformly bounded, and the bound shows no dependence on the eigenvalues of the feature covariance matrices. Our algorithm itself requires solving a d2-dimensional convex optimization problem at the beginning of each episode, which can be solved to arbitrary precision ε in time polynomial in d and 1/ε, independently of the size of the state-action space. Our work fits into a long line of research considering online learning in Markov decision processes. The problem of regret minimization in stationary MDPs with a fixed reward function has been studied extensively since the work of Burnetas and Katehakis [6], Auer and Ortner [2], Tewari and Bartlett [31], Jaksch et al. [14], with several important advances made in the past decade [9, 10, 4, 13, 15]. While most of these works considered small finite state spaces, the same techniques have been very recently extended to accommodate infinite state spaces under the assumption of realizable function approximation by Jin et al. [17] and Yang and Wang [33]. In particular, the notion of linear MDPs introduced by Jin et al. [17] has become a standard model for linear function approximation and has been used in several recent works (e.g., 22, 32, 1). Even more relevant is the line of work considering adversarial rewards, initiated by Even-Dar et al. [12], who consider online learning in continuing MDPs with full feedback about the rewards. They proposed a MDP-E algorithm, that achieves O(τ2 √ T logK) regret, where τ is an upper bound on the mixing time of the MDP. Later, Neu et al. [25] proposed an algorithm which guarantees Õ (√ τ3KT/α ) regret with bandit feedback, essentially assuming that all states are reachable with probability α > 0 under all policies. In our work, we focus on episodic MDPs with a fixed episode length H . The setting was first considered in the bandit setting by Neu et al. [23], who proposed an algorithm with a regret bound of O(H2 √ TK/α). Although the number of states does not appear explicitly in the bound, the regret scales at least linearly with the size of the state space X , since |X | ≤ H/α. Later work by Zimin and Neu [35], Dick et al. [11] eliminated the dependence on α and proposed an algorithm achieving Õ( √ TH|X |K) regret. Regret bounds for the full-information case without prior knowledge of the MDP were achieved by Neu et al. [24] and Rosenberg and Mansour [30], of order Õ(H|X |K √ T ) and Õ(H|X | √ KT ), respectively. These results were recently extended to handle bandit feedback about the rewards by Jin et al. [16], ultimately resulting in a regret bound of Õ(H|X | √ KT ). As apparent from the above discussion, all work on online learning in MDPs with adversarial rewards considers finite state spaces. The only exception we are aware of is the recent work of Cai et al. [7], whose algorithm OPPO is guaranteed to achieve Õ (√ d3H3T ) , assuming that the learner has access to d-dimensional features that can perfectly represent all action-value functions. While Cai, Yang, Jin, and Wang [7] remarkably assumed no prior knowledge of the MDP parameters, their guarantees are only achieved in the full-information case. This is to be contrasted with our results that are achieved for the much more restrictive bandit setting, albeit with the stronger assumption of having full knowledge of the underlying MDP, as required by virtually all prior work in the bandit setting, with the exception of Jin et al. [16]. 
Our results are made possible by a careful combination of recently proposed techniques for contextual bandit problems and optimal control in Markov decision processes. In particular, a core component of our algorithm is a regularized linear programming formulation of optimal control in MDPs due to Bas-Serrano et al. [5], which allows us to reduce the task of computing near-optimal policies in linear MDPs to a low-dimensional convex optimization problem. A similar algorithm design has been previously used for tabular MDPs by Zimin and Neu [35], Dick et al. [11], with the purpose of removing factors of 1/α from the previous state-of-the-art bounds of Neu et al. [23]. Analogously to this improvement, our methodology enables us to make strong assumptions on problem-dependent constants like likelihood ratios between µ∗ and µ0 or eigenvalues of the feature covariance matrices. Another important building block of our method is a version of the recently proposed Matrix Geometric Resampling procedure of Neu and Olkhovskaya [21] that enables us to efficiently estimate the reward functions. Incorporating these estimators in the algorithmic template of Bas-Serrano et al. [5] is far from straightforward and requires several subtle adjustments. Notation. We use 〈·, ·〉 to denote inner products in Euclidean space and by ‖·‖ we denote the Euclidean norm for vectors and the operator norm for matrices. For a symmetric positive definite matrix A, we use λmin(A) to denote its smallest eigenvalue. We write tr (A) for the trace of a matrix A and use A < 0 to denote that an operator A is positive semi-definite, and we use A < B to denote A−B < 0. For a d-dimensional vector v, we denote the corresponding d× d diagonal matrix by diag(v). For a positive integer N , we use [N ] to denote the set of positive integers {1, 2, . . . , N}. Finally, we will denote the set of all probability distributions over any set X by ∆X . 2 Preliminaries An episodic Markovian Decision Process (MDP), denoted by M = (X ,A, H, P, r) is defined by a state space X , action space A, episode length H ∈ Z+, transition function P : X ×A → ∆X and a reward function r : X ×A → [0, 1]. For convenience, we will assume that both X and A are finite sets, although we allow the state space X to be arbitrarily large. Without significant loss of generality, we will assume that the set of available actions is the same A in each state, with cardinality |A| = K. Furthermore, without any loss of generality, we will assume that the MDP has a layered structure, satisfying the following conditions: • The state set X can be decomposed into H disjoint sets: X = ∪Hh=1Xh, • X1 = {x1} and XH = {xH} are singletons, • transitions are only possible between consecutive layers, that is, for any xh ∈ Xh, the distribution P (·|x, a) is supported on Xh+1 for all a and h ∈ [H − 1]. These assumptions are common in the related literature (e.g., 23, 35, 30) and are not essential to our analysis; their primary role is simplifying our notation. In the present paper, we consider an online learning problem where the learner interacts with its environment in a sequence of episodes t = 1, 2, . . . , T , facing a different reward functions rt,1, . . . rt,H+1 selected by a (possibly adaptive) adversary at the beginning of each episode t. Oblivious to the reward function chosen by the adversary, the learner starts interacting with the MDP in each episode from the initial state Xt,1 = x1. 
1. What is the focus of the paper regarding the adversarial linear bandit problem? 2. What are the strengths of the proposed algorithm, particularly in its computational efficiency? 3. What are the weaknesses or concerns regarding the assumption of known dynamics and its impact on the resampling procedure? 4. How does the reviewer assess the efficiency of the minimization problem solution in the algorithm? 5. Why does the reviewer question the choice of using a pair (mu, u) as the decision variable?
Summary Of The Paper Review
Summary Of The Paper The authors study the adversarial linear bandit problem under bandit feedback. When the dynamics is known, the authors propose a computationally efficient algorithm to achieve low regret. Review I have read an earlier version of this paper, where the authors assume access to a simulator to conduct matrix geometric resampling without assuming known dynamics. Although assuming knowing the dynamics sounds weaker, I actually like the new version since the assumption is clear and more reasonable -- Actually, I think assuming a simulator in an online problem with adversarial reward sounds super wired. Also I think authors may want to add more discussion to highlight the improvement in the new version, which would be very helpful for reviewers like me. I have some other comments: When assuming the known dynamics, in principle we can compute the "covariance matrix" explicitly. The authors claim this is hard. They may want to add some explanation to at least convince the reader this is hard, which is not clear to me at first glance. This will also highlight the significance of the resampling procedure. When finding Z_t, a minimization problem (6) needs to be solved. It would be great if the authors can add some intuition in the main text why this can be done in a computationally efficient manner. In particular, since it contains a summation of S terms and S could be very large, why the overall complexity only depends on d? The decision variable used is a pair (mu,u). Why using this pair is superior to only using mu? Since both of them are constrained by some linear constraints, I can't see intuitively this is the case.
NIPS
Title Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula Abstract Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression for the mutual information has been proposed using heuristic statistical physics computations, and proven in few specific cases. Here, we show how to rigorously prove the conjectured formula for the symmetric rank-one case. This allows to express the minimal mean-square-error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal. There exists, however, a gap between what currently known polynomial algorithms can do and what is expected information theoretically. Additionally, the proof technique has an interest of its own and exploits three essential ingredients: the interpolation method introduced in statistical physics by Guerra, the analysis of the approximate message-passing algorithm and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in statistical estimation where heuristic statistical physics predictions are available.
Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels. This is made possible by a theorem of channel universality [9] (already proven for community detection in [4] and conjectured in [10]). This theorem states that given an output channel Pout(w|y), such that (s.t) logPout(w|y=0) is three times differentiable with bounded second and third derivatives, then the MI satisfies I(S; W)= I(S; SS/ √ n+Z √ ∆)+O( √ n), where ∆ is the inverse Fisher information (evaluated at y=0) of the output channel: ∆−1 := EPout(w|0)[(∂y logPout(W |y)|y=0)]. Informally, this means that we only have to compute the MI for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions P0, an explicit one-letter formula for the MI per variable I(S; W)/n in the asymptotic limit n→∞. Main result: Our central result is a proof of the expression for the asymptotic n→∞MI per variable via the so-called replica symmetric (RS) potential iRS(E; ∆) defined as iRS(E; ∆) := (v − E) + v N/A Consider the following probabilistic rank-one matrix estimation problem: one has access to noisy observations w = (wij)ni,j=1 of the pair-wise product of the components of a vector s = (s1, . . . , sn)ᵀ ∈ Rn with i.i.d components distributed as Si ∼ P0, i = 1, . . . , n. The entries of w are observed through a noisy element-wise (possibly non-linear) output probabilistic channel Pout(wij |sisj/ √ n). The goal is to estimate the vector s from w assuming that both P0 and Pout are known and independent of n (noise is symmetric so that wij =wji). Many important problems in statistics and machine learning can be expressed in this way, such as sparse PCA [1], the Wigner spiked model [2, 3], community detection [4] or matrix completion [5]. Proving a result initially derived by a heuristic method from statistical physics, we give an explicit expression for the mutual information (MI) and the information theoretic minimal mean-square-error (MMSE) in the asymptotic n→∞ limit. Our results imply that for a large region of parameters, the posterior marginal expectations of the underlying signal components (often assumed intractable 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. to compute) can be obtained in the leading order in n using a polynomial-time algorithm called approximate message-passing (AMP) [6, 3, 4, 7]. We also demonstrate the existence of a region where both AMP and spectral methods [8] fail to provide a good answer to the estimation problem, while it is nevertheless information theoretically possible to do so. We illustrate our theorems with examples and also briefly discuss the implications in terms of computational complexity. 1 Setting and main results The additive white Gaussian noise setting: A standard and natural setting is the case of additive white Gaussian noise (AWGN) of known variance ∆, wij =sisj/ √ n+zij √ ∆, where z=(zij)ni,j=1 is a symmetric matrix with i.i.d entries Zij ∼N (0, 1), 1≤ i≤ j≤n. Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels. 
This is made possible by a theorem of channel universality [9] (already proven for community detection in [4] and conjectured in [10]). This theorem states that given an output channel Pout(w|y), such that (s.t) logPout(w|y=0) is three times differentiable with bounded second and third derivatives, then the MI satisfies I(S; W)= I(S; SSᵀ/ √ n+Z √ ∆)+O( √ n), where ∆ is the inverse Fisher information (evaluated at y=0) of the output channel: ∆−1 := EPout(w|0)[(∂y logPout(W |y)|y=0)2]. Informally, this means that we only have to compute the MI for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions P0, an explicit one-letter formula for the MI per variable I(S; W)/n in the asymptotic limit n→∞. Main result: Our central result is a proof of the expression for the asymptotic n→∞MI per variable via the so-called replica symmetric (RS) potential iRS(E; ∆) defined as iRS(E; ∆) := (v − E)2 + v2 4∆ − ES,Z [ ln (∫ dxP0(x)e − x2 2Σ(E;∆)2 +x ( S Σ(E;∆)2 + Z Σ(E;∆) ))] , (1) with Z∼N (0, 1), S∼P0, E[S2]=v and Σ(E; ∆)2 :=∆/(v−E), E∈ [0, v]. Here we will assume that P0 is a discrete distribution over a finite bounded real alphabet P0(s)= ∑ν α=1 pαδ(s−aα). Thus the only continuous integral in (1) is the Gaussian over z. Our results can be extended to mixtures of discrete and continuous signal distributions at the expense of technical complications in some proofs. It turns out that both the information theoretic and algorithmic AMP thresholds are determined by the set of stationary points of (1) (w.r.t E). It is possible to show that for all ∆>0 there always exist at least one stationary minimum. Note E=0 is never a stationary point (except for P0 a single Dirac mass) and E= v is stationary only if E[S] = 0. In this contribution we suppose that at most three stationary points exist, corresponding to situations with at most one phase transition. We believe that situations with multiple transitions can also be covered by our techniques. Theorem 1.1 (RS formula for the mutual information) Fix ∆>0 and let P0 be a discrete distribution s.t (1) has at most three stationary points. Then limn→∞ I(S; W)/n=minE∈[0,v] iRS(E; ∆). The proof of the existence of the limit does not require the above hypothesis on P0. Also, it was first shown in [9] that for all n, I(S; W)/n≤minE∈[0,v] iRS(E; ∆), an inequality that we will use in the proof section. It is conceptually useful to define the following threshold: Definition 1.2 (Information theoretic threshold) Define ∆Opt as the first non-analyticity point of the MI as ∆ increases: ∆Opt :=sup{∆| limn→∞ I(S; W)/n is analytic in ]0,∆[}. When P0 is s.t (1) has at most three stationary points, as discussed below, then minE∈[0,v] iRS(E; ∆) has at most one non-analyticity point denoted ∆RS (if minE∈[0,v] iRS(E; ∆) is analytic over all R+ we set ∆RS =∞). Theorem 1.1 gives us a mean to compute the information theoretic threshold ∆Opt =∆RS. A basic application of theorem 1.1 is the expression of the MMSE: Corollary 1.3 (Exact formula for the MMSE) For all ∆ 6= ∆RS, the matrix-MMSE Mmmsen := ES,W[‖SSᵀ − E[XXᵀ|W]‖2F]/n2 (‖ − ‖F being the Frobenius norm) is asymptotically limn→∞Mmmsen(∆ −1) = v2−(v−argminE∈[0,v]iRS(E; ∆))2. Moreover, if ∆<∆AMP (where ∆AMP is the algorithmic threshold, see definition 1.4) or ∆>∆RS, then the usual vector-MMSE Vmmsen :=ES,W[‖S−E[X|W]‖22]/n satisfies limn→∞Vmmsen=argminE∈[0,v]iRS(E; ∆). 
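To make Theorem 1.1 and Corollary 1.3 concrete, the following minimal numerical sketch (ours, not code from the paper; the prior, the grid and the quadrature order are arbitrary choices) evaluates the RS potential (1) for the Rademacher prior P0 = (delta_{+1} + delta_{-1})/2, for which v = 1, the inner integral equals exp(-1/(2 Sigma^2)) cosh(S/Sigma^2 + Z/Sigma), and by symmetry one may fix S = 1; the asymptotic matrix-MMSE is then read off from the minimizer.

import numpy as np

z_nodes, z_weights = np.polynomial.hermite_e.hermegauss(81)
z_weights = z_weights / np.sqrt(2 * np.pi)              # quadrature for E_Z[.], Z ~ N(0,1)

def i_rs(E, Delta, v=1.0):
    # Replica-symmetric potential (1), specialized to the Rademacher prior.
    Sigma2 = Delta / (v - E)
    h = 1.0 / Sigma2 + z_nodes / np.sqrt(Sigma2)        # S/Sigma^2 + Z/Sigma with S = 1
    log_int = -1.0 / (2.0 * Sigma2) + np.logaddexp(h, -h) - np.log(2.0)   # ln[exp(-1/(2 Sigma^2)) cosh(h)]
    return ((v - E) ** 2 + v ** 2) / (4.0 * Delta) - np.sum(z_weights * log_int)

def matrix_mmse(Delta, v=1.0, grid=2000):
    # Corollary 1.3: asymptotic matrix-MMSE = v^2 - (v - argmin_E i_RS(E; Delta))^2.
    Es = np.linspace(1e-6, v - 1e-6, grid)
    E_star = Es[np.argmin([i_rs(E, Delta, v) for E in Es])]
    return v ** 2 - (v - E_star) ** 2

for Delta in (0.5, 0.9, 1.1, 2.0):
    print(f"Delta = {Delta:.2f}   matrix-MMSE -> {matrix_mmse(Delta):.3f}")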
It is natural to conjecture that the vector-MMSE is given by argminE∈[0,v]iRS(E; ∆) for all ∆ 6=∆RS, but our proof does not quite yield the full statement. A fundamental consequence concerns the performance of the AMP algorithm [6] for estimating s. AMP has been analysed rigorously in [11, 12, 4] where it is shown that its asymptotic performance is tracked by state evolution (SE). Let Et :=limn→∞ ES,Z[‖S− ŝt‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝt at time t. Define mmse(Σ−2) :=ES,Z [(S−E[X|S+ΣZ])2] as the usual scalar mmse function associated to a scalar AWGN channel of noise variance Σ2, with S∼P0 and Z∼N (0, 1). Then Et+1 = mmse(Σ(Et; ∆)−2), E0 = v, (2) is the SE recursion. Monotonicity properties of the mmse function imply that Et is a decreasing sequence s.t limt→∞Et=E∞ exists. Note that when E[S] = 0 and v is an unstable fixed point, as such, SE “does not start”. While this is not really a problem when one runs AMP in practice, for analysis purposes one can slightly bias P0 and remove the bias at the end of the proofs. Definition 1.4 (AMP algorithmic threshold) For ∆ > 0 small enough, the fixed point equation corresponding to (2) has a unique solution for all noise values in ]0,∆[. We define ∆AMP as the supremum of all such ∆. Corollary 1.5 (Performance of AMP) In the limit n→∞, AMP initialized without any knowledge other than P0 yields upon convergence the asymptotic matrix-MMSE as well as the asymptotic vector-MMSE iff ∆<∆AMP or ∆>∆RS, namely E∞=argminE∈[0,v]iRS(E; ∆). ∆AMP can be read off the replica potential (1): by differentiation of (1) one finds a fixed point equation that corresponds to (2). Thus ∆AMP is the smallest solution of ∂iRS/∂E=∂2iRS/∂E2 =0; in other words it is the “first” horizontal inflexion point appearing in iRS(E; ∆) when ∆ increases. Discussion: With our hypothesis on P0 there are only three possible scenarios: ∆AMP < ∆RS (one “first order” phase transition); ∆AMP = ∆RS < ∞ (one “higher order” phase transition); ∆AMP = ∆RS =∞ (no phase transition). In the sequel we will have in mind the most interesting case, namely one first order phase transition, where we determine the gap between the algorithmic AMP and information theoretic performance. The cases of no phase transition or higher order phase transition, which present no algorithmic gap, are basically covered by the analysis of [3] and follow as a special case from our proof. The only cases that would require more work are those where P0 is s.t (1) develops more than three stationary points and more than one phase transition is present. For ∆AMP<∆RS the structure of stationary points of (1) is as follows1 (figure 1). There exist three branchesEgood(∆), Eunstable(∆) andEbad(∆) s.t: 1) For 0<∆<∆AMP there is a single stationary point Egood(∆) which is a global minimum; 2) At ∆AMP a horizontal inflexion point appears, for ∆∈ [∆AMP,∆RS] there are three stationary points satisfying Egood(∆AMP)<Eunstable(∆AMP)= Ebad(∆AMP), Egood(∆) < Eunstable(∆) < Ebad(∆) otherwise, and moreover iRS(Egood; ∆) ≤ iRS(Ebad; ∆) with equality only at ∆RS; 3) for ∆ > ∆RS there is at least the stationary point Ebad(∆) which is always the global minimum, i.e. iRS(Ebad; ∆)<iRS(Egood; ∆). (For higher ∆ the Egood(∆) and Eunstable(∆) branches may merge and disappear); 4) Egood(∆) is analytic for ∆∈]0,∆′[, ∆′>∆RS, and Ebad(∆) is analytic for ∆>∆AMP. We note for further use in the proof section that E∞=Egood(∆) for ∆<∆AMP and E∞=Ebad(∆) for ∆>∆AMP. Definition 1.4 is equivalent to ∆AMP = sup{∆|E∞=Egood(∆)}. 
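The recursion (2) is straightforward to iterate numerically. The sketch below (ours; the prior, rho, the Delta grid and the iteration count are illustrative choices) runs state evolution for the Bernoulli prior Si ~ Ber(rho) of the Wigner spiked model treated in the next section; scanning Delta and watching where the fixed point E_infty jumps locates Delta_AMP of Definition 1.4.

import numpy as np

z, qw = np.polynomial.hermite_e.hermegauss(81)
qw = qw / np.sqrt(2 * np.pi)                            # quadrature for E_Z[.], Z ~ N(0,1)

def scalar_mmse(snr, rho):
    # mmse(snr) = E[(S - E[X | S + Z/sqrt(snr)])^2] for P0 = (1-rho) delta_0 + rho delta_1.
    snr = max(snr, 1e-12)
    val = 0.0
    for s, p_s in ((0.0, 1.0 - rho), (1.0, rho)):
        y = s + z / np.sqrt(snr)
        ratio = rho * np.exp((y - 0.5) * snr)           # rho * N(y; 1, 1/snr) / N(y; 0, 1/snr)
        val += p_s * np.sum(qw * (s - ratio / (1.0 - rho + ratio)) ** 2)
    return val

def se_fixed_point(Delta, rho, iters=2000):
    E = rho                                             # E_0 = v = E[S^2] = rho
    for _ in range(iters):
        E = scalar_mmse((rho - E) / Delta, rho)         # E_{t+1} = mmse(Sigma(E_t; Delta)^{-2})
    return E

rho = 0.02
for Delta in np.linspace(1e-4, 1e-3, 7):
    print(f"Delta = {Delta:.2e}   E_infty ~ {se_fixed_point(Delta, rho):.5f}")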
Moreover we will also use that iRS(Egood; ∆) is analytic on ]0,∆′[, iRS(Ebad; ∆) is analytic on ]∆AMP,∞[, and the only non-analyticity point of minE∈[0,v] iRS(E; ∆) is at ∆RS. Relation to other works: Explicit single-letter characterization of the MI in the rank-one problem has attracted a lot of attention recently. Particular cases of theorem 1.1 have been shown rigorously in a number of situations. A special case when si=±1∼Ber(1/2) already appeared in [13] where an equivalent spin glass model is analysed. Very recently, [9] has generalized the results of [13] and, notably, obtained a generic matching upper bound. The same formula has been also rigorously computed following the study of AMP in [3] for spiked models (provided, however, that the signal was not too sparse) and in [4] for strictly symmetric community detection. 1We take E[S] 6= 0. Once theorem 1.1 is proven for this case a limiting argument allows to extend it to E[S]=0. For rank-one symmetric matrix estimation problems, AMP has been introduced by [6], who also computed the SE formula to analyse its performance, generalizing techniques developed by [11] and [12]. SE was further studied by [3] and [4]. In [7, 10], the generalization to larger rank was also considered. The general formula proposed by [10] for the conditional entropy and the MMSE on the basis of the heuristic cavity method from statistical physics was not demonstrated in full generality. Worst, all existing proofs could not reach the more interesting regime where a gap between the algorithmic and information theoretic perfomances appears, leaving a gap with the statistical physics conjectured formula (and rigorous upper bound from [9]). Our result closes this conjecture and has interesting non-trivial implications on the computational complexity of these tasks. Our proof technique combines recent rigorous results in coding theory along the study of capacityachieving spatially coupled codes [14, 15, 16, 17] with other progress, coming from developments in mathematical physics putting on a rigorous basis predictions of spin glass theory [18]. From this point of view, the theorem proved in this paper is relevant in a broader context going beyond low-rank matrix estimation. Hundreds of papers have been published in statistics, machine learning or information theory using the non-rigorous statistical physics approach. We believe that our result helps setting a rigorous foundation of a broad line of work. While we focus on rank-one symmetric matrix estimation, our proof technique is readily extendable to more generic low-rank symmetric matrix or low-rank symmetric tensor estimation. We also believe that it can be extended to other problems of interest in machine learning and signal processing, such as generalized linear regression, features/dictionary learning, compressed sensing or multi-layer neural networks. 2 Two examples: Wigner spiked model and community detection In order to illustrate the consequences of our results we shall present two examples. Wigner spiked model: In this model, the vector s is a Bernoulli random vector, Si∼Ber(ρ). For large enough densities (i.e. ρ>0.041(1)), [3] computed the matrix-MMSE and proved that AMP is a computationally efficient algorithm that asymptotically achieves the matrix-MMSE for any value of the noise ∆. 
Our results allow to close the gap left open by [3]: on one hand we now obtain rigorously the MMSE for ρ≤ 0.041(1), and on the other one we observe that for such values of ρ, and as ∆ decreases, there is a small region where two local minima coexist in iRS(E; ∆). In particular for ∆AMP<∆<∆Opt = ∆RS the global minimum corresponding to the MMSE differs from the local one that traps AMP, and a computational gap appears (see figure 1). While the region where AMP is Bayes optimal is quite large, the region where is it not, however, is perhaps the most interesting one. While this is by no means evident, statistical physics analogies with physical phase transitions in nature suggest that this region should be hard for a very broad class of algorithms. For small ρ our results are consistent with the known optimal and algorithmic thresholds predicted in sparse PCA [19, 20], that treats the case of sub-extensive ρ=O(1) values. Another interesting line of work for such probabilistic models appeared in the context of random matrix theory (see [8] and references therein) and predicts that a sharp phase transition occurs at a critical value of the noise ∆spectral =ρ2 below which an outlier eigenvalue (and its principal eigenvector) has a positive correlation with the hidden signal. For larger noise values the spectral distribution of the observation is indistinguishable from that of the pure random noise. Asymmetric balanced community detection: We now consider the problem of detecting two communities (groups) with different sizes ρn and (1− ρ)n, that generalizes the one considered in [4]. One is given a graph where the probability to have a link between nodes in the first group is p+µ(1−ρ)/(ρ √ n), between those in the second group is p+µρ/( √ n(1−ρ)), while interconnections appear with probability p−µ/ √ n. With this peculiar “balanced” setting, the nodes in each group have the same degree distribution with mean pn, making them harder to distinguish. According to the universality property described in the first section, this is equivalent to a model with AWGN of variance ∆ = p(1−p)/µ2 where each variable si is chosen according to P0(s)=ρδ(s− √ (1−ρ)/ρ)+(1−ρ)δ(s+ √ ρ/(1−ρ)). Our results for this problem2 are summarized on the right hand side of figure 2. For ρ > ρc = 1/2− √ 1/12 (black point), it is asymptotically information theoretically possible to get an estimation better than chance if and only if ∆<1. When ρ<ρc, however, it becomes possible for much larger values of the noise. Interestingly, AMP and spectral methods have the same transition and can find a positive correlation with the hidden communities for ∆<1, regardless of the value of ρ. Again, a region [∆AMP,∆Opt =∆RS] exists where a computational gap appears when ρ<ρc. One can investigate the very low ρ regime where we find that the information theoretic transition goes as ∆Opt(ρ→0) = 1/(4ρ| log ρ|). Now if we assume that this result stays true even for ρ= O(1) (which is a speculation at this point), we can choose µ→(1−p)ρ √ n such that the small group is a clique. Then the problem corresponds to a “balanced” version of the famous planted clique problem [21]. We find that the AMP/spectral approach finds the 2Note that here since E=v=1 is an extremum of iRS(E; ∆), one must introduce a small bias in P0 and let it then tend to zero at the end of the proofs. hidden clique when it is larger than √ np/(1−p), while the information theoretic transition translates into size of the clique 4p log(n)/(1−p). 
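For illustration (ours; the parameter values are arbitrary), the reduction to an equivalent AWGN problem stated above and the two clique-size scales can be checked numerically as follows.

import numpy as np

def equivalent_awgn(p, mu, rho):
    # Effective noise and two-point prior of the equivalent AWGN rank-one problem.
    Delta = p * (1.0 - p) / mu ** 2
    a_small = np.sqrt((1.0 - rho) / rho)        # value of s_i in the group of size rho*n
    a_large = -np.sqrt(rho / (1.0 - rho))       # value of s_i in the group of size (1-rho)*n
    return Delta, (a_small, rho), (a_large, 1.0 - rho)

print(equivalent_awgn(p=0.5, mu=2.0, rho=0.3))  # here rho > rho_c, so detection beats chance iff Delta < 1

# The two clique-size scales quoted above, for n = 10^6 and p = 1/2:
n, p = 10 ** 6, 0.5
print(np.sqrt(n * p / (1.0 - p)))               # ~1000: AMP / spectral scale
print(4.0 * p * np.log(n) / (1.0 - p))          # ~55: information-theoretic scale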
This is indeed reminiscent of the more classical planted clique problem at p=1/2 with its gap between log(n) (information theoretic), √ n/e (AMP [22]) and √ n (spectral [21]). Since in our balanced case the spectral and AMP limits match, this suggests that the small gain of AMP in the standard clique problem is simply due to the information provided by the distribution of local degrees in the two groups (which is absent in our balanced case). We believe this correspondence strengthens the claim that the AMP gap is actually a fundamental one. 3 Proofs The crux of our proof rests on an auxiliary “spatially coupled system”. The hallmark of spatially coupled models is that one can tune them so that the gap between the algorithmic and information theoretic limits is eliminated, while at the same time the MI is maintained unchanged for the coupled and original models. Roughly speaking, this means that it is possible to algorithmically compute the information theoretic limit of the original model because a suitable algorithm is optimal on the coupled system. The present spatially coupled construction is similar to the one used for the coupled Curie-Weiss model [14]. Consider a ring of length L+1 (L even) with blocks positioned at µ∈{0, . . . , L} and coupled to neighboring blocks {µ−w, . . . , µ+w}. Positions µ are taken modulo L+1 and the integer w∈{0, . . . , L/2} equals the size of the coupling window. The coupled model is wiµjν = siµsjν √ Λµν n + ziµjν √ ∆, (3) where the index iµ∈{1, . . . , n} (resp. jν) belongs to the block µ (resp. ν) along the ring, Λ is an (L+1)×(L+1) matrix which describes the strength of the coupling between blocks, andZiµjν ∼N (0, 1) are i.i.d. For the proof to work, the matrix elements have to be chosen appropriately. We assume that: i) Λ is a doubly stochastic matrix; ii) Λµν depends on |µ−ν|; iii) Λµν is not vanishing for |µ−ν| ≤ w and vanishes for |µ−ν|>w; iv) Λ is smooth in the sense |Λµν−Λµ+1ν |=O(w−2); v) Λ has a non-negative Fourier transform. All these conditions can easily be met, the simplest example being a triangle of base 2w+1 and height 1/(w+1). The construction of the coupled system is completed by introducing a seed in the ring: we assume perfect knowledge of the signal components {siµ} for µ∈B :={−w−1, . . . , w−1} mod L+1. This seed is what allows to close the gap between the algorithmic and information theoretic limits and therefore plays a crucial role. Note it can also be viewed as an “opening” of the chain with fixed boundary conditions. Our first crucial result states that the MI Iw,L(S; W) of the coupled and original systems are the same in a suitable limit. Lemma 3.1 (Equality of mutual informations) For any fixed w the following limits exist and are equal: limL→∞ limn→∞ Iw,L(S; W)/(n(L+1))=limn→∞ I(S; W)/n. An immediate corollary is that non-analyticity points (w.r.t ∆) of the MIs are the same in the coupled and original models. In particular, defining ∆Opt,coup := sup{∆ | limL→∞ limn→∞ Iw,L(S; W)/(n(L+1)) is analytic in ]0,∆[}, we have ∆Opt,coup =∆Opt. The second crucial result states that the AMP threshold of the spatially coupled system is at least as good as ∆RS. The analysis of AMP applies to the coupled system as well [11, 12] and it can be shown that the performance of AMP is assessed by SE. Let Etµ := limn→∞ ES,Z[‖Sµ− ŝ t µ‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝtµ at time t for the µ-th “block” of S. We associate to each position µ ∈ {0, . . . 
, L} an independent scalar system with AWGN of the form Y =S+Σµ(E; ∆)Z, with Σµ(E; ∆)2 := ∆/(v− ∑L ν=0 ΛµνEν) and S∼P0, Z∼N (0, 1). Taking into account knowledge of the signal components in B, SE reads: Et+1µ = mmse(Σµ(E t; ∆)−2), E0µ = v for µ ∈ {0, . . . , L} \ B, Etµ = 0 for µ ∈ B, t ≥ 0, (4) where the mmse function is defined as in section 1. From the monotonicity of the mmse function we have Et+1µ ≤Etµ for all µ∈{0, . . . , L}, a partial order which implies that limt→∞ E t= E∞ exists. This allows to define an algorithmic threshold for the coupled system: ∆AMP,w,L :=sup{∆|E∞µ ≤ Egood(∆) ∀ µ}. We show (equality holds but is not directly needed): Lemma 3.2 (Threshold saturation) Let ∆AMP,coup := lim infw→∞ lim infL→∞∆AMP,w,L. We have ∆AMP,coup≥∆RS. Proof sketch of theorem 1.1: First we prove the RS formula for ∆ ≤ ∆Opt. It is known [3] that the matrix-MSE of AMP when n→∞ is equal to v2−(v−Et)2. This cannot improve the matrix-MMSE, hence (v2−(v−E∞)2)/4≥ lim supn→∞Mmmsen/4. For ∆≤∆AMP we have E∞=Egood(∆) which is the global minimum of (1) so the left hand side of the last inequality equals the derivative of minE∈[0,v] iRS(E; ∆) w.r.t ∆−1. Thus using the matrix version of the I-MMSE relation [23] we get d d∆−1 min E∈[0,v] iRS(E; ∆) ≥ lim sup n→∞ 1 n dI(S; W) d∆−1 . (5) Integrating this relation on [0,∆] ⊂ [0,∆AMP] and checking that minE∈[0,v] iRS(E; 0) = H(S) (the Shannon entropy of P0) we obtain minE∈[0,v] iRS(E; ∆)≤ lim infn→∞ I(S; W)/n. But we know I(S; W)/n≤minE∈[0,v] iRS(E; ∆) [9], thus we already get theorem 1.1 for ∆≤∆AMP. We notice that ∆AMP≤∆Opt. While this might seem intuitively clear, it follows from ∆RS≥∆AMP (by their definitions) which together with ∆AMP > ∆Opt would imply from theorem 1.1 that limn→∞ I(S; W)/n is analytic at ∆Opt, a contradiction. The next step is to extend theorem 1.1 to the range [∆AMP,∆Opt]. Suppose for a moment ∆RS≥∆Opt. Then both functions on each side of the RS formula are analytic on the whole range ]0,∆Opt[ and since they are equal for ∆≤∆AMP, they must be equal on their whole analyticity range and by continuity, they must also be equal at ∆Opt (that the functions are continuous follows from independent arguments on the existence of the n→∞ limit of concave functions). It remains to show that ∆RS∈ ]∆AMP,∆Opt[ is impossible. We proceed by contradiction, so suppose this is true. Then both functions on each side of the RS formula are analytic on ]0,∆RS[ and since they are equal for ]0,∆AMP[⊂]0,∆RS[ they must be equal on the whole range ]0,∆RS[ and also at ∆RS by continuity. For ∆>∆RS the fixed point of SE is E∞=Ebad(∆) which is also the global minimum of iRS(E; ∆), hence (5) is verified. Integrating this inequality on ]∆RS,∆[⊂]∆RS,∆Opt[ and using I(S; W)/n≤minE∈[0,v] iRS(E; ∆) again, we find that the RS formula holds for all ∆∈ [0,∆Opt]. But this implies that minE∈[0,v] iRS(E; ∆) is analytic at ∆RS, a contradiction. We now prove the RS formula for ∆≥∆Opt. Note that the previous arguments showed that necessarily ∆Opt≤∆RS. Thus by lemmas 3.1 and 3.2 (and the sub-optimality of AMP as shown as before) we obtain ∆RS ≤ ∆AMP,coup≤∆Opt,coup = ∆Opt≤∆RS. This shows that ∆Opt = ∆RS (this is the point where spatial coupling came in the game and we do not know of other means to prove such an equality). For ∆>∆RS we have E∞ =Ebad(∆) which is the global minimum of iRS(E; ∆). Therefore we again have (5) in this range and the proof can be completed by using once more the integration argument, this time over the range [∆RS,∆]=[∆Opt,∆]. 
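The coupled recursion (4) can also be simulated directly. The sketch below (ours; L, w, rho, Delta and the iteration count are arbitrary choices, and the scalar mmse is the same Bernoulli one used in the earlier state-evolution sketch) runs it on a seeded ring with the triangular window described above; a low-error region spreading from the seed through the whole ring is the threshold-saturation phenomenon behind Lemma 3.2.

import numpy as np

z, qw = np.polynomial.hermite_e.hermegauss(81)
qw = qw / np.sqrt(2 * np.pi)

def scalar_mmse(snr, rho):
    # Scalar mmse for P0 = (1-rho) delta_0 + rho delta_1, as in the earlier sketch.
    snr = max(snr, 1e-12)
    val = 0.0
    for s, p_s in ((0.0, 1.0 - rho), (1.0, rho)):
        y = s + z / np.sqrt(snr)
        ratio = rho * np.exp((y - 0.5) * snr)
        val += p_s * np.sum(qw * (s - ratio / (1.0 - rho + ratio)) ** 2)
    return val

def coupled_se(Delta, rho, L=100, w=4, iters=1000):
    k = np.arange(-w, w + 1)
    Lam = (w + 1 - np.abs(k)) / float((w + 1) ** 2)           # triangle: base 2w+1, height 1/(w+1), sums to 1
    seed = np.r_[np.arange(L - w, L + 1), np.arange(0, w)]    # B = {-w-1, ..., w-1} mod L+1
    E = np.full(L + 1, rho)                                   # E_mu^0 = v outside the seed
    for _ in range(iters):
        E[seed] = 0.0
        eff = np.array([Lam @ E[(mu + k) % (L + 1)] for mu in range(L + 1)])   # sum_nu Lambda_{mu,nu} E_nu
        E = np.array([scalar_mmse((rho - e) / Delta, rho) for e in eff])
    E[seed] = 0.0
    return E

print(coupled_se(Delta=5e-4, rho=0.02).round(4))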
Proof sketch of corollaries 1.3 and 1.5: LetE∗(∆)=argminEiRS(E; ∆) for ∆ 6=∆RS. By explicit calculation one checks that diRS(E∗,∆)/d∆−1 =(v2−(v−E∗(∆))2)/4, so from theorem 1.1 and the matrix form of the I-MMSE relation we find Mmmsen→v2−(v−E∗(∆))2 as n→∞ which is the first part of the statement of corollary 1.3. Let us now turn to corollary 1.5. For n→∞ the vectorMSE of the AMP estimator at time t equals Et, and since the fixed point equation corresponding to SE is precisely the stationarity equation for iRS(E; ∆), we conclude that for ∆ /∈ [∆AMP,∆RS] we must have E∞=E∗(∆). It remains to prove that E∗(∆)=limn→∞Vmmsen(∆) at least for ∆ /∈ [∆AMP,∆RS] (we believe this is in fact true for all ∆). This will settle the second part of corollary 1.3 as well as 1.5. Using (Nishimori) identities ES,W[SiSjE[XiXj |W]]=ES,W[E[XiXj |W]2] (see e.g. [9]) and using the law of large numbers we can show limn→∞Mmmsen ≤ limn→∞(v2− (v− Vmmsen(∆)) 2). Concentration techniques similar to [13] suggest that the equality in fact holds (for ∆ 6= ∆RS) but there are technicalities that prevent us from completing the proof of equality. However it is interesting to note that this equality would imply E∗(∆)=limn→∞Vmmsen(∆) for all ∆ 6=∆RS. Nevertheless, another argument can be used when AMP is optimal. On one hand the right hand side of the inequality is necessarily smaller than v2−(v−E∞)2. On the other hand the left hand side of the inequality is equal to v2−(v−E∗(∆))2. SinceE∗(∆)=E∞ when ∆ /∈ [∆AMP,∆RS], we can conclude limn→∞Vmmsen(∆)=argminEiRS(E; ∆) for this range of ∆. Proof sketch of lemma 3.1: Here we prove the lemma for a ring that is not seeded. An easy argument shows that a seed of size w does not change the MI per variable when L→∞. The statistical physics formulation is convenient: up to the trivial additive term n(L+1)v2/4, the MI Iw,L(S; W) equals the free energy −ES,Z[lnZw,L], where Zw,L := ∫ dxP0(x) exp(−H(x, z,Λ)) and H(x, z,Λ) = 1 ∆ L∑ µ=0 ( Λµµ ∑ iµ≤jµ Aiµjµ(x, z,Λ) + µ+w∑ ν=µ+1 Λµν ∑ iµ,jν Aiµjν (x, z,Λ) ) , (6) with Aiµjν (x, z,Λ) :=(x2iµx 2 jν )/(2n)−(siµsjνxiµxjν )/n−(xiµxjνziµjν √ ∆)/ √ nΛµν . Consider a pair of systems with coupling matrices Λ and Λ′ and i.i.d noize realizations z, z′, an interpolated HamiltonianH(x, z, tΛ)+H(x, z′, (1−t)Λ′), t ∈ [0, 1], and the corresponding partition function Zt. The main idea of the proof is to show that for suitable choices of matrices,− ddtES,Z,Z′ [lnZt]≤0 for all t∈ [0, 1] (up to negligible terms), so that by the fundamental theorem of calculus, we get a comparison between the free energies ofH(x, z,Λ) andH(x, z′,Λ′). Performing the t-derivative brings down a Gibbs average of a polynomial in all variables siµ , xiµ , ziµjν and z ′ iµjν . This expectation over S, Z, Z′ of this Gibbs average is simplified using integration by parts over the Gaussian noise ziµjν , z′iµjν and Nishimori identities (see e.g. proof of corollary 1.3 for one of them). This algebra leads to − 1 n(L+ 1) d dt ES,Z,Z′ [lnZt] = 1 4∆(L+ 1) ES,Z,Z′ [〈qᵀΛq− qᵀΛ′q〉t] +O(1/(nL)), (7) where 〈−〉t is the Gibbs average w.r.t the interpolated Hamiltonian, q is the vector of overlaps qµ := ∑n iµ=1 siµxiµ/n. If we can choose matrices s.t Λ ′ >Λ, the difference of quadratic forms in the Gibbs bracket is negative and we obtain an inequality in the large size limit. We use this scheme to interpolate between the fully decoupled system w=0 and the coupled one 1≤w<L/2 and then between 1≤w <L/2 and the fully connected system w=L/2. The w= 0 system has Λµν = δµν with eigenvalues (1, 1, . . . , 1). 
For the 1 ≤ w < L/2 system, we take any stochastic translation invariant matrix with non-negative discrete Fourier transform (of its rows): such matrices have an eigenvalue equal to 1 and all others in [0, 1[ (the eigenvalues are precisely equal to the discrete Fourier transform). For w = L/2 we choose Λµν = 1/(L+1) which is a projector with eigenvalues (0, 0, . . . , 1). With these choices we deduce that the free energies and MIs are ordered as Iw=0,L +O(1)≤Iw,L +O(1)≤Iw=L/2,L +O(1). To conclude the proof we divide by n(L+1) and note that the limits of the leftmost and rightmost MIs are equal, provided the limit exists. Indeed the leftmost term equals L times I(S; W) and the rightmost term is the same MI for a system of n(L+1) variables. Existence of the limit follows by subadditivity, proven by a similar interpolation [18]. Proof sketch of lemma 3.2: Fix ∆ < ∆RS. We show that, for w large enough, the coupled SE recursion (4) must converge to a fixed point E∞µ ≤Egood(∆) for all µ. The main intuition behind the proof is to use a “potential function” whose “energy” can be lowered by small perturbation of a fixed point that would go above Egood(∆) [16, 17]. The relevant potential function iw,L(E,∆) is in fact the replica potential of the coupled system (a generalization of (1)). The stationarity condition for this potential is precisely (4) (without the seeding condition). Monotonicity properties of SE ensure that any fixed point has a “unimodal” shape (and recall that it vanishes for µ∈B= {0, . . . , w−1}∪{L−w, . . . , L}). Consider a position µmax∈{w, . . . , L−w−1} where it is maximal and suppose that E∞µmax > Egood(∆). We associate to the fixed point E ∞ a so-called saturated profile Es defined on the whole of Z as follows: Esµ=Egood(∆) for all µ≤µ∞ where µ∞+1 is the smallest position s.t E∞µ >Egood(∆); E s µ=E ∞ µ for µ∈{µ∞+1, . . . , µmax−1}; Esµ=E∞µmax for all µ≥µmax. We show that Es cannot exist for w large enough. To this end define a shift operator by [S(Es)]µ :=Esµ−1. On one hand the shifted profile is a small perturbation of E s which matches a fixed point, except where it is constant, so if we Taylor expand, the first order vanishes and the second order and higher orders can be estimated as |iw,L(S(Es); ∆)−iw,L(Es; ∆)|=O(1/w) uniformly in L. On the other hand, by explicit cancellation of telescopic sums iw,L(S(Es); ∆)−iw,L(Es; ∆)= iRS(Egood; ∆)−iRS(E∞µmax ; ∆). Now one can show from monotonicity properties of SE that if E ∞ is a non trivial fixed point of the coupled SE then E∞µmax cannot be in the basin of attraction of Egood(∆) for the uncoupled SE recursion. Consequently as can be seen on the plot of iRS(E; ∆) (e.g. figure 1) we must have iRS(E∞µmax ; ∆)≥ iRS(Ebad; ∆). Therefore iw,L(S(E s); ∆)−iw,L(Es; ∆)≤ −|iRS(Ebad; ∆)−iRS(Egood; ∆)| which is an energy gain independent of w, and for large enough w we get a contradiction with the previous estimate coming from the Taylor expansion. Acknowledgments J.B and M.D acknowledge funding from the SNSF (grant 200021-156672). Part of this research received funding from the ERC under the EU’s 7th Framework Programme (FP/2007-2013/ERC Grant Agreement 307087-SPARCS). F.K and L.Z thank the Simons Institute for its hospitality.
1. What is the focus of the paper regarding matrix estimation? 2. What are the key contributions of the authors in terms of understanding mutual information? 3. How does the paper relate to previous works in statistical physics and coding theory? 4. What are the strengths of the paper's proof techniques? 5. Are there any limitations or gaps in the paper's analysis?
Review
Review The authors consider the rank-one matrix estimation problem and aim to understand precisely the mutual information between the observed noisy matrix W and the original rank-one matrix SS^T. They show (under certain assumptions) that the asymptotic mutual information per variable is given by the minimum of the so-called replica-symmetric potential function. This then gives a means to compute the information-theoretic threshold for reconstructing the signal S. The results imply that approximate message-passing (AMP) achieves the MMSE when the noise is either small or large. In between, there can be a very interesting regime where AMP is not optimal, and there seems to be a computational gap. The authors illustrate their results nicely on two models: the Wigner spiked model and community detection. The proof involves a very nice combination of techniques from different areas. The answer was predicted using statistical physics techniques, and so it is natural that these appear in the proof; in particular, the interpolation method introduced by Guerra is used. This is combined with recent progress in coding theory on capacity-achieving spatially coupled codes. The paper is very well written. It presents a very nice proof of a prediction from statistical physics about the mutual information for symmetric rank-one matrix estimation. The proof techniques have the potential to be applied more widely in order to obtain a better rigorous understanding of the fundamental limits of low-rank matrix estimation. I believe that this paper is a valuable contribution to NIPS. My main comment is that the proof sketches are dense, but this is unavoidable for a conference version. I therefore urge the authors to prepare a more detailed journal version of the paper.
NIPS
Title Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula Abstract Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression for the mutual information has been proposed using heuristic statistical physics computations, and proven in few specific cases. Here, we show how to rigorously prove the conjectured formula for the symmetric rank-one case. This allows to express the minimal mean-square-error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal. There exists, however, a gap between what currently known polynomial algorithms can do and what is expected information theoretically. Additionally, the proof technique has an interest of its own and exploits three essential ingredients: the interpolation method introduced in statistical physics by Guerra, the analysis of the approximate message-passing algorithm and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in statistical estimation where heuristic statistical physics predictions are available. Consider the following probabilistic rank-one matrix estimation problem: one has access to noisy observations w = (wij)i,j=1 of the pair-wise product of the components of a vector s = (s1, . . . , sn) ∈ R with i.i.d components distributed as Si ∼ P0, i = 1, . . . , n. The entries of w are observed through a noisy element-wise (possibly non-linear) output probabilistic channel Pout(wij |sisj/ √ n). The goal is to estimate the vector s from w assuming that both P0 and Pout are known and independent of n (noise is symmetric so that wij =wji). Many important problems in statistics and machine learning can be expressed in this way, such as sparse PCA [1], the Wigner spiked model [2, 3], community detection [4] or matrix completion [5]. Proving a result initially derived by a heuristic method from statistical physics, we give an explicit expression for the mutual information (MI) and the information theoretic minimal mean-square-error (MMSE) in the asymptotic n→∞ limit. Our results imply that for a large region of parameters, the posterior marginal expectations of the underlying signal components (often assumed intractable 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. to compute) can be obtained in the leading order in n using a polynomial-time algorithm called approximate message-passing (AMP) [6, 3, 4, 7]. We also demonstrate the existence of a region where both AMP and spectral methods [8] fail to provide a good answer to the estimation problem, while it is nevertheless information theoretically possible to do so. We illustrate our theorems with examples and also briefly discuss the implications in terms of computational complexity. 1 Setting and main results The additive white Gaussian noise setting: A standard and natural setting is the case of additive white Gaussian noise (AWGN) of known variance ∆, wij =sisj/ √ n+zij √ ∆, where z=(zij)i,j=1 is a symmetric matrix with i.i.d entries Zij ∼N (0, 1), 1≤ i≤ j≤n. 
Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels. This is made possible by a theorem of channel universality [9] (already proven for community detection in [4] and conjectured in [10]). This theorem states that given an output channel Pout(w|y), such that (s.t) logPout(w|y=0) is three times differentiable with bounded second and third derivatives, then the MI satisfies I(S; W)= I(S; SS/ √ n+Z √ ∆)+O( √ n), where ∆ is the inverse Fisher information (evaluated at y=0) of the output channel: ∆−1 := EPout(w|0)[(∂y logPout(W |y)|y=0)]. Informally, this means that we only have to compute the MI for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions P0, an explicit one-letter formula for the MI per variable I(S; W)/n in the asymptotic limit n→∞. Main result: Our central result is a proof of the expression for the asymptotic n→∞MI per variable via the so-called replica symmetric (RS) potential iRS(E; ∆) defined as iRS(E; ∆) := (v − E) + v N/A Consider the following probabilistic rank-one matrix estimation problem: one has access to noisy observations w = (wij)ni,j=1 of the pair-wise product of the components of a vector s = (s1, . . . , sn)ᵀ ∈ Rn with i.i.d components distributed as Si ∼ P0, i = 1, . . . , n. The entries of w are observed through a noisy element-wise (possibly non-linear) output probabilistic channel Pout(wij |sisj/ √ n). The goal is to estimate the vector s from w assuming that both P0 and Pout are known and independent of n (noise is symmetric so that wij =wji). Many important problems in statistics and machine learning can be expressed in this way, such as sparse PCA [1], the Wigner spiked model [2, 3], community detection [4] or matrix completion [5]. Proving a result initially derived by a heuristic method from statistical physics, we give an explicit expression for the mutual information (MI) and the information theoretic minimal mean-square-error (MMSE) in the asymptotic n→∞ limit. Our results imply that for a large region of parameters, the posterior marginal expectations of the underlying signal components (often assumed intractable 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. to compute) can be obtained in the leading order in n using a polynomial-time algorithm called approximate message-passing (AMP) [6, 3, 4, 7]. We also demonstrate the existence of a region where both AMP and spectral methods [8] fail to provide a good answer to the estimation problem, while it is nevertheless information theoretically possible to do so. We illustrate our theorems with examples and also briefly discuss the implications in terms of computational complexity. 1 Setting and main results The additive white Gaussian noise setting: A standard and natural setting is the case of additive white Gaussian noise (AWGN) of known variance ∆, wij =sisj/ √ n+zij √ ∆, where z=(zij)ni,j=1 is a symmetric matrix with i.i.d entries Zij ∼N (0, 1), 1≤ i≤ j≤n. Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels. 
This is made possible by a theorem of channel universality [9] (already proven for community detection in [4] and conjectured in [10]). This theorem states that given an output channel Pout(w|y), such that (s.t) logPout(w|y=0) is three times differentiable with bounded second and third derivatives, then the MI satisfies I(S; W)= I(S; SSᵀ/ √ n+Z √ ∆)+O( √ n), where ∆ is the inverse Fisher information (evaluated at y=0) of the output channel: ∆−1 := EPout(w|0)[(∂y logPout(W |y)|y=0)2]. Informally, this means that we only have to compute the MI for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions P0, an explicit one-letter formula for the MI per variable I(S; W)/n in the asymptotic limit n→∞. Main result: Our central result is a proof of the expression for the asymptotic n→∞MI per variable via the so-called replica symmetric (RS) potential iRS(E; ∆) defined as iRS(E; ∆) := (v − E)2 + v2 4∆ − ES,Z [ ln (∫ dxP0(x)e − x2 2Σ(E;∆)2 +x ( S Σ(E;∆)2 + Z Σ(E;∆) ))] , (1) with Z∼N (0, 1), S∼P0, E[S2]=v and Σ(E; ∆)2 :=∆/(v−E), E∈ [0, v]. Here we will assume that P0 is a discrete distribution over a finite bounded real alphabet P0(s)= ∑ν α=1 pαδ(s−aα). Thus the only continuous integral in (1) is the Gaussian over z. Our results can be extended to mixtures of discrete and continuous signal distributions at the expense of technical complications in some proofs. It turns out that both the information theoretic and algorithmic AMP thresholds are determined by the set of stationary points of (1) (w.r.t E). It is possible to show that for all ∆>0 there always exist at least one stationary minimum. Note E=0 is never a stationary point (except for P0 a single Dirac mass) and E= v is stationary only if E[S] = 0. In this contribution we suppose that at most three stationary points exist, corresponding to situations with at most one phase transition. We believe that situations with multiple transitions can also be covered by our techniques. Theorem 1.1 (RS formula for the mutual information) Fix ∆>0 and let P0 be a discrete distribution s.t (1) has at most three stationary points. Then limn→∞ I(S; W)/n=minE∈[0,v] iRS(E; ∆). The proof of the existence of the limit does not require the above hypothesis on P0. Also, it was first shown in [9] that for all n, I(S; W)/n≤minE∈[0,v] iRS(E; ∆), an inequality that we will use in the proof section. It is conceptually useful to define the following threshold: Definition 1.2 (Information theoretic threshold) Define ∆Opt as the first non-analyticity point of the MI as ∆ increases: ∆Opt :=sup{∆| limn→∞ I(S; W)/n is analytic in ]0,∆[}. When P0 is s.t (1) has at most three stationary points, as discussed below, then minE∈[0,v] iRS(E; ∆) has at most one non-analyticity point denoted ∆RS (if minE∈[0,v] iRS(E; ∆) is analytic over all R+ we set ∆RS =∞). Theorem 1.1 gives us a mean to compute the information theoretic threshold ∆Opt =∆RS. A basic application of theorem 1.1 is the expression of the MMSE: Corollary 1.3 (Exact formula for the MMSE) For all ∆ 6= ∆RS, the matrix-MMSE Mmmsen := ES,W[‖SSᵀ − E[XXᵀ|W]‖2F]/n2 (‖ − ‖F being the Frobenius norm) is asymptotically limn→∞Mmmsen(∆ −1) = v2−(v−argminE∈[0,v]iRS(E; ∆))2. Moreover, if ∆<∆AMP (where ∆AMP is the algorithmic threshold, see definition 1.4) or ∆>∆RS, then the usual vector-MMSE Vmmsen :=ES,W[‖S−E[X|W]‖22]/n satisfies limn→∞Vmmsen=argminE∈[0,v]iRS(E; ∆). 
It is natural to conjecture that the vector-MMSE is given by argminE∈[0,v]iRS(E; ∆) for all ∆ 6=∆RS, but our proof does not quite yield the full statement. A fundamental consequence concerns the performance of the AMP algorithm [6] for estimating s. AMP has been analysed rigorously in [11, 12, 4] where it is shown that its asymptotic performance is tracked by state evolution (SE). Let Et :=limn→∞ ES,Z[‖S− ŝt‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝt at time t. Define mmse(Σ−2) :=ES,Z [(S−E[X|S+ΣZ])2] as the usual scalar mmse function associated to a scalar AWGN channel of noise variance Σ2, with S∼P0 and Z∼N (0, 1). Then Et+1 = mmse(Σ(Et; ∆)−2), E0 = v, (2) is the SE recursion. Monotonicity properties of the mmse function imply that Et is a decreasing sequence s.t limt→∞Et=E∞ exists. Note that when E[S] = 0 and v is an unstable fixed point, as such, SE “does not start”. While this is not really a problem when one runs AMP in practice, for analysis purposes one can slightly bias P0 and remove the bias at the end of the proofs. Definition 1.4 (AMP algorithmic threshold) For ∆ > 0 small enough, the fixed point equation corresponding to (2) has a unique solution for all noise values in ]0,∆[. We define ∆AMP as the supremum of all such ∆. Corollary 1.5 (Performance of AMP) In the limit n→∞, AMP initialized without any knowledge other than P0 yields upon convergence the asymptotic matrix-MMSE as well as the asymptotic vector-MMSE iff ∆<∆AMP or ∆>∆RS, namely E∞=argminE∈[0,v]iRS(E; ∆). ∆AMP can be read off the replica potential (1): by differentiation of (1) one finds a fixed point equation that corresponds to (2). Thus ∆AMP is the smallest solution of ∂iRS/∂E=∂2iRS/∂E2 =0; in other words it is the “first” horizontal inflexion point appearing in iRS(E; ∆) when ∆ increases. Discussion: With our hypothesis on P0 there are only three possible scenarios: ∆AMP < ∆RS (one “first order” phase transition); ∆AMP = ∆RS < ∞ (one “higher order” phase transition); ∆AMP = ∆RS =∞ (no phase transition). In the sequel we will have in mind the most interesting case, namely one first order phase transition, where we determine the gap between the algorithmic AMP and information theoretic performance. The cases of no phase transition or higher order phase transition, which present no algorithmic gap, are basically covered by the analysis of [3] and follow as a special case from our proof. The only cases that would require more work are those where P0 is s.t (1) develops more than three stationary points and more than one phase transition is present. For ∆AMP<∆RS the structure of stationary points of (1) is as follows1 (figure 1). There exist three branchesEgood(∆), Eunstable(∆) andEbad(∆) s.t: 1) For 0<∆<∆AMP there is a single stationary point Egood(∆) which is a global minimum; 2) At ∆AMP a horizontal inflexion point appears, for ∆∈ [∆AMP,∆RS] there are three stationary points satisfying Egood(∆AMP)<Eunstable(∆AMP)= Ebad(∆AMP), Egood(∆) < Eunstable(∆) < Ebad(∆) otherwise, and moreover iRS(Egood; ∆) ≤ iRS(Ebad; ∆) with equality only at ∆RS; 3) for ∆ > ∆RS there is at least the stationary point Ebad(∆) which is always the global minimum, i.e. iRS(Ebad; ∆)<iRS(Egood; ∆). (For higher ∆ the Egood(∆) and Eunstable(∆) branches may merge and disappear); 4) Egood(∆) is analytic for ∆∈]0,∆′[, ∆′>∆RS, and Ebad(∆) is analytic for ∆>∆AMP. We note for further use in the proof section that E∞=Egood(∆) for ∆<∆AMP and E∞=Ebad(∆) for ∆>∆AMP. Definition 1.4 is equivalent to ∆AMP = sup{∆|E∞=Egood(∆)}. 
Moreover we will also use that iRS(Egood; ∆) is analytic on ]0,∆′[, iRS(Ebad; ∆) is analytic on ]∆AMP,∞[, and the only non-analyticity point of minE∈[0,v] iRS(E; ∆) is at ∆RS. Relation to other works: Explicit single-letter characterization of the MI in the rank-one problem has attracted a lot of attention recently. Particular cases of theorem 1.1 have been shown rigorously in a number of situations. A special case when si=±1∼Ber(1/2) already appeared in [13] where an equivalent spin glass model is analysed. Very recently, [9] has generalized the results of [13] and, notably, obtained a generic matching upper bound. The same formula has been also rigorously computed following the study of AMP in [3] for spiked models (provided, however, that the signal was not too sparse) and in [4] for strictly symmetric community detection. 1We take E[S] 6= 0. Once theorem 1.1 is proven for this case a limiting argument allows to extend it to E[S]=0. For rank-one symmetric matrix estimation problems, AMP has been introduced by [6], who also computed the SE formula to analyse its performance, generalizing techniques developed by [11] and [12]. SE was further studied by [3] and [4]. In [7, 10], the generalization to larger rank was also considered. The general formula proposed by [10] for the conditional entropy and the MMSE on the basis of the heuristic cavity method from statistical physics was not demonstrated in full generality. Worst, all existing proofs could not reach the more interesting regime where a gap between the algorithmic and information theoretic perfomances appears, leaving a gap with the statistical physics conjectured formula (and rigorous upper bound from [9]). Our result closes this conjecture and has interesting non-trivial implications on the computational complexity of these tasks. Our proof technique combines recent rigorous results in coding theory along the study of capacityachieving spatially coupled codes [14, 15, 16, 17] with other progress, coming from developments in mathematical physics putting on a rigorous basis predictions of spin glass theory [18]. From this point of view, the theorem proved in this paper is relevant in a broader context going beyond low-rank matrix estimation. Hundreds of papers have been published in statistics, machine learning or information theory using the non-rigorous statistical physics approach. We believe that our result helps setting a rigorous foundation of a broad line of work. While we focus on rank-one symmetric matrix estimation, our proof technique is readily extendable to more generic low-rank symmetric matrix or low-rank symmetric tensor estimation. We also believe that it can be extended to other problems of interest in machine learning and signal processing, such as generalized linear regression, features/dictionary learning, compressed sensing or multi-layer neural networks. 2 Two examples: Wigner spiked model and community detection In order to illustrate the consequences of our results we shall present two examples. Wigner spiked model: In this model, the vector s is a Bernoulli random vector, Si∼Ber(ρ). For large enough densities (i.e. ρ>0.041(1)), [3] computed the matrix-MMSE and proved that AMP is a computationally efficient algorithm that asymptotically achieves the matrix-MMSE for any value of the noise ∆. 
Our results allow to close the gap left open by [3]: on one hand we now obtain rigorously the MMSE for ρ≤ 0.041(1), and on the other one we observe that for such values of ρ, and as ∆ decreases, there is a small region where two local minima coexist in iRS(E; ∆). In particular for ∆AMP<∆<∆Opt = ∆RS the global minimum corresponding to the MMSE differs from the local one that traps AMP, and a computational gap appears (see figure 1). While the region where AMP is Bayes optimal is quite large, the region where is it not, however, is perhaps the most interesting one. While this is by no means evident, statistical physics analogies with physical phase transitions in nature suggest that this region should be hard for a very broad class of algorithms. For small ρ our results are consistent with the known optimal and algorithmic thresholds predicted in sparse PCA [19, 20], that treats the case of sub-extensive ρ=O(1) values. Another interesting line of work for such probabilistic models appeared in the context of random matrix theory (see [8] and references therein) and predicts that a sharp phase transition occurs at a critical value of the noise ∆spectral =ρ2 below which an outlier eigenvalue (and its principal eigenvector) has a positive correlation with the hidden signal. For larger noise values the spectral distribution of the observation is indistinguishable from that of the pure random noise. Asymmetric balanced community detection: We now consider the problem of detecting two communities (groups) with different sizes ρn and (1− ρ)n, that generalizes the one considered in [4]. One is given a graph where the probability to have a link between nodes in the first group is p+µ(1−ρ)/(ρ √ n), between those in the second group is p+µρ/( √ n(1−ρ)), while interconnections appear with probability p−µ/ √ n. With this peculiar “balanced” setting, the nodes in each group have the same degree distribution with mean pn, making them harder to distinguish. According to the universality property described in the first section, this is equivalent to a model with AWGN of variance ∆ = p(1−p)/µ2 where each variable si is chosen according to P0(s)=ρδ(s− √ (1−ρ)/ρ)+(1−ρ)δ(s+ √ ρ/(1−ρ)). Our results for this problem2 are summarized on the right hand side of figure 2. For ρ > ρc = 1/2− √ 1/12 (black point), it is asymptotically information theoretically possible to get an estimation better than chance if and only if ∆<1. When ρ<ρc, however, it becomes possible for much larger values of the noise. Interestingly, AMP and spectral methods have the same transition and can find a positive correlation with the hidden communities for ∆<1, regardless of the value of ρ. Again, a region [∆AMP,∆Opt =∆RS] exists where a computational gap appears when ρ<ρc. One can investigate the very low ρ regime where we find that the information theoretic transition goes as ∆Opt(ρ→0) = 1/(4ρ| log ρ|). Now if we assume that this result stays true even for ρ= O(1) (which is a speculation at this point), we can choose µ→(1−p)ρ √ n such that the small group is a clique. Then the problem corresponds to a “balanced” version of the famous planted clique problem [21]. We find that the AMP/spectral approach finds the 2Note that here since E=v=1 is an extremum of iRS(E; ∆), one must introduce a small bias in P0 and let it then tend to zero at the end of the proofs. hidden clique when it is larger than √ np/(1−p), while the information theoretic transition translates into size of the clique 4p log(n)/(1−p). 
This is indeed reminiscent of the more classical planted clique problem at p=1/2 with its gap between log(n) (information theoretic), √ n/e (AMP [22]) and √ n (spectral [21]). Since in our balanced case the spectral and AMP limits match, this suggests that the small gain of AMP in the standard clique problem is simply due to the information provided by the distribution of local degrees in the two groups (which is absent in our balanced case). We believe this correspondence strengthens the claim that the AMP gap is actually a fundamental one. 3 Proofs The crux of our proof rests on an auxiliary “spatially coupled system”. The hallmark of spatially coupled models is that one can tune them so that the gap between the algorithmic and information theoretic limits is eliminated, while at the same time the MI is maintained unchanged for the coupled and original models. Roughly speaking, this means that it is possible to algorithmically compute the information theoretic limit of the original model because a suitable algorithm is optimal on the coupled system. The present spatially coupled construction is similar to the one used for the coupled Curie-Weiss model [14]. Consider a ring of length L+1 (L even) with blocks positioned at µ∈{0, . . . , L} and coupled to neighboring blocks {µ−w, . . . , µ+w}. Positions µ are taken modulo L+1 and the integer w∈{0, . . . , L/2} equals the size of the coupling window. The coupled model is wiµjν = siµsjν √ Λµν n + ziµjν √ ∆, (3) where the index iµ∈{1, . . . , n} (resp. jν) belongs to the block µ (resp. ν) along the ring, Λ is an (L+1)×(L+1) matrix which describes the strength of the coupling between blocks, andZiµjν ∼N (0, 1) are i.i.d. For the proof to work, the matrix elements have to be chosen appropriately. We assume that: i) Λ is a doubly stochastic matrix; ii) Λµν depends on |µ−ν|; iii) Λµν is not vanishing for |µ−ν| ≤ w and vanishes for |µ−ν|>w; iv) Λ is smooth in the sense |Λµν−Λµ+1ν |=O(w−2); v) Λ has a non-negative Fourier transform. All these conditions can easily be met, the simplest example being a triangle of base 2w+1 and height 1/(w+1). The construction of the coupled system is completed by introducing a seed in the ring: we assume perfect knowledge of the signal components {siµ} for µ∈B :={−w−1, . . . , w−1} mod L+1. This seed is what allows to close the gap between the algorithmic and information theoretic limits and therefore plays a crucial role. Note it can also be viewed as an “opening” of the chain with fixed boundary conditions. Our first crucial result states that the MI Iw,L(S; W) of the coupled and original systems are the same in a suitable limit. Lemma 3.1 (Equality of mutual informations) For any fixed w the following limits exist and are equal: limL→∞ limn→∞ Iw,L(S; W)/(n(L+1))=limn→∞ I(S; W)/n. An immediate corollary is that non-analyticity points (w.r.t ∆) of the MIs are the same in the coupled and original models. In particular, defining ∆Opt,coup := sup{∆ | limL→∞ limn→∞ Iw,L(S; W)/(n(L+1)) is analytic in ]0,∆[}, we have ∆Opt,coup =∆Opt. The second crucial result states that the AMP threshold of the spatially coupled system is at least as good as ∆RS. The analysis of AMP applies to the coupled system as well [11, 12] and it can be shown that the performance of AMP is assessed by SE. Let Etµ := limn→∞ ES,Z[‖Sµ− ŝ t µ‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝtµ at time t for the µ-th “block” of S. We associate to each position µ ∈ {0, . . . 
, L} an independent scalar system with AWGN of the form Y =S+Σµ(E; ∆)Z, with Σµ(E; ∆)2 := ∆/(v− ∑L ν=0 ΛµνEν) and S∼P0, Z∼N (0, 1). Taking into account knowledge of the signal components in B, SE reads: Et+1µ = mmse(Σµ(E t; ∆)−2), E0µ = v for µ ∈ {0, . . . , L} \ B, Etµ = 0 for µ ∈ B, t ≥ 0, (4) where the mmse function is defined as in section 1. From the monotonicity of the mmse function we have Et+1µ ≤Etµ for all µ∈{0, . . . , L}, a partial order which implies that limt→∞ E t= E∞ exists. This allows to define an algorithmic threshold for the coupled system: ∆AMP,w,L :=sup{∆|E∞µ ≤ Egood(∆) ∀ µ}. We show (equality holds but is not directly needed): Lemma 3.2 (Threshold saturation) Let ∆AMP,coup := lim infw→∞ lim infL→∞∆AMP,w,L. We have ∆AMP,coup≥∆RS. Proof sketch of theorem 1.1: First we prove the RS formula for ∆ ≤ ∆Opt. It is known [3] that the matrix-MSE of AMP when n→∞ is equal to v2−(v−Et)2. This cannot improve the matrix-MMSE, hence (v2−(v−E∞)2)/4≥ lim supn→∞Mmmsen/4. For ∆≤∆AMP we have E∞=Egood(∆) which is the global minimum of (1) so the left hand side of the last inequality equals the derivative of minE∈[0,v] iRS(E; ∆) w.r.t ∆−1. Thus using the matrix version of the I-MMSE relation [23] we get d d∆−1 min E∈[0,v] iRS(E; ∆) ≥ lim sup n→∞ 1 n dI(S; W) d∆−1 . (5) Integrating this relation on [0,∆] ⊂ [0,∆AMP] and checking that minE∈[0,v] iRS(E; 0) = H(S) (the Shannon entropy of P0) we obtain minE∈[0,v] iRS(E; ∆)≤ lim infn→∞ I(S; W)/n. But we know I(S; W)/n≤minE∈[0,v] iRS(E; ∆) [9], thus we already get theorem 1.1 for ∆≤∆AMP. We notice that ∆AMP≤∆Opt. While this might seem intuitively clear, it follows from ∆RS≥∆AMP (by their definitions) which together with ∆AMP > ∆Opt would imply from theorem 1.1 that limn→∞ I(S; W)/n is analytic at ∆Opt, a contradiction. The next step is to extend theorem 1.1 to the range [∆AMP,∆Opt]. Suppose for a moment ∆RS≥∆Opt. Then both functions on each side of the RS formula are analytic on the whole range ]0,∆Opt[ and since they are equal for ∆≤∆AMP, they must be equal on their whole analyticity range and by continuity, they must also be equal at ∆Opt (that the functions are continuous follows from independent arguments on the existence of the n→∞ limit of concave functions). It remains to show that ∆RS∈ ]∆AMP,∆Opt[ is impossible. We proceed by contradiction, so suppose this is true. Then both functions on each side of the RS formula are analytic on ]0,∆RS[ and since they are equal for ]0,∆AMP[⊂]0,∆RS[ they must be equal on the whole range ]0,∆RS[ and also at ∆RS by continuity. For ∆>∆RS the fixed point of SE is E∞=Ebad(∆) which is also the global minimum of iRS(E; ∆), hence (5) is verified. Integrating this inequality on ]∆RS,∆[⊂]∆RS,∆Opt[ and using I(S; W)/n≤minE∈[0,v] iRS(E; ∆) again, we find that the RS formula holds for all ∆∈ [0,∆Opt]. But this implies that minE∈[0,v] iRS(E; ∆) is analytic at ∆RS, a contradiction. We now prove the RS formula for ∆≥∆Opt. Note that the previous arguments showed that necessarily ∆Opt≤∆RS. Thus by lemmas 3.1 and 3.2 (and the sub-optimality of AMP as shown as before) we obtain ∆RS ≤ ∆AMP,coup≤∆Opt,coup = ∆Opt≤∆RS. This shows that ∆Opt = ∆RS (this is the point where spatial coupling came in the game and we do not know of other means to prove such an equality). For ∆>∆RS we have E∞ =Ebad(∆) which is the global minimum of iRS(E; ∆). Therefore we again have (5) in this range and the proof can be completed by using once more the integration argument, this time over the range [∆RS,∆]=[∆Opt,∆]. 
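The coupled state evolution (4) is straightforward to simulate for a discrete prior, which gives a concrete picture of how the seed propagates along the ring; this is the mechanism behind threshold saturation (lemma 3.2). The sketch below is our own illustration (arbitrary parameter values, not code from the paper): it builds a triangle coupling matrix Λ of base 2w+1 and height 1/(w+1) as suggested in the text, fixes the seeded blocks in B to zero error, and iterates (4) with Gauss-Hermite quadrature for the scalar mmse function.

```python
import numpy as np

# Spiked Bernoulli prior S ~ Ber(rho); small rho is where a gap can appear.
rho = 0.02
support, weights = np.array([0.0, 1.0]), np.array([1.0 - rho, rho])
v = float(weights @ support**2)

# Gauss-Hermite nodes/weights for expectations over Z ~ N(0, 1).
gh_x, gh_w = np.polynomial.hermite_e.hermegauss(61)
gh_w = gh_w / gh_w.sum()

def scalar_mmse(sigma2):
    """mmse(Sigma^-2) = E_{S,Z}[(S - E[X | S + Sigma*Z])^2] for the discrete prior."""
    sigma = np.sqrt(sigma2)
    out = 0.0
    for s, ps in zip(support, weights):
        y = s + sigma * gh_x                         # channel outputs at the nodes
        logp = -(y[:, None] - support[None, :])**2 / (2.0 * sigma2) + np.log(weights)
        post = np.exp(logp - logp.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)      # posterior over the support
        out += ps * (gh_w @ (s - post @ support)**2)
    return out

def coupled_se(delta, L=80, w=4, max_iter=5000, tol=1e-10):
    """Coupled state evolution (4) on a ring of L+1 blocks with a seed."""
    idx = np.arange(L + 1)
    dist = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(dist, L + 1 - dist)            # distance along the ring
    Lam = np.maximum(0, w + 1 - dist) / (w + 1)**2   # triangle window: doubly
                                                     # stochastic, non-negative DFT
    seed = np.r_[0:w, L - w:L + 1]                   # blocks with perfect knowledge
    E = np.full(L + 1, v)
    E[seed] = 0.0
    for _ in range(max_iter):
        sigma2 = delta / np.maximum(v - Lam @ E, 1e-12)
        E_new = np.array([scalar_mmse(s2) for s2 in sigma2])
        E_new[seed] = 0.0
        if np.max(np.abs(E_new - E)) < tol:
            break
        E = E_new
    return E

E_coupled = coupled_se(delta=5e-4)                   # noise level chosen for illustration
print("coupled SE fixed point: largest block MSE =", E_coupled.max())
```

Comparing the resulting fixed point with that of the uncoupled recursion (2) at the same ∆ is the numerical counterpart of the statement ∆AMP,coup ≥ ∆RS.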
Proof sketch of corollaries 1.3 and 1.5: LetE∗(∆)=argminEiRS(E; ∆) for ∆ 6=∆RS. By explicit calculation one checks that diRS(E∗,∆)/d∆−1 =(v2−(v−E∗(∆))2)/4, so from theorem 1.1 and the matrix form of the I-MMSE relation we find Mmmsen→v2−(v−E∗(∆))2 as n→∞ which is the first part of the statement of corollary 1.3. Let us now turn to corollary 1.5. For n→∞ the vectorMSE of the AMP estimator at time t equals Et, and since the fixed point equation corresponding to SE is precisely the stationarity equation for iRS(E; ∆), we conclude that for ∆ /∈ [∆AMP,∆RS] we must have E∞=E∗(∆). It remains to prove that E∗(∆)=limn→∞Vmmsen(∆) at least for ∆ /∈ [∆AMP,∆RS] (we believe this is in fact true for all ∆). This will settle the second part of corollary 1.3 as well as 1.5. Using (Nishimori) identities ES,W[SiSjE[XiXj |W]]=ES,W[E[XiXj |W]2] (see e.g. [9]) and using the law of large numbers we can show limn→∞Mmmsen ≤ limn→∞(v2− (v− Vmmsen(∆)) 2). Concentration techniques similar to [13] suggest that the equality in fact holds (for ∆ 6= ∆RS) but there are technicalities that prevent us from completing the proof of equality. However it is interesting to note that this equality would imply E∗(∆)=limn→∞Vmmsen(∆) for all ∆ 6=∆RS. Nevertheless, another argument can be used when AMP is optimal. On one hand the right hand side of the inequality is necessarily smaller than v2−(v−E∞)2. On the other hand the left hand side of the inequality is equal to v2−(v−E∗(∆))2. SinceE∗(∆)=E∞ when ∆ /∈ [∆AMP,∆RS], we can conclude limn→∞Vmmsen(∆)=argminEiRS(E; ∆) for this range of ∆. Proof sketch of lemma 3.1: Here we prove the lemma for a ring that is not seeded. An easy argument shows that a seed of size w does not change the MI per variable when L→∞. The statistical physics formulation is convenient: up to the trivial additive term n(L+1)v2/4, the MI Iw,L(S; W) equals the free energy −ES,Z[lnZw,L], where Zw,L := ∫ dxP0(x) exp(−H(x, z,Λ)) and H(x, z,Λ) = 1 ∆ L∑ µ=0 ( Λµµ ∑ iµ≤jµ Aiµjµ(x, z,Λ) + µ+w∑ ν=µ+1 Λµν ∑ iµ,jν Aiµjν (x, z,Λ) ) , (6) with Aiµjν (x, z,Λ) :=(x2iµx 2 jν )/(2n)−(siµsjνxiµxjν )/n−(xiµxjνziµjν √ ∆)/ √ nΛµν . Consider a pair of systems with coupling matrices Λ and Λ′ and i.i.d noize realizations z, z′, an interpolated HamiltonianH(x, z, tΛ)+H(x, z′, (1−t)Λ′), t ∈ [0, 1], and the corresponding partition function Zt. The main idea of the proof is to show that for suitable choices of matrices,− ddtES,Z,Z′ [lnZt]≤0 for all t∈ [0, 1] (up to negligible terms), so that by the fundamental theorem of calculus, we get a comparison between the free energies ofH(x, z,Λ) andH(x, z′,Λ′). Performing the t-derivative brings down a Gibbs average of a polynomial in all variables siµ , xiµ , ziµjν and z ′ iµjν . This expectation over S, Z, Z′ of this Gibbs average is simplified using integration by parts over the Gaussian noise ziµjν , z′iµjν and Nishimori identities (see e.g. proof of corollary 1.3 for one of them). This algebra leads to − 1 n(L+ 1) d dt ES,Z,Z′ [lnZt] = 1 4∆(L+ 1) ES,Z,Z′ [〈qᵀΛq− qᵀΛ′q〉t] +O(1/(nL)), (7) where 〈−〉t is the Gibbs average w.r.t the interpolated Hamiltonian, q is the vector of overlaps qµ := ∑n iµ=1 siµxiµ/n. If we can choose matrices s.t Λ ′ >Λ, the difference of quadratic forms in the Gibbs bracket is negative and we obtain an inequality in the large size limit. We use this scheme to interpolate between the fully decoupled system w=0 and the coupled one 1≤w<L/2 and then between 1≤w <L/2 and the fully connected system w=L/2. The w= 0 system has Λµν = δµν with eigenvalues (1, 1, . . . , 1). 
For the 1 ≤ w < L/2 system, we take any stochastic translation invariant matrix with non-negative discrete Fourier transform (of its rows): such matrices have an eigenvalue equal to 1 and all others in [0, 1[ (the eigenvalues are precisely equal to the discrete Fourier transform). For w = L/2 we choose Λµν = 1/(L+1) which is a projector with eigenvalues (0, 0, . . . , 1). With these choices we deduce that the free energies and MIs are ordered as Iw=0,L +O(1)≤Iw,L +O(1)≤Iw=L/2,L +O(1). To conclude the proof we divide by n(L+1) and note that the limits of the leftmost and rightmost MIs are equal, provided the limit exists. Indeed the leftmost term equals L times I(S; W) and the rightmost term is the same MI for a system of n(L+1) variables. Existence of the limit follows by subadditivity, proven by a similar interpolation [18]. Proof sketch of lemma 3.2: Fix ∆ < ∆RS. We show that, for w large enough, the coupled SE recursion (4) must converge to a fixed point E∞µ ≤Egood(∆) for all µ. The main intuition behind the proof is to use a “potential function” whose “energy” can be lowered by small perturbation of a fixed point that would go above Egood(∆) [16, 17]. The relevant potential function iw,L(E,∆) is in fact the replica potential of the coupled system (a generalization of (1)). The stationarity condition for this potential is precisely (4) (without the seeding condition). Monotonicity properties of SE ensure that any fixed point has a “unimodal” shape (and recall that it vanishes for µ∈B= {0, . . . , w−1}∪{L−w, . . . , L}). Consider a position µmax∈{w, . . . , L−w−1} where it is maximal and suppose that E∞µmax > Egood(∆). We associate to the fixed point E ∞ a so-called saturated profile Es defined on the whole of Z as follows: Esµ=Egood(∆) for all µ≤µ∞ where µ∞+1 is the smallest position s.t E∞µ >Egood(∆); E s µ=E ∞ µ for µ∈{µ∞+1, . . . , µmax−1}; Esµ=E∞µmax for all µ≥µmax. We show that Es cannot exist for w large enough. To this end define a shift operator by [S(Es)]µ :=Esµ−1. On one hand the shifted profile is a small perturbation of E s which matches a fixed point, except where it is constant, so if we Taylor expand, the first order vanishes and the second order and higher orders can be estimated as |iw,L(S(Es); ∆)−iw,L(Es; ∆)|=O(1/w) uniformly in L. On the other hand, by explicit cancellation of telescopic sums iw,L(S(Es); ∆)−iw,L(Es; ∆)= iRS(Egood; ∆)−iRS(E∞µmax ; ∆). Now one can show from monotonicity properties of SE that if E ∞ is a non trivial fixed point of the coupled SE then E∞µmax cannot be in the basin of attraction of Egood(∆) for the uncoupled SE recursion. Consequently as can be seen on the plot of iRS(E; ∆) (e.g. figure 1) we must have iRS(E∞µmax ; ∆)≥ iRS(Ebad; ∆). Therefore iw,L(S(E s); ∆)−iw,L(Es; ∆)≤ −|iRS(Ebad; ∆)−iRS(Egood; ∆)| which is an energy gain independent of w, and for large enough w we get a contradiction with the previous estimate coming from the Taylor expansion. Acknowledgments J.B and M.D acknowledge funding from the SNSF (grant 200021-156672). Part of this research received funding from the ERC under the EU’s 7th Framework Programme (FP/2007-2013/ERC Grant Agreement 307087-SPARCS). F.K and L.Z thank the Simons Institute for its hospitality.
1. What is the main contribution of the paper in terms of the formula proven for mutual information? 2. How does the paper show the limit of mutual information, and what is the significance of this achievement? 3. What are the implications of the result regarding signal reconstruction and AMP? 4. Can you explain the two examples given in the paper and how they illustrate the theorem? 5. Are there any minor suggestions or improvements that can be made to the paper?
Review
Review The paper proves a conjectured formula for the mutual information in symmetric rank-one matrix estimation under the AWGN model. It shows that the limit of the mutual information can be expressed as the minimum of the replica symmetric potential function for a wide range of signal distributions, which also gives an exact formula for the MMSE for reconstructing the signal. The result identifies an interesting regime where the information-theoretically optimal performance is not achieved by AMP. The implication of the result is further illustrated by two examples, the Wigner spike model and community detection. The proof appears to be non-trivial and novel. The paper is a very nice piece of theoretical work and quite fun to read. The two sets of figures and the two illustrative examples are particularly entertaining and enlightening. Although similar results have been proved in other work under various models, the current paper seems to be the first to prove the conjectured formula in its full generality, and it is able to identify a gap between information theoretic lower bounds and algorithmic upper bounds. The proofs appear to be non-trivial and could possibly shed light on other problems. I recommend that the paper be accepted by NIPS. Some minor suggestions: - It would be nice if some background and references were provided for the replica symmetric potential function, for better readability. - The publication year seems to be missing for some of the references: [3], [7], [20]. - All "pca"s appear in lower case in the references.
NIPS
Title Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula Abstract Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression for the mutual information has been proposed using heuristic statistical physics computations, and proven in few specific cases. Here, we show how to rigorously prove the conjectured formula for the symmetric rank-one case. This allows to express the minimal mean-square-error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal. There exists, however, a gap between what currently known polynomial algorithms can do and what is expected information theoretically. Additionally, the proof technique has an interest of its own and exploits three essential ingredients: the interpolation method introduced in statistical physics by Guerra, the analysis of the approximate message-passing algorithm and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in statistical estimation where heuristic statistical physics predictions are available. Consider the following probabilistic rank-one matrix estimation problem: one has access to noisy observations w = (wij)ni,j=1 of the pair-wise product of the components of a vector s = (s1, . . . , sn)ᵀ ∈ Rn with i.i.d components distributed as Si ∼ P0, i = 1, . . . , n. The entries of w are observed through a noisy element-wise (possibly non-linear) output probabilistic channel Pout(wij |sisj/ √ n). The goal is to estimate the vector s from w assuming that both P0 and Pout are known and independent of n (noise is symmetric so that wij =wji). Many important problems in statistics and machine learning can be expressed in this way, such as sparse PCA [1], the Wigner spiked model [2, 3], community detection [4] or matrix completion [5]. Proving a result initially derived by a heuristic method from statistical physics, we give an explicit expression for the mutual information (MI) and the information theoretic minimal mean-square-error (MMSE) in the asymptotic n→∞ limit. Our results imply that for a large region of parameters, the posterior marginal expectations of the underlying signal components (often assumed intractable to compute) can be obtained in the leading order in n using a polynomial-time algorithm called approximate message-passing (AMP) [6, 3, 4, 7]. We also demonstrate the existence of a region where both AMP and spectral methods [8] fail to provide a good answer to the estimation problem, while it is nevertheless information theoretically possible to do so. We illustrate our theorems with examples and also briefly discuss the implications in terms of computational complexity. 1 Setting and main results The additive white Gaussian noise setting: A standard and natural setting is the case of additive white Gaussian noise (AWGN) of known variance ∆, wij =sisj/ √ n+zij √ ∆, where z=(zij)ni,j=1 is a symmetric matrix with i.i.d entries Zij ∼N (0, 1), 1≤ i≤ j≤n. Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels.
This is made possible by a theorem of channel universality [9] (already proven for community detection in [4] and conjectured in [10]). This theorem states that given an output channel Pout(w|y), such that (s.t) logPout(w|y=0) is three times differentiable with bounded second and third derivatives, then the MI satisfies I(S; W)= I(S; SSᵀ/ √ n+Z √ ∆)+O( √ n), where ∆ is the inverse Fisher information (evaluated at y=0) of the output channel: ∆−1 := EPout(w|0)[(∂y logPout(W |y)|y=0)2]. Informally, this means that we only have to compute the MI for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions P0, an explicit one-letter formula for the MI per variable I(S; W)/n in the asymptotic limit n→∞. Main result: Our central result is a proof of the expression for the asymptotic n→∞MI per variable via the so-called replica symmetric (RS) potential iRS(E; ∆) defined as iRS(E; ∆) := (v − E)2 + v2 4∆ − ES,Z [ ln (∫ dxP0(x)e − x2 2Σ(E;∆)2 +x ( S Σ(E;∆)2 + Z Σ(E;∆) ))] , (1) with Z∼N (0, 1), S∼P0, E[S2]=v and Σ(E; ∆)2 :=∆/(v−E), E∈ [0, v]. Here we will assume that P0 is a discrete distribution over a finite bounded real alphabet P0(s)= ∑ν α=1 pαδ(s−aα). Thus the only continuous integral in (1) is the Gaussian over z. Our results can be extended to mixtures of discrete and continuous signal distributions at the expense of technical complications in some proofs. It turns out that both the information theoretic and algorithmic AMP thresholds are determined by the set of stationary points of (1) (w.r.t E). It is possible to show that for all ∆>0 there always exist at least one stationary minimum. Note E=0 is never a stationary point (except for P0 a single Dirac mass) and E= v is stationary only if E[S] = 0. In this contribution we suppose that at most three stationary points exist, corresponding to situations with at most one phase transition. We believe that situations with multiple transitions can also be covered by our techniques. Theorem 1.1 (RS formula for the mutual information) Fix ∆>0 and let P0 be a discrete distribution s.t (1) has at most three stationary points. Then limn→∞ I(S; W)/n=minE∈[0,v] iRS(E; ∆). The proof of the existence of the limit does not require the above hypothesis on P0. Also, it was first shown in [9] that for all n, I(S; W)/n≤minE∈[0,v] iRS(E; ∆), an inequality that we will use in the proof section. It is conceptually useful to define the following threshold: Definition 1.2 (Information theoretic threshold) Define ∆Opt as the first non-analyticity point of the MI as ∆ increases: ∆Opt :=sup{∆| limn→∞ I(S; W)/n is analytic in ]0,∆[}. When P0 is s.t (1) has at most three stationary points, as discussed below, then minE∈[0,v] iRS(E; ∆) has at most one non-analyticity point denoted ∆RS (if minE∈[0,v] iRS(E; ∆) is analytic over all R+ we set ∆RS =∞). Theorem 1.1 gives us a mean to compute the information theoretic threshold ∆Opt =∆RS. A basic application of theorem 1.1 is the expression of the MMSE: Corollary 1.3 (Exact formula for the MMSE) For all ∆ 6= ∆RS, the matrix-MMSE Mmmsen := ES,W[‖SSᵀ − E[XXᵀ|W]‖2F]/n2 (‖ − ‖F being the Frobenius norm) is asymptotically limn→∞Mmmsen(∆ −1) = v2−(v−argminE∈[0,v]iRS(E; ∆))2. Moreover, if ∆<∆AMP (where ∆AMP is the algorithmic threshold, see definition 1.4) or ∆>∆RS, then the usual vector-MMSE Vmmsen :=ES,W[‖S−E[X|W]‖22]/n satisfies limn→∞Vmmsen=argminE∈[0,v]iRS(E; ∆). 
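Because P0 is discrete, the potential (1) reduces to a finite sum inside a one-dimensional Gaussian average, so it can be tabulated directly. The Python sketch below is our own illustration (reading the first term of (1) as ((v−E)² + v²)/(4∆), consistent with the I-MMSE computation used later in the proofs; the prior and noise values are arbitrary): it evaluates iRS(E; ∆) on a grid of E, locates the minimizer E∗, and reports the asymptotic matrix-MMSE v² − (v − E∗)² given by corollary 1.3.

```python
import numpy as np

# Illustrative discrete prior: spiked Bernoulli, S ~ Ber(rho).
rho, delta = 0.1, 5e-3
support, weights = np.array([0.0, 1.0]), np.array([1.0 - rho, rho])
v = float(weights @ support**2)

# Gauss-Hermite quadrature for the average over Z ~ N(0, 1) in (1).
gh_x, gh_w = np.polynomial.hermite_e.hermegauss(81)
gh_w = gh_w / gh_w.sum()

def i_rs(E, delta):
    """Replica symmetric potential (1) for the discrete prior P0."""
    sigma2 = delta / (v - E)                           # Sigma(E; Delta)^2
    log_term = 0.0
    for s, ps in zip(support, weights):
        h = s / sigma2 + gh_x / np.sqrt(sigma2)        # S/Sigma^2 + Z/Sigma
        expo = (-support[None, :]**2 / (2.0 * sigma2)
                + np.outer(h, support) + np.log(weights)[None, :])
        m = expo.max(axis=1, keepdims=True)            # stable log-sum-exp over P0
        log_term += ps * (gh_w @ (m[:, 0] + np.log(np.exp(expo - m).sum(axis=1))))
    return ((v - E)**2 + v**2) / (4.0 * delta) - log_term

Es = np.linspace(1e-9, v * (1.0 - 1e-9), 500)          # avoid the endpoint E = v
vals = np.array([i_rs(E, delta) for E in Es])
E_star = Es[vals.argmin()]
print(f"min_E i_RS(E; Delta) = {vals.min():.5f} attained at E* = {E_star:.5f}")
# Corollary 1.3: asymptotic matrix-MMSE = v^2 - (v - E*)^2.
print(f"asymptotic matrix-MMSE ~ {v**2 - (v - E_star)**2:.4e}")
```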
It is natural to conjecture that the vector-MMSE is given by argminE∈[0,v]iRS(E; ∆) for all ∆ 6=∆RS, but our proof does not quite yield the full statement. A fundamental consequence concerns the performance of the AMP algorithm [6] for estimating s. AMP has been analysed rigorously in [11, 12, 4] where it is shown that its asymptotic performance is tracked by state evolution (SE). Let Et :=limn→∞ ES,Z[‖S− ŝt‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝt at time t. Define mmse(Σ−2) :=ES,Z [(S−E[X|S+ΣZ])2] as the usual scalar mmse function associated to a scalar AWGN channel of noise variance Σ2, with S∼P0 and Z∼N (0, 1). Then Et+1 = mmse(Σ(Et; ∆)−2), E0 = v, (2) is the SE recursion. Monotonicity properties of the mmse function imply that Et is a decreasing sequence s.t limt→∞Et=E∞ exists. Note that when E[S] = 0 and v is an unstable fixed point, as such, SE “does not start”. While this is not really a problem when one runs AMP in practice, for analysis purposes one can slightly bias P0 and remove the bias at the end of the proofs. Definition 1.4 (AMP algorithmic threshold) For ∆ > 0 small enough, the fixed point equation corresponding to (2) has a unique solution for all noise values in ]0,∆[. We define ∆AMP as the supremum of all such ∆. Corollary 1.5 (Performance of AMP) In the limit n→∞, AMP initialized without any knowledge other than P0 yields upon convergence the asymptotic matrix-MMSE as well as the asymptotic vector-MMSE iff ∆<∆AMP or ∆>∆RS, namely E∞=argminE∈[0,v]iRS(E; ∆). ∆AMP can be read off the replica potential (1): by differentiation of (1) one finds a fixed point equation that corresponds to (2). Thus ∆AMP is the smallest solution of ∂iRS/∂E=∂2iRS/∂E2 =0; in other words it is the “first” horizontal inflexion point appearing in iRS(E; ∆) when ∆ increases. Discussion: With our hypothesis on P0 there are only three possible scenarios: ∆AMP < ∆RS (one “first order” phase transition); ∆AMP = ∆RS < ∞ (one “higher order” phase transition); ∆AMP = ∆RS =∞ (no phase transition). In the sequel we will have in mind the most interesting case, namely one first order phase transition, where we determine the gap between the algorithmic AMP and information theoretic performance. The cases of no phase transition or higher order phase transition, which present no algorithmic gap, are basically covered by the analysis of [3] and follow as a special case from our proof. The only cases that would require more work are those where P0 is s.t (1) develops more than three stationary points and more than one phase transition is present. For ∆AMP<∆RS the structure of stationary points of (1) is as follows1 (figure 1). There exist three branchesEgood(∆), Eunstable(∆) andEbad(∆) s.t: 1) For 0<∆<∆AMP there is a single stationary point Egood(∆) which is a global minimum; 2) At ∆AMP a horizontal inflexion point appears, for ∆∈ [∆AMP,∆RS] there are three stationary points satisfying Egood(∆AMP)<Eunstable(∆AMP)= Ebad(∆AMP), Egood(∆) < Eunstable(∆) < Ebad(∆) otherwise, and moreover iRS(Egood; ∆) ≤ iRS(Ebad; ∆) with equality only at ∆RS; 3) for ∆ > ∆RS there is at least the stationary point Ebad(∆) which is always the global minimum, i.e. iRS(Ebad; ∆)<iRS(Egood; ∆). (For higher ∆ the Egood(∆) and Eunstable(∆) branches may merge and disappear); 4) Egood(∆) is analytic for ∆∈]0,∆′[, ∆′>∆RS, and Ebad(∆) is analytic for ∆>∆AMP. We note for further use in the proof section that E∞=Egood(∆) for ∆<∆AMP and E∞=Ebad(∆) for ∆>∆AMP. Definition 1.4 is equivalent to ∆AMP = sup{∆|E∞=Egood(∆)}. 
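Definition 1.4 and corollary 1.5 suggest a simple numerical experiment: iterate the state evolution recursion (2) from the uninformative initialization E0 = v and, separately, from a nearly perfect initialization, and compare the two limits as ∆ varies; a disagreement signals several SE fixed points and hence a ∆ lying between the two branches. The sketch below is our own illustration in Python with arbitrary parameter values (it is not code from the paper), using a quadrature-based scalar mmse function.

```python
import numpy as np

rho = 0.02                                   # sparse spiked Bernoulli prior
support, weights = np.array([0.0, 1.0]), np.array([1.0 - rho, rho])
v = float(weights @ support**2)

gh_x, gh_w = np.polynomial.hermite_e.hermegauss(61)
gh_w = gh_w / gh_w.sum()

def scalar_mmse(sigma2):
    """E_{S,Z}[(S - E[X | S + Sigma*Z])^2] for the discrete prior."""
    sigma = np.sqrt(sigma2)
    out = 0.0
    for s, ps in zip(support, weights):
        y = s + sigma * gh_x
        logp = -(y[:, None] - support[None, :])**2 / (2.0 * sigma2) + np.log(weights)
        post = np.exp(logp - logp.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        out += ps * (gh_w @ (s - post @ support)**2)
    return out

def se_fixed_point(delta, E0, iters=10000, tol=1e-12):
    """Iterate E_{t+1} = mmse(Sigma(E_t; Delta)^{-2}) with Sigma^2 = Delta/(v - E)."""
    E = E0
    for _ in range(iters):
        E_new = scalar_mmse(delta / max(v - E, 1e-15))
        if abs(E_new - E) < tol:
            break
        E = E_new
    return E

# Compare the fixed point reached from the uninformative start E0 = v (what AMP
# "sees") with the one reached from a nearly perfect start, over a range of Delta.
for delta in np.linspace(2e-4, 7e-4, 11):
    e_amp = se_fixed_point(delta, E0=v)
    e_informed = se_fixed_point(delta, E0=1e-6 * v)
    flag = "  <-- several SE fixed points" if abs(e_amp - e_informed) > 1e-6 else ""
    print(f"Delta = {delta:.2e}: E(uninformative) = {e_amp:.3e}, "
          f"E(informed) = {e_informed:.3e}{flag}")
```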
Moreover we will also use that iRS(Egood; ∆) is analytic on ]0,∆′[, iRS(Ebad; ∆) is analytic on ]∆AMP,∞[, and the only non-analyticity point of minE∈[0,v] iRS(E; ∆) is at ∆RS. Relation to other works: Explicit single-letter characterization of the MI in the rank-one problem has attracted a lot of attention recently. Particular cases of theorem 1.1 have been shown rigorously in a number of situations. A special case when si=±1∼Ber(1/2) already appeared in [13] where an equivalent spin glass model is analysed. Very recently, [9] has generalized the results of [13] and, notably, obtained a generic matching upper bound. The same formula has been also rigorously computed following the study of AMP in [3] for spiked models (provided, however, that the signal was not too sparse) and in [4] for strictly symmetric community detection. 1We take E[S] 6= 0. Once theorem 1.1 is proven for this case a limiting argument allows to extend it to E[S]=0. For rank-one symmetric matrix estimation problems, AMP has been introduced by [6], who also computed the SE formula to analyse its performance, generalizing techniques developed by [11] and [12]. SE was further studied by [3] and [4]. In [7, 10], the generalization to larger rank was also considered. The general formula proposed by [10] for the conditional entropy and the MMSE on the basis of the heuristic cavity method from statistical physics was not demonstrated in full generality. Worst, all existing proofs could not reach the more interesting regime where a gap between the algorithmic and information theoretic perfomances appears, leaving a gap with the statistical physics conjectured formula (and rigorous upper bound from [9]). Our result closes this conjecture and has interesting non-trivial implications on the computational complexity of these tasks. Our proof technique combines recent rigorous results in coding theory along the study of capacityachieving spatially coupled codes [14, 15, 16, 17] with other progress, coming from developments in mathematical physics putting on a rigorous basis predictions of spin glass theory [18]. From this point of view, the theorem proved in this paper is relevant in a broader context going beyond low-rank matrix estimation. Hundreds of papers have been published in statistics, machine learning or information theory using the non-rigorous statistical physics approach. We believe that our result helps setting a rigorous foundation of a broad line of work. While we focus on rank-one symmetric matrix estimation, our proof technique is readily extendable to more generic low-rank symmetric matrix or low-rank symmetric tensor estimation. We also believe that it can be extended to other problems of interest in machine learning and signal processing, such as generalized linear regression, features/dictionary learning, compressed sensing or multi-layer neural networks. 2 Two examples: Wigner spiked model and community detection In order to illustrate the consequences of our results we shall present two examples. Wigner spiked model: In this model, the vector s is a Bernoulli random vector, Si∼Ber(ρ). For large enough densities (i.e. ρ>0.041(1)), [3] computed the matrix-MMSE and proved that AMP is a computationally efficient algorithm that asymptotically achieves the matrix-MMSE for any value of the noise ∆. 
Our results allow to close the gap left open by [3]: on one hand we now obtain rigorously the MMSE for ρ≤ 0.041(1), and on the other one we observe that for such values of ρ, and as ∆ decreases, there is a small region where two local minima coexist in iRS(E; ∆). In particular for ∆AMP<∆<∆Opt = ∆RS the global minimum corresponding to the MMSE differs from the local one that traps AMP, and a computational gap appears (see figure 1). While the region where AMP is Bayes optimal is quite large, the region where is it not, however, is perhaps the most interesting one. While this is by no means evident, statistical physics analogies with physical phase transitions in nature suggest that this region should be hard for a very broad class of algorithms. For small ρ our results are consistent with the known optimal and algorithmic thresholds predicted in sparse PCA [19, 20], that treats the case of sub-extensive ρ=O(1) values. Another interesting line of work for such probabilistic models appeared in the context of random matrix theory (see [8] and references therein) and predicts that a sharp phase transition occurs at a critical value of the noise ∆spectral =ρ2 below which an outlier eigenvalue (and its principal eigenvector) has a positive correlation with the hidden signal. For larger noise values the spectral distribution of the observation is indistinguishable from that of the pure random noise. Asymmetric balanced community detection: We now consider the problem of detecting two communities (groups) with different sizes ρn and (1− ρ)n, that generalizes the one considered in [4]. One is given a graph where the probability to have a link between nodes in the first group is p+µ(1−ρ)/(ρ √ n), between those in the second group is p+µρ/( √ n(1−ρ)), while interconnections appear with probability p−µ/ √ n. With this peculiar “balanced” setting, the nodes in each group have the same degree distribution with mean pn, making them harder to distinguish. According to the universality property described in the first section, this is equivalent to a model with AWGN of variance ∆ = p(1−p)/µ2 where each variable si is chosen according to P0(s)=ρδ(s− √ (1−ρ)/ρ)+(1−ρ)δ(s+ √ ρ/(1−ρ)). Our results for this problem2 are summarized on the right hand side of figure 2. For ρ > ρc = 1/2− √ 1/12 (black point), it is asymptotically information theoretically possible to get an estimation better than chance if and only if ∆<1. When ρ<ρc, however, it becomes possible for much larger values of the noise. Interestingly, AMP and spectral methods have the same transition and can find a positive correlation with the hidden communities for ∆<1, regardless of the value of ρ. Again, a region [∆AMP,∆Opt =∆RS] exists where a computational gap appears when ρ<ρc. One can investigate the very low ρ regime where we find that the information theoretic transition goes as ∆Opt(ρ→0) = 1/(4ρ| log ρ|). Now if we assume that this result stays true even for ρ= O(1) (which is a speculation at this point), we can choose µ→(1−p)ρ √ n such that the small group is a clique. Then the problem corresponds to a “balanced” version of the famous planted clique problem [21]. We find that the AMP/spectral approach finds the 2Note that here since E=v=1 is an extremum of iRS(E; ∆), one must introduce a small bias in P0 and let it then tend to zero at the end of the proofs. hidden clique when it is larger than √ np/(1−p), while the information theoretic transition translates into size of the clique 4p log(n)/(1−p). 
This is indeed reminiscent of the more classical planted clique problem at p=1/2 with its gap between log(n) (information theoretic), √ n/e (AMP [22]) and √ n (spectral [21]). Since in our balanced case the spectral and AMP limits match, this suggests that the small gain of AMP in the standard clique problem is simply due to the information provided by the distribution of local degrees in the two groups (which is absent in our balanced case). We believe this correspondence strengthens the claim that the AMP gap is actually a fundamental one. 3 Proofs The crux of our proof rests on an auxiliary “spatially coupled system”. The hallmark of spatially coupled models is that one can tune them so that the gap between the algorithmic and information theoretic limits is eliminated, while at the same time the MI is maintained unchanged for the coupled and original models. Roughly speaking, this means that it is possible to algorithmically compute the information theoretic limit of the original model because a suitable algorithm is optimal on the coupled system. The present spatially coupled construction is similar to the one used for the coupled Curie-Weiss model [14]. Consider a ring of length L+1 (L even) with blocks positioned at µ∈{0, . . . , L} and coupled to neighboring blocks {µ−w, . . . , µ+w}. Positions µ are taken modulo L+1 and the integer w∈{0, . . . , L/2} equals the size of the coupling window. The coupled model is wiµjν = siµsjν √ Λµν n + ziµjν √ ∆, (3) where the index iµ∈{1, . . . , n} (resp. jν) belongs to the block µ (resp. ν) along the ring, Λ is an (L+1)×(L+1) matrix which describes the strength of the coupling between blocks, andZiµjν ∼N (0, 1) are i.i.d. For the proof to work, the matrix elements have to be chosen appropriately. We assume that: i) Λ is a doubly stochastic matrix; ii) Λµν depends on |µ−ν|; iii) Λµν is not vanishing for |µ−ν| ≤ w and vanishes for |µ−ν|>w; iv) Λ is smooth in the sense |Λµν−Λµ+1ν |=O(w−2); v) Λ has a non-negative Fourier transform. All these conditions can easily be met, the simplest example being a triangle of base 2w+1 and height 1/(w+1). The construction of the coupled system is completed by introducing a seed in the ring: we assume perfect knowledge of the signal components {siµ} for µ∈B :={−w−1, . . . , w−1} mod L+1. This seed is what allows to close the gap between the algorithmic and information theoretic limits and therefore plays a crucial role. Note it can also be viewed as an “opening” of the chain with fixed boundary conditions. Our first crucial result states that the MI Iw,L(S; W) of the coupled and original systems are the same in a suitable limit. Lemma 3.1 (Equality of mutual informations) For any fixed w the following limits exist and are equal: limL→∞ limn→∞ Iw,L(S; W)/(n(L+1))=limn→∞ I(S; W)/n. An immediate corollary is that non-analyticity points (w.r.t ∆) of the MIs are the same in the coupled and original models. In particular, defining ∆Opt,coup := sup{∆ | limL→∞ limn→∞ Iw,L(S; W)/(n(L+1)) is analytic in ]0,∆[}, we have ∆Opt,coup =∆Opt. The second crucial result states that the AMP threshold of the spatially coupled system is at least as good as ∆RS. The analysis of AMP applies to the coupled system as well [11, 12] and it can be shown that the performance of AMP is assessed by SE. Let Etµ := limn→∞ ES,Z[‖Sµ− ŝ t µ‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝtµ at time t for the µ-th “block” of S. We associate to each position µ ∈ {0, . . . 
, L} an independent scalar system with AWGN of the form Y =S+Σµ(E; ∆)Z, with Σµ(E; ∆)2 := ∆/(v− ∑L ν=0 ΛµνEν) and S∼P0, Z∼N (0, 1). Taking into account knowledge of the signal components in B, SE reads: Et+1µ = mmse(Σµ(E t; ∆)−2), E0µ = v for µ ∈ {0, . . . , L} \ B, Etµ = 0 for µ ∈ B, t ≥ 0, (4) where the mmse function is defined as in section 1. From the monotonicity of the mmse function we have Et+1µ ≤Etµ for all µ∈{0, . . . , L}, a partial order which implies that limt→∞ E t= E∞ exists. This allows to define an algorithmic threshold for the coupled system: ∆AMP,w,L :=sup{∆|E∞µ ≤ Egood(∆) ∀ µ}. We show (equality holds but is not directly needed): Lemma 3.2 (Threshold saturation) Let ∆AMP,coup := lim infw→∞ lim infL→∞∆AMP,w,L. We have ∆AMP,coup≥∆RS. Proof sketch of theorem 1.1: First we prove the RS formula for ∆ ≤ ∆Opt. It is known [3] that the matrix-MSE of AMP when n→∞ is equal to v2−(v−Et)2. This cannot improve the matrix-MMSE, hence (v2−(v−E∞)2)/4≥ lim supn→∞Mmmsen/4. For ∆≤∆AMP we have E∞=Egood(∆) which is the global minimum of (1) so the left hand side of the last inequality equals the derivative of minE∈[0,v] iRS(E; ∆) w.r.t ∆−1. Thus using the matrix version of the I-MMSE relation [23] we get d d∆−1 min E∈[0,v] iRS(E; ∆) ≥ lim sup n→∞ 1 n dI(S; W) d∆−1 . (5) Integrating this relation on [0,∆] ⊂ [0,∆AMP] and checking that minE∈[0,v] iRS(E; 0) = H(S) (the Shannon entropy of P0) we obtain minE∈[0,v] iRS(E; ∆)≤ lim infn→∞ I(S; W)/n. But we know I(S; W)/n≤minE∈[0,v] iRS(E; ∆) [9], thus we already get theorem 1.1 for ∆≤∆AMP. We notice that ∆AMP≤∆Opt. While this might seem intuitively clear, it follows from ∆RS≥∆AMP (by their definitions) which together with ∆AMP > ∆Opt would imply from theorem 1.1 that limn→∞ I(S; W)/n is analytic at ∆Opt, a contradiction. The next step is to extend theorem 1.1 to the range [∆AMP,∆Opt]. Suppose for a moment ∆RS≥∆Opt. Then both functions on each side of the RS formula are analytic on the whole range ]0,∆Opt[ and since they are equal for ∆≤∆AMP, they must be equal on their whole analyticity range and by continuity, they must also be equal at ∆Opt (that the functions are continuous follows from independent arguments on the existence of the n→∞ limit of concave functions). It remains to show that ∆RS∈ ]∆AMP,∆Opt[ is impossible. We proceed by contradiction, so suppose this is true. Then both functions on each side of the RS formula are analytic on ]0,∆RS[ and since they are equal for ]0,∆AMP[⊂]0,∆RS[ they must be equal on the whole range ]0,∆RS[ and also at ∆RS by continuity. For ∆>∆RS the fixed point of SE is E∞=Ebad(∆) which is also the global minimum of iRS(E; ∆), hence (5) is verified. Integrating this inequality on ]∆RS,∆[⊂]∆RS,∆Opt[ and using I(S; W)/n≤minE∈[0,v] iRS(E; ∆) again, we find that the RS formula holds for all ∆∈ [0,∆Opt]. But this implies that minE∈[0,v] iRS(E; ∆) is analytic at ∆RS, a contradiction. We now prove the RS formula for ∆≥∆Opt. Note that the previous arguments showed that necessarily ∆Opt≤∆RS. Thus by lemmas 3.1 and 3.2 (and the sub-optimality of AMP as shown as before) we obtain ∆RS ≤ ∆AMP,coup≤∆Opt,coup = ∆Opt≤∆RS. This shows that ∆Opt = ∆RS (this is the point where spatial coupling came in the game and we do not know of other means to prove such an equality). For ∆>∆RS we have E∞ =Ebad(∆) which is the global minimum of iRS(E; ∆). Therefore we again have (5) in this range and the proof can be completed by using once more the integration argument, this time over the range [∆RS,∆]=[∆Opt,∆]. 
Proof sketch of corollaries 1.3 and 1.5: LetE∗(∆)=argminEiRS(E; ∆) for ∆ 6=∆RS. By explicit calculation one checks that diRS(E∗,∆)/d∆−1 =(v2−(v−E∗(∆))2)/4, so from theorem 1.1 and the matrix form of the I-MMSE relation we find Mmmsen→v2−(v−E∗(∆))2 as n→∞ which is the first part of the statement of corollary 1.3. Let us now turn to corollary 1.5. For n→∞ the vectorMSE of the AMP estimator at time t equals Et, and since the fixed point equation corresponding to SE is precisely the stationarity equation for iRS(E; ∆), we conclude that for ∆ /∈ [∆AMP,∆RS] we must have E∞=E∗(∆). It remains to prove that E∗(∆)=limn→∞Vmmsen(∆) at least for ∆ /∈ [∆AMP,∆RS] (we believe this is in fact true for all ∆). This will settle the second part of corollary 1.3 as well as 1.5. Using (Nishimori) identities ES,W[SiSjE[XiXj |W]]=ES,W[E[XiXj |W]2] (see e.g. [9]) and using the law of large numbers we can show limn→∞Mmmsen ≤ limn→∞(v2− (v− Vmmsen(∆)) 2). Concentration techniques similar to [13] suggest that the equality in fact holds (for ∆ 6= ∆RS) but there are technicalities that prevent us from completing the proof of equality. However it is interesting to note that this equality would imply E∗(∆)=limn→∞Vmmsen(∆) for all ∆ 6=∆RS. Nevertheless, another argument can be used when AMP is optimal. On one hand the right hand side of the inequality is necessarily smaller than v2−(v−E∞)2. On the other hand the left hand side of the inequality is equal to v2−(v−E∗(∆))2. SinceE∗(∆)=E∞ when ∆ /∈ [∆AMP,∆RS], we can conclude limn→∞Vmmsen(∆)=argminEiRS(E; ∆) for this range of ∆. Proof sketch of lemma 3.1: Here we prove the lemma for a ring that is not seeded. An easy argument shows that a seed of size w does not change the MI per variable when L→∞. The statistical physics formulation is convenient: up to the trivial additive term n(L+1)v2/4, the MI Iw,L(S; W) equals the free energy −ES,Z[lnZw,L], where Zw,L := ∫ dxP0(x) exp(−H(x, z,Λ)) and H(x, z,Λ) = 1 ∆ L∑ µ=0 ( Λµµ ∑ iµ≤jµ Aiµjµ(x, z,Λ) + µ+w∑ ν=µ+1 Λµν ∑ iµ,jν Aiµjν (x, z,Λ) ) , (6) with Aiµjν (x, z,Λ) :=(x2iµx 2 jν )/(2n)−(siµsjνxiµxjν )/n−(xiµxjνziµjν √ ∆)/ √ nΛµν . Consider a pair of systems with coupling matrices Λ and Λ′ and i.i.d noize realizations z, z′, an interpolated HamiltonianH(x, z, tΛ)+H(x, z′, (1−t)Λ′), t ∈ [0, 1], and the corresponding partition function Zt. The main idea of the proof is to show that for suitable choices of matrices,− ddtES,Z,Z′ [lnZt]≤0 for all t∈ [0, 1] (up to negligible terms), so that by the fundamental theorem of calculus, we get a comparison between the free energies ofH(x, z,Λ) andH(x, z′,Λ′). Performing the t-derivative brings down a Gibbs average of a polynomial in all variables siµ , xiµ , ziµjν and z ′ iµjν . This expectation over S, Z, Z′ of this Gibbs average is simplified using integration by parts over the Gaussian noise ziµjν , z′iµjν and Nishimori identities (see e.g. proof of corollary 1.3 for one of them). This algebra leads to − 1 n(L+ 1) d dt ES,Z,Z′ [lnZt] = 1 4∆(L+ 1) ES,Z,Z′ [〈qᵀΛq− qᵀΛ′q〉t] +O(1/(nL)), (7) where 〈−〉t is the Gibbs average w.r.t the interpolated Hamiltonian, q is the vector of overlaps qµ := ∑n iµ=1 siµxiµ/n. If we can choose matrices s.t Λ ′ >Λ, the difference of quadratic forms in the Gibbs bracket is negative and we obtain an inequality in the large size limit. We use this scheme to interpolate between the fully decoupled system w=0 and the coupled one 1≤w<L/2 and then between 1≤w <L/2 and the fully connected system w=L/2. The w= 0 system has Λµν = δµν with eigenvalues (1, 1, . . . , 1). 
For the 1 ≤ w < L/2 system, we take any stochastic translation invariant matrix with non-negative discrete Fourier transform (of its rows): such matrices have an eigenvalue equal to 1 and all others in [0, 1[ (the eigenvalues are precisely equal to the discrete Fourier transform). For w = L/2 we choose Λµν = 1/(L+1) which is a projector with eigenvalues (0, 0, . . . , 1). With these choices we deduce that the free energies and MIs are ordered as Iw=0,L +O(1)≤Iw,L +O(1)≤Iw=L/2,L +O(1). To conclude the proof we divide by n(L+1) and note that the limits of the leftmost and rightmost MIs are equal, provided the limit exists. Indeed the leftmost term equals L times I(S; W) and the rightmost term is the same MI for a system of n(L+1) variables. Existence of the limit follows by subadditivity, proven by a similar interpolation [18]. Proof sketch of lemma 3.2: Fix ∆ < ∆RS. We show that, for w large enough, the coupled SE recursion (4) must converge to a fixed point E∞µ ≤Egood(∆) for all µ. The main intuition behind the proof is to use a “potential function” whose “energy” can be lowered by small perturbation of a fixed point that would go above Egood(∆) [16, 17]. The relevant potential function iw,L(E,∆) is in fact the replica potential of the coupled system (a generalization of (1)). The stationarity condition for this potential is precisely (4) (without the seeding condition). Monotonicity properties of SE ensure that any fixed point has a “unimodal” shape (and recall that it vanishes for µ∈B= {0, . . . , w−1}∪{L−w, . . . , L}). Consider a position µmax∈{w, . . . , L−w−1} where it is maximal and suppose that E∞µmax > Egood(∆). We associate to the fixed point E ∞ a so-called saturated profile Es defined on the whole of Z as follows: Esµ=Egood(∆) for all µ≤µ∞ where µ∞+1 is the smallest position s.t E∞µ >Egood(∆); E s µ=E ∞ µ for µ∈{µ∞+1, . . . , µmax−1}; Esµ=E∞µmax for all µ≥µmax. We show that Es cannot exist for w large enough. To this end define a shift operator by [S(Es)]µ :=Esµ−1. On one hand the shifted profile is a small perturbation of E s which matches a fixed point, except where it is constant, so if we Taylor expand, the first order vanishes and the second order and higher orders can be estimated as |iw,L(S(Es); ∆)−iw,L(Es; ∆)|=O(1/w) uniformly in L. On the other hand, by explicit cancellation of telescopic sums iw,L(S(Es); ∆)−iw,L(Es; ∆)= iRS(Egood; ∆)−iRS(E∞µmax ; ∆). Now one can show from monotonicity properties of SE that if E ∞ is a non trivial fixed point of the coupled SE then E∞µmax cannot be in the basin of attraction of Egood(∆) for the uncoupled SE recursion. Consequently as can be seen on the plot of iRS(E; ∆) (e.g. figure 1) we must have iRS(E∞µmax ; ∆)≥ iRS(Ebad; ∆). Therefore iw,L(S(E s); ∆)−iw,L(Es; ∆)≤ −|iRS(Ebad; ∆)−iRS(Egood; ∆)| which is an energy gain independent of w, and for large enough w we get a contradiction with the previous estimate coming from the Taylor expansion. Acknowledgments J.B and M.D acknowledge funding from the SNSF (grant 200021-156672). Part of this research received funding from the ERC under the EU’s 7th Framework Programme (FP/2007-2013/ERC Grant Agreement 307087-SPARCS). F.K and L.Z thank the Simons Institute for its hospitality.
1. What is the main contribution of the paper regarding mutual information in factoring low-rank matrices? 2. What are the strengths and weaknesses of the paper's approach in using state evolution and spatial coupling? 3. How does the reviewer assess the clarity and conciseness of the paper's presentation? 4. Are there any concerns or questions regarding the paper's analysis, assumptions, and results, particularly in the context of community detection? 5. Are there any issues with the paper's figures, specifically regarding the consistency with the Maxwell rule? 6. Is there a lack of clear definition or explanation of certain terms, such as "matrix-MMSE"? 7. Are there any errors or inconsistencies in the references provided?
Review
Review This paper proves, in a rank-one case, a conjectured general expression for the mutual information in the problem of factoring low-rank matrices from noisy observations. The key ideas are to use state evolution to analyse the behavior of approximate message passing (AMP) applied to the problem, and to apply the idea of spatial coupling from coding theory in order to make AMP perform well beyond the threshold at which conventional AMP fails. Even though the argument in this paper is limited to a rank-one case, it nevertheless provides results covering various problems in application domains, such as the spiked Wigner model and community detection in the stochastic block model. I think that the main problem with this paper is that it is very dense in showing some details of the proofs of the main results. I feel that this is still the case even taking into account the theoretical nature of the paper. Due to the page limitation, the description in this paper was forced to be brief, which altogether makes the paper hard to follow. It would have been better if some parts of the description of the proofs were moved to supplementary material, giving more space to the discussion section in order to appeal to a wider audience of the conference. Lines 153-154: I do not understand what is meant here. Is the analysis valid for any arbitrary scaling of \rho in n? If so, then it should be stated explicitly, but it seems that \rho=O(1) is assumed in the analysis. In the community detection problem, I wonder whether the assertion \Delta_Opt=1 for \rho>\rho_c is valid in view of Definition 1.2, as well as whether \Delta_AMP for the same range of \rho is valid in terms of Definition 1.4. It seems that for \rho>\rho_c, at \Delta=1 a second-order transition occurs, so that although the former assertion would be fine, the latter would not be justifiable. In the insets of Figure 2, it seems as if the MSE for the MMSE, as a function of \Delta, has an infinite derivative at \Delta_Opt, which would seem inconsistent with the Maxwell rule. Lines 227-8: The term matrix-MMSE is used without proper definition. I do not understand what is meant by the sentence "This cannot improve the matrix-MMSE." In the reference list, some items (references 2, 3, 7, 8, 14, 18, and 23, as far as I noticed) have incomplete bibliographic information.
NIPS
Title Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula Abstract Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression for the mutual information has been proposed using heuristic statistical physics computations, and proven in few specific cases. Here, we show how to rigorously prove the conjectured formula for the symmetric rank-one case. This allows to express the minimal mean-square-error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal. There exists, however, a gap between what currently known polynomial algorithms can do and what is expected information theoretically. Additionally, the proof technique has an interest of its own and exploits three essential ingredients: the interpolation method introduced in statistical physics by Guerra, the analysis of the approximate message-passing algorithm and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in statistical estimation where heuristic statistical physics predictions are available. Consider the following probabilistic rank-one matrix estimation problem: one has access to noisy observations w = (wij)ni,j=1 of the pair-wise product of the components of a vector s = (s1, . . . , sn)ᵀ ∈ Rn with i.i.d components distributed as Si ∼ P0, i = 1, . . . , n. The entries of w are observed through a noisy element-wise (possibly non-linear) output probabilistic channel Pout(wij |sisj/ √ n). The goal is to estimate the vector s from w assuming that both P0 and Pout are known and independent of n (noise is symmetric so that wij =wji). Many important problems in statistics and machine learning can be expressed in this way, such as sparse PCA [1], the Wigner spiked model [2, 3], community detection [4] or matrix completion [5]. Proving a result initially derived by a heuristic method from statistical physics, we give an explicit expression for the mutual information (MI) and the information theoretic minimal mean-square-error (MMSE) in the asymptotic n→∞ limit. Our results imply that for a large region of parameters, the posterior marginal expectations of the underlying signal components (often assumed intractable to compute) can be obtained in the leading order in n using a polynomial-time algorithm called approximate message-passing (AMP) [6, 3, 4, 7]. We also demonstrate the existence of a region where both AMP and spectral methods [8] fail to provide a good answer to the estimation problem, while it is nevertheless information theoretically possible to do so. We illustrate our theorems with examples and also briefly discuss the implications in terms of computational complexity. 1 Setting and main results The additive white Gaussian noise setting: A standard and natural setting is the case of additive white Gaussian noise (AWGN) of known variance ∆, wij =sisj/ √ n+zij √ ∆, where z=(zij)ni,j=1 is a symmetric matrix with i.i.d entries Zij ∼N (0, 1), 1≤ i≤ j≤n. Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels.
This is made possible by a theorem of channel universality [9] (already proven for community detection in [4] and conjectured in [10]). This theorem states that given an output channel Pout(w|y), such that (s.t) logPout(w|y=0) is three times differentiable with bounded second and third derivatives, then the MI satisfies I(S; W)= I(S; SSᵀ/ √ n+Z √ ∆)+O( √ n), where ∆ is the inverse Fisher information (evaluated at y=0) of the output channel: ∆−1 := EPout(w|0)[(∂y logPout(W |y)|y=0)2]. Informally, this means that we only have to compute the MI for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions P0, an explicit one-letter formula for the MI per variable I(S; W)/n in the asymptotic limit n→∞. Main result: Our central result is a proof of the expression for the asymptotic n→∞MI per variable via the so-called replica symmetric (RS) potential iRS(E; ∆) defined as iRS(E; ∆) := (v − E)2 + v2 4∆ − ES,Z [ ln (∫ dxP0(x)e − x2 2Σ(E;∆)2 +x ( S Σ(E;∆)2 + Z Σ(E;∆) ))] , (1) with Z∼N (0, 1), S∼P0, E[S2]=v and Σ(E; ∆)2 :=∆/(v−E), E∈ [0, v]. Here we will assume that P0 is a discrete distribution over a finite bounded real alphabet P0(s)= ∑ν α=1 pαδ(s−aα). Thus the only continuous integral in (1) is the Gaussian over z. Our results can be extended to mixtures of discrete and continuous signal distributions at the expense of technical complications in some proofs. It turns out that both the information theoretic and algorithmic AMP thresholds are determined by the set of stationary points of (1) (w.r.t E). It is possible to show that for all ∆>0 there always exist at least one stationary minimum. Note E=0 is never a stationary point (except for P0 a single Dirac mass) and E= v is stationary only if E[S] = 0. In this contribution we suppose that at most three stationary points exist, corresponding to situations with at most one phase transition. We believe that situations with multiple transitions can also be covered by our techniques. Theorem 1.1 (RS formula for the mutual information) Fix ∆>0 and let P0 be a discrete distribution s.t (1) has at most three stationary points. Then limn→∞ I(S; W)/n=minE∈[0,v] iRS(E; ∆). The proof of the existence of the limit does not require the above hypothesis on P0. Also, it was first shown in [9] that for all n, I(S; W)/n≤minE∈[0,v] iRS(E; ∆), an inequality that we will use in the proof section. It is conceptually useful to define the following threshold: Definition 1.2 (Information theoretic threshold) Define ∆Opt as the first non-analyticity point of the MI as ∆ increases: ∆Opt :=sup{∆| limn→∞ I(S; W)/n is analytic in ]0,∆[}. When P0 is s.t (1) has at most three stationary points, as discussed below, then minE∈[0,v] iRS(E; ∆) has at most one non-analyticity point denoted ∆RS (if minE∈[0,v] iRS(E; ∆) is analytic over all R+ we set ∆RS =∞). Theorem 1.1 gives us a mean to compute the information theoretic threshold ∆Opt =∆RS. A basic application of theorem 1.1 is the expression of the MMSE: Corollary 1.3 (Exact formula for the MMSE) For all ∆ 6= ∆RS, the matrix-MMSE Mmmsen := ES,W[‖SSᵀ − E[XXᵀ|W]‖2F]/n2 (‖ − ‖F being the Frobenius norm) is asymptotically limn→∞Mmmsen(∆ −1) = v2−(v−argminE∈[0,v]iRS(E; ∆))2. Moreover, if ∆<∆AMP (where ∆AMP is the algorithmic threshold, see definition 1.4) or ∆>∆RS, then the usual vector-MMSE Vmmsen :=ES,W[‖S−E[X|W]‖22]/n satisfies limn→∞Vmmsen=argminE∈[0,v]iRS(E; ∆). 
It is natural to conjecture that the vector-MMSE is given by argminE∈[0,v]iRS(E; ∆) for all ∆ 6=∆RS, but our proof does not quite yield the full statement. A fundamental consequence concerns the performance of the AMP algorithm [6] for estimating s. AMP has been analysed rigorously in [11, 12, 4] where it is shown that its asymptotic performance is tracked by state evolution (SE). Let Et :=limn→∞ ES,Z[‖S− ŝt‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝt at time t. Define mmse(Σ−2) :=ES,Z [(S−E[X|S+ΣZ])2] as the usual scalar mmse function associated to a scalar AWGN channel of noise variance Σ2, with S∼P0 and Z∼N (0, 1). Then Et+1 = mmse(Σ(Et; ∆)−2), E0 = v, (2) is the SE recursion. Monotonicity properties of the mmse function imply that Et is a decreasing sequence s.t limt→∞Et=E∞ exists. Note that when E[S] = 0 and v is an unstable fixed point, as such, SE “does not start”. While this is not really a problem when one runs AMP in practice, for analysis purposes one can slightly bias P0 and remove the bias at the end of the proofs. Definition 1.4 (AMP algorithmic threshold) For ∆ > 0 small enough, the fixed point equation corresponding to (2) has a unique solution for all noise values in ]0,∆[. We define ∆AMP as the supremum of all such ∆. Corollary 1.5 (Performance of AMP) In the limit n→∞, AMP initialized without any knowledge other than P0 yields upon convergence the asymptotic matrix-MMSE as well as the asymptotic vector-MMSE iff ∆<∆AMP or ∆>∆RS, namely E∞=argminE∈[0,v]iRS(E; ∆). ∆AMP can be read off the replica potential (1): by differentiation of (1) one finds a fixed point equation that corresponds to (2). Thus ∆AMP is the smallest solution of ∂iRS/∂E=∂2iRS/∂E2 =0; in other words it is the “first” horizontal inflexion point appearing in iRS(E; ∆) when ∆ increases. Discussion: With our hypothesis on P0 there are only three possible scenarios: ∆AMP < ∆RS (one “first order” phase transition); ∆AMP = ∆RS < ∞ (one “higher order” phase transition); ∆AMP = ∆RS =∞ (no phase transition). In the sequel we will have in mind the most interesting case, namely one first order phase transition, where we determine the gap between the algorithmic AMP and information theoretic performance. The cases of no phase transition or higher order phase transition, which present no algorithmic gap, are basically covered by the analysis of [3] and follow as a special case from our proof. The only cases that would require more work are those where P0 is s.t (1) develops more than three stationary points and more than one phase transition is present. For ∆AMP<∆RS the structure of stationary points of (1) is as follows1 (figure 1). There exist three branchesEgood(∆), Eunstable(∆) andEbad(∆) s.t: 1) For 0<∆<∆AMP there is a single stationary point Egood(∆) which is a global minimum; 2) At ∆AMP a horizontal inflexion point appears, for ∆∈ [∆AMP,∆RS] there are three stationary points satisfying Egood(∆AMP)<Eunstable(∆AMP)= Ebad(∆AMP), Egood(∆) < Eunstable(∆) < Ebad(∆) otherwise, and moreover iRS(Egood; ∆) ≤ iRS(Ebad; ∆) with equality only at ∆RS; 3) for ∆ > ∆RS there is at least the stationary point Ebad(∆) which is always the global minimum, i.e. iRS(Ebad; ∆)<iRS(Egood; ∆). (For higher ∆ the Egood(∆) and Eunstable(∆) branches may merge and disappear); 4) Egood(∆) is analytic for ∆∈]0,∆′[, ∆′>∆RS, and Ebad(∆) is analytic for ∆>∆AMP. We note for further use in the proof section that E∞=Egood(∆) for ∆<∆AMP and E∞=Ebad(∆) for ∆>∆AMP. Definition 1.4 is equivalent to ∆AMP = sup{∆|E∞=Egood(∆)}. 
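A sketch of the state evolution recursion (2) (illustrative only; helper names, the Rademacher prior and the listed noise levels are arbitrary choices): the scalar mmse function is computed by Gauss-Hermite quadrature and (2) is iterated from E0 = v; scanning ∆ for the value at which the reached fixed point jumps gives a crude numerical estimate of ∆AMP from definition 1.4. For the ±1 prior the transition is expected at ∆ = 1.

import numpy as np

def scalar_mmse(snr, support, probs, nodes=81):
    """E[(S - E[X | S + Sigma*Z])^2] for a discrete prior and the scalar AWGN channel
    Y = S + Sigma*Z with Sigma^2 = 1/snr, by Gauss-Hermite quadrature."""
    a = np.asarray(support, float); p = np.asarray(probs, float)
    t, wq = np.polynomial.hermite.hermgauss(nodes)
    z = np.sqrt(2.0) * t
    out = 0.0
    for s_val, p_s in zip(a, p):
        y = s_val + z / np.sqrt(snr)
        logw = -(y[:, None] - a[None, :])**2 * snr / 2 + np.log(p)[None, :]
        logw -= logw.max(axis=1, keepdims=True)
        w = np.exp(logw); w /= w.sum(axis=1, keepdims=True)
        out += p_s * np.sum(wq / np.sqrt(np.pi) * (s_val - w @ a)**2)
    return float(out)

def state_evolution(delta, support, probs, t_max=300, tol=1e-9):
    """Recursion (2): E_{t+1} = mmse(Sigma(E_t; delta)^{-2}), started just below E_0 = v
    so that the iteration is not stuck at the unstable fixed point when E[S] = 0."""
    v = float(np.sum(np.asarray(probs) * np.asarray(support, float)**2))
    E = v * (1 - 1e-6)
    for _ in range(t_max):
        E_new = scalar_mmse(max((v - E) / delta, 1e-12), support, probs)
        if abs(E_new - E) < tol:
            break
        E = E_new
    return E

if __name__ == "__main__":
    for delta in [0.5, 0.9, 1.1]:
        E_inf = state_evolution(delta, support=[-1.0, 1.0], probs=[0.5, 0.5])
        print(f"delta = {delta:.2f}  ->  E_infinity ~ {E_inf:.4f}")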
Moreover we will also use that iRS(Egood; ∆) is analytic on ]0,∆′[, iRS(Ebad; ∆) is analytic on ]∆AMP,∞[, and the only non-analyticity point of minE∈[0,v] iRS(E; ∆) is at ∆RS. Relation to other works: Explicit single-letter characterization of the MI in the rank-one problem has attracted a lot of attention recently. Particular cases of theorem 1.1 have been shown rigorously in a number of situations. A special case when si=±1∼Ber(1/2) already appeared in [13] where an equivalent spin glass model is analysed. Very recently, [9] has generalized the results of [13] and, notably, obtained a generic matching upper bound. The same formula has also been rigorously computed following the study of AMP in [3] for spiked models (provided, however, that the signal was not too sparse) and in [4] for strictly symmetric community detection. (Footnote 1: We take E[S] ≠ 0. Once theorem 1.1 is proven for this case, a limiting argument allows us to extend it to E[S]=0.) For rank-one symmetric matrix estimation problems, AMP has been introduced by [6], who also computed the SE formula to analyse its performance, generalizing techniques developed by [11] and [12]. SE was further studied by [3] and [4]. In [7, 10], the generalization to larger rank was also considered. The general formula proposed by [10] for the conditional entropy and the MMSE on the basis of the heuristic cavity method from statistical physics was not demonstrated in full generality. Worse, all existing proofs could not reach the more interesting regime where a gap between the algorithmic and information theoretic performances appears, leaving a gap with the statistical physics conjectured formula (and rigorous upper bound from [9]). Our result settles this conjecture and has interesting non-trivial implications for the computational complexity of these tasks. Our proof technique combines recent rigorous results in coding theory, developed in the study of capacity-achieving spatially coupled codes [14, 15, 16, 17], with other progress coming from developments in mathematical physics that put predictions of spin glass theory on a rigorous basis [18]. From this point of view, the theorem proved in this paper is relevant in a broader context going beyond low-rank matrix estimation. Hundreds of papers have been published in statistics, machine learning or information theory using the non-rigorous statistical physics approach. We believe that our result helps set a rigorous foundation for a broad line of work. While we focus on rank-one symmetric matrix estimation, our proof technique is readily extendable to more generic low-rank symmetric matrix or low-rank symmetric tensor estimation. We also believe that it can be extended to other problems of interest in machine learning and signal processing, such as generalized linear regression, features/dictionary learning, compressed sensing or multi-layer neural networks. 2 Two examples: Wigner spiked model and community detection In order to illustrate the consequences of our results we shall present two examples. Wigner spiked model: In this model, the vector s is a Bernoulli random vector, Si∼Ber(ρ). For large enough densities (i.e. ρ>0.041(1)), [3] computed the matrix-MMSE and proved that AMP is a computationally efficient algorithm that asymptotically achieves the matrix-MMSE for any value of the noise ∆.
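A numerical companion to the Wigner spiked model just introduced (illustrative only; the helper functions, the value of ρ and the ∆ grid are our choices, and the grid may need widening to actually bracket the transitions): for the Ber(ρ) prior it compares the global minimizer of iRS(E; ∆), whose jump locates ∆RS (definition 1.2), with the fixed point of state evolution started from E0 = v, whose jump locates ∆AMP (definition 1.4). The discussion of the resulting gap continues right after this listing.

import numpy as np

NODES = np.polynomial.hermite.hermgauss(61)

def _z_avg(f):
    """E_Z[f(Z)] for Z ~ N(0,1), by Gauss-Hermite quadrature (f must accept arrays)."""
    t, wq = NODES
    return float(np.sum(wq / np.sqrt(np.pi) * f(np.sqrt(2.0) * t)))

def i_rs(E, delta, rho):
    """RS potential (1) specialized to the Bernoulli prior P0 = (1-rho) d_0 + rho d_1."""
    v = rho
    s2 = delta / (v - E)                            # Sigma(E; delta)^2
    def ln_term(s_val):
        return _z_avg(lambda z: np.logaddexp(np.log(1 - rho),
                      np.log(rho) - 1.0 / (2 * s2) + s_val / s2 + z / np.sqrt(s2)))
    return ((v - E)**2 + v**2) / (4 * delta) - ((1 - rho) * ln_term(0.0) + rho * ln_term(1.0))

def mmse(snr, rho):
    """Scalar mmse for the Ber(rho) prior through an AWGN channel of SNR snr."""
    sig = 1.0 / np.sqrt(snr)
    def sq_err(s_val):
        def f(z):
            y = s_val + sig * z
            post = rho / (rho + (1 - rho) * np.exp(np.clip(-(y - 0.5) * snr, -700.0, 700.0)))
            return (s_val - post)**2
        return _z_avg(f)
    return (1 - rho) * sq_err(0.0) + rho * sq_err(1.0)

def minimizers(delta, rho, grid=400, t_max=400):
    v = rho
    Es = np.linspace(0.0, v * (1 - 1e-6), grid)
    E_rs = Es[int(np.argmin([i_rs(E, delta, rho) for E in Es]))]   # global minimum of (1)
    E = v * (1 - 1e-6)                                             # state evolution (2)
    for _ in range(t_max):
        E = mmse(max((v - E) / delta, 1e-12), rho)
    return E_rs, E

if __name__ == "__main__":
    rho = 0.02
    for delta in np.geomspace(2e-4, 2e-3, 11):
        E_rs, E_se = minimizers(delta, rho)
        print(f"delta = {delta:.2e}   argmin i_RS = {E_rs:.4f}   SE fixed point = {E_se:.4f}")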
Our results allow to close the gap left open by [3]: on one hand we now obtain rigorously the MMSE for ρ≤ 0.041(1), and on the other one we observe that for such values of ρ, and as ∆ decreases, there is a small region where two local minima coexist in iRS(E; ∆). In particular for ∆AMP<∆<∆Opt = ∆RS the global minimum corresponding to the MMSE differs from the local one that traps AMP, and a computational gap appears (see figure 1). While the region where AMP is Bayes optimal is quite large, the region where is it not, however, is perhaps the most interesting one. While this is by no means evident, statistical physics analogies with physical phase transitions in nature suggest that this region should be hard for a very broad class of algorithms. For small ρ our results are consistent with the known optimal and algorithmic thresholds predicted in sparse PCA [19, 20], that treats the case of sub-extensive ρ=O(1) values. Another interesting line of work for such probabilistic models appeared in the context of random matrix theory (see [8] and references therein) and predicts that a sharp phase transition occurs at a critical value of the noise ∆spectral =ρ2 below which an outlier eigenvalue (and its principal eigenvector) has a positive correlation with the hidden signal. For larger noise values the spectral distribution of the observation is indistinguishable from that of the pure random noise. Asymmetric balanced community detection: We now consider the problem of detecting two communities (groups) with different sizes ρn and (1− ρ)n, that generalizes the one considered in [4]. One is given a graph where the probability to have a link between nodes in the first group is p+µ(1−ρ)/(ρ √ n), between those in the second group is p+µρ/( √ n(1−ρ)), while interconnections appear with probability p−µ/ √ n. With this peculiar “balanced” setting, the nodes in each group have the same degree distribution with mean pn, making them harder to distinguish. According to the universality property described in the first section, this is equivalent to a model with AWGN of variance ∆ = p(1−p)/µ2 where each variable si is chosen according to P0(s)=ρδ(s− √ (1−ρ)/ρ)+(1−ρ)δ(s+ √ ρ/(1−ρ)). Our results for this problem2 are summarized on the right hand side of figure 2. For ρ > ρc = 1/2− √ 1/12 (black point), it is asymptotically information theoretically possible to get an estimation better than chance if and only if ∆<1. When ρ<ρc, however, it becomes possible for much larger values of the noise. Interestingly, AMP and spectral methods have the same transition and can find a positive correlation with the hidden communities for ∆<1, regardless of the value of ρ. Again, a region [∆AMP,∆Opt =∆RS] exists where a computational gap appears when ρ<ρc. One can investigate the very low ρ regime where we find that the information theoretic transition goes as ∆Opt(ρ→0) = 1/(4ρ| log ρ|). Now if we assume that this result stays true even for ρ= O(1) (which is a speculation at this point), we can choose µ→(1−p)ρ √ n such that the small group is a clique. Then the problem corresponds to a “balanced” version of the famous planted clique problem [21]. We find that the AMP/spectral approach finds the 2Note that here since E=v=1 is an extremum of iRS(E; ∆), one must introduce a small bias in P0 and let it then tend to zero at the end of the proofs. hidden clique when it is larger than √ np/(1−p), while the information theoretic transition translates into size of the clique 4p log(n)/(1−p). 
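A short sketch of the reduction just stated (illustrative only; the function and the parameter values are ours): it maps the balanced two-community parameters (p, µ, ρ) to the effective AWGN noise ∆ = p(1−p)/µ² and the two-point prior P0, and checks that this prior is centered with unit second moment. The comparison with the planted clique problem continues right after this listing.

import numpy as np

def community_to_awgn(p, mu, rho):
    """Effective AWGN model for the balanced two-community problem: noise variance
    delta = p(1-p)/mu^2 and the two-point prior P0 stated above."""
    delta = p * (1 - p) / mu**2
    support = np.array([np.sqrt((1 - rho) / rho), -np.sqrt(rho / (1 - rho))])
    probs = np.array([rho, 1 - rho])
    # sanity checks: the prior is centered with unit second moment (v = 1)
    assert abs(probs @ support) < 1e-10 and abs(probs @ support**2 - 1.0) < 1e-10
    return delta, support, probs

if __name__ == "__main__":
    rho_c = 0.5 - np.sqrt(1.0 / 12.0)
    delta, support, probs = community_to_awgn(p=0.5, mu=2.0, rho=0.3)
    print("effective delta =", round(float(delta), 4), " support =", support, " probs =", probs)
    print(f"rho = 0.3 > rho_c = {rho_c:.3f}; estimation better than chance iff delta < 1:", delta < 1)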
This is indeed reminiscent of the more classical planted clique problem at p=1/2 with its gap between log(n) (information theoretic), √ n/e (AMP [22]) and √ n (spectral [21]). Since in our balanced case the spectral and AMP limits match, this suggests that the small gain of AMP in the standard clique problem is simply due to the information provided by the distribution of local degrees in the two groups (which is absent in our balanced case). We believe this correspondence strengthens the claim that the AMP gap is actually a fundamental one. 3 Proofs The crux of our proof rests on an auxiliary “spatially coupled system”. The hallmark of spatially coupled models is that one can tune them so that the gap between the algorithmic and information theoretic limits is eliminated, while at the same time the MI is maintained unchanged for the coupled and original models. Roughly speaking, this means that it is possible to algorithmically compute the information theoretic limit of the original model because a suitable algorithm is optimal on the coupled system. The present spatially coupled construction is similar to the one used for the coupled Curie-Weiss model [14]. Consider a ring of length L+1 (L even) with blocks positioned at µ∈{0, . . . , L} and coupled to neighboring blocks {µ−w, . . . , µ+w}. Positions µ are taken modulo L+1 and the integer w∈{0, . . . , L/2} equals the size of the coupling window. The coupled model is wiµjν = siµsjν √ Λµν n + ziµjν √ ∆, (3) where the index iµ∈{1, . . . , n} (resp. jν) belongs to the block µ (resp. ν) along the ring, Λ is an (L+1)×(L+1) matrix which describes the strength of the coupling between blocks, andZiµjν ∼N (0, 1) are i.i.d. For the proof to work, the matrix elements have to be chosen appropriately. We assume that: i) Λ is a doubly stochastic matrix; ii) Λµν depends on |µ−ν|; iii) Λµν is not vanishing for |µ−ν| ≤ w and vanishes for |µ−ν|>w; iv) Λ is smooth in the sense |Λµν−Λµ+1ν |=O(w−2); v) Λ has a non-negative Fourier transform. All these conditions can easily be met, the simplest example being a triangle of base 2w+1 and height 1/(w+1). The construction of the coupled system is completed by introducing a seed in the ring: we assume perfect knowledge of the signal components {siµ} for µ∈B :={−w−1, . . . , w−1} mod L+1. This seed is what allows to close the gap between the algorithmic and information theoretic limits and therefore plays a crucial role. Note it can also be viewed as an “opening” of the chain with fixed boundary conditions. Our first crucial result states that the MI Iw,L(S; W) of the coupled and original systems are the same in a suitable limit. Lemma 3.1 (Equality of mutual informations) For any fixed w the following limits exist and are equal: limL→∞ limn→∞ Iw,L(S; W)/(n(L+1))=limn→∞ I(S; W)/n. An immediate corollary is that non-analyticity points (w.r.t ∆) of the MIs are the same in the coupled and original models. In particular, defining ∆Opt,coup := sup{∆ | limL→∞ limn→∞ Iw,L(S; W)/(n(L+1)) is analytic in ]0,∆[}, we have ∆Opt,coup =∆Opt. The second crucial result states that the AMP threshold of the spatially coupled system is at least as good as ∆RS. The analysis of AMP applies to the coupled system as well [11, 12] and it can be shown that the performance of AMP is assessed by SE. Let Etµ := limn→∞ ES,Z[‖Sµ− ŝ t µ‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝtµ at time t for the µ-th “block” of S. We associate to each position µ ∈ {0, . . . 
, L} an independent scalar system with AWGN of the form Y =S+Σµ(E; ∆)Z, with Σµ(E; ∆)2 := ∆/(v− ∑L ν=0 ΛµνEν) and S∼P0, Z∼N (0, 1). Taking into account knowledge of the signal components in B, SE reads: Et+1µ = mmse(Σµ(E t; ∆)−2), E0µ = v for µ ∈ {0, . . . , L} \ B, Etµ = 0 for µ ∈ B, t ≥ 0, (4) where the mmse function is defined as in section 1. From the monotonicity of the mmse function we have Et+1µ ≤Etµ for all µ∈{0, . . . , L}, a partial order which implies that limt→∞ E t= E∞ exists. This allows to define an algorithmic threshold for the coupled system: ∆AMP,w,L :=sup{∆|E∞µ ≤ Egood(∆) ∀ µ}. We show (equality holds but is not directly needed): Lemma 3.2 (Threshold saturation) Let ∆AMP,coup := lim infw→∞ lim infL→∞∆AMP,w,L. We have ∆AMP,coup≥∆RS. Proof sketch of theorem 1.1: First we prove the RS formula for ∆ ≤ ∆Opt. It is known [3] that the matrix-MSE of AMP when n→∞ is equal to v2−(v−Et)2. This cannot improve the matrix-MMSE, hence (v2−(v−E∞)2)/4≥ lim supn→∞Mmmsen/4. For ∆≤∆AMP we have E∞=Egood(∆) which is the global minimum of (1) so the left hand side of the last inequality equals the derivative of minE∈[0,v] iRS(E; ∆) w.r.t ∆−1. Thus using the matrix version of the I-MMSE relation [23] we get d d∆−1 min E∈[0,v] iRS(E; ∆) ≥ lim sup n→∞ 1 n dI(S; W) d∆−1 . (5) Integrating this relation on [0,∆] ⊂ [0,∆AMP] and checking that minE∈[0,v] iRS(E; 0) = H(S) (the Shannon entropy of P0) we obtain minE∈[0,v] iRS(E; ∆)≤ lim infn→∞ I(S; W)/n. But we know I(S; W)/n≤minE∈[0,v] iRS(E; ∆) [9], thus we already get theorem 1.1 for ∆≤∆AMP. We notice that ∆AMP≤∆Opt. While this might seem intuitively clear, it follows from ∆RS≥∆AMP (by their definitions) which together with ∆AMP > ∆Opt would imply from theorem 1.1 that limn→∞ I(S; W)/n is analytic at ∆Opt, a contradiction. The next step is to extend theorem 1.1 to the range [∆AMP,∆Opt]. Suppose for a moment ∆RS≥∆Opt. Then both functions on each side of the RS formula are analytic on the whole range ]0,∆Opt[ and since they are equal for ∆≤∆AMP, they must be equal on their whole analyticity range and by continuity, they must also be equal at ∆Opt (that the functions are continuous follows from independent arguments on the existence of the n→∞ limit of concave functions). It remains to show that ∆RS∈ ]∆AMP,∆Opt[ is impossible. We proceed by contradiction, so suppose this is true. Then both functions on each side of the RS formula are analytic on ]0,∆RS[ and since they are equal for ]0,∆AMP[⊂]0,∆RS[ they must be equal on the whole range ]0,∆RS[ and also at ∆RS by continuity. For ∆>∆RS the fixed point of SE is E∞=Ebad(∆) which is also the global minimum of iRS(E; ∆), hence (5) is verified. Integrating this inequality on ]∆RS,∆[⊂]∆RS,∆Opt[ and using I(S; W)/n≤minE∈[0,v] iRS(E; ∆) again, we find that the RS formula holds for all ∆∈ [0,∆Opt]. But this implies that minE∈[0,v] iRS(E; ∆) is analytic at ∆RS, a contradiction. We now prove the RS formula for ∆≥∆Opt. Note that the previous arguments showed that necessarily ∆Opt≤∆RS. Thus by lemmas 3.1 and 3.2 (and the sub-optimality of AMP as shown as before) we obtain ∆RS ≤ ∆AMP,coup≤∆Opt,coup = ∆Opt≤∆RS. This shows that ∆Opt = ∆RS (this is the point where spatial coupling came in the game and we do not know of other means to prove such an equality). For ∆>∆RS we have E∞ =Ebad(∆) which is the global minimum of iRS(E; ∆). Therefore we again have (5) in this range and the proof can be completed by using once more the integration argument, this time over the range [∆RS,∆]=[∆Opt,∆]. 
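A sketch of the coupled state evolution (4) (illustrative only; the triangular coupling kernel, the sparse Bernoulli prior, and the values of L, w and ∆ are our choices): it builds a circulant, doubly stochastic Λ, seeds the blocks in B with perfect knowledge of the signal, and iterates (4); in a regime ∆AMP < ∆ < ∆RS it is expected, per lemma 3.2, to approach the good fixed point even though the uncoupled recursion (2) does not.

import numpy as np

def scalar_mmse(snr, support, probs, nodes=81):
    """Scalar mmse for a discrete prior, as in section 1, by Gauss-Hermite quadrature."""
    a = np.asarray(support, float); p = np.asarray(probs, float)
    t, wq = np.polynomial.hermite.hermgauss(nodes)
    z = np.sqrt(2.0) * t
    out = 0.0
    for s_val, p_s in zip(a, p):
        y = s_val + z / np.sqrt(snr)
        logw = -(y[:, None] - a[None, :])**2 * snr / 2 + np.log(p)[None, :]
        logw -= logw.max(axis=1, keepdims=True)
        w = np.exp(logw); w /= w.sum(axis=1, keepdims=True)
        out += p_s * np.sum(wq / np.sqrt(np.pi) * (s_val - w @ a)**2)
    return float(out)

def coupled_se(delta, support, probs, L=32, w=3, t_max=2000):
    """Coupled state evolution (4) on a ring of L+1 blocks, triangular kernel, seed in B."""
    a = np.asarray(support, float); p = np.asarray(probs, float)
    v = float(np.sum(p * a**2))
    Lam = np.zeros((L + 1, L + 1))
    for mu in range(L + 1):
        for k in range(-w, w + 1):                  # circulant triangle of base 2w+1
            Lam[mu, (mu + k) % (L + 1)] = (1 - abs(k) / (w + 1)) / (w + 1)
    seed = [mu % (L + 1) for mu in range(-w - 1, w)]          # B = {-w-1, ..., w-1}
    E = np.full(L + 1, v); E[seed] = 0.0
    for _ in range(t_max):
        snr = np.maximum((v - Lam @ E) / delta, 1e-12)        # Sigma_mu(E; delta)^{-2}
        E = np.array([scalar_mmse(s, support, probs) for s in snr])
        E[seed] = 0.0
    return E

if __name__ == "__main__":
    rho = 0.02                                      # sparse Bernoulli prior, v = rho
    E_inf = coupled_se(delta=6e-4, support=[0.0, 1.0], probs=[1 - rho, rho])
    print("coupled SE fixed point, max over blocks:", round(float(E_inf.max()), 5))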
Proof sketch of corollaries 1.3 and 1.5: LetE∗(∆)=argminEiRS(E; ∆) for ∆ 6=∆RS. By explicit calculation one checks that diRS(E∗,∆)/d∆−1 =(v2−(v−E∗(∆))2)/4, so from theorem 1.1 and the matrix form of the I-MMSE relation we find Mmmsen→v2−(v−E∗(∆))2 as n→∞ which is the first part of the statement of corollary 1.3. Let us now turn to corollary 1.5. For n→∞ the vectorMSE of the AMP estimator at time t equals Et, and since the fixed point equation corresponding to SE is precisely the stationarity equation for iRS(E; ∆), we conclude that for ∆ /∈ [∆AMP,∆RS] we must have E∞=E∗(∆). It remains to prove that E∗(∆)=limn→∞Vmmsen(∆) at least for ∆ /∈ [∆AMP,∆RS] (we believe this is in fact true for all ∆). This will settle the second part of corollary 1.3 as well as 1.5. Using (Nishimori) identities ES,W[SiSjE[XiXj |W]]=ES,W[E[XiXj |W]2] (see e.g. [9]) and using the law of large numbers we can show limn→∞Mmmsen ≤ limn→∞(v2− (v− Vmmsen(∆)) 2). Concentration techniques similar to [13] suggest that the equality in fact holds (for ∆ 6= ∆RS) but there are technicalities that prevent us from completing the proof of equality. However it is interesting to note that this equality would imply E∗(∆)=limn→∞Vmmsen(∆) for all ∆ 6=∆RS. Nevertheless, another argument can be used when AMP is optimal. On one hand the right hand side of the inequality is necessarily smaller than v2−(v−E∞)2. On the other hand the left hand side of the inequality is equal to v2−(v−E∗(∆))2. SinceE∗(∆)=E∞ when ∆ /∈ [∆AMP,∆RS], we can conclude limn→∞Vmmsen(∆)=argminEiRS(E; ∆) for this range of ∆. Proof sketch of lemma 3.1: Here we prove the lemma for a ring that is not seeded. An easy argument shows that a seed of size w does not change the MI per variable when L→∞. The statistical physics formulation is convenient: up to the trivial additive term n(L+1)v2/4, the MI Iw,L(S; W) equals the free energy −ES,Z[lnZw,L], where Zw,L := ∫ dxP0(x) exp(−H(x, z,Λ)) and H(x, z,Λ) = 1 ∆ L∑ µ=0 ( Λµµ ∑ iµ≤jµ Aiµjµ(x, z,Λ) + µ+w∑ ν=µ+1 Λµν ∑ iµ,jν Aiµjν (x, z,Λ) ) , (6) with Aiµjν (x, z,Λ) :=(x2iµx 2 jν )/(2n)−(siµsjνxiµxjν )/n−(xiµxjνziµjν √ ∆)/ √ nΛµν . Consider a pair of systems with coupling matrices Λ and Λ′ and i.i.d noize realizations z, z′, an interpolated HamiltonianH(x, z, tΛ)+H(x, z′, (1−t)Λ′), t ∈ [0, 1], and the corresponding partition function Zt. The main idea of the proof is to show that for suitable choices of matrices,− ddtES,Z,Z′ [lnZt]≤0 for all t∈ [0, 1] (up to negligible terms), so that by the fundamental theorem of calculus, we get a comparison between the free energies ofH(x, z,Λ) andH(x, z′,Λ′). Performing the t-derivative brings down a Gibbs average of a polynomial in all variables siµ , xiµ , ziµjν and z ′ iµjν . This expectation over S, Z, Z′ of this Gibbs average is simplified using integration by parts over the Gaussian noise ziµjν , z′iµjν and Nishimori identities (see e.g. proof of corollary 1.3 for one of them). This algebra leads to − 1 n(L+ 1) d dt ES,Z,Z′ [lnZt] = 1 4∆(L+ 1) ES,Z,Z′ [〈qᵀΛq− qᵀΛ′q〉t] +O(1/(nL)), (7) where 〈−〉t is the Gibbs average w.r.t the interpolated Hamiltonian, q is the vector of overlaps qµ := ∑n iµ=1 siµxiµ/n. If we can choose matrices s.t Λ ′ >Λ, the difference of quadratic forms in the Gibbs bracket is negative and we obtain an inequality in the large size limit. We use this scheme to interpolate between the fully decoupled system w=0 and the coupled one 1≤w<L/2 and then between 1≤w <L/2 and the fully connected system w=L/2. The w= 0 system has Λµν = δµν with eigenvalues (1, 1, . . . , 1). 
For the 1 ≤ w < L/2 system, we take any stochastic translation invariant matrix with non-negative discrete Fourier transform (of its rows): such matrices have an eigenvalue equal to 1 and all others in [0, 1[ (the eigenvalues are precisely equal to the discrete Fourier transform). For w = L/2 we choose Λµν = 1/(L+1) which is a projector with eigenvalues (0, 0, . . . , 1). With these choices we deduce that the free energies and MIs are ordered as Iw=0,L +O(1)≤Iw,L +O(1)≤Iw=L/2,L +O(1). To conclude the proof we divide by n(L+1) and note that the limits of the leftmost and rightmost MIs are equal, provided the limit exists. Indeed the leftmost term equals L times I(S; W) and the rightmost term is the same MI for a system of n(L+1) variables. Existence of the limit follows by subadditivity, proven by a similar interpolation [18]. Proof sketch of lemma 3.2: Fix ∆ < ∆RS. We show that, for w large enough, the coupled SE recursion (4) must converge to a fixed point E∞µ ≤Egood(∆) for all µ. The main intuition behind the proof is to use a “potential function” whose “energy” can be lowered by small perturbation of a fixed point that would go above Egood(∆) [16, 17]. The relevant potential function iw,L(E,∆) is in fact the replica potential of the coupled system (a generalization of (1)). The stationarity condition for this potential is precisely (4) (without the seeding condition). Monotonicity properties of SE ensure that any fixed point has a “unimodal” shape (and recall that it vanishes for µ∈B= {0, . . . , w−1}∪{L−w, . . . , L}). Consider a position µmax∈{w, . . . , L−w−1} where it is maximal and suppose that E∞µmax > Egood(∆). We associate to the fixed point E ∞ a so-called saturated profile Es defined on the whole of Z as follows: Esµ=Egood(∆) for all µ≤µ∞ where µ∞+1 is the smallest position s.t E∞µ >Egood(∆); E s µ=E ∞ µ for µ∈{µ∞+1, . . . , µmax−1}; Esµ=E∞µmax for all µ≥µmax. We show that Es cannot exist for w large enough. To this end define a shift operator by [S(Es)]µ :=Esµ−1. On one hand the shifted profile is a small perturbation of E s which matches a fixed point, except where it is constant, so if we Taylor expand, the first order vanishes and the second order and higher orders can be estimated as |iw,L(S(Es); ∆)−iw,L(Es; ∆)|=O(1/w) uniformly in L. On the other hand, by explicit cancellation of telescopic sums iw,L(S(Es); ∆)−iw,L(Es; ∆)= iRS(Egood; ∆)−iRS(E∞µmax ; ∆). Now one can show from monotonicity properties of SE that if E ∞ is a non trivial fixed point of the coupled SE then E∞µmax cannot be in the basin of attraction of Egood(∆) for the uncoupled SE recursion. Consequently as can be seen on the plot of iRS(E; ∆) (e.g. figure 1) we must have iRS(E∞µmax ; ∆)≥ iRS(Ebad; ∆). Therefore iw,L(S(E s); ∆)−iw,L(Es; ∆)≤ −|iRS(Ebad; ∆)−iRS(Egood; ∆)| which is an energy gain independent of w, and for large enough w we get a contradiction with the previous estimate coming from the Taylor expansion. Acknowledgments J.B and M.D acknowledge funding from the SNSF (grant 200021-156672). Part of this research received funding from the ERC under the EU’s 7th Framework Programme (FP/2007-2013/ERC Grant Agreement 307087-SPARCS). F.K and L.Z thank the Simons Institute for its hospitality.
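A small numerical check related to the interpolation of lemma 3.1 (illustrative only; the script and the sizes are ours): it builds the three coupling matrices discussed above (the decoupled identity with w = 0, a triangular circulant kernel with 1 ≤ w < L/2, and the rank-one projector with w = L/2) and verifies that they are doubly stochastic, that the triangular kernel has a non-negative discrete Fourier transform, and that they are ordered in the positive semidefinite sense (identity above triangle above projector), the kind of comparison the interpolation exploits.

import numpy as np

def circulant(kernel):
    """Circulant matrix whose entry (mu, nu) depends only on (mu - nu) mod (L+1)."""
    kernel = np.asarray(kernel, float)
    n = len(kernel)
    idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
    return kernel[idx]

def triangle_kernel(L, w):
    """Triangular kernel of base 2w+1 and height 1/(w+1), the simplest admissible choice."""
    k = np.zeros(L + 1)
    for j in range(-w, w + 1):
        k[j % (L + 1)] = (1 - abs(j) / (w + 1)) / (w + 1)
    return k

if __name__ == "__main__":
    L, w = 16, 3
    Lam_0 = np.eye(L + 1)                               # w = 0: decoupled, eigenvalues all 1
    Lam_w = circulant(triangle_kernel(L, w))            # 1 <= w < L/2
    Lam_full = np.full((L + 1, L + 1), 1.0 / (L + 1))   # w = L/2: rank-one projector
    for name, M in [("identity", Lam_0), ("triangle", Lam_w), ("projector", Lam_full)]:
        eig = np.sort(np.linalg.eigvalsh(M))
        ds = np.allclose(M.sum(0), 1.0) and np.allclose(M.sum(1), 1.0)
        print(f"{name:9s} doubly stochastic: {ds}  eigenvalues in [{eig[0]:.3f}, {eig[-1]:.3f}]")
    print("triangle kernel has non-negative DFT:",
          bool(np.all(np.fft.fft(triangle_kernel(L, w)).real > -1e-10)))
    print("identity - triangle PSD:", bool(np.all(np.linalg.eigvalsh(Lam_0 - Lam_w) > -1e-10)),
          " triangle - projector PSD:", bool(np.all(np.linalg.eigvalsh(Lam_w - Lam_full) > -1e-10)))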
1. What is the main contribution of the paper regarding the estimation of a vector S from noisy observations W_ij? 2. What is the key step in the proof of the main result, and how does it relate to the threshold for spatially coupled constructions? 3. How does the approach proposed in the paper differ from prior works in terms of connecting information theoretic performance and AMP's performance? 4. What are some specific comments regarding the paper's clarity and organization? 5. Are there any concerns or questions regarding the proof sketches presented in the paper?
Review
Review The paper considers the estimation of a vector S by observing noisy versions W_{ij} of its coordinate-wise product terms s_i.s_j normalized by \sqrt{n}, under additive Gaussian noise of variance \Delta. The main result is a formula for the limit (1) \lim_{n --> \infty} 1/n I(S; W) in terms of the replica symmetric potential function i_{RS}(E, \Delta), which, as a corollary (using the I-MMSE relation), shows that MMSE(\Delta) for the estimation problem above is given by the minimizer E^* of i_{RS}(E, \Delta). As a further corollary it is shown that Approximate Message Passing (AMP), which is known to converge to a stationary point of i_{RS}(E, \Delta) when \Delta is in an appropriate range, yields the MMSE in the same range. The proof of (1) is the main contribution of the paper. The proof is based on establishing an equality of the two sides over an appropriate interval and then extending it to a larger region over which the (complex extensions of the) two sides are analytic. A key step is to show that the analytic threshold for the left-side of (1), termed \Delta_{OPT}, is at least as much as that for the right-side, \Delta_{RS}. This is facilitated by relating the two thresholds to the threshold for a spatially coupled construction of W_{ij} (which is done in Lemma 3.1 and Lemma 3.2). The paper addresses an important topic, namely that of providing formal guarantees for algorithms derived from heuristics in statistical physics. The approach proposed in the paper is very interesting and perhaps novel (at least to this reviewer it is): the gap in the thresholds for information theoretic performance and the performance of AMP is bridged by connecting the information theoretic threshold and that for AMP to that of a spatially coupled model. This relation in turn relies on the equality of the threshold for the spatially coupled construction and the optimal information theoretic threshold, a phenomenon that has led to a recent breakthrough capacity-achieving construction in information theory and is proved here again via Lemma 3.1. Another key step is to show that \Delta_{RS} does not exceed the threshold for the spatially coupled construction with AMP; this is the content of Lemma 3.2. While only a sketch of proof is presented, the approach looks convincing and indeed very interesting. The analysis proposed in the paper, if rigorously correct, leads to an interesting approach for analyzing the performance of AMP by bringing in a spatially coupled system to relate various thresholds. Whether this contribution is new is beyond the knowledge of this reviewer. Nevertheless, this overall approach is very fascinating. In addition to the general comments above, I have the following specific comments: 1. The paper and the contribution are rather difficult to follow. The authors mention the connection of their results to several problems but then only present two applications via examples in Section 2 without clarifying the exact implication of their approach for these examples. In particular, what was the state-of-the-art before this paper and how has this paper furthered the analysis of these applications? Also, I suggest that the authors spend some more space explaining the connection between the formula derived here and the general inference problem. Perhaps the relation of the left-side to the information theoretic threshold and the right-side to the threshold of AMP (via the replica symmetric potential function) can be mentioned at the outset without much detail. 2.
The proof sketch is very difficult to follow. The authors should first summarize the plan before getting to the details. I have written above what I could follow, which might be wrong. However, a similar description in words of the proof idea of the authors will be very useful. 3. The sentence leading to (8) and the one following it are unclear to me. In particular, I couldn't follow the observation "so the left hand side of (7) is equal to the derivative of \min_{E}i_{RS(E, \Delta) w.r.t. \Delta^{-1}". Does the other term in i_{RS}(E, \Delta) equals 0 at the min? Also, in the sentence after (8), it seems you interchanged the limit and the integral. If you justify this interchange in a complete version, at least point-out in the sketch that it needs to be done. Finally, understanding the proof sketch of Lemma 3.2 seems beyond the scope of a non-expert. 4. The extensions claimed in 135-39 seem to be much more interesting than the simple case here. If the authors indeed have these extensions available, why not motivate the paper by these extended results. If such extensions are not available as of now, please modify the claim above to be clear about it.
NIPS
Title Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula Abstract Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression for the mutual information has been proposed using heuristic statistical physics computations, and proven in few specific cases. Here, we show how to rigorously prove the conjectured formula for the symmetric rank-one case. This allows to express the minimal mean-square-error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal. There exists, however, a gap between what currently known polynomial algorithms can do and what is expected information theoretically. Additionally, the proof technique has an interest of its own and exploits three essential ingredients: the interpolation method introduced in statistical physics by Guerra, the analysis of the approximate message-passing algorithm and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in statistical estimation where heuristic statistical physics predictions are available. Consider the following probabilistic rank-one matrix estimation problem: one has access to noisy observations w = (wij)i,j=1 of the pair-wise product of the components of a vector s = (s1, . . . , sn) ∈ R with i.i.d components distributed as Si ∼ P0, i = 1, . . . , n. The entries of w are observed through a noisy element-wise (possibly non-linear) output probabilistic channel Pout(wij |sisj/ √ n). The goal is to estimate the vector s from w assuming that both P0 and Pout are known and independent of n (noise is symmetric so that wij =wji). Many important problems in statistics and machine learning can be expressed in this way, such as sparse PCA [1], the Wigner spiked model [2, 3], community detection [4] or matrix completion [5]. Proving a result initially derived by a heuristic method from statistical physics, we give an explicit expression for the mutual information (MI) and the information theoretic minimal mean-square-error (MMSE) in the asymptotic n→∞ limit. Our results imply that for a large region of parameters, the posterior marginal expectations of the underlying signal components (often assumed intractable 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. to compute) can be obtained in the leading order in n using a polynomial-time algorithm called approximate message-passing (AMP) [6, 3, 4, 7]. We also demonstrate the existence of a region where both AMP and spectral methods [8] fail to provide a good answer to the estimation problem, while it is nevertheless information theoretically possible to do so. We illustrate our theorems with examples and also briefly discuss the implications in terms of computational complexity. 1 Setting and main results The additive white Gaussian noise setting: A standard and natural setting is the case of additive white Gaussian noise (AWGN) of known variance ∆, wij =sisj/ √ n+zij √ ∆, where z=(zij)i,j=1 is a symmetric matrix with i.i.d entries Zij ∼N (0, 1), 1≤ i≤ j≤n. 
Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels.
This is made possible by a theorem of channel universality [9] (already proven for community detection in [4] and conjectured in [10]). This theorem states that given an output channel Pout(w|y), such that (s.t) logPout(w|y=0) is three times differentiable with bounded second and third derivatives, then the MI satisfies I(S; W)= I(S; SSᵀ/ √ n+Z √ ∆)+O( √ n), where ∆ is the inverse Fisher information (evaluated at y=0) of the output channel: ∆−1 := EPout(w|0)[(∂y logPout(W |y)|y=0)2]. Informally, this means that we only have to compute the MI for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions P0, an explicit one-letter formula for the MI per variable I(S; W)/n in the asymptotic limit n→∞. Main result: Our central result is a proof of the expression for the asymptotic n→∞MI per variable via the so-called replica symmetric (RS) potential iRS(E; ∆) defined as iRS(E; ∆) := (v − E)2 + v2 4∆ − ES,Z [ ln (∫ dxP0(x)e − x2 2Σ(E;∆)2 +x ( S Σ(E;∆)2 + Z Σ(E;∆) ))] , (1) with Z∼N (0, 1), S∼P0, E[S2]=v and Σ(E; ∆)2 :=∆/(v−E), E∈ [0, v]. Here we will assume that P0 is a discrete distribution over a finite bounded real alphabet P0(s)= ∑ν α=1 pαδ(s−aα). Thus the only continuous integral in (1) is the Gaussian over z. Our results can be extended to mixtures of discrete and continuous signal distributions at the expense of technical complications in some proofs. It turns out that both the information theoretic and algorithmic AMP thresholds are determined by the set of stationary points of (1) (w.r.t E). It is possible to show that for all ∆>0 there always exist at least one stationary minimum. Note E=0 is never a stationary point (except for P0 a single Dirac mass) and E= v is stationary only if E[S] = 0. In this contribution we suppose that at most three stationary points exist, corresponding to situations with at most one phase transition. We believe that situations with multiple transitions can also be covered by our techniques. Theorem 1.1 (RS formula for the mutual information) Fix ∆>0 and let P0 be a discrete distribution s.t (1) has at most three stationary points. Then limn→∞ I(S; W)/n=minE∈[0,v] iRS(E; ∆). The proof of the existence of the limit does not require the above hypothesis on P0. Also, it was first shown in [9] that for all n, I(S; W)/n≤minE∈[0,v] iRS(E; ∆), an inequality that we will use in the proof section. It is conceptually useful to define the following threshold: Definition 1.2 (Information theoretic threshold) Define ∆Opt as the first non-analyticity point of the MI as ∆ increases: ∆Opt :=sup{∆| limn→∞ I(S; W)/n is analytic in ]0,∆[}. When P0 is s.t (1) has at most three stationary points, as discussed below, then minE∈[0,v] iRS(E; ∆) has at most one non-analyticity point denoted ∆RS (if minE∈[0,v] iRS(E; ∆) is analytic over all R+ we set ∆RS =∞). Theorem 1.1 gives us a mean to compute the information theoretic threshold ∆Opt =∆RS. A basic application of theorem 1.1 is the expression of the MMSE: Corollary 1.3 (Exact formula for the MMSE) For all ∆ 6= ∆RS, the matrix-MMSE Mmmsen := ES,W[‖SSᵀ − E[XXᵀ|W]‖2F]/n2 (‖ − ‖F being the Frobenius norm) is asymptotically limn→∞Mmmsen(∆ −1) = v2−(v−argminE∈[0,v]iRS(E; ∆))2. Moreover, if ∆<∆AMP (where ∆AMP is the algorithmic threshold, see definition 1.4) or ∆>∆RS, then the usual vector-MMSE Vmmsen :=ES,W[‖S−E[X|W]‖22]/n satisfies limn→∞Vmmsen=argminE∈[0,v]iRS(E; ∆). 
It is natural to conjecture that the vector-MMSE is given by argminE∈[0,v]iRS(E; ∆) for all ∆ 6=∆RS, but our proof does not quite yield the full statement. A fundamental consequence concerns the performance of the AMP algorithm [6] for estimating s. AMP has been analysed rigorously in [11, 12, 4] where it is shown that its asymptotic performance is tracked by state evolution (SE). Let Et :=limn→∞ ES,Z[‖S− ŝt‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝt at time t. Define mmse(Σ−2) :=ES,Z [(S−E[X|S+ΣZ])2] as the usual scalar mmse function associated to a scalar AWGN channel of noise variance Σ2, with S∼P0 and Z∼N (0, 1). Then Et+1 = mmse(Σ(Et; ∆)−2), E0 = v, (2) is the SE recursion. Monotonicity properties of the mmse function imply that Et is a decreasing sequence s.t limt→∞Et=E∞ exists. Note that when E[S] = 0 and v is an unstable fixed point, as such, SE “does not start”. While this is not really a problem when one runs AMP in practice, for analysis purposes one can slightly bias P0 and remove the bias at the end of the proofs. Definition 1.4 (AMP algorithmic threshold) For ∆ > 0 small enough, the fixed point equation corresponding to (2) has a unique solution for all noise values in ]0,∆[. We define ∆AMP as the supremum of all such ∆. Corollary 1.5 (Performance of AMP) In the limit n→∞, AMP initialized without any knowledge other than P0 yields upon convergence the asymptotic matrix-MMSE as well as the asymptotic vector-MMSE iff ∆<∆AMP or ∆>∆RS, namely E∞=argminE∈[0,v]iRS(E; ∆). ∆AMP can be read off the replica potential (1): by differentiation of (1) one finds a fixed point equation that corresponds to (2). Thus ∆AMP is the smallest solution of ∂iRS/∂E=∂2iRS/∂E2 =0; in other words it is the “first” horizontal inflexion point appearing in iRS(E; ∆) when ∆ increases. Discussion: With our hypothesis on P0 there are only three possible scenarios: ∆AMP < ∆RS (one “first order” phase transition); ∆AMP = ∆RS < ∞ (one “higher order” phase transition); ∆AMP = ∆RS =∞ (no phase transition). In the sequel we will have in mind the most interesting case, namely one first order phase transition, where we determine the gap between the algorithmic AMP and information theoretic performance. The cases of no phase transition or higher order phase transition, which present no algorithmic gap, are basically covered by the analysis of [3] and follow as a special case from our proof. The only cases that would require more work are those where P0 is s.t (1) develops more than three stationary points and more than one phase transition is present. For ∆AMP<∆RS the structure of stationary points of (1) is as follows1 (figure 1). There exist three branchesEgood(∆), Eunstable(∆) andEbad(∆) s.t: 1) For 0<∆<∆AMP there is a single stationary point Egood(∆) which is a global minimum; 2) At ∆AMP a horizontal inflexion point appears, for ∆∈ [∆AMP,∆RS] there are three stationary points satisfying Egood(∆AMP)<Eunstable(∆AMP)= Ebad(∆AMP), Egood(∆) < Eunstable(∆) < Ebad(∆) otherwise, and moreover iRS(Egood; ∆) ≤ iRS(Ebad; ∆) with equality only at ∆RS; 3) for ∆ > ∆RS there is at least the stationary point Ebad(∆) which is always the global minimum, i.e. iRS(Ebad; ∆)<iRS(Egood; ∆). (For higher ∆ the Egood(∆) and Eunstable(∆) branches may merge and disappear); 4) Egood(∆) is analytic for ∆∈]0,∆′[, ∆′>∆RS, and Ebad(∆) is analytic for ∆>∆AMP. We note for further use in the proof section that E∞=Egood(∆) for ∆<∆AMP and E∞=Ebad(∆) for ∆>∆AMP. Definition 1.4 is equivalent to ∆AMP = sup{∆|E∞=Egood(∆)}. 
Moreover we will also use that iRS(Egood; ∆) is analytic on ]0,∆′[, iRS(Ebad; ∆) is analytic on ]∆AMP,∞[, and the only non-analyticity point of minE∈[0,v] iRS(E; ∆) is at ∆RS. Relation to other works: Explicit single-letter characterization of the MI in the rank-one problem has attracted a lot of attention recently. Particular cases of theorem 1.1 have been shown rigorously in a number of situations. A special case when si=±1∼Ber(1/2) already appeared in [13] where an equivalent spin glass model is analysed. Very recently, [9] has generalized the results of [13] and, notably, obtained a generic matching upper bound. The same formula has been also rigorously computed following the study of AMP in [3] for spiked models (provided, however, that the signal was not too sparse) and in [4] for strictly symmetric community detection. 1We take E[S] 6= 0. Once theorem 1.1 is proven for this case a limiting argument allows to extend it to E[S]=0. For rank-one symmetric matrix estimation problems, AMP has been introduced by [6], who also computed the SE formula to analyse its performance, generalizing techniques developed by [11] and [12]. SE was further studied by [3] and [4]. In [7, 10], the generalization to larger rank was also considered. The general formula proposed by [10] for the conditional entropy and the MMSE on the basis of the heuristic cavity method from statistical physics was not demonstrated in full generality. Worst, all existing proofs could not reach the more interesting regime where a gap between the algorithmic and information theoretic perfomances appears, leaving a gap with the statistical physics conjectured formula (and rigorous upper bound from [9]). Our result closes this conjecture and has interesting non-trivial implications on the computational complexity of these tasks. Our proof technique combines recent rigorous results in coding theory along the study of capacityachieving spatially coupled codes [14, 15, 16, 17] with other progress, coming from developments in mathematical physics putting on a rigorous basis predictions of spin glass theory [18]. From this point of view, the theorem proved in this paper is relevant in a broader context going beyond low-rank matrix estimation. Hundreds of papers have been published in statistics, machine learning or information theory using the non-rigorous statistical physics approach. We believe that our result helps setting a rigorous foundation of a broad line of work. While we focus on rank-one symmetric matrix estimation, our proof technique is readily extendable to more generic low-rank symmetric matrix or low-rank symmetric tensor estimation. We also believe that it can be extended to other problems of interest in machine learning and signal processing, such as generalized linear regression, features/dictionary learning, compressed sensing or multi-layer neural networks. 2 Two examples: Wigner spiked model and community detection In order to illustrate the consequences of our results we shall present two examples. Wigner spiked model: In this model, the vector s is a Bernoulli random vector, Si∼Ber(ρ). For large enough densities (i.e. ρ>0.041(1)), [3] computed the matrix-MMSE and proved that AMP is a computationally efficient algorithm that asymptotically achieves the matrix-MMSE for any value of the noise ∆. 
Our results allow to close the gap left open by [3]: on one hand we now obtain rigorously the MMSE for ρ≤ 0.041(1), and on the other one we observe that for such values of ρ, and as ∆ decreases, there is a small region where two local minima coexist in iRS(E; ∆). In particular for ∆AMP<∆<∆Opt = ∆RS the global minimum corresponding to the MMSE differs from the local one that traps AMP, and a computational gap appears (see figure 1). While the region where AMP is Bayes optimal is quite large, the region where is it not, however, is perhaps the most interesting one. While this is by no means evident, statistical physics analogies with physical phase transitions in nature suggest that this region should be hard for a very broad class of algorithms. For small ρ our results are consistent with the known optimal and algorithmic thresholds predicted in sparse PCA [19, 20], that treats the case of sub-extensive ρ=O(1) values. Another interesting line of work for such probabilistic models appeared in the context of random matrix theory (see [8] and references therein) and predicts that a sharp phase transition occurs at a critical value of the noise ∆spectral =ρ2 below which an outlier eigenvalue (and its principal eigenvector) has a positive correlation with the hidden signal. For larger noise values the spectral distribution of the observation is indistinguishable from that of the pure random noise. Asymmetric balanced community detection: We now consider the problem of detecting two communities (groups) with different sizes ρn and (1− ρ)n, that generalizes the one considered in [4]. One is given a graph where the probability to have a link between nodes in the first group is p+µ(1−ρ)/(ρ √ n), between those in the second group is p+µρ/( √ n(1−ρ)), while interconnections appear with probability p−µ/ √ n. With this peculiar “balanced” setting, the nodes in each group have the same degree distribution with mean pn, making them harder to distinguish. According to the universality property described in the first section, this is equivalent to a model with AWGN of variance ∆ = p(1−p)/µ2 where each variable si is chosen according to P0(s)=ρδ(s− √ (1−ρ)/ρ)+(1−ρ)δ(s+ √ ρ/(1−ρ)). Our results for this problem2 are summarized on the right hand side of figure 2. For ρ > ρc = 1/2− √ 1/12 (black point), it is asymptotically information theoretically possible to get an estimation better than chance if and only if ∆<1. When ρ<ρc, however, it becomes possible for much larger values of the noise. Interestingly, AMP and spectral methods have the same transition and can find a positive correlation with the hidden communities for ∆<1, regardless of the value of ρ. Again, a region [∆AMP,∆Opt =∆RS] exists where a computational gap appears when ρ<ρc. One can investigate the very low ρ regime where we find that the information theoretic transition goes as ∆Opt(ρ→0) = 1/(4ρ| log ρ|). Now if we assume that this result stays true even for ρ= O(1) (which is a speculation at this point), we can choose µ→(1−p)ρ √ n such that the small group is a clique. Then the problem corresponds to a “balanced” version of the famous planted clique problem [21]. We find that the AMP/spectral approach finds the 2Note that here since E=v=1 is an extremum of iRS(E; ∆), one must introduce a small bias in P0 and let it then tend to zero at the end of the proofs. hidden clique when it is larger than √ np/(1−p), while the information theoretic transition translates into size of the clique 4p log(n)/(1−p). 
This is indeed reminiscent of the more classical planted clique problem at p=1/2 with its gap between log(n) (information theoretic), √ n/e (AMP [22]) and √ n (spectral [21]). Since in our balanced case the spectral and AMP limits match, this suggests that the small gain of AMP in the standard clique problem is simply due to the information provided by the distribution of local degrees in the two groups (which is absent in our balanced case). We believe this correspondence strengthens the claim that the AMP gap is actually a fundamental one. 3 Proofs The crux of our proof rests on an auxiliary “spatially coupled system”. The hallmark of spatially coupled models is that one can tune them so that the gap between the algorithmic and information theoretic limits is eliminated, while at the same time the MI is maintained unchanged for the coupled and original models. Roughly speaking, this means that it is possible to algorithmically compute the information theoretic limit of the original model because a suitable algorithm is optimal on the coupled system. The present spatially coupled construction is similar to the one used for the coupled Curie-Weiss model [14]. Consider a ring of length L+1 (L even) with blocks positioned at µ∈{0, . . . , L} and coupled to neighboring blocks {µ−w, . . . , µ+w}. Positions µ are taken modulo L+1 and the integer w∈{0, . . . , L/2} equals the size of the coupling window. The coupled model is wiµjν = siµsjν √ Λµν n + ziµjν √ ∆, (3) where the index iµ∈{1, . . . , n} (resp. jν) belongs to the block µ (resp. ν) along the ring, Λ is an (L+1)×(L+1) matrix which describes the strength of the coupling between blocks, andZiµjν ∼N (0, 1) are i.i.d. For the proof to work, the matrix elements have to be chosen appropriately. We assume that: i) Λ is a doubly stochastic matrix; ii) Λµν depends on |µ−ν|; iii) Λµν is not vanishing for |µ−ν| ≤ w and vanishes for |µ−ν|>w; iv) Λ is smooth in the sense |Λµν−Λµ+1ν |=O(w−2); v) Λ has a non-negative Fourier transform. All these conditions can easily be met, the simplest example being a triangle of base 2w+1 and height 1/(w+1). The construction of the coupled system is completed by introducing a seed in the ring: we assume perfect knowledge of the signal components {siµ} for µ∈B :={−w−1, . . . , w−1} mod L+1. This seed is what allows to close the gap between the algorithmic and information theoretic limits and therefore plays a crucial role. Note it can also be viewed as an “opening” of the chain with fixed boundary conditions. Our first crucial result states that the MI Iw,L(S; W) of the coupled and original systems are the same in a suitable limit. Lemma 3.1 (Equality of mutual informations) For any fixed w the following limits exist and are equal: limL→∞ limn→∞ Iw,L(S; W)/(n(L+1))=limn→∞ I(S; W)/n. An immediate corollary is that non-analyticity points (w.r.t ∆) of the MIs are the same in the coupled and original models. In particular, defining ∆Opt,coup := sup{∆ | limL→∞ limn→∞ Iw,L(S; W)/(n(L+1)) is analytic in ]0,∆[}, we have ∆Opt,coup =∆Opt. The second crucial result states that the AMP threshold of the spatially coupled system is at least as good as ∆RS. The analysis of AMP applies to the coupled system as well [11, 12] and it can be shown that the performance of AMP is assessed by SE. Let Etµ := limn→∞ ES,Z[‖Sµ− ŝ t µ‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝtµ at time t for the µ-th “block” of S. We associate to each position µ ∈ {0, . . . 
, L} an independent scalar system with AWGN of the form Y =S+Σµ(E; ∆)Z, with Σµ(E; ∆)2 := ∆/(v− ∑L ν=0 ΛµνEν) and S∼P0, Z∼N (0, 1). Taking into account knowledge of the signal components in B, SE reads: Et+1µ = mmse(Σµ(E t; ∆)−2), E0µ = v for µ ∈ {0, . . . , L} \ B, Etµ = 0 for µ ∈ B, t ≥ 0, (4) where the mmse function is defined as in section 1. From the monotonicity of the mmse function we have Et+1µ ≤Etµ for all µ∈{0, . . . , L}, a partial order which implies that limt→∞ E t= E∞ exists. This allows to define an algorithmic threshold for the coupled system: ∆AMP,w,L :=sup{∆|E∞µ ≤ Egood(∆) ∀ µ}. We show (equality holds but is not directly needed): Lemma 3.2 (Threshold saturation) Let ∆AMP,coup := lim infw→∞ lim infL→∞∆AMP,w,L. We have ∆AMP,coup≥∆RS. Proof sketch of theorem 1.1: First we prove the RS formula for ∆ ≤ ∆Opt. It is known [3] that the matrix-MSE of AMP when n→∞ is equal to v2−(v−Et)2. This cannot improve the matrix-MMSE, hence (v2−(v−E∞)2)/4≥ lim supn→∞Mmmsen/4. For ∆≤∆AMP we have E∞=Egood(∆) which is the global minimum of (1) so the left hand side of the last inequality equals the derivative of minE∈[0,v] iRS(E; ∆) w.r.t ∆−1. Thus using the matrix version of the I-MMSE relation [23] we get d d∆−1 min E∈[0,v] iRS(E; ∆) ≥ lim sup n→∞ 1 n dI(S; W) d∆−1 . (5) Integrating this relation on [0,∆] ⊂ [0,∆AMP] and checking that minE∈[0,v] iRS(E; 0) = H(S) (the Shannon entropy of P0) we obtain minE∈[0,v] iRS(E; ∆)≤ lim infn→∞ I(S; W)/n. But we know I(S; W)/n≤minE∈[0,v] iRS(E; ∆) [9], thus we already get theorem 1.1 for ∆≤∆AMP. We notice that ∆AMP≤∆Opt. While this might seem intuitively clear, it follows from ∆RS≥∆AMP (by their definitions) which together with ∆AMP > ∆Opt would imply from theorem 1.1 that limn→∞ I(S; W)/n is analytic at ∆Opt, a contradiction. The next step is to extend theorem 1.1 to the range [∆AMP,∆Opt]. Suppose for a moment ∆RS≥∆Opt. Then both functions on each side of the RS formula are analytic on the whole range ]0,∆Opt[ and since they are equal for ∆≤∆AMP, they must be equal on their whole analyticity range and by continuity, they must also be equal at ∆Opt (that the functions are continuous follows from independent arguments on the existence of the n→∞ limit of concave functions). It remains to show that ∆RS∈ ]∆AMP,∆Opt[ is impossible. We proceed by contradiction, so suppose this is true. Then both functions on each side of the RS formula are analytic on ]0,∆RS[ and since they are equal for ]0,∆AMP[⊂]0,∆RS[ they must be equal on the whole range ]0,∆RS[ and also at ∆RS by continuity. For ∆>∆RS the fixed point of SE is E∞=Ebad(∆) which is also the global minimum of iRS(E; ∆), hence (5) is verified. Integrating this inequality on ]∆RS,∆[⊂]∆RS,∆Opt[ and using I(S; W)/n≤minE∈[0,v] iRS(E; ∆) again, we find that the RS formula holds for all ∆∈ [0,∆Opt]. But this implies that minE∈[0,v] iRS(E; ∆) is analytic at ∆RS, a contradiction. We now prove the RS formula for ∆≥∆Opt. Note that the previous arguments showed that necessarily ∆Opt≤∆RS. Thus by lemmas 3.1 and 3.2 (and the sub-optimality of AMP as shown as before) we obtain ∆RS ≤ ∆AMP,coup≤∆Opt,coup = ∆Opt≤∆RS. This shows that ∆Opt = ∆RS (this is the point where spatial coupling came in the game and we do not know of other means to prove such an equality). For ∆>∆RS we have E∞ =Ebad(∆) which is the global minimum of iRS(E; ∆). Therefore we again have (5) in this range and the proof can be completed by using once more the integration argument, this time over the range [∆RS,∆]=[∆Opt,∆]. 
Proof sketch of corollaries 1.3 and 1.5: LetE∗(∆)=argminEiRS(E; ∆) for ∆ 6=∆RS. By explicit calculation one checks that diRS(E∗,∆)/d∆−1 =(v2−(v−E∗(∆))2)/4, so from theorem 1.1 and the matrix form of the I-MMSE relation we find Mmmsen→v2−(v−E∗(∆))2 as n→∞ which is the first part of the statement of corollary 1.3. Let us now turn to corollary 1.5. For n→∞ the vectorMSE of the AMP estimator at time t equals Et, and since the fixed point equation corresponding to SE is precisely the stationarity equation for iRS(E; ∆), we conclude that for ∆ /∈ [∆AMP,∆RS] we must have E∞=E∗(∆). It remains to prove that E∗(∆)=limn→∞Vmmsen(∆) at least for ∆ /∈ [∆AMP,∆RS] (we believe this is in fact true for all ∆). This will settle the second part of corollary 1.3 as well as 1.5. Using (Nishimori) identities ES,W[SiSjE[XiXj |W]]=ES,W[E[XiXj |W]2] (see e.g. [9]) and using the law of large numbers we can show limn→∞Mmmsen ≤ limn→∞(v2− (v− Vmmsen(∆)) 2). Concentration techniques similar to [13] suggest that the equality in fact holds (for ∆ 6= ∆RS) but there are technicalities that prevent us from completing the proof of equality. However it is interesting to note that this equality would imply E∗(∆)=limn→∞Vmmsen(∆) for all ∆ 6=∆RS. Nevertheless, another argument can be used when AMP is optimal. On one hand the right hand side of the inequality is necessarily smaller than v2−(v−E∞)2. On the other hand the left hand side of the inequality is equal to v2−(v−E∗(∆))2. SinceE∗(∆)=E∞ when ∆ /∈ [∆AMP,∆RS], we can conclude limn→∞Vmmsen(∆)=argminEiRS(E; ∆) for this range of ∆. Proof sketch of lemma 3.1: Here we prove the lemma for a ring that is not seeded. An easy argument shows that a seed of size w does not change the MI per variable when L→∞. The statistical physics formulation is convenient: up to the trivial additive term n(L+1)v2/4, the MI Iw,L(S; W) equals the free energy −ES,Z[lnZw,L], where Zw,L := ∫ dxP0(x) exp(−H(x, z,Λ)) and H(x, z,Λ) = 1 ∆ L∑ µ=0 ( Λµµ ∑ iµ≤jµ Aiµjµ(x, z,Λ) + µ+w∑ ν=µ+1 Λµν ∑ iµ,jν Aiµjν (x, z,Λ) ) , (6) with Aiµjν (x, z,Λ) :=(x2iµx 2 jν )/(2n)−(siµsjνxiµxjν )/n−(xiµxjνziµjν √ ∆)/ √ nΛµν . Consider a pair of systems with coupling matrices Λ and Λ′ and i.i.d noize realizations z, z′, an interpolated HamiltonianH(x, z, tΛ)+H(x, z′, (1−t)Λ′), t ∈ [0, 1], and the corresponding partition function Zt. The main idea of the proof is to show that for suitable choices of matrices,− ddtES,Z,Z′ [lnZt]≤0 for all t∈ [0, 1] (up to negligible terms), so that by the fundamental theorem of calculus, we get a comparison between the free energies ofH(x, z,Λ) andH(x, z′,Λ′). Performing the t-derivative brings down a Gibbs average of a polynomial in all variables siµ , xiµ , ziµjν and z ′ iµjν . This expectation over S, Z, Z′ of this Gibbs average is simplified using integration by parts over the Gaussian noise ziµjν , z′iµjν and Nishimori identities (see e.g. proof of corollary 1.3 for one of them). This algebra leads to − 1 n(L+ 1) d dt ES,Z,Z′ [lnZt] = 1 4∆(L+ 1) ES,Z,Z′ [〈qᵀΛq− qᵀΛ′q〉t] +O(1/(nL)), (7) where 〈−〉t is the Gibbs average w.r.t the interpolated Hamiltonian, q is the vector of overlaps qµ := ∑n iµ=1 siµxiµ/n. If we can choose matrices s.t Λ ′ >Λ, the difference of quadratic forms in the Gibbs bracket is negative and we obtain an inequality in the large size limit. We use this scheme to interpolate between the fully decoupled system w=0 and the coupled one 1≤w<L/2 and then between 1≤w <L/2 and the fully connected system w=L/2. The w= 0 system has Λµν = δµν with eigenvalues (1, 1, . . . , 1). 
For the 1 ≤ w < L/2 system, we take any stochastic translation invariant matrix with non-negative discrete Fourier transform (of its rows): such matrices have an eigenvalue equal to 1 and all others in [0, 1[ (the eigenvalues are precisely equal to the discrete Fourier transform). For w = L/2 we choose Λµν = 1/(L+1) which is a projector with eigenvalues (0, 0, . . . , 1). With these choices we deduce that the free energies and MIs are ordered as Iw=0,L +O(1)≤Iw,L +O(1)≤Iw=L/2,L +O(1). To conclude the proof we divide by n(L+1) and note that the limits of the leftmost and rightmost MIs are equal, provided the limit exists. Indeed the leftmost term equals L times I(S; W) and the rightmost term is the same MI for a system of n(L+1) variables. Existence of the limit follows by subadditivity, proven by a similar interpolation [18]. Proof sketch of lemma 3.2: Fix ∆ < ∆RS. We show that, for w large enough, the coupled SE recursion (4) must converge to a fixed point E∞µ ≤Egood(∆) for all µ. The main intuition behind the proof is to use a “potential function” whose “energy” can be lowered by small perturbation of a fixed point that would go above Egood(∆) [16, 17]. The relevant potential function iw,L(E,∆) is in fact the replica potential of the coupled system (a generalization of (1)). The stationarity condition for this potential is precisely (4) (without the seeding condition). Monotonicity properties of SE ensure that any fixed point has a “unimodal” shape (and recall that it vanishes for µ∈B= {0, . . . , w−1}∪{L−w, . . . , L}). Consider a position µmax∈{w, . . . , L−w−1} where it is maximal and suppose that E∞µmax > Egood(∆). We associate to the fixed point E ∞ a so-called saturated profile Es defined on the whole of Z as follows: Esµ=Egood(∆) for all µ≤µ∞ where µ∞+1 is the smallest position s.t E∞µ >Egood(∆); E s µ=E ∞ µ for µ∈{µ∞+1, . . . , µmax−1}; Esµ=E∞µmax for all µ≥µmax. We show that Es cannot exist for w large enough. To this end define a shift operator by [S(Es)]µ :=Esµ−1. On one hand the shifted profile is a small perturbation of E s which matches a fixed point, except where it is constant, so if we Taylor expand, the first order vanishes and the second order and higher orders can be estimated as |iw,L(S(Es); ∆)−iw,L(Es; ∆)|=O(1/w) uniformly in L. On the other hand, by explicit cancellation of telescopic sums iw,L(S(Es); ∆)−iw,L(Es; ∆)= iRS(Egood; ∆)−iRS(E∞µmax ; ∆). Now one can show from monotonicity properties of SE that if E ∞ is a non trivial fixed point of the coupled SE then E∞µmax cannot be in the basin of attraction of Egood(∆) for the uncoupled SE recursion. Consequently as can be seen on the plot of iRS(E; ∆) (e.g. figure 1) we must have iRS(E∞µmax ; ∆)≥ iRS(Ebad; ∆). Therefore iw,L(S(E s); ∆)−iw,L(Es; ∆)≤ −|iRS(Ebad; ∆)−iRS(Egood; ∆)| which is an energy gain independent of w, and for large enough w we get a contradiction with the previous estimate coming from the Taylor expansion. Acknowledgments J.B and M.D acknowledge funding from the SNSF (grant 200021-156672). Part of this research received funding from the ERC under the EU’s 7th Framework Programme (FP/2007-2013/ERC Grant Agreement 307087-SPARCS). F.K and L.Z thank the Simons Institute for its hospitality.
1. What is the main focus of the paper regarding mutual information in the symmetric rank-one matrix model?
2. What are the concerns regarding the paper's style and clarity?
3. Do you have any questions about the paper's use cases, such as Wigner spike model and community detection?
4. How does the reviewer assess the paper's novelty and contributions?
5. Are there any issues with the paper's mathematical explanations and formulae?
Review
Review This paper tries to give a rigorous proof of the expression for the mutual information in the symmetric rank-one matrix model, replacing the statistical physics derivation, which is not considered rigorous. It first presents the model, claiming that in most cases the asymptotic mutual information can be approximated by the one in the additive white Gaussian noise (AWGN) setting. Then, it gives a list of interleaved formulas, hypotheses, definitions and theorems, without explaining them in detail. Next, it begins to interpret the gap between the information theoretic and algorithmic AMP thresholds, by explaining how the phase transition happens when the level of noise increases. Then, in Section 2, the paper gives two use cases: the Wigner spike model and community detection. Finally, in Section 3, the paper provides a sketch of the proof.

This paper is written in a casual style. A lot of definitions are missing, the formulas are poorly explained, the description is ambiguous and the proofs are extremely concise. It is impossible to understand unless the reader is currently working on the same problem. The reviewer comes from the statistical learning community and does not have much experience with information theory. With great patience and persistence, and numerous hours spent reading this paper, he gives the following remarks.

Bayes optimal. In many cases, the terms "Bayes" and "optimal" can mean many different things. The reader may once have learned about the "Bayes optimal classifier" in school, but may well have forgotten it. In any case, it cannot simply be called a "setting" at L2 of the abstract (zero hits on Google); it should be described properly in the main text.

Heuristic statistical physics computation (zero hits on Google). In my area, people call an algorithm heuristic if it hasn't been proven to converge to something desired. In this paper, instead, the term is used to describe calculations without sound mathematical meaning, such as casually interchanging the order of integrals, or writing integrals without a mathematical definition (as in the era before the Lebesgue integral was invented). Better to clarify this point in the main text.

MSE. Mean square error can have many kinds of formulations. Theoretically, from any two random variables one can calculate an MSE. Better to give a clear formula.

AMP. Although approximate message passing is the most important component of this paper, the definition of this algorithm cannot be found anywhere in the paper.

L8: "a large set of parameters". There are lots of parameters in the world: parameters for models, for algorithms, hyper-parameters... This should be something like "a large range of noise levels".

Channel universality (L40). The paper cites [9], suggesting that it is proven in [9]. Then why does it continue to cite [4] and [10], leaving the impression that it hasn't been completely proven? By the way, [4] and [9] are on arXiv, which means they may not have gone through serious peer review.

AWGN. This abbreviation has never been properly introduced, though the reader can guess it stands for "additive white Gaussian noise".

Eq (3). The paper should explain that I(S; W) is a univariate function of the noise level Delta, and should emphasize that the Delta in i_RS(E; Delta) is the same Delta as the noise level, because the RS potential can happily exist on its own without any connection to the rest of the model. A rigorous reader may well claim that there are two different Delta's in Eq (3), and thus disqualify this paper.

Eq (2). The paper should give the interpretation of E (the error); otherwise it will be difficult to understand the rest of the paper.

Community detection. It is not clear in the paper how it is linked with symmetric rank-one matrix estimation.

Figure 1. There is a conflict in the caption. The paper reads "... (1), (2) and (4), but in (3), when Delta_AMP < Delta < Delta_RS, ...". However, (3) is the case "Delta = 0.00125 > Delta_RS". This conflict may be due to an inconsistent reading of the figure: sometimes the author reads it from top to bottom, other times from left to right. The reviewer suggests adding the number directly onto each sub-figure.

Contribution. It is not clear in the paper whether the threshold gap is one of the contributions of this article.
NIPS
Title Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula Abstract Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression for the mutual information has been proposed using heuristic statistical physics computations, and proven in few specific cases. Here, we show how to rigorously prove the conjectured formula for the symmetric rank-one case. This allows to express the minimal mean-square-error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal. There exists, however, a gap between what currently known polynomial algorithms can do and what is expected information theoretically. Additionally, the proof technique has an interest of its own and exploits three essential ingredients: the interpolation method introduced in statistical physics by Guerra, the analysis of the approximate message-passing algorithm and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in statistical estimation where heuristic statistical physics predictions are available.
Consider the following probabilistic rank-one matrix estimation problem: one has access to noisy observations w = (wij)ni,j=1 of the pair-wise product of the components of a vector s = (s1, . . . , sn)ᵀ ∈ Rn with i.i.d components distributed as Si ∼ P0, i = 1, . . . , n. The entries of w are observed through a noisy element-wise (possibly non-linear) output probabilistic channel Pout(wij | sisj/√n). The goal is to estimate the vector s from w assuming that both P0 and Pout are known and independent of n (noise is symmetric so that wij = wji). Many important problems in statistics and machine learning can be expressed in this way, such as sparse PCA [1], the Wigner spiked model [2, 3], community detection [4] or matrix completion [5]. Proving a result initially derived by a heuristic method from statistical physics, we give an explicit expression for the mutual information (MI) and the information theoretic minimal mean-square-error (MMSE) in the asymptotic n→∞ limit. Our results imply that for a large region of parameters, the posterior marginal expectations of the underlying signal components (often assumed intractable to compute) can be obtained in the leading order in n using a polynomial-time algorithm called approximate message-passing (AMP) [6, 3, 4, 7]. We also demonstrate the existence of a region where both AMP and spectral methods [8] fail to provide a good answer to the estimation problem, while it is nevertheless information theoretically possible to do so. We illustrate our theorems with examples and also briefly discuss the implications in terms of computational complexity. 1 Setting and main results The additive white Gaussian noise setting: A standard and natural setting is the case of additive white Gaussian noise (AWGN) of known variance ∆, wij = sisj/√n + zij√∆, where z = (zij)ni,j=1 is a symmetric matrix with i.i.d entries Zij ∼ N(0, 1), 1 ≤ i ≤ j ≤ n. Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels.
This is made possible by a theorem of channel universality [9] (already proven for community detection in [4] and conjectured in [10]). This theorem states that given an output channel Pout(w|y), such that (s.t) log Pout(w|y=0) is three times differentiable with bounded second and third derivatives, then the MI satisfies I(S; W) = I(S; SSᵀ/√n + Z√∆) + O(√n), where ∆ is the inverse Fisher information (evaluated at y=0) of the output channel: ∆⁻¹ := E_{Pout(w|0)}[(∂y log Pout(W|y)|_{y=0})²]. Informally, this means that we only have to compute the MI for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions P0, an explicit one-letter formula for the MI per variable I(S; W)/n in the asymptotic limit n→∞. Main result: Our central result is a proof of the expression for the asymptotic n→∞ MI per variable via the so-called replica symmetric (RS) potential iRS(E; ∆) defined as

iRS(E; ∆) := ((v − E)² + v²)/(4∆) − E_{S,Z}[ ln ( ∫ dx P0(x) exp( −x²/(2Σ(E;∆)²) + x (S/Σ(E;∆)² + Z/Σ(E;∆)) ) ) ],   (1)

with Z ∼ N(0, 1), S ∼ P0, E[S²] = v and Σ(E; ∆)² := ∆/(v−E), E ∈ [0, v]. Here we will assume that P0 is a discrete distribution over a finite bounded real alphabet, P0(s) = Σ_{α=1}^{ν} p_α δ(s − a_α). Thus the only continuous integral in (1) is the Gaussian over z. Our results can be extended to mixtures of discrete and continuous signal distributions at the expense of technical complications in some proofs. It turns out that both the information theoretic and algorithmic AMP thresholds are determined by the set of stationary points of (1) (w.r.t E). It is possible to show that for all ∆ > 0 there always exists at least one stationary minimum. Note E = 0 is never a stationary point (except for P0 a single Dirac mass) and E = v is stationary only if E[S] = 0. In this contribution we suppose that at most three stationary points exist, corresponding to situations with at most one phase transition. We believe that situations with multiple transitions can also be covered by our techniques. Theorem 1.1 (RS formula for the mutual information) Fix ∆ > 0 and let P0 be a discrete distribution s.t (1) has at most three stationary points. Then lim_{n→∞} I(S; W)/n = min_{E∈[0,v]} iRS(E; ∆). The proof of the existence of the limit does not require the above hypothesis on P0. Also, it was first shown in [9] that for all n, I(S; W)/n ≤ min_{E∈[0,v]} iRS(E; ∆), an inequality that we will use in the proof section. It is conceptually useful to define the following threshold: Definition 1.2 (Information theoretic threshold) Define ∆Opt as the first non-analyticity point of the MI as ∆ increases: ∆Opt := sup{∆ | lim_{n→∞} I(S; W)/n is analytic in ]0,∆[}. When P0 is s.t (1) has at most three stationary points, as discussed below, then min_{E∈[0,v]} iRS(E; ∆) has at most one non-analyticity point denoted ∆RS (if min_{E∈[0,v]} iRS(E; ∆) is analytic over all R+ we set ∆RS = ∞). Theorem 1.1 gives us a means to compute the information theoretic threshold ∆Opt = ∆RS. A basic application of theorem 1.1 is the expression of the MMSE: Corollary 1.3 (Exact formula for the MMSE) For all ∆ ≠ ∆RS, the matrix-MMSE Mmmse_n := E_{S,W}[‖SSᵀ − E[XXᵀ|W]‖²_F]/n² (‖·‖_F being the Frobenius norm) is asymptotically lim_{n→∞} Mmmse_n(∆⁻¹) = v² − (v − argmin_{E∈[0,v]} iRS(E; ∆))². Moreover, if ∆ < ∆AMP (where ∆AMP is the algorithmic threshold, see definition 1.4) or ∆ > ∆RS, then the usual vector-MMSE Vmmse_n := E_{S,W}[‖S − E[X|W]‖²₂]/n satisfies lim_{n→∞} Vmmse_n = argmin_{E∈[0,v]} iRS(E; ∆).
It is natural to conjecture that the vector-MMSE is given by argminE∈[0,v]iRS(E; ∆) for all ∆ 6=∆RS, but our proof does not quite yield the full statement. A fundamental consequence concerns the performance of the AMP algorithm [6] for estimating s. AMP has been analysed rigorously in [11, 12, 4] where it is shown that its asymptotic performance is tracked by state evolution (SE). Let Et :=limn→∞ ES,Z[‖S− ŝt‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝt at time t. Define mmse(Σ−2) :=ES,Z [(S−E[X|S+ΣZ])2] as the usual scalar mmse function associated to a scalar AWGN channel of noise variance Σ2, with S∼P0 and Z∼N (0, 1). Then Et+1 = mmse(Σ(Et; ∆)−2), E0 = v, (2) is the SE recursion. Monotonicity properties of the mmse function imply that Et is a decreasing sequence s.t limt→∞Et=E∞ exists. Note that when E[S] = 0 and v is an unstable fixed point, as such, SE “does not start”. While this is not really a problem when one runs AMP in practice, for analysis purposes one can slightly bias P0 and remove the bias at the end of the proofs. Definition 1.4 (AMP algorithmic threshold) For ∆ > 0 small enough, the fixed point equation corresponding to (2) has a unique solution for all noise values in ]0,∆[. We define ∆AMP as the supremum of all such ∆. Corollary 1.5 (Performance of AMP) In the limit n→∞, AMP initialized without any knowledge other than P0 yields upon convergence the asymptotic matrix-MMSE as well as the asymptotic vector-MMSE iff ∆<∆AMP or ∆>∆RS, namely E∞=argminE∈[0,v]iRS(E; ∆). ∆AMP can be read off the replica potential (1): by differentiation of (1) one finds a fixed point equation that corresponds to (2). Thus ∆AMP is the smallest solution of ∂iRS/∂E=∂2iRS/∂E2 =0; in other words it is the “first” horizontal inflexion point appearing in iRS(E; ∆) when ∆ increases. Discussion: With our hypothesis on P0 there are only three possible scenarios: ∆AMP < ∆RS (one “first order” phase transition); ∆AMP = ∆RS < ∞ (one “higher order” phase transition); ∆AMP = ∆RS =∞ (no phase transition). In the sequel we will have in mind the most interesting case, namely one first order phase transition, where we determine the gap between the algorithmic AMP and information theoretic performance. The cases of no phase transition or higher order phase transition, which present no algorithmic gap, are basically covered by the analysis of [3] and follow as a special case from our proof. The only cases that would require more work are those where P0 is s.t (1) develops more than three stationary points and more than one phase transition is present. For ∆AMP<∆RS the structure of stationary points of (1) is as follows1 (figure 1). There exist three branchesEgood(∆), Eunstable(∆) andEbad(∆) s.t: 1) For 0<∆<∆AMP there is a single stationary point Egood(∆) which is a global minimum; 2) At ∆AMP a horizontal inflexion point appears, for ∆∈ [∆AMP,∆RS] there are three stationary points satisfying Egood(∆AMP)<Eunstable(∆AMP)= Ebad(∆AMP), Egood(∆) < Eunstable(∆) < Ebad(∆) otherwise, and moreover iRS(Egood; ∆) ≤ iRS(Ebad; ∆) with equality only at ∆RS; 3) for ∆ > ∆RS there is at least the stationary point Ebad(∆) which is always the global minimum, i.e. iRS(Ebad; ∆)<iRS(Egood; ∆). (For higher ∆ the Egood(∆) and Eunstable(∆) branches may merge and disappear); 4) Egood(∆) is analytic for ∆∈]0,∆′[, ∆′>∆RS, and Ebad(∆) is analytic for ∆>∆AMP. We note for further use in the proof section that E∞=Egood(∆) for ∆<∆AMP and E∞=Ebad(∆) for ∆>∆AMP. Definition 1.4 is equivalent to ∆AMP = sup{∆|E∞=Egood(∆)}. 
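As a concrete, non-authoritative illustration of how ∆AMP and ∆RS can be read off the potential (1) and the state evolution (2) in practice, the short numerical sketch below scans ∆ for a spiked Bernoulli prior P0 = Ber(rho) with rho = 0.02 (the Wigner spiked model discussed in section 2 below, where a first-order transition and hence a gap are expected). The prior, the grids and the scan range are our own rough choices and may need adjusting.

```python
# Sketch: locate Delta_AMP and Delta_RS from i_RS (1) and state evolution (2).
import numpy as np

RHO = 0.02
V = RHO                                      # v = E[S^2] for a Bernoulli prior
Z = np.linspace(-10.0, 10.0, 4001)           # quadrature grid for Z ~ N(0,1)
PHI = np.exp(-Z**2 / 2) / np.sqrt(2 * np.pi) * (Z[1] - Z[0])

def posterior_mean(y, m):
    """E[X | Y = y] for the scalar channel Y = S + Z/sqrt(m), S ~ Ber(rho)."""
    t = np.clip(m * y - m / 2 + np.log(RHO / (1 - RHO)), -60, 60)
    return 1.0 / (1.0 + np.exp(-t))

def mmse(m):
    """Scalar mmse function entering the state evolution (2)."""
    m = max(m, 1e-12)
    out = 0.0
    for s, p in [(0.0, 1 - RHO), (1.0, RHO)]:
        y = s + Z / np.sqrt(m)
        out += p * np.sum((s - posterior_mean(y, m)) ** 2 * PHI)
    return out

def i_rs(E, delta):
    """Replica symmetric potential i_RS(E; Delta) of eq. (1) for the Ber(rho) prior."""
    m = (V - E) / delta                      # 1 / Sigma(E; Delta)^2
    log_part = 0.0
    for s, p in [(0.0, 1 - RHO), (1.0, RHO)]:
        h = m * s + np.sqrt(max(m, 0.0)) * Z
        # ln[ (1-rho) + rho*exp(h - m/2) ]: the partition sum over x in {0, 1}
        log_part += p * np.sum(np.logaddexp(np.log(1 - RHO), np.log(RHO) + h - m / 2) * PHI)
    return ((V - E) ** 2 + V ** 2) / (4 * delta) - log_part

def se_fixed_point(delta, iters=1000):
    E = V                                    # E^0 = v, the uninformative start
    for _ in range(iters):
        E = mmse((V - E) / delta)
    return E

E_GRID = np.linspace(0.0, V, 401)
for delta in np.linspace(2e-4, 1.4e-3, 13):
    E_star = E_GRID[np.argmin([i_rs(E, delta) for E in E_GRID])]  # global minimum of (1)
    E_amp = se_fixed_point(delta)                                  # where SE (2) ends up
    print(f"Delta={delta:.2e}   E*={E_star:.4f}   E_AMP={E_amp:.4f}")
# Delta_AMP is the first Delta at which E_AMP detaches from E*, and Delta_RS the
# Delta at which E* itself jumps; in between, AMP is suboptimal.
```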
Moreover we will also use that iRS(Egood; ∆) is analytic on ]0,∆′[, iRS(Ebad; ∆) is analytic on ]∆AMP,∞[, and the only non-analyticity point of minE∈[0,v] iRS(E; ∆) is at ∆RS. Relation to other works: Explicit single-letter characterization of the MI in the rank-one problem has attracted a lot of attention recently. Particular cases of theorem 1.1 have been shown rigorously in a number of situations. A special case when si=±1∼Ber(1/2) already appeared in [13] where an equivalent spin glass model is analysed. Very recently, [9] has generalized the results of [13] and, notably, obtained a generic matching upper bound. The same formula has been also rigorously computed following the study of AMP in [3] for spiked models (provided, however, that the signal was not too sparse) and in [4] for strictly symmetric community detection. 1We take E[S] 6= 0. Once theorem 1.1 is proven for this case a limiting argument allows to extend it to E[S]=0. For rank-one symmetric matrix estimation problems, AMP has been introduced by [6], who also computed the SE formula to analyse its performance, generalizing techniques developed by [11] and [12]. SE was further studied by [3] and [4]. In [7, 10], the generalization to larger rank was also considered. The general formula proposed by [10] for the conditional entropy and the MMSE on the basis of the heuristic cavity method from statistical physics was not demonstrated in full generality. Worst, all existing proofs could not reach the more interesting regime where a gap between the algorithmic and information theoretic perfomances appears, leaving a gap with the statistical physics conjectured formula (and rigorous upper bound from [9]). Our result closes this conjecture and has interesting non-trivial implications on the computational complexity of these tasks. Our proof technique combines recent rigorous results in coding theory along the study of capacityachieving spatially coupled codes [14, 15, 16, 17] with other progress, coming from developments in mathematical physics putting on a rigorous basis predictions of spin glass theory [18]. From this point of view, the theorem proved in this paper is relevant in a broader context going beyond low-rank matrix estimation. Hundreds of papers have been published in statistics, machine learning or information theory using the non-rigorous statistical physics approach. We believe that our result helps setting a rigorous foundation of a broad line of work. While we focus on rank-one symmetric matrix estimation, our proof technique is readily extendable to more generic low-rank symmetric matrix or low-rank symmetric tensor estimation. We also believe that it can be extended to other problems of interest in machine learning and signal processing, such as generalized linear regression, features/dictionary learning, compressed sensing or multi-layer neural networks. 2 Two examples: Wigner spiked model and community detection In order to illustrate the consequences of our results we shall present two examples. Wigner spiked model: In this model, the vector s is a Bernoulli random vector, Si∼Ber(ρ). For large enough densities (i.e. ρ>0.041(1)), [3] computed the matrix-MMSE and proved that AMP is a computationally efficient algorithm that asymptotically achieves the matrix-MMSE for any value of the noise ∆. 
Our results allow to close the gap left open by [3]: on one hand we now obtain rigorously the MMSE for ρ≤ 0.041(1), and on the other one we observe that for such values of ρ, and as ∆ decreases, there is a small region where two local minima coexist in iRS(E; ∆). In particular for ∆AMP<∆<∆Opt = ∆RS the global minimum corresponding to the MMSE differs from the local one that traps AMP, and a computational gap appears (see figure 1). While the region where AMP is Bayes optimal is quite large, the region where is it not, however, is perhaps the most interesting one. While this is by no means evident, statistical physics analogies with physical phase transitions in nature suggest that this region should be hard for a very broad class of algorithms. For small ρ our results are consistent with the known optimal and algorithmic thresholds predicted in sparse PCA [19, 20], that treats the case of sub-extensive ρ=O(1) values. Another interesting line of work for such probabilistic models appeared in the context of random matrix theory (see [8] and references therein) and predicts that a sharp phase transition occurs at a critical value of the noise ∆spectral =ρ2 below which an outlier eigenvalue (and its principal eigenvector) has a positive correlation with the hidden signal. For larger noise values the spectral distribution of the observation is indistinguishable from that of the pure random noise. Asymmetric balanced community detection: We now consider the problem of detecting two communities (groups) with different sizes ρn and (1− ρ)n, that generalizes the one considered in [4]. One is given a graph where the probability to have a link between nodes in the first group is p+µ(1−ρ)/(ρ √ n), between those in the second group is p+µρ/( √ n(1−ρ)), while interconnections appear with probability p−µ/ √ n. With this peculiar “balanced” setting, the nodes in each group have the same degree distribution with mean pn, making them harder to distinguish. According to the universality property described in the first section, this is equivalent to a model with AWGN of variance ∆ = p(1−p)/µ2 where each variable si is chosen according to P0(s)=ρδ(s− √ (1−ρ)/ρ)+(1−ρ)δ(s+ √ ρ/(1−ρ)). Our results for this problem2 are summarized on the right hand side of figure 2. For ρ > ρc = 1/2− √ 1/12 (black point), it is asymptotically information theoretically possible to get an estimation better than chance if and only if ∆<1. When ρ<ρc, however, it becomes possible for much larger values of the noise. Interestingly, AMP and spectral methods have the same transition and can find a positive correlation with the hidden communities for ∆<1, regardless of the value of ρ. Again, a region [∆AMP,∆Opt =∆RS] exists where a computational gap appears when ρ<ρc. One can investigate the very low ρ regime where we find that the information theoretic transition goes as ∆Opt(ρ→0) = 1/(4ρ| log ρ|). Now if we assume that this result stays true even for ρ= O(1) (which is a speculation at this point), we can choose µ→(1−p)ρ √ n such that the small group is a clique. Then the problem corresponds to a “balanced” version of the famous planted clique problem [21]. We find that the AMP/spectral approach finds the 2Note that here since E=v=1 is an extremum of iRS(E; ∆), one must introduce a small bias in P0 and let it then tend to zero at the end of the proofs. hidden clique when it is larger than √ np/(1−p), while the information theoretic transition translates into size of the clique 4p log(n)/(1−p). 
This is indeed reminiscent of the more classical planted clique problem at p=1/2 with its gap between log(n) (information theoretic), √ n/e (AMP [22]) and √ n (spectral [21]). Since in our balanced case the spectral and AMP limits match, this suggests that the small gain of AMP in the standard clique problem is simply due to the information provided by the distribution of local degrees in the two groups (which is absent in our balanced case). We believe this correspondence strengthens the claim that the AMP gap is actually a fundamental one. 3 Proofs The crux of our proof rests on an auxiliary “spatially coupled system”. The hallmark of spatially coupled models is that one can tune them so that the gap between the algorithmic and information theoretic limits is eliminated, while at the same time the MI is maintained unchanged for the coupled and original models. Roughly speaking, this means that it is possible to algorithmically compute the information theoretic limit of the original model because a suitable algorithm is optimal on the coupled system. The present spatially coupled construction is similar to the one used for the coupled Curie-Weiss model [14]. Consider a ring of length L+1 (L even) with blocks positioned at µ∈{0, . . . , L} and coupled to neighboring blocks {µ−w, . . . , µ+w}. Positions µ are taken modulo L+1 and the integer w∈{0, . . . , L/2} equals the size of the coupling window. The coupled model is wiµjν = siµsjν √ Λµν n + ziµjν √ ∆, (3) where the index iµ∈{1, . . . , n} (resp. jν) belongs to the block µ (resp. ν) along the ring, Λ is an (L+1)×(L+1) matrix which describes the strength of the coupling between blocks, andZiµjν ∼N (0, 1) are i.i.d. For the proof to work, the matrix elements have to be chosen appropriately. We assume that: i) Λ is a doubly stochastic matrix; ii) Λµν depends on |µ−ν|; iii) Λµν is not vanishing for |µ−ν| ≤ w and vanishes for |µ−ν|>w; iv) Λ is smooth in the sense |Λµν−Λµ+1ν |=O(w−2); v) Λ has a non-negative Fourier transform. All these conditions can easily be met, the simplest example being a triangle of base 2w+1 and height 1/(w+1). The construction of the coupled system is completed by introducing a seed in the ring: we assume perfect knowledge of the signal components {siµ} for µ∈B :={−w−1, . . . , w−1} mod L+1. This seed is what allows to close the gap between the algorithmic and information theoretic limits and therefore plays a crucial role. Note it can also be viewed as an “opening” of the chain with fixed boundary conditions. Our first crucial result states that the MI Iw,L(S; W) of the coupled and original systems are the same in a suitable limit. Lemma 3.1 (Equality of mutual informations) For any fixed w the following limits exist and are equal: limL→∞ limn→∞ Iw,L(S; W)/(n(L+1))=limn→∞ I(S; W)/n. An immediate corollary is that non-analyticity points (w.r.t ∆) of the MIs are the same in the coupled and original models. In particular, defining ∆Opt,coup := sup{∆ | limL→∞ limn→∞ Iw,L(S; W)/(n(L+1)) is analytic in ]0,∆[}, we have ∆Opt,coup =∆Opt. The second crucial result states that the AMP threshold of the spatially coupled system is at least as good as ∆RS. The analysis of AMP applies to the coupled system as well [11, 12] and it can be shown that the performance of AMP is assessed by SE. Let Etµ := limn→∞ ES,Z[‖Sµ− ŝ t µ‖22]/n be the asymptotic average vector-MSE of the AMP estimate ŝtµ at time t for the µ-th “block” of S. We associate to each position µ ∈ {0, . . . 
, L} an independent scalar system with AWGN of the form Y =S+Σµ(E; ∆)Z, with Σµ(E; ∆)2 := ∆/(v− ∑L ν=0 ΛµνEν) and S∼P0, Z∼N (0, 1). Taking into account knowledge of the signal components in B, SE reads: Et+1µ = mmse(Σµ(E t; ∆)−2), E0µ = v for µ ∈ {0, . . . , L} \ B, Etµ = 0 for µ ∈ B, t ≥ 0, (4) where the mmse function is defined as in section 1. From the monotonicity of the mmse function we have Et+1µ ≤Etµ for all µ∈{0, . . . , L}, a partial order which implies that limt→∞ E t= E∞ exists. This allows to define an algorithmic threshold for the coupled system: ∆AMP,w,L :=sup{∆|E∞µ ≤ Egood(∆) ∀ µ}. We show (equality holds but is not directly needed): Lemma 3.2 (Threshold saturation) Let ∆AMP,coup := lim infw→∞ lim infL→∞∆AMP,w,L. We have ∆AMP,coup≥∆RS. Proof sketch of theorem 1.1: First we prove the RS formula for ∆ ≤ ∆Opt. It is known [3] that the matrix-MSE of AMP when n→∞ is equal to v2−(v−Et)2. This cannot improve the matrix-MMSE, hence (v2−(v−E∞)2)/4≥ lim supn→∞Mmmsen/4. For ∆≤∆AMP we have E∞=Egood(∆) which is the global minimum of (1) so the left hand side of the last inequality equals the derivative of minE∈[0,v] iRS(E; ∆) w.r.t ∆−1. Thus using the matrix version of the I-MMSE relation [23] we get d d∆−1 min E∈[0,v] iRS(E; ∆) ≥ lim sup n→∞ 1 n dI(S; W) d∆−1 . (5) Integrating this relation on [0,∆] ⊂ [0,∆AMP] and checking that minE∈[0,v] iRS(E; 0) = H(S) (the Shannon entropy of P0) we obtain minE∈[0,v] iRS(E; ∆)≤ lim infn→∞ I(S; W)/n. But we know I(S; W)/n≤minE∈[0,v] iRS(E; ∆) [9], thus we already get theorem 1.1 for ∆≤∆AMP. We notice that ∆AMP≤∆Opt. While this might seem intuitively clear, it follows from ∆RS≥∆AMP (by their definitions) which together with ∆AMP > ∆Opt would imply from theorem 1.1 that limn→∞ I(S; W)/n is analytic at ∆Opt, a contradiction. The next step is to extend theorem 1.1 to the range [∆AMP,∆Opt]. Suppose for a moment ∆RS≥∆Opt. Then both functions on each side of the RS formula are analytic on the whole range ]0,∆Opt[ and since they are equal for ∆≤∆AMP, they must be equal on their whole analyticity range and by continuity, they must also be equal at ∆Opt (that the functions are continuous follows from independent arguments on the existence of the n→∞ limit of concave functions). It remains to show that ∆RS∈ ]∆AMP,∆Opt[ is impossible. We proceed by contradiction, so suppose this is true. Then both functions on each side of the RS formula are analytic on ]0,∆RS[ and since they are equal for ]0,∆AMP[⊂]0,∆RS[ they must be equal on the whole range ]0,∆RS[ and also at ∆RS by continuity. For ∆>∆RS the fixed point of SE is E∞=Ebad(∆) which is also the global minimum of iRS(E; ∆), hence (5) is verified. Integrating this inequality on ]∆RS,∆[⊂]∆RS,∆Opt[ and using I(S; W)/n≤minE∈[0,v] iRS(E; ∆) again, we find that the RS formula holds for all ∆∈ [0,∆Opt]. But this implies that minE∈[0,v] iRS(E; ∆) is analytic at ∆RS, a contradiction. We now prove the RS formula for ∆≥∆Opt. Note that the previous arguments showed that necessarily ∆Opt≤∆RS. Thus by lemmas 3.1 and 3.2 (and the sub-optimality of AMP as shown as before) we obtain ∆RS ≤ ∆AMP,coup≤∆Opt,coup = ∆Opt≤∆RS. This shows that ∆Opt = ∆RS (this is the point where spatial coupling came in the game and we do not know of other means to prove such an equality). For ∆>∆RS we have E∞ =Ebad(∆) which is the global minimum of iRS(E; ∆). Therefore we again have (5) in this range and the proof can be completed by using once more the integration argument, this time over the range [∆RS,∆]=[∆Opt,∆]. 
Proof sketch of corollaries 1.3 and 1.5: LetE∗(∆)=argminEiRS(E; ∆) for ∆ 6=∆RS. By explicit calculation one checks that diRS(E∗,∆)/d∆−1 =(v2−(v−E∗(∆))2)/4, so from theorem 1.1 and the matrix form of the I-MMSE relation we find Mmmsen→v2−(v−E∗(∆))2 as n→∞ which is the first part of the statement of corollary 1.3. Let us now turn to corollary 1.5. For n→∞ the vectorMSE of the AMP estimator at time t equals Et, and since the fixed point equation corresponding to SE is precisely the stationarity equation for iRS(E; ∆), we conclude that for ∆ /∈ [∆AMP,∆RS] we must have E∞=E∗(∆). It remains to prove that E∗(∆)=limn→∞Vmmsen(∆) at least for ∆ /∈ [∆AMP,∆RS] (we believe this is in fact true for all ∆). This will settle the second part of corollary 1.3 as well as 1.5. Using (Nishimori) identities ES,W[SiSjE[XiXj |W]]=ES,W[E[XiXj |W]2] (see e.g. [9]) and using the law of large numbers we can show limn→∞Mmmsen ≤ limn→∞(v2− (v− Vmmsen(∆)) 2). Concentration techniques similar to [13] suggest that the equality in fact holds (for ∆ 6= ∆RS) but there are technicalities that prevent us from completing the proof of equality. However it is interesting to note that this equality would imply E∗(∆)=limn→∞Vmmsen(∆) for all ∆ 6=∆RS. Nevertheless, another argument can be used when AMP is optimal. On one hand the right hand side of the inequality is necessarily smaller than v2−(v−E∞)2. On the other hand the left hand side of the inequality is equal to v2−(v−E∗(∆))2. SinceE∗(∆)=E∞ when ∆ /∈ [∆AMP,∆RS], we can conclude limn→∞Vmmsen(∆)=argminEiRS(E; ∆) for this range of ∆. Proof sketch of lemma 3.1: Here we prove the lemma for a ring that is not seeded. An easy argument shows that a seed of size w does not change the MI per variable when L→∞. The statistical physics formulation is convenient: up to the trivial additive term n(L+1)v2/4, the MI Iw,L(S; W) equals the free energy −ES,Z[lnZw,L], where Zw,L := ∫ dxP0(x) exp(−H(x, z,Λ)) and H(x, z,Λ) = 1 ∆ L∑ µ=0 ( Λµµ ∑ iµ≤jµ Aiµjµ(x, z,Λ) + µ+w∑ ν=µ+1 Λµν ∑ iµ,jν Aiµjν (x, z,Λ) ) , (6) with Aiµjν (x, z,Λ) :=(x2iµx 2 jν )/(2n)−(siµsjνxiµxjν )/n−(xiµxjνziµjν √ ∆)/ √ nΛµν . Consider a pair of systems with coupling matrices Λ and Λ′ and i.i.d noize realizations z, z′, an interpolated HamiltonianH(x, z, tΛ)+H(x, z′, (1−t)Λ′), t ∈ [0, 1], and the corresponding partition function Zt. The main idea of the proof is to show that for suitable choices of matrices,− ddtES,Z,Z′ [lnZt]≤0 for all t∈ [0, 1] (up to negligible terms), so that by the fundamental theorem of calculus, we get a comparison between the free energies ofH(x, z,Λ) andH(x, z′,Λ′). Performing the t-derivative brings down a Gibbs average of a polynomial in all variables siµ , xiµ , ziµjν and z ′ iµjν . This expectation over S, Z, Z′ of this Gibbs average is simplified using integration by parts over the Gaussian noise ziµjν , z′iµjν and Nishimori identities (see e.g. proof of corollary 1.3 for one of them). This algebra leads to − 1 n(L+ 1) d dt ES,Z,Z′ [lnZt] = 1 4∆(L+ 1) ES,Z,Z′ [〈qᵀΛq− qᵀΛ′q〉t] +O(1/(nL)), (7) where 〈−〉t is the Gibbs average w.r.t the interpolated Hamiltonian, q is the vector of overlaps qµ := ∑n iµ=1 siµxiµ/n. If we can choose matrices s.t Λ ′ >Λ, the difference of quadratic forms in the Gibbs bracket is negative and we obtain an inequality in the large size limit. We use this scheme to interpolate between the fully decoupled system w=0 and the coupled one 1≤w<L/2 and then between 1≤w <L/2 and the fully connected system w=L/2. The w= 0 system has Λµν = δµν with eigenvalues (1, 1, . . . , 1). 
For the 1 ≤ w < L/2 system, we take any stochastic translation invariant matrix with non-negative discrete Fourier transform (of its rows): such matrices have an eigenvalue equal to 1 and all others in [0, 1[ (the eigenvalues are precisely equal to the discrete Fourier transform). For w = L/2 we choose Λµν = 1/(L+1) which is a projector with eigenvalues (0, 0, . . . , 1). With these choices we deduce that the free energies and MIs are ordered as Iw=0,L +O(1)≤Iw,L +O(1)≤Iw=L/2,L +O(1). To conclude the proof we divide by n(L+1) and note that the limits of the leftmost and rightmost MIs are equal, provided the limit exists. Indeed the leftmost term equals L times I(S; W) and the rightmost term is the same MI for a system of n(L+1) variables. Existence of the limit follows by subadditivity, proven by a similar interpolation [18]. Proof sketch of lemma 3.2: Fix ∆ < ∆RS. We show that, for w large enough, the coupled SE recursion (4) must converge to a fixed point E∞µ ≤Egood(∆) for all µ. The main intuition behind the proof is to use a “potential function” whose “energy” can be lowered by small perturbation of a fixed point that would go above Egood(∆) [16, 17]. The relevant potential function iw,L(E,∆) is in fact the replica potential of the coupled system (a generalization of (1)). The stationarity condition for this potential is precisely (4) (without the seeding condition). Monotonicity properties of SE ensure that any fixed point has a “unimodal” shape (and recall that it vanishes for µ∈B= {0, . . . , w−1}∪{L−w, . . . , L}). Consider a position µmax∈{w, . . . , L−w−1} where it is maximal and suppose that E∞µmax > Egood(∆). We associate to the fixed point E ∞ a so-called saturated profile Es defined on the whole of Z as follows: Esµ=Egood(∆) for all µ≤µ∞ where µ∞+1 is the smallest position s.t E∞µ >Egood(∆); E s µ=E ∞ µ for µ∈{µ∞+1, . . . , µmax−1}; Esµ=E∞µmax for all µ≥µmax. We show that Es cannot exist for w large enough. To this end define a shift operator by [S(Es)]µ :=Esµ−1. On one hand the shifted profile is a small perturbation of E s which matches a fixed point, except where it is constant, so if we Taylor expand, the first order vanishes and the second order and higher orders can be estimated as |iw,L(S(Es); ∆)−iw,L(Es; ∆)|=O(1/w) uniformly in L. On the other hand, by explicit cancellation of telescopic sums iw,L(S(Es); ∆)−iw,L(Es; ∆)= iRS(Egood; ∆)−iRS(E∞µmax ; ∆). Now one can show from monotonicity properties of SE that if E ∞ is a non trivial fixed point of the coupled SE then E∞µmax cannot be in the basin of attraction of Egood(∆) for the uncoupled SE recursion. Consequently as can be seen on the plot of iRS(E; ∆) (e.g. figure 1) we must have iRS(E∞µmax ; ∆)≥ iRS(Ebad; ∆). Therefore iw,L(S(E s); ∆)−iw,L(Es; ∆)≤ −|iRS(Ebad; ∆)−iRS(Egood; ∆)| which is an energy gain independent of w, and for large enough w we get a contradiction with the previous estimate coming from the Taylor expansion. Acknowledgments J.B and M.D acknowledge funding from the SNSF (grant 200021-156672). Part of this research received funding from the ERC under the EU’s 7th Framework Programme (FP/2007-2013/ERC Grant Agreement 307087-SPARCS). F.K and L.Z thank the Simons Institute for its hospitality.
1. What is the primary contribution of the paper in rank-one matrix estimation?
2. What is the role of replica symmetric potential function i_{RS} in the paper?
3. How does the paper relate to approximate message passing algorithm?
4. Are there any concerns regarding the complexity of formula (2)?
5. Do you have any questions about the practicality of using the main result in real-world scenarios?
Review
Review This paper focuses on the rank-one matrix estimation problem. Suppose we have a noisy observation W of a rank-one matrix ss^T (with noise of variance \Delta), and we want to reconstruct s. The main contribution of the paper is to identify the asymptotic mutual information I(S;W) via the replica symmetric potential function i_{RS}. The relation between the approximate message passing algorithm and the quantity i_{RS} is also studied in this paper.

Pros:
1. The authors have good mathematical ability to prove theorems.

Cons:
1. I don't understand the meaning of the replica symmetric potential function i_{RS} (which plays the central role in this paper). The formula (2) is too complicated to understand, which makes me unable to understand the follow-on results. It would be good to explain i_{RS} right after giving the formula.
2. I don't understand how to use the main result in practice. It seems that no practical algorithm is presented, nor any guidance on using the theorem to improve or explain existing algorithms.
NIPS
Title AViD Dataset: Anonymized Videos from Diverse Countries Abstract We introduce a new public video dataset for action recognition: Anonymized Videos from Diverse countries (AViD). Unlike existing public video datasets, AViD is a collection of action videos from many different countries. The motivation is to create a public dataset that would benefit training and pretraining of action recognition models for everybody, rather than making it useful for limited countries. Further, all the face identities in the AViD videos are properly anonymized to protect their privacy. It is also a static dataset where each video is licensed with the creative commons license. We confirm that most of the existing video datasets are statistically biased to only capture action videos from a limited number of countries. We experimentally illustrate that models trained with such biased datasets do not transfer perfectly to action videos from the other countries, and show that AViD addresses such a problem. We also confirm that the new AViD dataset could serve as a good dataset for pretraining the models, performing comparably or better than prior datasets. (The dataset is available at https://github.com/piergiaj/AViD.) 1 Introduction Video recognition is an important problem with many potential applications. One key challenge in training a video model (e.g., 3D spatio-temporal convolutional neural networks) is the lack of data, as these models generally have more parameters than image models, requiring even more data. Kinetics (Kay et al., 2017) found that by training on hundreds of thousands of labeled video clips, one is able to increase the performance of video models significantly. Other large-scale datasets, such as HVU (Diba et al., 2019), Moments-in-Time (Monfort et al., 2018), and HACS (Zhao et al., 2019), have also been introduced, motivated by such findings. However, many of today’s large-scale datasets suffer from multiple problems. First, due to their collection process, the videos in the datasets are very biased, particularly in terms of where the videos are from (Fig. 1 and Table 3). Secondly, many of these datasets become inconsistent as YouTube videos get deleted. For instance, in the years since Kinetics-400 was first released, over 10% of the videos have been removed from YouTube. Further, depending on geographic location, some videos may not be available. This makes it very challenging for researchers in different countries and at different times to equally benefit from the data and reproduce the results, making the trained models biased based on when and where they were trained. They are not static datasets (Figure 3). AViD, unlike previous datasets, contains videos from diverse groups of people all over the world. Existing datasets, such as Kinetics, have videos mostly from North America (Kay et al., 2017) due to being sampled from YouTube using English queries. AViD videos are distributed more broadly across the globe (Fig. 1) since they are sampled from many sites using many different languages. This is important as certain actions are done differently in different cultures, such as greetings (shown in Fig. 2), nodding, etc. As many videos contain text, such as news broadcasts, the lack of diversity can further bias results to rely on English text which may not be present in videos from different regions of the world.
Experimentally, we show how diversity, and the lack of it, affects recognition. Further, we anonymize the videos by blurring all the faces. This prevents humans and machines from identifying people in the videos. This is an important property for institutions, research labs, and companies respecting privacy to take advantage of the dataset. Due to this fact, face-based actions (e.g., smile, makeup, brush teeth, etc.) have to be removed as they would be very difficult to recognize with blurring, but we show that the other actions are still reliably recognized. Another technical limitation with YouTube-based datasets, including Kinetics, ActivityNet (Caba Heilbron et al., 2015), YouTube-8M (Abu-El-Haija et al., 2016), HowTo100M (Miech et al., 2019), AVA (Gu et al., 2017) and others, is that downloading videos from YouTube is often blocked. The standard tools for downloading videos can run into request errors (many issues on GitHub exist, with no permanent solution). These factors prevent many researchers from being able to use large-scale video datasets. To address these challenges, we introduce a new, large-scale dataset designed to solve these problems. The key benefit of this dataset is that it captures the same actions as Kinetics plus hundreds of new ones. Further, we choose videos from a variety of sources (Flickr, Instagram, etc.) that have a creative-commons license. This license allows us to download, modify and distribute the videos as needed. We create a static video dataset that can easily be downloaded. We further provide tags based on the user-generated tags for the video, enabling the study of learning from weakly-labeled data. Also unique is the ability to add a ‘no action’ class, which we show helps in action localization tasks. To summarize,
• AViD contains actions from diverse countries obtained by querying with many languages.
• AViD is a dataset with face identities removed.
• AViD is a static dataset with all the videos having the creative-commons license.
2 Dataset Creation The dataset creation process follows multiple steps. First we generated a set of action classes. Next, we sampled videos from a variety of sources to obtain a diverse sample of all actions. Then we generated candidate clips from each video. These clips are then annotated by humans. We now provide more details about this process. 2.1 Action Classes Unlike images, where objects are clearly defined and have physical boundaries, determining what an action is in videos is a far more ambiguous task. In AViD, we follow many previous works such as Kinetics (Kay et al., 2017), where an action consists of a verb and a noun when needed. For example, ‘cutting apples’ is an action with both a verb and noun while ‘digging’ is just a verb. To create the AViD dataset, we begin building the action classes by combining the actions in Kinetics, Charades, and Moments in Time, as these cover a wide variety of possible actions. We then remove all actions involving the face (e.g., ‘smiling,’ ‘eyeliner,’ etc.) since we are blurring faces, as this makes it extremely difficult to recognize these actions. Note that we do leave actions like ‘burping’ or ‘eating’ which can be recognized by other contextual cues and motion. We then manually combine duplicate/similar actions. This resulted in a set of 736 actions. During the manual annotation process, we allowed users to provide a text description of the actions in the video if none of the candidate actions were suitable, and to select the additional ‘no action’ label if there was no action in the video.
Based on this process, we found another 159 actions, resulting in 887 total actions. Examples of some of the new ones are ‘medical procedures,’ ‘gardening,’ ‘gokarting,’ etc. Previous works have studied using different forms of actions, some finding actions associated with nouns to be better (Sigurdsson et al., 2017) while others prefer atomic, generic action (Gu et al., 2017). The Moments in Time (Monfort et al., 2018) takes the most common verbs to use as actions, while Charades (Sigurdsson et al., 2016) uses a verb and noun to describe each action. Our choice of action closely follows these, and we further build a hierarchy that will enable studying of verb-only actions compared to verb+noun actions and levels of fine-grained recognition. 2.1.1 Hierarchy After deciding the action classes, we realized there was a noticeable hierarchy capturing these different actions. Hierarchies have been created for ImageNet (Deng et al., 2009) to represent relationships such as fine-grained image classification, but they have not been widely used in video understanding. ActivityNet (Caba Heilbron et al., 2015) has a hierarchy, but is a smaller dataset and the hierarchy mostly capture broad differences and only has 200 action classes. We introduce a hierarchy that captures more interesting relationships between actions, such as ‘fishing’ → ‘fly tying,’ ‘casting fishing line,’ ‘catching fish,’ etc. And more broad differences such as ‘ice fishing’ and ‘recreational fishing.’ Similarly, in the ‘cooking class’ we have ‘cutting fruit’ which has both ‘cutting apples’ and ‘cutting pineapple’. Some actions, like ‘cutting strawberries’ didn’t provide enough clips (e.g., less than 10), and in such case, we did not create the action category and made the videos only belong to the ‘cutting fruit’ class. This hierarchy provides a starting point to study various aspects of what an action is, and how we should define actions and use the hierarchy in classifiers. Part of the hierarchy is shown in Fig. 4, the full hierarchy is provided in the supplementary material. 2.2 Video Collection AViD videos are collected from several websites: Flickr, Instagram, etc. But we ensure all videos are licensed with the creative commons license. This allows us to download, modify (blur faces), and distribute the videos. This enables the construction of a static, anonymized, easily downloadable video dataset for reproducible research. In order to collect a diverse set of candidate videos to have in the dataset, we translated the initial action categories into 22 different languages (e.g., English, Spanish, Portuguese, Chinese, Japanese, Afrikaans, Swahili, Hindi, etc.) covering every continent. We then searched multiple video websites (Instagram, Flickr, Youku, etc.) for these actions to obtain initial video samples. This process resulted in a set of 800k videos. From these videos, we took multiple sample clips. As shown in Fig. 1, this process found videos from all over the globe. We ensured there was no overlap of AViD videos and those in the validation or testing sets of Kinetics. There is some minor overlap between some of AViD videos and the training set of Kinetics, which is an outcome due to that the both datasets were collected from the web. 2.3 Action Annotation We annotate the candidate clips using Amazon Mechanical Turk. 
In order to make human annotation more efficient, we use an I3D model (Carreira and Zisserman, 2017) to generate a set of candidate labels for each clip (the exact number depends on how many actions I3D predicts, usually 2-3) and provide them as suggestions to the human annotators. We also give annotators the option to select the 'other' or 'none' category and manually specify what the action is. For each task, one of the videos came from an existing dataset where the label was known. This served as a quality check, and the annotations were rejected if the worker did not correctly annotate this test video. A subset of the videos on which I3D (trained on Kinetics) had very high confidence (> 90%) was verified manually by the authors. In total, 500k video clips were annotated: human annotators labeled 300k videos manually, and 200k videos with very high-confidence I3D predictions were checked by the authors and the Turkers. Of these, about 100k videos were labeled as the 'other' action by the human annotators, suggesting that I3D trained on Kinetics does not perform well on these actions. About 50k videos were discarded due to poor labeling or other errors, resulting in a dataset of 450k total samples. We found that the distribution of actions follows a Zipf distribution (shown in Fig. 5), similar to the observation of AVA (Gu et al., 2017). We split the dataset into train/test sets by taking 10% of each class as test videos, which preserves the Zipf distribution.

2.4 Weak Tag Annotation
In addition to the action category annotation of each video clip, the AViD dataset also provides a set of weak text tags. To generate the weak tags, we start by translating each tag (collected from the web) into English. We then remove stopwords (e.g., 'to,' 'the,' 'and') and lemmatize the words (e.g., 'stopping' to 'stop'). This transforms each tag into its base English word. Next, we use word2vec (Mikolov et al., 2013) to compute the distance between each pair of tags, and apply affinity propagation and agglomerative clustering to generate 1768 and 4939 clusters, respectively. Each video is then tagged based on these clusters. This results in two different sets of tags for the videos, both of which are provided for further analysis, since it is unclear which tagging strategy will benefit future approaches more. The overall distribution of tags is shown in Fig. 6 and also follows an exponential distribution.

3 Experiments
We conducted a series of experiments with the new AViD dataset. These include not only testing existing video CNN models on AViD and evaluating the effectiveness of the dataset for pretraining, but also a quantitative analysis comparing different datasets. Specifically, we measure video source statistics to check dataset biases, and experimentally confirm how well a model trained on action videos from a biased set of countries generalizes to videos from different countries. We also evaluate how face blurring influences classification accuracy, and study learning from the weak annotations of the dataset.

Implementation Details We implemented the models in PyTorch and trained them using four Titan V GPUs. To enable faster training, we followed the multi-grid training schedule (Wu et al., 2019). The models, I3D (Carreira and Zisserman, 2017), 2D/(2+1)D/3D ResNets (He et al., 2016; Tran et al., 2018, 2014), Two-stream (Simonyan and Zisserman, 2014), and SlowFast (Feichtenhofer et al., 2018), were trained for 256 epochs.
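As a concrete illustration of the per-class split described in Sec. 2.3 (10% of each class is held out for testing, which preserves the Zipf-shaped label distribution), here is a minimal sketch; the function and variable names are illustrative rather than the actual tooling used.

```python
import random
from collections import defaultdict

def split_by_class(samples, test_frac=0.1, seed=0):
    """samples: list of (clip_id, action_label) pairs; returns (train, test)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for clip_id, label in samples:
        by_class[label].append(clip_id)

    train, test = [], []
    for label, clips in by_class.items():
        rng.shuffle(clips)
        n_test = max(1, int(round(test_frac * len(clips))))  # 10% of every class
        test += [(c, label) for c in clips[:n_test]]
        train += [(c, label) for c in clips[n_test:]]
    return train, test
```

Because every class contributes the same fraction to the test set, the relative class frequencies (and hence the Zipf shape) are the same in both splits.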
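The weak-tag pipeline of Sec. 2.4 can similarly be sketched as follows. The paper does not name the libraries it used, so gensim word vectors and scikit-learn clustering are assumptions made here; translation and lemmatization are reduced to a placeholder, and for brevity the embeddings are clustered directly rather than via an explicit pairwise-distance matrix.

```python
import numpy as np
from gensim.models import KeyedVectors
from sklearn.cluster import AffinityPropagation, AgglomerativeClustering

STOPWORDS = {"to", "the", "and", "a", "of"}  # illustrative subset

def normalize(tag):
    # Placeholder for translate-to-English + lemmatization (e.g. 'stopping' -> 'stop').
    return tag.lower().strip()

def cluster_tags(raw_tags, w2v_path):
    kv = KeyedVectors.load_word2vec_format(w2v_path, binary=True)  # assumed pretrained vectors
    tags = sorted({normalize(t) for t in raw_tags} - STOPWORDS)
    tags = [t for t in tags if t in kv]                # keep tags that have an embedding
    X = np.stack([kv[t] for t in tags])

    # Affinity propagation chooses the number of clusters itself (~1.8k in the paper);
    # agglomerative clustering is run with a fixed, finer number of clusters (~5k).
    ap_labels = AffinityPropagation(random_state=0).fit_predict(X)
    agg_labels = AgglomerativeClustering(n_clusters=min(4939, len(tags))).fit_predict(X)
    return dict(zip(tags, ap_labels)), dict(zip(tags, agg_labels))
```

Each video would then receive, for each of the two clusterings, the cluster ids of its normalized tags as weak labels.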
The learning rate followed a cosine decay schedule with a maximum of 0.1 and a linear warm-up for the first 2k steps. Each GPU used a base batch size of 8 clips, which was then scaled according to the multi-grid schedule (code provided in the supplementary materials). The base clip size was 32 frames at 224 × 224 image resolution. For evaluation, we compared both fully convolutional evaluation, where all T frames at 256 × 256 are given as input, and multi-crop evaluation, where 30 random crops of 32 frames at 224 × 224 are used and the prediction is the average over all clips.

Baseline Results In Table 2, we report the results of multiple common video model baseline networks. Overall, our findings are consistent with the literature.

Diversity Analysis Since AViD is designed to capture actions from diverse countries, we conduct a set of experiments to measure this diversity and determine the effect of having diverse videos. First, we computed geolocation statistics of AViD and other datasets and compared them. To obtain the locations of AViD videos, we extracted the geo-tagged location for videos where it was available (about 75% of all AViD videos), using the public API of the site each video came from. Similarly, we used the public YouTube API to gather the geolocation statistics for the Kinetics, HACS, and HVU videos. Further, after the initial release of AViD (on arXiv), the Kinetics team provided us with their own location estimates (Smaira et al., 2020). As these are somewhat different from our estimates, we also include their data directly in the comparison; we believe the main difference comes from our use of the public YouTube API versus YouTube's internal geolocation metadata, which is estimated from various factors (see the appendix for more details). To measure the diversity of each dataset, we report two metrics: (1) the percentage of videos in North America, Latin America, Europe, Asia, and Africa; and (2), as a proxy for diversity and bias, the Wasserstein distance from the distribution of videos over countries to a uniform distribution, under the assumption that a uniform distribution over all countries would be the most fair (this assumption is debatable). The results are shown in Table 3. We note that, due to the large overlap in videos between HVU and Kinetics-600, their diversity statistics are nearly identical. Similarly, as HACS is based on English queries of YouTube, it is also a highly North American-biased dataset. Kinetics-600 and -700 made efforts to improve diversity by querying in Spanish and Portuguese, which did improve diversity for the corresponding countries (Carreira et al., 2018; Smaira et al., 2020). In addition, we ran an experiment training the baseline model on each dataset and testing it on videos from different regions of the world. Specifically, we trained the baseline 3D ResNet model on either Kinetics-400, Kinetics-600, or AViD, and evaluated the models on AViD videos using the action classes shared by Kinetics-400 and AViD (about 397 classes), splitting the evaluation into North America, rest of the world, and other regions. The results are summarized in Table 4. We find that models trained on any of the three datasets perform quite similarly on the North American videos. However, the Kinetics-trained models do not perform as well on the more diverse videos, while the AViD-trained models show a much smaller drop. This suggests that current datasets do not generalize well to diverse world data, underscoring the importance of building diverse datasets. In Table 5, we show the results when using all AViD classes, but training on a specific region and then testing on that region versus all other regions (note that there are only about 35k training clips from Africa, and the smaller training set reduces overall performance). We observe that performance drops when the training and testing videos come from different regions. This further suggests that having a training set of videos from diverse countries is essential.

Fine-tuning We pretrain several of the models on the AViD dataset and fine-tune them on HMDB-51 (Kuehne et al., 2011) and Charades (Sigurdsson et al., 2016). The objective is to compare AViD with existing pretraining datasets, including Kinetics-400/600 (Kay et al., 2017) and Moments in Time (MiT) (Monfort et al., 2018). Note that these results are based on RGB-only input; no optical flow is used. In Table 6, we compare the results on HMDB. We find that AViD performs quite similarly to both Kinetics and MiT. Note that the original Kinetics had far more videos than are currently available (as shown in Figure 3), which is why the originally reported fine-tuning performance is higher (indicated in parentheses). In Table 7, we compare the results on the Charades dataset. Because the AViD dataset also provides 'no action' videos, in contrast to MiT and Kinetics which contain only action videos, we also compare the effect of using 'no action.' While AViD nearly matches or improves performance even without the 'no action' videos in the classification setting, we find that including them greatly benefits the localization setting, establishing a new state of the art for Charades localization (25.2 vs. 22.3 in Piergiovanni and Ryoo, 2019b).

Learning from Weak Tags We compare the effect of training with the weak tags generated for the AViD dataset against training with the manually labeled data. The results are shown in Table 8. Surprisingly, we find that the weak tags provide strong initial features that can be fine-tuned on HMDB with little difference in performance. Future work can explore how best to use the weak tag data.

Blurred Face Effect During preprocessing, we use a face detector to blur all detected faces in the videos, applying a strong Gaussian blur with random parameters. Gaussian blurring can be reversed if the location and parameters are known; however, due to the randomization of the parameters, it would be practically impossible to reverse the blur and recover a true identity. Since we are modifying the videos by blurring faces, we conducted experiments to see how face blurring impacts performance. We compare performance on AViD (accuracy) as well as fine-tuned classification on HMDB (accuracy) and Charades (mAP). The results are shown in Table 9. While face blurring slightly reduces performance, the impact is small. This suggests a good balance between anonymization and still-recognizable actions.

Importance of Time In videos, temporal information is often important for recognizing actions, and is exploited via optical flow (Simonyan and Zisserman, 2014), stacked frames, RNNs (Ng et al., 2015), temporal pooling (Piergiovanni et al., 2017), and other approaches. In order to determine how much temporal information AViD needs, we compared single-frame models to multi-frame models, and then shuffled the frames to measure the performance drop. The results are shown in Table 10.
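A minimal sketch of the diversity measure used in the analysis above, i.e., the Wasserstein distance between a dataset's per-region video distribution and a uniform distribution, is given below. The paper does not specify the ground metric over regions; indexing the regions 0..K-1 and using SciPy's 1-D Wasserstein distance is an assumption made purely for illustration, and the counts in the usage line are made up.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def distance_to_uniform(region_counts):
    regions = sorted(region_counts)                        # fix an ordering of the regions
    p = np.array([region_counts[r] for r in regions], dtype=float)
    p /= p.sum()                                           # empirical distribution of videos
    q = np.full(len(regions), 1.0 / len(regions))          # uniform "most fair" reference
    idx = np.arange(len(regions), dtype=float)
    return wasserstein_distance(idx, idx, u_weights=p, v_weights=q)

# Hypothetical counts, only to show usage; larger values mean a less balanced dataset.
print(distance_to_uniform({"North America": 500, "Latin America": 120,
                           "Europe": 200, "Asia": 150, "Africa": 30}))
```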
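The anonymization step discussed under 'Blurred Face Effect' can be sketched as follows: detect faces and apply a strong Gaussian blur whose parameters are randomized per face so the blur cannot be inverted. The paper does not name a particular face detector; OpenCV's Haar cascade is used here only as a stand-in, and the kernel/sigma ranges are illustrative.

```python
import random
import cv2

# Stand-in face detector; the actual detector used for AViD is not specified.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        k = random.choice([31, 41, 51])        # random odd Gaussian kernel size
        sigma = random.uniform(10.0, 30.0)     # random blur strength
        frame_bgr[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame_bgr[y:y + h, x:x + w], (k, k), sigma)
    return frame_bgr
```

Applying this per frame yields the anonymized videos; because the parameters differ per face and are not stored, the blur is not practically invertible.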
We find that adding more frames benefits performance, while shuffling them harms the performance of the multi-frame models. This suggests that temporal information is quite useful for recognizing actions in AViD, making it an appropriate dataset for developing spatio-temporal video models.

4 Conclusions
We presented AViD, a new, static, diverse, and anonymized video dataset. We showed the importance of collecting and learning from diverse videos, something not captured in existing video datasets. Further, AViD is static and easily distributed, enabling reproducible research. Finally, we showed that AViD pretraining produces similar or better results on datasets like HMDB and Charades.

Broader Impacts
We quantitatively confirmed that existing video datasets for action recognition are highly biased. To let people and researchers in diverse countries benefit more fairly from a public action recognition dataset, we propose the AViD dataset. We took care to query multiple websites from many countries in many languages to build a dataset that represents as many countries as possible, and we experimentally showed that doing so reduces the bias of learned models. We are not aware of any other large-scale dataset (with hundreds of video hours) that took such country diversity into consideration during the collection process. As this dataset contains a wide variety of actions, it could enable malicious parties to build systems to monitor people. However, we took many steps to preserve people's identities and to eliminate the ability to learn face-based actions, which greatly reduces the negative uses of the data. The positive impact of this dataset is enabling reproducible research on video understanding, which will help advance the field with consistent and reliable baselines. We emphasize once more that our dataset is a static dataset respecting the licenses of all its videos.

Acknowledgement
This work was supported in part by the National Science Foundation (IIS-1812943 and CNS-1814985).
1. What is the focus and contribution of the paper regarding video dataset creation? 2. What are the strengths of the proposed dataset, particularly in addressing issues with existing datasets? 3. Are there any weaknesses or challenges associated with the dataset, especially regarding rare-category recognition?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper describes a new large-scale video dataset for action recognition. The dataset is designed to resolve the diversity, privacy, and expiration issues present in existing datasets such as Kinetics [Kay 2017] or Moments in Time [Monfort 2018]. The paper describes the procedure used to build the fully-annotated dataset and several studies that examine the statistics and baseline performance of the dataset.

Strengths
The proposed dataset is a clear contribution to the computer vision community; as described in Sec 1, existing datasets have had various issues involving diversity, privacy, and availability/licensing, and these are major reasons why there has been no ImageNet-like common benchmark in video action recognition. This work potentially resolves a majority of these problems and can serve as an important common resource for future study in video recognition. As a dataset paper, this work presents several convincing studies showing that the proposed dataset resolves diversity and other issues compared to previous work.

Weaknesses
I do not find a major issue in this paper. This is probably unavoidable, but the long-tailed distribution (Fig 5 and 6) seems to pose a challenge for rare-category recognition. It would be good if the authors could offer technical suggestions for handling the less frequent labels.
NIPS
1. What is the focus and contribution of the paper regarding action recognition? 2. What are the strengths of the proposed approach, particularly in addressing data imbalance and video deletion issues? 3. What are the weaknesses of the paper, especially regarding cultural differences, performance improvement, and dataset bias? 4. How can the authors provide more comprehensive experiments to support their claim of training diverse models with the AViD dataset? 5. Are there any concerns regarding the potential misuse of the AViD dataset in promoting unfair models?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The authors build a video dataset, AViD, for action recognition by collecting videos from different countries, in contrast with previous datasets, which are mainly from North America. The faces in the videos are blurred, and the authors also make sure that the collected videos are appropriately licensed so that the dataset stays static. Update: The authors partly addressed my concerns, and I am raising my rating from 4 to 5.

Strengths
This paper considers the data imbalance issue in terms of countries and cultures, which I am really glad to see; I think this is an important problem in fair AI. Secondly, it has been a long-standing problem that original videos on the Internet (e.g., YouTube) that are included in CV datasets get deleted over time, which is cumbersome for both authors and other researchers, since usually the only option is to ask the original authors for the raw videos, so I am also glad to see this factor being considered during data collection.

Weaknesses
1) The same action could mean very different, even opposite, things in different cultures (such as nodding versus shaking your head). The paper does not discuss how often this happens in the dataset or how it affects the performance of models trained or tested on it.
2) The results in Tables 5 and 6 show that models trained on AViD surpass the ones trained on other datasets. However, it is unclear where the improvements come from (the larger training size of AViD, country diversity, or better video quality).
3) I feel that the paper does not have sufficient evidence to support the claim that models trained on AViD are more diverse in terms of countries; only Table 4 is related to this topic. Since this is the paper's main contribution, I feel that the authors need more comprehensive experiments to support it.
4) From Table 3, AViD is still quite unbalanced (even under the simplest metric). Being a benchmark dataset that claims to be from diverse countries, AViD might lead other researchers to claim their models are diverse and fair simply because they train on AViD, which is still inherently biased. I am not an expert on this, but I think there should be a discussion.
NIPS
Title AViD Dataset: Anonymized Videos from Diverse Countries Abstract We introduce a new public video dataset for action recognition: Anonymized Videos from Diverse countries (AViD). Unlike existing public video datasets, AViD is a collection of action videos from many different countries. The motivation is to create a public dataset that would benefit training and pretraining of action recognition models for everybody, rather than making it useful for limited countries. Further, all the face identities in the AViD videos are properly anonymized to protect their privacy. It also is a static dataset where each video is licensed with the creative commons license. We confirm that most of the existing video datasets are statistically biased to only capture action videos from a limited number of countries. We experimentally illustrate that models trained with such biased datasets do not transfer perfectly to action videos from the other countries, and show that AViD addresses such problem. We also confirm that the new AViD dataset could serve as a good dataset for pretraining the models, performing comparably or better than prior datasets1. 1 Introduction Video recognition is an important problem with many potential applications. One key challenge in training a video model (e.g., 3D spatio-temporal convolutional neural networks) is the lack of data, as these models generally have more parameters than image models requiring even more data. Kinetics (Kay et al., 2017) found that by training on a hundreds of thousands of labeled video clips, one is able to increase the performance of video models significantly. Other large-scale datasets, such as HVU (Diba et al., 2019), Moments-in-Time (Monfort et al., 2018), and HACS (Zhao et al., 2019) also have been introduced, motivated by such findings. However, many of today’s large-scale datasets suffer from multiple problems: First, due to their collection process, the videos in the datasets are very biased particularly in terms of where the videos are from (Fig. 1 and Table 3). Secondly, many of these datasets become inconsistent as YouTube videos get deleted. For instance, in the years since Kinetics-400 was first released, over 10% of the videos have been removed from YouTube. Further, depending on geographic location, some videos may not be available. This makes it very challenging for researchers in different countries and at different times to equally benefit from the data and reproduce the results, making the trained models to be biased based on when and where they were trained. They are not static datasets (Figure 3). AViD, unlike previous datasets, contains videos from diverse groups of people all over the world. Existing datasets, such as Kinetics, have videos mostly from from North America (Kay et al., 2017) due to being sampled from YouTube and English queries. AViD videos are distributed more broadly across the globe (Fig. 1) since they are sampled from many sites using many different languages. This is important as certain actions are done differently in different cultures, such as greetings (shown 1The dataset is available https://github.com/piergiaj/AViD 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. in Fig. 2), nodding, etc. As many videos contain text, such as news broadcasts, the lack of diversity can further bias results to rely on English text which may not be present in videos from different regions of the world. 
Experimentally, we show diversity and lack of diversity affects the recognition. Further, we anonymize the videos by blurring all the faces. This prevents humans and machines from identifying people in the videos. This is an important property for institutions, research labs, and companies respecting privacy to take advantage the dataset. Due to this fact, face-based actions (e.g., smile, makeup, brush teeth, etc.) have to be removed as they would be very difficult to recognize with blurring, but we show that the other actions are still reliably recognized. Another technical limitation with YouTube-based datasets including Kinetics, ActivityNet (Caba Heilbron et al., 2015), YouTube-8M (Abu-El-Haija et al., 2016), HowTo100M (Miech et al., 2019), AVA (Gu et al., 2017) and others, is that downloading videos from YouTube is often blocked. The standard tools for downloading videos can run into request errors (many issues on GitHub exist, with no permanent solution). These factors limit many researchers from being able to use large-scale video datasets. To address these challenges, we introduce a new, largescale dataset designed to solve these problems. The key benefits of this dataset is that it captures the same actions as Kinetics plus hundreds of new ones. Further, we choose videos from a variety of sources (Flickr, Instagram, etc.) that have a creative-commons licence. This license allows us to download, modify and distribute the videos as needed. We create a static video dataset that can easily be downloaded. We further provide tags based on the user-generated tags for the video, enabling studying of weakly-labeled data learning. Also unique is the ability to add ‘no action’ which we show helps in action localization tasks. To summarize, • AViD contains actions from diverse countries obtained by querying with many languages. • AViD is a dataset with face identities removed • AViD is a static dataset with all the videos having the creative-commons licence. 2 Dataset Creation The dataset creation process follows multiple steps. First we generated a set of action classes. Next, we sampled videos from a variety of sources to obtain a diverse sample of all actions. Then we generate candidate clips from each video. These clips are then annotated by human. We now provide more details about this process. 2.1 Action Classes Unlike images, where objects are clearly defined and have physical boundaries, determining an action is in videos is a far more ambiguous task. In AViD, we follow many previous works such as Kinetics (Kay et al., 2017), where an action consists of a verb and a noun when needed. For example, ‘cutting apples’ is an action with both a verb and noun while ‘digging’ is just verb. To create the AViD datasets, the action classes begin by combining the actions in Kinetics, Charades, and Moments in Time, as these cover a wide variety of possible actions. We then remove all actions involving the face (e.g., ‘smiling,’ ‘eyeliner,’ etc.) since we are blurring faces, as this makes it extremely difficult to recognize these actions. Note that we do leave actions like ‘burping’ or ‘eating’ which can be recognized by other contextual cues and motion. We then manually combine duplicate/similar actions. This resulted in a set of 736 actions. During the manual annotation process, we allowed users to provide a text description of the actions in the video if none of the candidate actions were suitable and the additional ‘no action’ if there was no action in the video. 
Based on this process, we found another 159 actions, resulting in 887 total actions. Examples of some of the new ones are ‘medical procedures,’ ‘gardening,’ ‘gokarting,’ etc. Previous works have studied using different forms of actions, some finding actions associated with nouns to be better (Sigurdsson et al., 2017) while others prefer atomic, generic action (Gu et al., 2017). The Moments in Time (Monfort et al., 2018) takes the most common verbs to use as actions, while Charades (Sigurdsson et al., 2016) uses a verb and noun to describe each action. Our choice of action closely follows these, and we further build a hierarchy that will enable studying of verb-only actions compared to verb+noun actions and levels of fine-grained recognition. 2.1.1 Hierarchy After deciding the action classes, we realized there was a noticeable hierarchy capturing these different actions. Hierarchies have been created for ImageNet (Deng et al., 2009) to represent relationships such as fine-grained image classification, but they have not been widely used in video understanding. ActivityNet (Caba Heilbron et al., 2015) has a hierarchy, but is a smaller dataset and the hierarchy mostly capture broad differences and only has 200 action classes. We introduce a hierarchy that captures more interesting relationships between actions, such as ‘fishing’ → ‘fly tying,’ ‘casting fishing line,’ ‘catching fish,’ etc. And more broad differences such as ‘ice fishing’ and ‘recreational fishing.’ Similarly, in the ‘cooking class’ we have ‘cutting fruit’ which has both ‘cutting apples’ and ‘cutting pineapple’. Some actions, like ‘cutting strawberries’ didn’t provide enough clips (e.g., less than 10), and in such case, we did not create the action category and made the videos only belong to the ‘cutting fruit’ class. This hierarchy provides a starting point to study various aspects of what an action is, and how we should define actions and use the hierarchy in classifiers. Part of the hierarchy is shown in Fig. 4, the full hierarchy is provided in the supplementary material. 2.2 Video Collection AViD videos are collected from several websites: Flickr, Instagram, etc. But we ensure all videos are licensed with the creative commons license. This allows us to download, modify (blur faces), and distribute the videos. This enables the construction of a static, anonymized, easily downloadable video dataset for reproducible research. In order to collect a diverse set of candidate videos to have in the dataset, we translated the initial action categories into 22 different languages (e.g., English, Spanish, Portuguese, Chinese, Japanese, Afrikaans, Swahili, Hindi, etc.) covering every continent. We then searched multiple video websites (Instagram, Flickr, Youku, etc.) for these actions to obtain initial video samples. This process resulted in a set of 800k videos. From these videos, we took multiple sample clips. As shown in Fig. 1, this process found videos from all over the globe. We ensured there was no overlap of AViD videos and those in the validation or testing sets of Kinetics. There is some minor overlap between some of AViD videos and the training set of Kinetics, which is an outcome due to that the both datasets were collected from the web. 2.3 Action Annotation We annotate the candidate clips using Amazon Mechanical Turk. 
In order to make human annotations more efficient, we use I3D model (Carreira and Zisserman, 2017) to generate a set of potential candidate labels for each clip (the exact number depends on how many actions I3D predicted, usually 2-3) and provide them as suggestions to the human annotators. We also provide annotators an option to select the ‘other’ and ‘none’ category and manually specify what the action is. For each task, one of the videos was from an existing dataset where the label was known. This served as a quality check and the annotations were rejected if the worker did not correctly annotate the test video. A subset of the videos where I3D (trained with Kinetics) had very high confidence (> 90%) were verified manually by the authors. As a result, a total of 500k video clips were annotated. Human annotators labeled 300k videos manually, and 200k videos with very high-confidence I3D predictions were checked by the authors and the turkers. Of these, about 100k videos were labeled as the ‘other’ action by the human annotators, suggesting that I3D + Kinetics training does not perform well on these actions. Of these, about 50k videos were discarded due to poor labeling or other errors, resulting in a dataset of 450k total samples. We found the distribution of actions follows a Zipf distribution (shown in Fig. 5, similar to the observation of AVA (Gu et al., 2017). We split the dataset into train/test sets by taking 10% of each class as the test videos. This preserves the Zipf distribution. 2.4 Weak Tag Annotation In addition to action category annotation per video clips, AviD dataset also provides a set of weak text tags. To generate the weak tags for the videos, we start by translating each tag (provided from the web) into English. We then remove stopwords (e.g., ‘to,’ ‘the,’ ‘and,’ etc.) and lemmatize the words (e.g., ‘stopping’ to ‘stop’). This transforms each tag into its base English word. Next, we use word2vec (Mikolov et al., 2013) to compute the distance between each pair of tags, and use affinity propagation and agglomerative clustering to generate 1768 and 4939 clusters, respectively. Each video is then tagged based on these clusters. This results in two different sets of tags for the videos, both of which are provided for further analysis, since it is unclear which tagging strategy will more benefit future approaches. The overall distribution of tags is shown in Fig. 6, also following an exponential distribution. 3 Experiments We conducted a series of experiments with the new AViD dataset. This not only includes testing existing video CNN models on the AViD dataset and further evaluating effectiveness of the dataset for pretraining, but also includes quantitative analysis comparing different datasets. Specifically, we measure video source statistics to check dataset biases, and experimentally confirm how well a model trained with action videos from biased countries generalize to videos from different countries. We also evaluate how face blurring influences the classification accuracy, and introduce weak annotations of the dataset. Implementation Details We implemented the models in PyTorch and trained them using four Titan V GPUs. To enable faster learning, we followed the multi-grid training schedule (Wu et al., 2019). The models, I3D (Carreira and Zisserman, 2017), 2D/(2+1D)/3D ResNets (He et al., 2016; Tran et al., 2018, 2014), Two-stream (Simonyan and Zisserman, 2014), and SlowFast (Feichtenhofer et al., 2018), were trained for 256 epochs. 
The learning rate followed a cosine decay schedule with a max of 0.1 and a linear warm-up for the first 2k steps. Each GPU used a base batch size of 8 clips, which was then scaled according to the multi-grid schedule (code provided in supplementary materials). The base clip size was 32 frames at 224× 224 image resolution. For evaluation, we compared both convolutional evaluation where the entire T frames at 256× 256 were given as input as well as a multi-crop evaluation where 30 random crops of 32 frames at 224× 224 are used and the prediction is the average over all clips. Baseline Results In Table 2, we report the results of multiple common video model baseline networks. Overall, our findings are consistent with the literature. Diversity Analysis Since AViD is designed to capture various actions from diverse countries, we conduct a set of experiments to measure the diversity and determine the effect of having diverse videos. First, we computed geo-location statistics of AViD and other datasets, and compared them. To obtain the locations of AViD videos, we extract the geo-tagged location for videos where it was available (about 75% of total AViD videos). We used the public API of the site where each AViD video came from to gather the geolocation statistics. Similarly, we used the public YouTube API to gather the geolocation statistics for the Kinetics, HACS, and HVU videos. Further, after the initial release of AViD (on arXiv), the Kinetics team provided us their location statistics estimate (Smaira et al., 2020). As it is a bit different from our estimate, we also directly include such data for the comparison.2 To measure the diversity of each dataset, we report a few metrics: (1) percentage of videos in North America, Latin America, Europe, Asia, and Africa. (2) As a proxy for diversity and bias, we assume a uniform distribution over all countries would be the most fair (this assumption is debatable), then using the Wasserstein distance, we report the distance from the distribution of videos to the uniform distribution. The results are shown in Table 3. We note that due to the large overlap in videos between HVU and Kinetics-600, their diversity stats are nearly identical. Similarly, as HACS is based on English queries of YouTube, it also results in a highly North American biases dataset. We note that Kinetics-600 and -700 made efforts to improve diversity by querying in Spanish and Portuguese, which did improve diversity in those countries (Carreira et al., 2018; Smaira et al., 2020). In addition, we ran an experiment training the baseline model on each dataset, and testing it on videos from different regions of the world. Specifically, we train the baseline 3D ResNet model with either Kinetics-400/600 or AViD. Then we evaluated the models on AViD videos using action classes shared by both Kinetics-400 and AViD (about 397 classes) while splitting evaluation into North American, Rest of World, or other regions. The results are summarized in Table 4. We find that the models trained with any of the three datasets perform quite similarly on the North American videos. However, the Kinetics trained models do not perform as well on the diverse videos, while AViD models show a much smaller drop. This suggests that current datasets do not generalize well to diverse world data, showing the importance of building diverse datasets. In Table 5, we show the results when using all 2We believe the main difference comes from the use of public YouTube API vs. 
In addition, we ran an experiment training the baseline model on each dataset, and testing it on videos from different regions of the world. Specifically, we train the baseline 3D ResNet model with either Kinetics-400/600 or AViD. Then we evaluated the models on AViD videos using action classes shared by both Kinetics-400 and AViD (about 397 classes) while splitting evaluation into North American, Rest of World, or other regions. The results are summarized in Table 4. We find that the models trained with any of the three datasets perform quite similarly on the North American videos. However, the Kinetics-trained models do not perform as well on the diverse videos, while AViD models show a much smaller drop. This suggests that current datasets do not generalize well to diverse world data, showing the importance of building diverse datasets. In Table 5, we show the results when using all AViD classes, but training on a specific region and then testing on that region vs. all other regions (note that there are only ∼35k training clips from Africa, and the smaller training set reduces overall performance). We observe that the performance drops when the training and testing videos come from different regions. This further suggests that having a training set of videos from diverse countries is essential. Fine-tuning We pretrain several of the models with the AViD dataset, and fine-tune on HMDB-51 (Kuehne et al., 2011) and Charades (Sigurdsson et al., 2016). The objective is to compare AViD with existing datasets in terms of pretraining, including Kinetics-400/600 (Kay et al., 2017) and Moments-in-time (MiT) (Monfort et al., 2018). Note that these results are based on using RGB-only as input; no optical flow is used. In Table 6, we compare the results on HMDB. We find that AViD performs quite similarly to both Kinetics and MiT. Note that the original Kinetics has far more videos than are currently available (as shown in Figure 3), thus the original fine-tuning performance is higher (indicated in parentheses). In Table 7, we compare the results on the Charades dataset. Because the AViD dataset also provides videos with ‘no action’, in contrast to MiT and Kinetics which only have action videos, we compare the effect of using ‘no action’ as well. While AViD nearly matches or improves performance even without ‘no action’ videos in the classification setting, we find that the inclusion of the ‘no action’ videos greatly benefits the localization setting, establishing a new state-of-the-art for Charades-localization (25.2 vs. 22.3 in (Piergiovanni and Ryoo, 2019b)). Learning from Weak Tags We compare the effect of training with the weak tags generated for the AViD dataset versus training with the manually labeled data. The results are shown in Table 8. Surprisingly, we find that using the weak tags provides strong initial features that can be fine-tuned on HMDB without much difference in performance. Future works can explore how to best use the weak tag data. Blurred Face Effect During preprocessing, we use a face detector to blur any found faces in the videos. We utilize a strong Gaussian blur with random parameters. Gaussian blurring can be reversed if the location and parameters are known; however, due to the randomization of the parameters, it would be practically impossible to reverse the blur and recover the true identity. Since we are modifying the videos by blurring faces, we conducted experiments to see how face blurring impacts performance. We compare performance on AViD (accuracy) as well as fine-tuning on HMDB (accuracy) and Charades (mAP) classification. The results are shown in Table 9. While face blurring slightly reduces performance, the impact is not that great. This suggests the blurring strikes a good balance between anonymization and keeping actions recognizable.
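The face-blurring step described in the Blurred Face Effect paragraph can be sketched as follows. The paper only states that a face detector and a strong Gaussian blur with randomized parameters were used; the Haar-cascade detector and the parameter ranges below are illustrative assumptions, not the authors' actual preprocessing code.

```python
import random
import cv2

# Detector choice and parameter ranges are assumptions for illustration.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(frame):
    """Blur every detected face in a BGR frame with a randomized Gaussian blur."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        # Random odd kernel size and sigma make the blur hard to invert,
        # mirroring the randomization argument made above.
        k = random.choice(range(31, 62, 2))
        sigma = random.uniform(10.0, 30.0)
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (k, k), sigma)
    return frame
```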
Importance of Time In videos, the use of temporal information is often important when recognizing actions, e.g., by using optical flow (Simonyan and Zisserman, 2014), stacking frames, RNNs (Ng et al., 2015), temporal pooling (Piergiovanni et al., 2017), and other approaches. In order to determine how much temporal information AViD needs, we compared single-frame models to multi-frame models. We then shuffled the frames to measure the performance drop. The results are shown in Table 10. We find that adding more frames benefits performance, while shuffling them harms multi-frame model performance. This suggests that temporal information is quite useful for recognizing actions in AViD, making it an appropriate dataset for developing spatio-temporal video models. 4 Conclusions We present AViD, a new, static, diverse and anonymized video dataset. We showed the importance of collecting and learning from diverse videos, which is not captured in existing video datasets. Further, AViD is static and easily distributed, enabling reproducible research. Finally, we showed that pretraining on AViD produces similar or better results when fine-tuning on datasets like HMDB and Charades. Broader Impacts We quantitatively confirmed that existing video datasets for action recognition are highly biased. To let people and researchers in diverse countries benefit more fairly from a public action recognition dataset, we propose the AViD dataset. We took care to query multiple websites from many countries in many languages to build a dataset that represents as many countries as possible. We experimentally showed that by doing this, we can reduce the bias of learned models. We are not aware of any other large-scale datasets (with hundreds of video hours) which took such country diversity into consideration during the collection process. As this dataset contains a wide variety of actions, it could enable malicious parties to build systems to monitor people. However, we took many steps to preserve the identity of people and eliminate the ability to learn face-based actions, which greatly reduces the negative uses of the data. The positive impact of this dataset is enabling reproducible research on video understanding, which will help advance the field with consistent and reliable baselines. We emphasize once more that our dataset is a static dataset respecting the licenses of all its videos. Acknowledgement This work was supported in part by the National Science Foundation (IIS-1812943 and CNS-1814985).
1. What is the focus and contribution of the paper regarding video action recognition? 2. What are the strengths of the proposed dataset, particularly in terms of diversity, privacy protection, and generalization? 3. What are the weaknesses of the paper, especially regarding the annotation process and human verification?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors propose a new video action recognition dataset, where the videos come from diverse countries, are static with creative-commons licenses, and are blurred to protect privacy. The authors demonstrate the necessity of creating such a dataset through comprehensive benchmark experiments. This work will be impactful for the community. Strengths 1. A new diverse, static, and privacy-protected video action recognition dataset. 2. It is large enough compared to existing datasets. 3. The authors demonstrate good generalization of the models pretrained on the proposed dataset. In addition, sufficient and detailed analysis is provided. Weaknesses 1. Currently, all the videos are short videos. However, it is unlikely that all the source videos are already well-trimmed. If they are not, then how are the temporal boundaries determined, and will these annotations be released? I would like to invite the authors to provide more clarification on this. 2. Is there a human verification or voting mechanism to make sure the human annotations are accurate?
NIPS
Title Necessary and sufficient graphical conditions for optimal adjustment sets in causal graphical models with hidden variables Abstract The problem of selecting optimal backdoor adjustment sets to estimate causal effects in graphical models with hidden and conditioned variables is addressed. Previous work has defined optimality as achieving the smallest asymptotic estimation variance and derived an optimal set for the case without hidden variables. For the case with hidden variables there can be settings where no optimal set exists and currently only a sufficient graphical optimality criterion of limited applicability has been derived. In the present work optimality is characterized as maximizing a certain adjustment information, which allows one to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set and a definition and algorithm to construct it. Further, the optimal set is valid if and only if a valid adjustment set exists and has higher (or equal) adjustment information than the Adjust-set proposed in Perković et al. [Journal of Machine Learning Research, 18: 1–62, 2018] for any graph. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation. Numerical experiments indicate that the asymptotic results also hold for relatively small sample sizes and that the optimal adjustment set or minimized variants thereof often yield better variance also beyond that estimator class. Surprisingly, among the randomly created setups more than 90% fulfill the optimality conditions, indicating that also in many real-world scenarios graphical optimality may hold. 1 Introduction A standard problem setting in causal inference is to estimate the causal effect between two variables given a causal graphical model that specifies qualitative causal relations among observed variables [Pearl, 2009], including a possible presence of hidden confounding variables. The graphical model then allows one to employ graphical criteria to identify valid adjustment sets, the most well-known being the backdoor criterion [Pearl, 1993] and the generalized adjustment criterion [Shpitser et al., 2010, Perković et al., 2015, 2018], providing a complete identification of all valid adjustment sets. Estimators of causal effects based on such a valid adjustment set as a covariate are then unbiased, but for different adjustment sets the estimation variance may strongly vary. An optimal adjustment set may be characterized as one that has minimal asymptotic estimation variance. In recent work, following Kuroki and Cai [2004] and Kuroki and Miyakawa [2003], Henckel et al. [2019] (abbreviated HPM19 in the following) showed that graphical optimality always holds for linear models in the causally sufficient case where all relevant variables are observed. In Witte et al. [2020] an alternative characterization of the optimal adjustment set is discussed and the approach was integrated into the IDA algorithm [Maathuis et al., 2009, 2010] that does not require the causal graph to be known. Rotnitzky and Smucler [2019] extended the results in HPM19 to asymptotically linear non-parametric graphical models.
HPM19’s optimal adjustment set holds for the causally sufficient case (no hidden variables) and the authors gave an example with hidden variables where optimality does not hold in general, i.e., the optimal adjustment set depends on the coefficients and noise terms (more generally, the distribution), rather than just the graph. Most recently, Smucler et al. [2021] (SSR20) partially extended these results to the non-parametric hidden variables case together with dynamic treatment regimes, i.e., conditional causal effects. SSR20 provide a sufficient criterion for an optimal set to exist and a definition based on a certain undirected graph-construction using a result by van der Zander et al. [2019]. However, their sufficient criterion is very restrictive and a current major open problem is a necessary and sufficient condition for an optimal adjustment set to exist in the hidden variable case and a corresponding definition of an optimal set. My main theoretical contribution is a solution to this problem. Optimality for conditional causal effects in the hidden variables case is fully characterized by an information-theoretic approach involving a certain difference of conditional mutual informations among the observed variables termed the adjustment information. Maximizing the adjustment information formalizes the common intuition to choose adjustment sets that maximally constrain the effect variable and minimally constrain the cause variable. This makes it possible to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set. The derived optimal adjustment set also has the property of minimum cardinality, i.e., no node can be removed without sacrificing optimality. Further, the optimal set is valid if and only if a valid adjustment set exists and has higher (or equal) adjustment information than the Adjust-set proposed in Perković et al. [2018] for any graph, whether graphical optimality holds or not. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation that, at present, I could only verify theoretically for the linear case. As practical contributions, the paper provides extensive numerical experiments that corroborate the theoretical results and show that the optimal adjustment set or minimized variants thereof often yield better variance also beyond the theoretically analyzed estimator class. Code is available in the python package https://github.com/jakobrunge/tigramite. More detailed preliminaries, proofs, algorithms, and further numerical experiments are given in the Supplementary Material. 1.1 Preliminaries and problem setting We consider causal effects in causal graphical models over a set of variables V with a joint distribution P = P(V) that is consistent with an acyclic directed mixed graph (ADMG) G = (V, E). Two nodes can possibly have more than one edge, which can be directed (←) or bi-directed (↔). See Fig. 1A for an example. Kinships are defined as usual: parents pa(X) for “•→X”, spouses sp(X) for “X↔•”, children ch(X) for “X→•”. These sets all exclude X. Correspondingly descendants des(X) and ancestors an(X) are defined, which, on the other hand, both include X. The mediator nodes on causal paths from X to Y are denoted M = M(X,Y) and exclude X and Y. For detailed preliminaries, including the definition of open and blocked paths, see Supplementary Section A.
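As a concrete illustration of these preliminaries, the sketch below encodes an ADMG as a directed graph plus a set of bidirected edges and computes the kinship sets pa, ch, sp, and des used throughout the paper. This is not the paper's implementation (the tigramite package referenced above provides that); it is a minimal, assumed representation for experimentation, using networkx.

```python
import networkx as nx

class ADMG:
    """Minimal ADMG: directed edges in a DiGraph, bidirected edges in a Graph."""
    def __init__(self, directed_edges, bidirected_edges):
        self.d = nx.DiGraph(directed_edges)
        self.b = nx.Graph(bidirected_edges)
        self.d.add_nodes_from(self.b.nodes)
        self.b.add_nodes_from(self.d.nodes)

    def pa(self, X):   # parents: nodes with an edge  • -> x  for some x in X
        return set().union(*(self.d.predecessors(x) for x in X)) - set(X)

    def ch(self, X):   # children: nodes with an edge  x -> •
        return set().union(*(self.d.successors(x) for x in X)) - set(X)

    def sp(self, X):   # spouses: nodes with an edge  x <-> •
        return set().union(*(self.b.neighbors(x) for x in X)) - set(X)

    def des(self, X):  # descendants, including X itself (as in the text)
        out = set(X)
        for x in X:
            out |= nx.descendants(self.d, x)
        return out

# Example: the graph X -> Y <-> Z1 discussed as Example A further below.
g = ADMG(directed_edges=[("X", "Y")], bidirected_edges=[("Y", "Z1")])
assert g.sp({"Y"}) == {"Z1"} and g.des({"X"}) == {"X", "Y"}
```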
In this work we only consider a univariate intervention variable X and effect variable Y. We simplify set notation and denote unions of variables as {W} ∪ M ∪ A = WMA. A (possibly empty) set of adjustment variables Z for the total causal effect of X on Y in an ADMG is called valid relative to (X,Y) if the interventional distribution for setting do(X = x) [Pearl, 2009] factorizes as p(Y|do(X = x)) = ∫ p(Y|x, z) p(z) dz for non-empty Z and as p(Y|do(X = x)) = p(Y|x) for empty Z. Valid adjustment sets, the set of which is here denoted Z, can be read off from a given causal graph using the generalized adjustment criterion [Perković et al., 2015, 2018] which generalizes Pearl’s back-door criterion [Pearl, 2009]. To this end define forb(X,Y) = X ∪ des(YM) (1) (henceforth just denoted as forb). A set Z is valid if both of the following conditions hold: (i) Z ∩ forb = ∅, and (ii) all non-causal paths from X to Y are blocked by Z. An adjustment set is called minimal if no strict subset of Z is still valid. The validity conditions can in principle be manually checked directly from the graph, but, more conveniently, Perković et al. [2018] define an adjustment set called ‘Adjust’ that is valid if and only if a valid adjustment set exists. In our setting including conditioning variables S we call this set the valid ancestors defined as vancs(X,Y,S) = an(XYS) \ forb (2) and refer to this set as vancs or Adjust-set. Our quantity of interest is the average total causal effect of an intervention to set X to x vs. x′ on the effect variable Y given a set of selected (conditioned) variables S = s: ∆yxx′|s = E(Y|do(x), s) − E(Y|do(x′), s). (3) We denote an estimator given a valid adjustment set Z as ∆̂yxx′|s.z. In the linear case ∆yxx′|s for x = x′ + 1 corresponds to the regression coefficient βYX·ZS in the regression of Y on X, Z, and S. The ordinary least squares (OLS) estimator β̂YX·ZS is a consistent estimator of βYX·ZS. Figure 1A illustrates the problem setting: We are interested in the total causal effect of (here univariate) X on Y (conditioned on S), which is here due to a direct link and an indirect causal path through a mediator M. There are six valid backdoor adjustment sets Z = {Z1, Z2, Z1Z2, Z2Z3, Z1Z3, Z1Z2Z3}. Z4 ∈ forb cannot be included in any set because it is a descendant of YM. Here vancs = Z1Z2S. All valid adjustment sets remove the bias due to confounding by their definition. The question is: which of these valid adjustment sets is statistically optimal in that it minimizes the asymptotic estimation variance? More formally, the task is, given a graph G and (X,Y,S), to choose a valid optimal set Zoptimal ∈ Z such that the causal effect estimator’s asymptotic variance Var(∆̂yxx′|s.z) = E[(∆yxx′|s − ∆̂yxx′|s.z)²] is minimal: Zoptimal ∈ argmin_{Z∈Z} Var(∆̂yxx′|s.z). (4) My proposed approach to optimal adjustment sets is based on information theory [Cover and Thomas, 2006]. The main quantity of interest there is the conditional mutual information (CMI), defined as a difference IX;Y|Z = HY|Z − HY|ZX of two (conditional) Shannon entropies, with HY|X = −∫ p(x, y) ln p(y|x) dx dy. Its main properties are non-negativity, IX;Y|Z = 0 if and only if X ⊥⊥ Y |Z, and the chain rule IXW;Y|Z = IX;Y|Z + IW;Y|ZX. All random variables in a CMI can be multivariate. Throughout the present paper we will assume the following. Assumptions 1 (General setting and assumptions).
We assume a causal graphical model over a set of variables V with a joint distribution P = P(V) that is consistent with an ADMG G = (V, E). We assume a non-zero causal effect from X on Y, potentially through a set of mediators M, and given selected conditioned variables S, where S ∩ forb = ∅. We assume that at least one valid adjustment set (given S) exists and, hence, the causal effect is identifiable (except when stated otherwise). Finally, we assume the usual Causal Markov Condition (implicit in semi-Markovian models) and Faithfulness. 2 Optimal adjustment sets 2.1 Information-theoretic characterization Figure 1B illustrates two causal effect estimates for a linear Gaussian model consistent with the graph in Fig. 1A. With Z = Z1Z2 (blue) the error is much larger than with O = Z2Z3 (orange) for two reasons: Z constrains the residual variance Var(Y|ZS) of the effect variable Y less than O and, on the other hand, Z constrains the residual variance Var(X|ZS) of the cause variable X more than O. Smaller estimator variance also holds for O compared to any other valid set in Z here. We information-theoretically formalize the resulting intuition to choose an adjustment set Z that maximally constrains the effect variable Y and minimally constrains the cause variable X. In terms of CMIs and given selected fixed conditions S, the quantity to maximize can be stated as follows. Definition 1 (Adjustment information). Consider a causal effect of X on Y for an adjustment set Z given a condition set S. The (conditional) adjustment (set) information, abbreviated JZ, is defined as JXY|S.Z ≡ IZ;Y|XS − IX;Z|S (5) = (HY|XS − HX|S) − (HY|XZS − HX|ZS), (6) where the first difference is not related to Z and the second difference, HY|XZS − HX|ZS, is called the adjustment entropy. JZ is not necessarily positive if the dependence between X and Z (given S) is larger than that between Z and Y given XS. Equation (6) follows from the CMI definition. Fig. 1C illustrates the two CMIs in Eq. (5) in a Venn diagram. Before discussing in Sect. 2.2 the range of estimators for which maximizing the adjustment information JZ leads to a minimal asymptotic estimation variance, we characterize graphical optimality in an information-theoretic framework. Our goal is to provide graphical criteria for optimal adjustment sets, i.e., criteria that depend only on the structure of the graph G and not on the distribution. Definition 2 (Information-theoretical graphical optimality). Given Assumptions 1 we say that (information-theoretical) graphical optimality holds if there is a Z ∈ Z such that either there is no other Z′ ̸= Z ∈ Z or for all other Z′ ̸= Z ∈ Z and all distributions P consistent with G we have JZ ≥ JZ′. My main result builds on the following lemma, which relates graphical optimality to information-theoretic inequalities in a necessary and sufficient comparison condition for an optimal set to exist. Lemma 1 (Necessary and sufficient comparison criterion for existence of an optimal set). Given Assumptions 1, if and only if there is a Z ∈ Z such that either there is no other Z′ ̸= Z ∈ Z or for all other Z′ ̸= Z ∈ Z and all distributions P consistent with G it holds that IZ\Z′;Y|Z′XS (term (i)) ≥ IZ′\Z;Y|ZXS (term (iii)) and IX;Z′\Z|ZS (term (ii)) ≥ IX;Z\Z′|Z′S (term (iv)), (7) then graphical optimality holds and Z is optimal, implying JZ ≥ JZ′. In SSR20 and HPM19 the corresponding conditional independence statements to the terms (iii) and (iv) in the inequalities (7) are used as a sufficient pairwise comparison criterion.
However, Lemma 1 shows that for graphical optimality it is not necessary that terms (iii) and (iv) vanish; they just need to fulfill the inequalities (7) for a necessary and sufficient criterion. In principle, Lemma 1 can be used to cross-compare all pairs of sets, but firstly, it is difficult to explicitly evaluate (7) for all distributions P consistent with G and, secondly, iterating through all valid adjustment sets is computationally prohibitive even for small graph sizes. As an example, consider a confounding path consisting of 5 nodes. Then this path can be blocked by 2^5 − 1 different subsets. In the main result of this work (Thm. 3) a necessary and sufficient criterion based purely on graphical properties is given. 2.2 Applicable estimator class The above characterization only relates optimality of adjustment sets to the adjustment information JZ defined in Eq. (5), but not to any particular estimator. Now the question is for which class of causal effect estimators ∆̂yxx′|s.z the intuition of maximizing the adjustment information JZ leads to a minimal asymptotic estimation variance. In its most general form this class is characterized as fulfilling Zoptimal ∈ argmax_{Z∈Z} JZ ⇔ Var(∆̂yxx′|s.zoptimal) = min_{Z∈Z} Var(∆̂yxx′|s.z), (8) where we assume that ∆̂yxx′|s.z is consistent due to a valid adjustment set and correct functional model specification. One can also further restrict the class to estimators whose (square-root of the) asymptotic variance can be expressed as √Var(∆̂yxx′|s.z) = f(HY|XZS − HX|ZS), (9) for a real-valued, strictly monotonically increasing function f of the adjustment entropy. Minimizing the adjustment entropy is by Eq. (6) equivalent to maximizing the adjustment information. The following assumption and lemma then relate JZ ≥ JZ′ to the corresponding asymptotic variances of a given estimator. Assumptions 2 (Estimator class assumption). The model class of the estimator for the causal effect (3) is correctly specified and its asymptotic variance can be expressed as in relation (9). Lemma 2 (Asymptotic variance and adjustment information). Given Assumptions 1 and an estimator fulfilling Assumptions 2, if and only if for two different adjustment sets Z, Z′ ∈ Z we have JZ ≥ JZ′, then the adjustment set Z has a smaller or equal asymptotic variance compared to Z′. Proof. By Equations (6) and (9), JZ ≥ JZ′ (for fixed X,Y,S) is directly related to a smaller or equal asymptotic variance for Z compared to Z′, and vice versa. □ The paper’s theoretical results currently hold for estimators fulfilling relation (9), but at least the main result on graphical optimality in Thm. 3 can also be relaxed to estimators fulfilling the less restrictive relation (8). In this work, we leave the question of which general classes of estimators fulfill either relation (8) or the more restricted relation (9) to further research and only show that it holds for the OLS estimator β̂YX·ZS for Gaussian distributions. For Gaussians the entropies in (9) are given by H(Y|XZS) = 1/2 + (1/2) ln(2πσ²Y|XZS) and H(X|ZS) = 1/2 + (1/2) ln(2πσ²X|ZS), where σ(·|·) denotes the square-root of the conditional variance. Then √Var(∆̂yxx′|s.z) = (1/√n) e^(HY|XZS − HX|ZS) = (1/√n) σY|XZS/σX|ZS. (10) This relation is also the basis of the results for the causally sufficient case in Henckel et al. [2019] where it is shown that it holds more generally for causal linear models that do not require the noise terms to be Gaussian.
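Relation (10) and Lemma 2 can be checked empirically with a short simulation. The toy linear Gaussian model below (a confounder Z1 of X and Y plus a "precision" variable Z2 that only drives Y) is an illustrative choice of our own, not a graph from the paper; both Z = {Z1} and Z = {Z1, Z2} are valid, but the larger set has higher adjustment information and should therefore give a lower-variance OLS estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    # Toy linear Gaussian SCM (an assumption for illustration):
    # Z1 confounds X and Y, Z2 only drives Y; true total effect of X on Y is 1.0.
    z1 = rng.normal(size=n)
    z2 = rng.normal(size=n)
    x = 0.8 * z1 + rng.normal(size=n)
    y = 1.0 * x + 0.8 * z1 + 0.8 * z2 + rng.normal(size=n)
    return x, y, z1, z2

def ols_effect(x, y, covariates):
    # OLS backdoor estimate: coefficient of X when regressing Y on [X, Z, 1].
    design = np.column_stack([x] + covariates + [np.ones_like(x)])
    return np.linalg.lstsq(design, y, rcond=None)[0][0]

n, reps = 500, 2000
for name, use_z2 in [("Z = {Z1}", False), ("Z = {Z1, Z2}", True)]:
    estimates = []
    for _ in range(reps):
        x, y, z1, z2 = simulate(n)
        covs = [z1, z2] if use_z2 else [z1]
        estimates.append(ols_effect(x, y, covs))
    print(name, "empirical std of estimate:", np.std(estimates))
# Adding Z2 leaves sigma_{X|ZS} unchanged but shrinks sigma_{Y|XZS},
# i.e. it increases J_Z, so the printed std for Z = {Z1, Z2} is smaller,
# consistent with relation (10).
```

On this toy model the standard deviation printed for Z = {Z1, Z2} is noticeably smaller than for Z = {Z1}, mirroring the comparison of two valid sets in Fig. 1B.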
2.3 Definition of O-set The optimal adjustment set for the causally sufficient case is simply P = pa(YM) \ forb and was derived in HPM19 and Rotnitzky and Smucler [2019]. In Section B.2 the derivation is discussed from an information-theoretic perspective. In the case with hidden variables we need to account for bidirected edges “↔” which considerably complicate the situation. Then the parents of YM are not sufficient to block all non-causal paths. Further, just like conditioning on parents of YM leads to optimality in the sufficient case since parents constrain information in YM, in the hidden variables case also conditioning on spouses of YM constrains information about YM. Example A. A simple graph (ADMG) to illustrate this is X→Y↔Z1 (shown with an additional S in Fig. 2A below, or Fig. 4 in SSR20). Here Z′ = ∅ = vancs is a valid set, but it is not optimal. Consider O = Z1, then term (iii) = 0 in the inequalities (7) since Z′ \ O = ∅. Even though not needed to block non-causal paths (there is none), Z1 still constrains information in Y while being independent of X (hence, term (iv) = 0), which leads to JO > J∅ according to the inequalities (7). Not only direct spouses can constrain information in Y, as Fig. 2B below illustrates. Since for W ∈ YM the motif “W↔C1←∗C2” (“∗” denotes either edge mark) is open, it holds that I(C1C2;W) = I(C1;W) + I(C2;W|C1) ≥ I(C1;W), and we can even further increase the first term in the adjustment information by conditioning also on subsequent spouses. This chain of colliders only ends if we reach a tail or there is no further adjacency. However, we have to make sure that conditioning on colliders does not open non-causal paths. This leads to the notion of a valid collider path (related to the notion of a district in Evans and Richardson [2014]). Definition 3 (Valid collider paths). Given a graph G, a collider path of W for k ≥ 1 is defined by a sequence of edges W↔C1↔···↔Ck. We denote the set of path nodes (excluding W) along a path indexed by i as πiW. Using the set of valid ancestors vancs = an(XYS) \ forb for the causal effect of X on Y given S, we call a collider path node set πiW for W ∈ YM valid w.r.t. (X,Y,S) if for each path node C ∈ πiW both of the following conditions are fulfilled: (1) C /∈ forb, and (2a) C ∈ vancs or (2b) C ⊥⊥ X | vancs. (11) Condition (1) is required for any valid adjustment set. If jointly (2a) and (2b) are not fulfilled, i.e., C /∈ vancs and C ̸⊥⊥ X | vancs, then the collider path stops before C. Our candidate optimal adjustment set is now constructed based on the parents of YM, valid collider path nodes of YM, and their parents to ‘close’ these collider paths. Definition 4 (O-set). Given Assumptions 1 and the definition of valid colliders in Def. 3, define the set O(X,Y,S) = P ∪ C ∪ PC where P = pa(YM) \ forb, C = ∪_{W∈YM} ∪_i {πiW : πiW is valid w.r.t. (X,Y,S)}, and PC = pa(C). In the following we will abbreviate O = O(X,Y,S). Algorithm C.1 states efficient pseudo-code to construct the O-set and detect whether a valid adjustment set exists. Since none of the conditions of Def. 3 for adding collider nodes depends on previously added nodes, the algorithm is order-independent. The statement occurring in lines 11 and 21 (“No valid adjustment set exists.”) is proven in Thm. 1. If the graph is a DAG, then lines 4-22 can be omitted. The algorithm is of low complexity and the most time-consuming part is checking for a path in line 12, Def.
3(2b) C ⊥⊥ X | vancs, which can be implemented with (bi-directional) breadth-first search as proposed in van der Zander et al. [2019]. Numerical experiments in Section 3 will show that further interesting adjustment sets are the minimized O-set Omin, where O is minimized such that no subset can be removed without making Omin invalid, and the collider-minimized O-set OCmin, where only CPC \ P ⊆ O is minimized such that no collider-subset can be removed without making OCmin invalid. Both adjustment sets can be constructed with Alg. C.2, similar to the efficient algorithms in van der Zander et al. [2019]. Also the minimized sets are order-independent since the nodes are removed only after the for-loops. Based on the idea in OCmin, in the numerical experiments we also consider AdjustXmin, where only Adjust \ pa(YM) is minimized and pa(YM) is always included. Finally, we also evaluate Adjustmin where Adjust is fully minimized. Before discussing the optimality of the O-set, we need to assure that it is a valid adjustment set. Similar to the proof given in Perković et al. [2018] for the validity of the vancs-set (for the case without S), we can state that the O-set is valid if and only if a valid adjustment set exists. Theorem 1 (Validity of O-set). Given Assumptions 1, but without a priori assuming that a valid adjustment set exists (apart from the requirement S ∩ forb = ∅): if and only if a valid backdoor adjustment set exists, then O is a valid adjustment set. 2.4 Graphical optimality We now move to the question of optimality. It is known that there are graphs where no graphical criterion exists to determine optimality. Examples, discussed later, are the graphs in Figs. 2E,F. Before stating necessary and sufficient conditions for graphical optimality, I mention that besides the O-set defined above and the Adjust-set vancs [Perković et al., 2018], I am not aware of any other systematically constructed set that will yield a valid adjustment set for the case with hidden variables. van der Zander et al. [2019] provide algorithms to list all valid adjustment sets, but the question is which of these a user should choose. As mentioned above, Lemma 1 can be used to cross-compare all pairs of sets, but this is not really feasible. Hence, for automated causal effect estimation, rather than the question of whether graphical optimality holds, it is crucial to have a set with better properties than other systematically constructable sets. The following theorem states that the adjustment informations follow JO ≥ Jvancs for any graph (whether graphical optimality holds or not). Theorem 2 (O-set vs. Adjust-set). Given Assumptions 1 with O defined in Def. 4 and the Adjust-set defined in Eq. (2), it holds that JO ≥ Jvancs for any graph G. We have JO = Jvancs only if (1) O = vancs, or (2) O ⊆ vancs and X ⊥⊥ vancs \ O | OS. In the following the O-set is illustrated and conditions for graphical optimality are explored. SSR20 provide a sufficient condition for optimality, which states that either all nodes are observed (no bidirected edges exist) or for all observed nodes V ⊂ vancs. This is a very strict assumption and not fulfilled for any of the examples (except for Example G) discussed in the following. Example B. Figure 2B depicts a larger example to illustrate the O-set O = PCPC with P = Z1Z2Z3Z4 (blue boxes) and CPC \ P = Z5Z6Z7Z8 (green boxes). We also have a conditioned variable S. Among P, only Z1Z2 are needed to block non-causal paths to X; Z3Z4 are only there to constrain information in Y.
Here the same holds for the whole set CPC \ P, which was constructed from the paths Y↔Z5↔Z6←Z7 and Y↔Z5↔Z8 and which does not include Z9 since it is a descendant of YM. Including an independent variable like Z12 in O would not decrease the adjustment information JO, but then O would not be of minimum cardinality anymore (proven in Cor. B.1). Here, again, the condition of SSR20 does not hold (e.g., Z5 is not an ancestor of XYS). O is optimal here, which can be seen as follows: For term (iii) in the inequalities (7) to even be non-zero, we would need a valid Z such that Z \ O has a path to Y given OSX. But these are all blocked. Note that while Z10 or Z9 ∈ Z would open a path to Y, both of these are descendants of M or Y and, hence, cannot be in a valid Z. For term (iv) to even be non-zero, O \ Z would need to have a path to X given ZS. But since any valid Z has to contain Z1 and Z2 (or Z11), the only nodes in O with a path to X are parents of YM, and paths from these parents to X all need to be blocked for a valid Z. Hence, O is optimal here. Example C. In Fig. 2C a case is shown where O = Z1Z5. Z2 is not part of O because none of the conditions in Def. 3(2) is fulfilled: Z2 /∈ vancs = Z1Z4Z5 and Z2 ̸⊥⊥ X | vancs. Hence, we call Z2 an N-node. But Z2 cannot be part of any valid Z because it has a collider path to X through Z1 which is always open because it is part of vancs. Hence, term (iii) is always zero. Term (iv) is zero because O \ Z is empty for any valid Z here. Here even JO > JZ since O is minimal and term (ii) IX;Z\O|O > 0 for any Z ̸= O (generally proven in Corollary B.1). Example D. The example in Fig. 2D depicts a case with O = Z2 where Z1 is an N-node. Besides Z = ∅, another valid set is Z = Z1. Then term (iii) is non-zero and in the same way term (iv) is non-zero. The sufficient pairwise comparison criterion in SSR20 and HPM19 is, hence, not applicable. However, it always holds that term (iii) ≤ (i) because the dependence between Z1 and Y given X is always smaller than the dependence between Z2 and Y given X, and correspondingly term (iv) ≤ (ii). Hence, O is optimal here. If a link Y→Z1 exists, then the only other valid set is Z = ∅ and both terms are strictly zero. Example E. The example in Fig. 2E (Fig. 3 in SSR20 and also discussed in HPM19) is not graphically optimal. Here O = Z1Z2. Other valid adjustment sets are Z1 or the empty set. From using Z1 ⊥⊥ Y |X and X ⊥⊥ Z2|Z1 in the inequalities (7) one can derive in information-theoretic terms that both Z1Z2 and ∅ are better than vancs = Z1, but since JZ1Z2 = J∅ + IZ2;Y|XZ1 − IX;Z1, which adjustment set is superior depends on how strong the link Z1→X is compared to the link Z2↔Y. The graph stays non-optimal also with a link Z1↔Z2. Example F. The example in Fig. 2F is also not graphically optimal. Here O = ∅ and Z2 is an N-node with a non-collider path to X. Other valid adjustment sets are Z1 and Z1Z2. Which set has higher adjustment information here depends on the distribution. Also the same graph with the link Z1↔X is non-optimal. If, however, there is another link Z1→Y, then O = ∅ is optimal (then Z1 is a mediator). Example G. The example in Fig. 2G is only a slight modification of Example E with an added selected condition S. Then Z1, Z2 ∈ vancs. We still get O = Z1Z2 and this is now optimal since Z2 is always open and any valid set has to contain Z1.
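For the causally sufficient special case mentioned at the start of Section 2.3 (a DAG with no bidirected edges), the optimal set reduces to P = pa(YM) \ forb and is easy to construct; the sketch below does exactly this with networkx. It deliberately omits the collider-path machinery of Defs. 3 and 4 (Algorithm C.1 and the tigramite package handle the general ADMG case), and the example graph is an assumed stand-in in the spirit of Fig. 1A.

```python
import networkx as nx

def optimal_adjustment_dag(g, X, Y):
    """Causally sufficient special case only: returns P = pa(YM) \\ forb for a
    DAG g, as at the start of Sec. 2.3. The general O-set of Def. 4 (valid
    collider paths, bidirected edges) is NOT handled here."""
    # Mediators: nodes on directed (causal) paths from X to Y, excluding X and Y.
    mediators = {v for path in nx.all_simple_paths(g, X, Y) for v in path} - {X, Y}
    ym = mediators | {Y}
    forb = {X} | set().union(*(nx.descendants(g, w) | {w} for w in ym))
    parents = set().union(*(g.predecessors(w) for w in ym))
    return parents - forb

# Example graph in the spirit of Fig. 1A (the exact edges are an assumption):
g = nx.DiGraph([("X", "M"), ("M", "Y"), ("X", "Y"),
                ("Z1", "X"), ("Z2", "Z1"), ("Z2", "Y"), ("Z3", "Y")])
print(optimal_adjustment_dag(g, "X", "Y"))  # e.g. {'Z2', 'Z3'}
```

Under the assumed edges this returns {Z2, Z3}, matching the set O = Z2Z3 discussed for Fig. 1B above.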
The main result of this work is a set of necessary and sufficient conditions for the existence of graphical optimality and the proof of optimality of the O-set, which is based on the intuition gained in the preceding examples. Theorem 3 (Necessary and sufficient graphical conditions for optimality and optimality of O-set). Given Assumptions 1 and with O = PCPC defined in Def. 4. Denote the set of N-nodes by N = sp(YMC) \ (forb OS). Finally, given an N ∈ N and a collider path N↔···↔C↔···↔W (including N↔W) for C ∈ C and W ∈ YM (indexed by i) with the collider path nodes denoted by πiN (excluding N and W), denote by OπiN = O(X,Y,S′ = SNπiN) the O-set for the causal effect of X on Y given S′ = S ∪ {N} ∪ πiN. If and only if exactly one valid adjustment set exists, or both of the following conditions are fulfilled, then graphical optimality holds and O is optimal: (I) For all N ∈ N and all its collider paths i to W ∈ YM that are inside C it holds that OπiN does not block all non-causal paths from X to Y, i.e., OπiN is non-valid, and (II) for all E ∈ O \ P with an open path to X given SO \ {E} there is a link E↔W or an extended collider path E∗→C↔···↔W inside C for W ∈ YM where all colliders C ∈ vancs. Conditions (I) and (II) essentially rule out the two canonical cases in Examples F and E, respectively, on which non-optimality in any graph is based. Applied to the examples, we obtain that in Example A Cond. (I) holds since no N-node exists and Cond. (II) holds since X ⊥⊥ Z1 | S. In Example B also no N-node exists and Cond. (II) holds as X ⊥⊥ E | SO \ {E} for every E ∈ O \ P. In Example C Z2 is an N-node, but there is a collider path to X through Z1 which is in vancs such that Cond. (I) is fulfilled. Further, while X ⊥⊥ Z5 | SO \ {Z5}, there is a link Z5↔Y such that Cond. (II) holds. In Example D Z1 is an N-node, but it has a bidirected link with X, and Cond. (II) holds since X ⊥⊥ Z2 | SO \ {Z2}. In Example E optimality does not hold, but Cond. (I) actually holds since there is no N-node. Cond. (II) is not fulfilled for E = Z1, which has a path to X given O, and on the extended collider path Z1→Z2↔Y we have Z2 /∈ vancs. For Z′ = ∅ and a distribution P′ where the link Z2↔Y almost vanishes we then have JO < JZ′. Example F has an N-node Z2 and OπiN = O(X,Y,S′ = Z2) = Z1Z2 is valid, implying that Cond. (I) does not hold, while Cond. (II) is actually fulfilled with O = ∅. For Z′ = OπiN = Z1Z2 and a distribution P′ where the link X→Z1 almost vanishes we then have JO < JZ′. Example G is optimal since there are no N-nodes and Z2 ∈ vancs. Similar to SSR20, HPM19, and Witte et al. [2020], I also provide results regarding minimality and minimum cardinality for the hidden variables case in the Supplement. 3 Numerical experiments We now investigate graphical optimality empirically to answer three questions: Firstly, whether for a linear estimator under Assumptions 2 the asymptotically optimal variance also translates into better finite-sample variance. Secondly, how the O-set performs in non-optimal settings (according to Thm. 3). Thirdly, how the O-set and variants thereof perform for estimators not captured by the class for which the theoretical results were derived (Assumptions 2). To this end, we compare the performance of O, Adjust, OCmin, Omin, AdjustXmin, and Adjustmin (see definitions in Section 2.3) together with linear least squares estimation (LinReg) on linear models.
In the Supplement we also investigate nonlinear models using nearest neighbor regression (kNN), a multilayer perceptron (MLP), random forest regression, and double machine learning for partially linear regression models (DML) [Chernozhukov et al., 2018]. The experiments are based on a generalized additive model and described in detail in Section D. Among the 12,000 randomly created configurations, 93% fulfill the optimality conditions in Thm. 3. The results in Fig. 3 confirm our first hypothesis that for linear experiments with an estimator fulfilling Assumptions 2 and in settings where graphical optimality is fulfilled (Thm. 3) the O-set either has similar RMSE or significantly outperforms all other tested variants. In particular, Omin and Adjustmin are bad choices for this setting. Adjust is intermediate and OCmin and AdjustXmin come closest to O, but may still yield significantly higher variance. Secondly, in non-optimal settings (only 7% of configurations) the O-set still outperforms Adjust (as expected by Thm. 2). Compared to OCmin and AdjustXmin the O-set leads to worse results for about half of the studied configurations, while Omin and Adjustmin are still bad choices. Cardinality is slightly higher for O compared to all other sets. In Fig. S7 we further differentiate the results by the cardinality of the O-set and find that for small cardinalities (up to 4) the O-set has the lowest variance in a majority of cases, but for higher cardinalities either OCmin or again O have the lowest variance (slightly beating AdjustXmin). Hence, either O or OCmin performs best in non-optimal configurations. For very small sample sizes n = 30 (see Fig. S2) that become comparable to the adjustment set cardinality, there tends to be a trade-off and smaller cardinality helps. Then OCmin tends to be better than O for high cardinalities also in optimal settings, but this effect is only present for n = 30 and is already negligible for n = 50 compared to the gain in JO. Appendix D.2 gives RMSE ratios for all combinations of adjustment approaches considered here and shows that, in general, results are very similar for other sample sizes. Thirdly, we investigate non-parametric estimators on linear as well as nonlinear models (implementations described in Section D, results in the figures of Section D.3). The different classes of estimators exhibit quite different behavior. For kNN (Figs. S8, S9) the O-set has the lowest variance in around 50% of the configurations, followed by OCmin and Omin. More specifically (Figs. S15, S16), for small O-set cardinalities up to 2 the O-set performs best, and for higher cardinalities either Omin or OCmin (the latter only in non-optimal configurations) perform best. For nonlinear experiments the results are less clear for O-set cardinalities greater than 2, but Omin is still a good choice. Regarding RMSE ratios, we see that, for the cases where O is not the best, the O-set can have considerably higher variance, while Omin seems to be most robust and may be a better choice if O is too large. MLP (Figs. S10, S11) behaves quite differently. Here in optimal cases no method outperforms any other for small O-set cardinalities, but for higher cardinalities (Figs. S15, S16) the O-set is best in more than 50% of configurations (slightly less for nonlinear experiments) and the others share the rest (except Adjustmin). For non-optimal cases O, OCmin and AdjustXmin share the ranks. Regarding RMSE, for linear experiments the O-results are almost as optimal as for the LinReg estimator in the optimal setting.
However, for non-optimal cases OCmin can have considerably smaller variance and seems to be a robust option then, similarly to AdjustXmin. Also for nonlinear experiments OCmin is more robust. The RF estimator (Figs. S12, S13) is again different. Here no method is clearly top-ranked; Omin and Adjustmin are slightly better for linear experiments and O for nonlinear experiments. OCmin and Omin are more robust regarding RMSE ratios (similar to AdjustXmin). Finally, the DML estimator (Fig. S14) was here applied only to linear experiments since its model assumption does not allow for fully nonlinear settings. For optimal settings here O is top-ranked in a majority of cases, but closely followed by OCmin and AdjustXmin. In non-optimal cases for higher O-set cardinalities these two seem like a better choice. Quantitatively, OCmin and AdjustXmin are the most robust choices. Overall, the O-set and its variants seem to outperform or match the Adjust-variants, and whether higher cardinality of the O-set reduces performance depends strongly on the estimator and data. 4 Discussion and Conclusions The proposed adjustment information formalizes the common intuition to choose adjustment sets that maximally constrain the effect variable and minimally constrain the cause variable. The main theoretical contributions are a necessary and sufficient graphical criterion for the existence of an optimal adjustment set in the hidden variables case and a definition and algorithm to construct it. To emphasize, graphical optimality implies that the O-set is optimal for any distribution consistent with the graph. Note that in cases where graphical optimality does not hold, there will still be distributions for which the O-set has maximal adjustment information. Further, the optimal set is valid if and only if a valid adjustment set exists and has smaller (or equal) asymptotic variance compared to the Adjust-set proposed in Perković et al. [2018] for any graph, whether graphical optimality holds or not. This makes the O-set a natural choice in automated causal inference analyses. Practical contributions comprise Python code to construct adjustment sets and check optimality, as well as extensive numerical experiments that demonstrate that the theoretical results also hold for relatively small sample sizes. The theoretical optimality results are limited to estimators for which the asymptotic variance becomes minimal for adjustment sets with maximal adjustment information (relation (8)). This is fulfilled for least-squares estimators, where even the direct relation (9) holds, but it is unclear whether this also holds for more general classes. The numerical results show that the O-set or minimized variants thereof often yield smaller variance also in non-optimal settings and beyond that estimator class. I speculate that further theoretical properties of maximizing adjustment information can be shown because relation (9) with f(·) = (1/√n) e^(HY|XZS − HX|ZS) seems related to the lower bound of the estimation variance counterpart to Fano’s inequality (Theorem 8.6.6 in Cover and Thomas [2006]). For estimators sensitive to high-dimensionality one may consider data-driven criteria or penalties to minimize the O-set step-wise. However, estimating, for example, the adjustment information from a potentially small sample size carries considerable errors itself. Another current limitation is that relation (9) only holds for univariate singleton cause variables X.
The information-theoretical results, however, also hold for multivariate X, and preliminary results indicate that, while relation (9) does not hold for multivariate X, the less restrictive relation (8) still seems to hold. The proposed information-theoretic approach can guide further research, for example, to theoretically study relations (8), (9) for other estimators and to address other types of graphs, such as those that emerge from the output of causal discovery algorithms and the setting where the graph is unknown [Witte et al., 2020, Maathuis et al., 2009, 2010]. At present, the approach only applies to ADMGs and Maximal Ancestral Graphs (MAGs) [Richardson and Spirtes, 2002] without selection variables. Last, it remains an open problem to identify optimal adjustment estimands for the hidden variables case based on other criteria such as the front-door formula and Pearl’s general do-calculus [Pearl, 2009]. The results may carry considerable practical impact since, surprisingly, among the randomly created configurations more than 90% fulfill the optimality conditions, indicating that also in many real-world scenarios graphical optimality may hold. Code is available in the python package https://github.com/jakobrunge/tigramite. Acknowledgments and Disclosure of Funding I thank Andreas Gerhardus for very helpful comments. This work was funded by the ERC Starting Grant CausalEarth (grant no. 948112).
1. What are the key contributions and novel aspects introduced by the paper in graphical models? 2. What are the weaknesses of the paper compared to prior works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Do you have any concerns or questions regarding the proposed adjustment information and information-theoretical graphical optimality? 5. Are there any limitations in the paper regarding its claims, experiments, and comparisons with other works?
Summary Of The Paper Review
Summary Of The Paper This paper provides necessary and sufficient graphical conditions for ‘optimal’ adjustment sets in causal graphical models with hidden variables, where the optimality is defined by a newly proposed metric, instead of the commonly used smallest asymptotic estimation variance. Review I think this paper needs to justify and further explain the proposed adjustment information (Definition 1) and information-theoretical graphical optimality (Definition 2). Intuitively, an estimator is statistically preferred if it is less biased and less variable than others. In Section 1, the authors mentioned that maximizing the adjustment information formalizes the common intuition to choose adjustment sets that maximally constrain the effect variable and minimally constrain the cause variable. I am wondering whether and how this “common intuition” is related to statistically preferred estimators. (Note that the variance is a measure of variability, but not the only one.) Besides, please provide a more detailed explanation of the linkage between this “common intuition” and the definition of adjustment information.
NIPS
Title Necessary and sufficient graphical conditions for optimal adjustment sets in causal graphical models with hidden variables Abstract The problem of selecting optimal backdoor adjustment sets to estimate causal effects in graphical models with hidden and conditioned variables is addressed. Previous work has defined optimality as achieving the smallest asymptotic estimation variance and derived an optimal set for the case without hidden variables. For the case with hidden variables there can be settings where no optimal set exists and currently only a sufficient graphical optimality criterion of limited applicability has been derived. In the present work optimality is characterized as maximizing a certain adjustment information which allows to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set and a definition and algorithm to construct it. Further, the optimal set is valid if and only if a valid adjustment set exists and has higher (or equal) adjustment information than the Adjust-set proposed in Perković et al. [Journal of Machine Learning Research, 18: 1–62, 2018] for any graph. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation. Numerical experiments indicate that the asymptotic results also hold for relatively small sample sizes and that the optimal adjustment set or minimized variants thereof often yield better variance also beyond that estimator class. Surprisingly, among the randomly created setups more than 90% fulfill the optimality conditions indicating that also in many real-world scenarios graphical optimality may hold. 1 Introduction A standard problem setting in causal inference is to estimate the causal effect between two variables given a causal graphical model that specifies qualitative causal relations among observed variables [Pearl, 2009], including a possible presence of hidden confounding variables. The graphical model then allows to employ graphical criteria to identify valid adjustment sets, the most well-known being the backdoor criterion [Pearl, 1993] and the generalized adjustment criterion [Shpitser et al., 2010, Perković et al., 2015, 2018], providing a complete identification of all valid adjustment sets. Estimators of causal effects based on such a valid adjustment set as a covariate are then unbiased, but for different adjustment sets the estimation variance may strongly vary. An optimal adjustment set may be characterized as one that has minimal asymptotic estimation variance. In current work, following Kuroki and Cai [2004] and Kuroki and Miyakawa [2003], Henckel et al. [2019] (abbreviated 35th Conference on Neural Information Processing Systems (NeurIPS 2021). HPM19 in the following) showed that graphical optimality always holds for linear models in the causally sufficient case where all relevant variables are observed. In Witte et al. [2020] an alternative characterization of the optimal adjustment set is discussed and the approach was integrated into the IDA algorithm [Maathuis et al., 2009, 2010] that does not require the causal graph to be known. Rotnitzky and Smucler [2019] extended the results in HPM19 to asymptotically linear non-parametric graphical models. 
HPM19’s optimal adjustment set holds for the causally sufficient case (no hidden variables) and the authors gave an example with hidden variables where optimality does not hold in general, i.e., the optimal adjustment set depends on the coefficients and noise terms (more generally, the distribution), rather than just the graph. Most recently, Smucler et al. [2021] (SSR20) partially extended these results to the non-parametric hidden variables case together with dynamic treatment regimes, i.e., conditional causal effects. SSR20 provide a sufficient criterion for an optimal set to exist and a definition based on a certain undirected graph-construction using a result by van der Zander et al. [2019]. However, their sufficient criterion is very restrictive and a current major open problem is a necessary and sufficient condition for an optimal adjustment set to exist in the hidden variable case and a corresponding definition of an optimal set. My main theoretical contribution is a solution to this problem. Optimality for conditional causal effects in the hidden variables case is fully characterized by an information-theoretic approach involving a certain difference of conditional mutual informations among the observed variables termed the adjustment information. Maximizing the adjustment information formalizes the common intuition to choose adjustment sets that maximally constrain the effect variable and minimally constrain the cause variable. This allows to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set. The derived optimal adjustment set also has the property of minimum cardinality, i.e., no node can be removed without sacrificing optimality. Further, the optimal set is valid if and only if a valid adjustment set exists and has higher (or equal) adjustment information than the Adjust-set proposed in Perković et al. [2018] for any graph, whether graphical optimality holds or not. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation that, at present, I could only verify theoretically for the linear case. As practical contributions the paper provides extensive numerical experiments that corroborate the theoretical results and show that the optimal adjustment set or minimized variants thereof often yield better variance also beyond the theoretically analyzed estimator class. Code is available in the python package https://github.com/jakobrunge/tigramite. More detailed preliminaries, proofs, algorithms, and further numerical experiments are given in the Supplementary Material. 1.1 Preliminaries and problem setting We consider causal effects in causal graphical models over a set of variables V with a joint distribution P = P(V) that is consistent with an acyclic directed mixed graph (ADMG) G = (V, E). Two nodes can have possibly more than one edge which can be directed (←) or bi-directed (↔). See Fig. 1A for an example. Kinships are defined as usual: parents pa(X) for “•→X”, spouses sp(X) for “X↔•”, children ch(X) for “X→•”. These sets all exclude X . Correspondingly descendants des(X) and ancestors an(X) are defined, which, on the other hand, both include X . The mediator nodes on causal paths from X to Y are denoted M = M(X,Y ) and exclude X and Y . For detailed preliminaries, including the definition of open and blocked paths, see Supplementary Section A. 
In this work we only consider a univariate intervention variable X and effect variable Y . We simplify set notation and denote unions of variables as {W} ∪M ∪A = WMA. A (possibly empty) set of adjustment variables Z for the total causal effect of X on Y in an ADMG is called valid relative to (X,Y ) if the interventional distribution for setting do(X = x) [Pearl, 2009] factorizes as p(Y |do(X = x)) = ∫ p(Y |x, z)p(z)dz for non-empty Z and as p(Y |do(X = x)) = p(Y |x) for empty Z. Valid adjustment sets, the set of which is here denoted Z , can be read off from a given causal graph using the generalized adjustment criterion [Perković et al., 2015, 2018] which generalizes Pearl’s back-door criterion [Pearl, 2009]. To this end define forb(X,Y ) = X ∪ des(YM) (1) (henceforth just denoted as forb). A set Z is valid if both of the following conditions hold: (i) Z ∩ forb = ∅, and (ii) all non-causal paths from X to Y are blocked by Z. An adjustment set is called minimal if no strict subset of Z is still valid. The validity conditions can in principle be manually checked directly from the graph, but, more conveniently, Perković et al. [2018] define an adjustment set called ‘Adjust’ that is valid if and only if a valid adjustment set exist. In our setting including conditioning variables S we call this set the valid ancestors defined as vancs(X,Y,S) = an(XY S) \ forb (2) and refer to this set as vancs or Adjust-set. Our quantity of interest is the average total causal effect of an intervention to set X to x vs. x′ on the effect variable Y given a set of selected (conditioned) variables S = s ∆yxx′|s = E(Y |do(x), s)− E(Y |do(x′), s) . (3) We denote an estimator given a valid adjustment set Z as ∆̂yxx′|s.z. In the linear case ∆yxx′|s for x = x′ + 1 corresponds to the regression coefficient βY X·ZS in the regression of Y on X , Z, and S. The ordinary least squares (OLS) estimator β̂Y X·ZS is a consistent estimator of βY X·ZS. Figure 1A illustrates the problem setting: We are interested in the total causal effect of (here univariate) X on Y (conditioned on S), which is here due to a direct link and an indirect causal path through a mediator M . There are six valid backdoor adjustment sets Z = {Z1, Z2, Z1Z2, Z2Z3, Z1Z3, Z1Z2Z3}. Z4 ∈ forb cannot be included in any set because it is a descendant of YM. Here vancs = Z1Z2S. All valid adjustment sets remove the bias due to confounding by their definition. The question is which of these valid adjustment sets is statistically optimal in that it minimizes the asymptotic estimation variance? More formally, the task is, given a graph G and (X,Y,S), to chose a valid optimal set Zoptimal ∈ Z such that the causal effect estimator’s asymptotic variance Var(∆̂yxx′|s.z) = E[(∆yxx′|s − ∆̂yxx′|s.z)2] is minimal: Zoptimal ∈ argminZ∈ZVar(∆̂yxx′|s.z) . (4) My proposed approach to optimal adjustment sets is based on information theory [Cover and Thomas, 2006]. The main quantity of interest there is the conditional mutual information (CMI) defined as a difference IX;Y |Z = HY |Z − HY |ZX of two (conditional) Shannon entropies HY |X = − ∫ x,y p(x, y) ln p(y|x)dxdy. Its main properties are non-negativity, IX;Y |Z = 0 if and only if X ⊥⊥ Y |Z, and the chain rule IXW ;Y |Z = IX;Y |Z + IW ;Y |ZX . All random variables in a CMI can be multivariate. Throughout the present paper we will assume the following. Assumptions 1 (General setting and assumptions). 
We assume a causal graphical model over a set of variables V with a joint distribution P = P(V) that is consistent with an ADMG G = (V, E). We assume a non-zero causal effect from X on Y, potentially through a set of mediators M, and given selected conditioned variables S, where S ∩ forb = ∅. We assume that at least one valid adjustment set (given S) exists and, hence, the causal effect is identifiable (except when stated otherwise). Finally, we assume the usual Causal Markov Condition (implicit in semi-Markovian models) and Faithfulness.

2 Optimal adjustment sets

2.1 Information-theoretic characterization

Figure 1B illustrates two causal effect estimates for a linear Gaussian model consistent with the graph in Fig. 1A. With Z = Z1Z2 (blue) the error is much larger than with O = Z2Z3 (orange) for two reasons: Z constrains the residual variance Var(Y |ZS) of the effect variable Y less than O and, on the other hand, Z constrains the residual variance Var(X|ZS) of the cause variable X more than O. Smaller estimator variance also holds for O compared to any other valid set in Z here. We information-theoretically formalize the resulting intuition to choose an adjustment set Z that maximally constrains the effect variable Y and minimally constrains the cause variable X. In terms of CMIs and given selected fixed conditions S the quantity to maximize can be stated as follows.

Definition 1 (Adjustment information). Consider a causal effect of X on Y for an adjustment set Z given a condition set S. The (conditional) adjustment (set) information, abbreviated JZ, is defined as
JXY |S.Z ≡ IZ;Y |XS − IX;Z|S (5)
= (HY |XS − HX|S) − (HY |XZS − HX|ZS) , (6)
where the first bracket does not depend on Z and the second bracket, HY |XZS − HX|ZS, is termed the adjustment entropy.

JZ is not necessarily positive: it becomes negative if the dependence between X and Z (given S) is larger than that between Z and Y given XS. Equation (6) follows from the CMI definition. Fig. 1C illustrates the two CMIs in Eq. (5) in a Venn diagram. Before discussing the range of estimators for which maximizing the adjustment information JZ leads to a minimal asymptotic estimation variance in Sect. 2.2, we characterize graphical optimality in an information-theoretic framework. Our goal is to provide graphical criteria for optimal adjustment sets, i.e., criteria that depend only on the structure of the graph G and not on the distribution.

Definition 2 (Information-theoretical graphical optimality). Given Assumptions 1 we say that (information-theoretical) graphical optimality holds if there is a Z ∈ Z such that either there is no other Z′ ̸= Z ∈ Z or for all other Z′ ̸= Z ∈ Z and all distributions P consistent with G we have JZ ≥ JZ′.

My main result builds on the following lemma which relates graphical optimality to information-theoretic inequalities in a necessary and sufficient comparison condition for an optimal set to exist.

Lemma 1 (Necessary and sufficient comparison criterion for existence of an optimal set). Given Assumptions 1, if and only if there is a Z ∈ Z such that either there is no other Z′ ̸= Z ∈ Z or for all other Z′ ̸= Z ∈ Z and all distributions P consistent with G it holds that
IZ\Z′;Y |Z′XS ≥ IZ′\Z;Y |ZXS and IX;Z′\Z|ZS ≥ IX;Z\Z′|Z′S , (7)
then graphical optimality holds and Z is optimal, implying JZ ≥ JZ′. The four terms in the inequalities (7) are referred to as (i), (iii), (ii), and (iv), respectively (from left to right).

In SSR20 and HPM19 the corresponding conditional independence statements to the terms (iii) and (iv) in the inequalities (7) are used as a sufficient pairwise comparison criterion.
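For jointly Gaussian variables all CMIs above have closed forms in terms of covariance-matrix log-determinants, so both JZ and the four terms (i)-(iv) of the inequalities (7) can be evaluated numerically. The following sketch does this for a small linear toy SEM of my own (the graph, coefficients and variable names are illustrative assumptions, not one of the paper's figures).

```python
# Gaussian CMIs from log-determinants of covariance blocks, used to evaluate the
# adjustment information J_Z (Eq. (5)) and the four terms (i)-(iv) of inequalities (7).
# Illustrative linear toy SEM (my own choice, not a figure from the paper):
#   Za -> Zb -> X -> Y  and  Za -> Y;  S is empty.
# Valid adjustment sets include {Za}, {Zb}, {Za, Zb}.
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
Za = rng.normal(size=n)
Zb = 0.7 * Za + rng.normal(size=n)
X = 0.8 * Zb + rng.normal(size=n)
Y = 1.0 * X + 0.9 * Za + rng.normal(size=n)
data = {"Za": Za, "Zb": Zb, "X": X, "Y": Y}

def logdet(names):
    if not names:
        return 0.0
    cov = np.atleast_2d(np.cov(np.vstack([data[v] for v in names])))
    return np.log(np.linalg.det(cov))

def cmi(A, B, C=()):
    """Gaussian conditional mutual information I(A;B|C)."""
    A, B, C = list(A), list(B), list(C)
    return 0.5 * (logdet(A + C) + logdet(B + C) - logdet(C) - logdet(A + B + C))

def J(Z, S=()):
    """Adjustment information J_Z = I(Z;Y|XS) - I(X;Z|S)."""
    return cmi(Z, ["Y"], ["X", *S]) - cmi(["X"], Z, S)

Z, Zp = ["Za"], ["Zb"]          # compare Z = {Za} against Z' = {Zb}
terms = {
    "(i)  ": cmi([v for v in Z if v not in Zp], ["Y"], Zp + ["X"]),
    "(iii)": cmi([v for v in Zp if v not in Z], ["Y"], Z + ["X"]),
    "(ii) ": cmi(["X"], [v for v in Zp if v not in Z], Z),
    "(iv) ": cmi(["X"], [v for v in Z if v not in Zp], Zp),
}
print(terms)
print("J_{Za} =", J(Z), " J_{Zb} =", J(Zp))
```

For this toy model terms (iii) and (iv) come out as zero up to sampling error while (i) and (ii) are clearly positive, and JZa > JZb, matching the intuition that the parent of Y is the preferable adjustment set.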
However, Lemma 1 shows that for graphical optimality it is not necessary that terms (iii) and (iv) vanish; they just need to fulfill the inequalities (7) for a necessary and sufficient criterion. In principle, Lemma 1 can be used to cross-compare all pairs of sets, but firstly, it is difficult to explicitly evaluate (7) for all distributions P consistent with G and, secondly, iterating through all valid adjustment sets is computationally prohibitive even for small graph sizes. As an example, consider a confounding path consisting of 5 nodes. Then this path can be blocked by 2^5 − 1 different subsets. In the main result of this work (Thm. 3) a necessary and sufficient criterion based purely on graphical properties is given.

2.2 Applicable estimator class

The above characterization only relates optimality of adjustment sets to the adjustment information JZ defined in Eq. (5), but not to any particular estimator. Now the question is for which class of causal effect estimators ∆̂yxx′|s.z the intuition of maximizing the adjustment information JZ leads to a minimal asymptotic estimation variance. In its most general form this class is characterized as fulfilling
Zoptimal ∈ argmaxZ∈Z JZ ⇔ Var(∆̂yxx′|s.zoptimal) = minZ∈Z Var(∆̂yxx′|s.z) , (8)
where we assume that ∆̂yxx′|s.z is consistent due to a valid adjustment set and correct functional model specification. One can also further restrict the class to estimators whose (square-root of the) asymptotic variance can be expressed as
√Var(∆̂yxx′|s.z) = f(HY |XZS − HX|ZS) , (9)
for a real-valued, strictly monotonically increasing function f of the adjustment entropy. Minimizing the adjustment entropy is by Eq. (6) equivalent to maximizing the adjustment information. The following assumption and lemma then relate JZ ≥ JZ′ to the corresponding asymptotic variances of a given estimator.

Assumptions 2 (Estimator class assumption). The model class of the estimator for the causal effect (3) is correctly specified and its asymptotic variance can be expressed as in relation (9).

Lemma 2 (Asymptotic variance and adjustment information). Given Assumptions 1 and an estimator fulfilling Assumptions 2, if and only if for two different adjustment sets Z,Z′ ∈ Z we have JZ ≥ JZ′, then the adjustment set Z has a smaller or equal asymptotic variance compared to Z′.

Proof. By Equations (6) and (9) JZ ≥ JZ′ (for fixed X,Y,S) is directly related to a smaller or equal asymptotic variance for Z compared to Z′, and vice versa. □

The paper’s theoretical results currently hold for estimators fulfilling relation (9), but at least the main result on graphical optimality in Thm. 3 can also be relaxed to estimators fulfilling the less restrictive relation (8). In this work, we leave the question of which general classes of estimators fulfill either relation (8) or the more restricted relation (9) to further research and only show that it holds for the OLS estimator β̂Y X·ZS for Gaussian distributions. For Gaussians the entropies in (9) are given by H(Y |XZS) = 1/2 + (1/2) ln(2πσ²Y|XZS) and H(X|ZS) = 1/2 + (1/2) ln(2πσ²X|ZS) where σ(·|·) denotes the square-root of the conditional variance. Then
√Var(∆̂yxx′|s.z) = (1/√n) e^(HY|XZS − HX|ZS) = (1/√n) σY|XZS / σX|ZS . (10)
This relation is also the basis of the results for the causally sufficient case in Henckel et al. [2019] where it is shown that it holds more generally for causal linear models that do not require the noise terms to be Gaussian.
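As a quick numerical check of relation (10), the following sketch (again an illustrative linear-Gaussian toy SEM of my own choosing, not one of the paper's figures) compares the empirical standard deviation of the OLS effect estimate across repetitions with the value predicted by Eq. (10) for two different valid adjustment sets.

```python
# Monte-Carlo check of relation (10): sd(beta_hat) ~= sigma_{Y|XZ} / (sigma_{X|Z} * sqrt(n)).
# Illustrative toy SEM (my own choice): Za -> Zb -> X -> Y and Za -> Y, true effect 1.0;
# both {Za} and {Zb} are valid adjustment sets, S is empty.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n):
    Za = rng.normal(size=n)
    Zb = 0.7 * Za + rng.normal(size=n)
    X = 0.8 * Zb + rng.normal(size=n)
    Y = 1.0 * X + 0.9 * Za + rng.normal(size=n)
    return {"Za": Za, "Zb": Zb, "X": X, "Y": Y}

def ols(target, predictors, d):
    """Residual sd and coefficient vector of an OLS regression with intercept."""
    D = np.column_stack([d[p] for p in predictors] + [np.ones(len(d[target]))])
    coefs, *_ = np.linalg.lstsq(D, d[target], rcond=None)
    resid = d[target] - D @ coefs
    return resid.std(), coefs

n, reps = 500, 2000
big = simulate(200_000)                       # large sample to estimate conditional sds
for Z in (["Za"], ["Zb"]):
    sd_y_given_xz, _ = ols("Y", ["X"] + Z, big)
    sd_x_given_z, _ = ols("X", Z, big)
    predicted = sd_y_given_xz / (sd_x_given_z * np.sqrt(n))   # Eq. (10)
    # Empirical sd of the OLS causal-effect estimate over many repetitions at sample size n.
    estimates = [ols("Y", ["X"] + Z, simulate(n))[1][0] for _ in range(reps)]
    print(f"Z={Z}: empirical sd = {np.std(estimates):.4f}, Eq. (10) predicts {predicted:.4f}")
```

For this toy model the adjustment set {Za} (the parent of Y) yields the smaller predicted and empirical standard deviation, consistent with Lemma 2.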
2.3 Definition of O-set

The optimal adjustment set for the causally sufficient case is simply P = pa(YM) \ forb and was derived in HPM19 and Rotnitzky and Smucler [2019]. In Section B.2 the derivation is discussed from an information-theoretic perspective. In the case with hidden variables we need to account for bidirected edges “↔” which considerably complicate the situation. Then the parents of YM are not sufficient to block all non-causal paths. Further, just like conditioning on parents of YM leads to optimality in the sufficient case since parents constrain information in YM, in the hidden variables case also conditioning on spouses of YM constrains information about YM.

Example A. A simple graph (ADMG) to illustrate this is X→Y↔Z1 (shown with an additional S in Fig. 2A below, or Fig. 4 in SSR20). Here Z′ = ∅ = vancs is a valid set, but it is not optimal. Consider O = Z1, then term (iii) = 0 in the inequalities (7) since Z′ \ O = ∅. Even though not needed to block non-causal paths (there are none), Z1 still constrains information in Y while being independent of X (hence, term (iv) = 0), which leads to JO > J∅ according to the inequalities (7).

Not only direct spouses can constrain information in Y, as Fig. 2B below illustrates. Since for W ∈ YM the motif “W↔ C1 ←∗C2” (“∗” denotes either edge mark) is open, it holds that I(C1C2;W) = I(C1;W) + I(C2;W |C1) ≥ I(C1;W) and we can even further increase the first term in the adjustment information by conditioning also on subsequent spouses. This chain of colliders only ends if we reach a tail or there is no further adjacency. However, we have to make sure that conditioning on colliders does not open non-causal paths. This leads to the notion of a valid collider path (related to the notion of a district in Evans and Richardson [2014]).

Definition 3 (Valid collider paths). Given a graph G, a collider path of W for k ≥ 1 is defined by a sequence of edges W↔C1↔· · ·↔Ck. We denote the set of path nodes (excluding W) along a path indexed by i as πiW. Using the set of valid ancestors vancs = an(XY S) \ forb for the causal effect of X on Y given S we call a collider path node set πiW for W ∈ YM valid w.r.t. (X,Y,S) if for each path node C ∈ πiW both of the following conditions are fulfilled: (1) C /∈ forb, and (2a) C ∈ vancs or (2b) C ⊥⊥ X | vancs. (11)

Condition (1) is required for any valid adjustment set. If neither (2a) nor (2b) is fulfilled, i.e., C /∈ vancs and C is not independent of X given vancs, then the collider path stops before C. Our candidate optimal adjustment set is now constructed based on the parents of YM, valid collider path nodes of YM, and their parents to ‘close’ these collider paths.

Definition 4 (O-set). Given Assumptions 1 and the definition of valid colliders in Def. 3, define the set O(X,Y,S) = P ∪ C ∪ PC where P = pa(YM) \ forb, C = ∪W∈YM ∪i {πiW : πiW is valid w.r.t. (X,Y,S)}, PC = pa(C).

In the following we will abbreviate O = O(X,Y,S). Algorithm C.1 states efficient pseudo-code to construct the O-set and detect whether a valid adjustment set exists. Since none of the conditions of Def. 3 for adding collider nodes depends on previously added nodes, the algorithm is order-independent. The statement occurring in lines 11 and 21 (“No valid adjustment set exists.”) is proven in Thm. 1. If the graph is a DAG, then lines 4-22 can be omitted. The algorithm is of low complexity and the most time-consuming part is checking for a path in line 12, Def.
3(2b) C ⊥⊥ X | vancs, which can be implemented with (bi-directional) breadth-first search as proposed in van der Zander et al. [2019]. Numerical experiments in Section 3 will show that further interesting adjustment sets are the minimized O-set Omin, where O is minimized such that no subset can be removed without making Omin invalid, and the collider-minimized O-set OCmin where only CPC \ P ⊆ O is minimized such that no collider-subset can be removed without making OCmin invalid. Both adjustment sets can be constructed with Alg. C.2 similar to the efficient algorithms in van der Zander et al. [2019]. Also the minimized sets are order-independent since the nodes are removed only after the for-loops. Based on the idea in OCmin, in the numerical experiments we also consider AdjustXmin, where only Adjust \ pa(YM) is minimized and pa(YM) is always included. Finally, we also evaluate Adjustmin where Adjust is fully minimized. Before discussing the optimality of the O-set, we need to ensure that it is a valid adjustment set. Similar to the proof given in Perković et al. [2018] for the validity of the vancs-set (for the case without S), we can state that the O-set is valid if and only if a valid adjustment set exists.

Theorem 1 (Validity of O-set). Given Assumptions 1 but without a priori assuming that a valid adjustment set exists (apart from the requirement S ∩ forb = ∅). If and only if a valid backdoor adjustment set exists, then O is a valid adjustment set.

2.4 Graphical optimality

We now move to the question of optimality. It is known that there are graphs where no graphical criterion exists to determine optimality. Examples, discussed later, are the graphs in Figs. 2E,F. Before stating necessary and sufficient conditions for graphical optimality, I mention that next to the O-set defined above and the Adjust set vancs [Perković et al., 2018], I am not aware of any other systematically constructed set that will yield a valid adjustment set for the case with hidden variables. van der Zander et al. [2019] provide algorithms to list all valid adjustment sets, but the question is which of these a user should choose. As mentioned above, Lemma 1 can be used to cross-compare all pairs of sets, but this is not really feasible. Hence, for automated causal effect estimation, rather than the question of whether graphical optimality holds, it is crucial to have a set with better properties than other systematically constructable sets. The following theorem states that the adjustment information satisfies JO ≥ Jvancs for any graph (whether graphical optimality holds or not).

Theorem 2 (O-set vs. Adjust-set). Given Assumptions 1 with O defined in Def. 4 and the Adjust-set defined in Eq. (2), it holds that JO ≥ Jvancs for any graph G. We have JO = Jvancs only if (1) O = vancs, or (2) O ⊆ vancs and X ⊥⊥ vancs \ O | OS.

In the following the O-set is illustrated and conditions for graphical optimality are explored. SSR20 provide a sufficient condition for optimality, which states that either all nodes are observed (no bidirected edges exist) or for all observed nodes V ⊂ vancs. This is a very strict assumption and not fulfilled for any of the examples (except for Example G) discussed in the following.

Example B. Figure 2B depicts a larger example to illustrate the O-set O = PCPC with P = Z1Z2Z3Z4 (blue boxes) and CPC \ P = Z5Z6Z7Z8 (green boxes). We also have a conditioned variable S. Among P, only Z1Z2 are needed to block non-causal paths to X; Z3Z4 are only there to constrain information in Y.
Here the same holds for the whole set CPC \ P, which was constructed from the paths Y↔Z5↔Z6←Z7 and Y↔Z5↔Z8 and does not include Z9 since it is a descendant of YM. Including an independent variable like Z12 in O would not decrease the adjustment information JO, but then O would not be of minimum cardinality anymore (proven in Cor. B.1). Here, again, the condition of SSR20 does not hold (e.g., Z5 is not an ancestor of XY S). O is optimal here, which can be seen as follows: For term (iii) in the inequalities (7) to even be non-zero, we would need a valid Z such that Z \ O has a path to Y given OSX. But these are all blocked. Note that while Z10 or Z9 ∈ Z would open a path to Y, both of these are descendants of M or Y and, hence, cannot be in a valid Z. For term (iv) to even be non-zero, O \ Z would need to have a path to X given ZS. But since any valid Z has to contain Z1 and Z2 (or Z11), the only nodes in O with a path to X are parents of YM and paths from these parents to X all need to be blocked for a valid Z. Hence, O is optimal here.

Example C. In Fig. 2C a case is shown where O = Z1Z5. Z2 is not part of O because none of the conditions in Def. 3(2) is fulfilled: Z2 /∈ vancs = Z1Z4Z5 and Z2 is not independent of X given vancs. Hence, we call Z2 an N-node. But Z2 cannot be part of any valid Z because it has a collider path to X through Z1 which is always open because it is part of vancs. Hence, term (iii) is always zero. Term (iv) is zero because O \ Z is empty for any valid Z here. Here even JO > JZ since O is minimal and term (ii) IX;Z\O|O > 0 for any Z ̸= O (generally proven in Corollary B.1).

Example D. The example in Fig. 2D depicts a case with O = Z2 where Z1 is an N-node. Next to Z = ∅ another valid set is Z = Z1. Then term (iii) is non-zero and in the same way term (iv) is non-zero. The sufficient pairwise comparison criterion in SSR20 and HPM19 is, hence, not applicable. However, it always holds that term (iii) ≤ (i) because the dependence between Z1 and Y given X is always smaller than the dependence between Z2 and Y given X, and correspondingly term (iv) ≤ (ii). Hence, O is optimal here. If a link Y→Z1 exists, then the only other valid set is Z = ∅ and both terms are strictly zero.

Example E. The example in Fig. 2E (Fig. 3 in SSR20 and also discussed in HPM19) is not graphically optimal. Here O = Z1Z2. Other valid adjustment sets are Z1 or the empty set. From using Z1 ⊥⊥ Y |X and X ⊥⊥ Z2|Z1 in the inequalities (7) one can derive in information-theoretic terms that both Z1Z2 and ∅ are better than vancs = Z1, but since JZ1Z2 = J∅ + IZ2;Y |XZ1 − IX;Z1, which adjustment set is superior depends on how strong the link Z1→X is relative to Z2↔Y. The graph stays non-optimal also with a link Z1↔Z2.

Example F. The example in Fig. 2F is also not graphically optimal. Here O = ∅ and Z2 is an N-node with a non-collider path to X. Other valid adjustment sets are Z1 and Z1Z2. Which set has the higher adjustment information here depends on the distribution. Also the same graph with the link Z1↔X is non-optimal. If, however, there is another link Z1→Y, then O = ∅ is optimal (then Z1 is a mediator).

Example G. The example in Fig. 2G is only a slight modification of Example E with an added selected condition S. Then Z1, Z2 ∈ vancs. We still get O = Z1Z2 and this is now optimal since Z2 is always open and any valid set has to contain Z1.
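To see the distribution-dependence of Example E numerically, the sketch below simulates a linear SEM that, to my reading of the description, is consistent with Fig. 2E (Z1→X, Z1→Z2, X→Y, and Z2↔Y represented by an explicit latent common cause); the edge placements and coefficients are assumptions for illustration only. It compares the empirical variability of the effect estimate for Z = ∅ and O = Z1Z2 under a strong versus a weak Z2↔Y link.

```python
# Example E illustration (my reconstruction of Fig. 2E from the text; all coefficients are
# illustrative assumptions): Z1 -> X, Z1 -> Z2, X -> Y, and Z2 <-> Y modelled by a latent L
# with L -> Z2 and L -> Y.  Valid adjustment sets: {}, {Z1}, {Z1, Z2} = O.
# Which of {} and O has the smaller estimator variability depends on the coefficients,
# i.e., graphical optimality does not hold for this graph.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 500, 2000

def ols_coef_x(Y, X, Zs):
    """Coefficient of X when regressing Y on X, the adjustment set Zs, and an intercept."""
    D = np.column_stack([X] + Zs + [np.ones(len(X))])
    return np.linalg.lstsq(D, Y, rcond=None)[0][0]

def empirical_sd(a_z1x, b_z2y):
    """Std. dev. of the effect estimate for Z = {} and Z = {Z1, Z2} over many repetitions."""
    est_empty, est_o = [], []
    for _ in range(reps):
        L = rng.normal(size=n)                      # latent representing Z2 <-> Y
        Z1 = rng.normal(size=n)
        Z2 = 0.8 * Z1 + 1.0 * L + rng.normal(size=n)
        X = a_z1x * Z1 + rng.normal(size=n)
        Y = 1.0 * X + b_z2y * L + rng.normal(size=n)
        est_empty.append(ols_coef_x(Y, X, []))
        est_o.append(ols_coef_x(Y, X, [Z1, Z2]))
    return np.std(est_empty), np.std(est_o)

for a, b in [(1.5, 0.1), (0.1, 1.5)]:   # strong Z1->X / weak Z2<->Y, and vice versa
    sd_empty, sd_o = empirical_sd(a, b)
    print(f"Z1->X={a}, Z2<->Y={b}:  sd(empty)={sd_empty:.4f}  sd(O)={sd_o:.4f}")
```

Depending on the relative strengths, either ∅ or O comes out with the smaller variability, which is exactly why no graphical criterion can single out an optimal set for this graph.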
The main result of this work is a set of necessary and sufficient conditions for the existence of graphical optimality and the proof of optimality of the O-set, which is based on the intuition gained in the preceding examples.

Theorem 3 (Necessary and sufficient graphical conditions for optimality and optimality of O-set). Given Assumptions 1 and with O = PCPC defined in Def. 4. Denote the set of N-nodes by N = sp(YMC) \ (forbOS). Finally, given an N ∈ N and a collider path N↔· · ·↔C↔· · ·↔W (including N↔W) for C ∈ C and W ∈ YM (indexed by i) with the collider path nodes denoted by πNi (excluding N and W), denote by OπNi = O(X,Y,S′ = SNπNi) the O-set for the causal effect of X on Y given S′ = S ∪ {N} ∪ πNi. If and only if exactly one valid adjustment set exists, or both of the following conditions are fulfilled, then graphical optimality holds and O is optimal: (I) For all N ∈ N and all its collider paths i to W ∈ YM that are inside C it holds that OπNi does not block all non-causal paths from X to Y, i.e., OπNi is non-valid, and (II) for all E ∈ O \ P with an open path to X given SO \ {E} there is a link E↔W or an extended collider path E∗→C↔· · ·↔W inside C for W ∈ YM where all colliders C ∈ vancs.

Conditions (I) and (II) essentially rule out the two canonical cases in Examples F and E, respectively, on which non-optimality in any graph is based. Applied to the examples, we obtain that in Example A Cond. (I) holds since no N-node exists and Cond. (II) holds since X ⊥⊥ Z1 | S. In Example B also no N-node exists and Cond. (II) holds as X ⊥⊥ E | SO \ {E} for every E ∈ O \ P. In Example C, Z2 is an N-node, but there is a collider path to X through Z1 which is in vancs such that Cond. (I) is fulfilled. Further, while X ⊥⊥ Z5 | SO \ {Z5}, there is a link Z5↔Y such that Cond. (II) holds. In Example D, Z1 is an N-node, but it has a bidirected link with X and Cond. (II) holds since X ⊥⊥ Z2 | SO \ {Z2}. In Example E optimality does not hold, but Cond. (I) actually holds since there is no N-node. Cond. (II) is not fulfilled for E = Z1, which has a path to X given O and on the extended collider path Z1→Z2↔Y, Z2 /∈ vancs. For Z′ = ∅ and a distribution P′ where the link Z2↔Y almost vanishes we then have JO < JZ′. Example F has an N-node Z2 and OπNi = O(X,Y,S′ = Z2) = Z1Z2 is valid, implying that Cond. (I) does not hold, while Cond. (II) is actually fulfilled with O = ∅. For Z′ = OπNi = Z1Z2 and a distribution P′ where the link X→Z1 almost vanishes we then have JO < JZ′. Example G is optimal since there are no N-nodes and Z2 ∈ vancs. Similar to SSR20, HPM19, and Witte et al. [2020], I also provide results regarding minimality and minimum cardinality for the hidden variables case in the Supplement.

3 Numerical experiments

We now investigate graphical optimality empirically to answer three questions: Firstly, whether for a linear estimator under Assumptions 2 the asymptotically optimal variance also translates into better finite-sample variance. Secondly, how the O-set performs in non-optimal settings (according to Thm. 3). Thirdly, how the O-set and variants thereof perform for estimators not captured by the class for which the theoretical results were derived (Assumptions 2). To this end, we compare the performance of O, Adjust, OCmin, Omin, AdjustXmin, and Adjustmin (see definitions in Section 2.3) together with linear least squares estimation (LinReg) on linear models.
In the Supplement we also investigate nonlinear models using nearest neighbor regression (kNN), a multilayer perceptron (MLP), random forest regression, and double machine learning for partially linear regression models (DML) [Chernozhukov et al., 2018]. The experiments are based on a generalized additive model and are described in detail in Section D. Among the 12,000 randomly created configurations, 93% fulfill the optimality conditions in Thm. 3. The results in Fig. 3 confirm our first hypothesis that for linear experiments with an estimator fulfilling Assumptions 2 and in settings where graphical optimality is fulfilled (Thm. 3) the O-set either has similar RMSE or significantly outperforms all other tested variants. In particular, Omin and Adjustmin are bad choices for this setting. Adjust is intermediate and OCmin and AdjustXmin come closest to O, but may still yield significantly higher variance. Secondly, in non-optimal settings (only 7% of configurations) the O-set still outperforms Adjust (as expected by Thm. 2). Compared to OCmin and AdjustXmin the O-set leads to worse results for about half of the studied configurations, while Omin and Adjustmin are still bad choices. Cardinality is slightly higher for O compared to all other sets. In Fig. S7 we further differentiate the results by the cardinality of the O-set and find that for small cardinalities (up to 4) the O-set has the lowest variance in a majority of cases, but for higher cardinalities either OCmin or again O have the lowest variance (slightly beating AdjustXmin). Hence, either O or OCmin performs best in non-optimal configurations. For very small sample sizes n = 30 (see Fig. S2) that become comparable to the adjustment set cardinality, there tends to be a trade-off and smaller cardinality helps. Then OCmin tends to be better than O for high cardinalities also in optimal settings, but this effect is only present for n = 30 and is already negligible for n = 50 compared to the gain in JO. Appendix D.2 lists RMSE ratios for all combinations of adjustment approaches considered here and shows that, in general, results are very similar for other sample sizes. Thirdly, we investigate non-parametric estimators on linear as well as nonlinear models (implementations described in Section D, results in the figures of Section D.3). The different classes of estimators exhibit quite different behavior. For kNN (Figs. S8,S9) the O-set has the lowest variance in around 50% of the configurations, followed by OCmin and Omin. More specifically (Figs. S15,S16), for small O-set cardinalities up to 2 the O-set performs best, and for higher cardinalities either Omin or OCmin (the latter only in non-optimal configurations) perform best. For nonlinear experiments the results are less clear for O-set cardinalities greater than 2, but Omin is still a good choice. Regarding RMSE ratios, we see that, for the cases where O is not the best, the O-set can have considerably higher variance, while Omin seems to be most robust and may be a better choice if O is too large. MLP (Figs. S10,S11) behaves quite differently. Here in optimal cases neither method outperforms any other for small O-set cardinalities, but for higher cardinalities (Figs. S15,S16) the O-set is best in more than 50% of configurations (slightly less for nonlinear experiments) and the others share the rest (except Adjustmin). For non-optimal cases O, OCmin and AdjustXmin share the ranks. Regarding RMSE, for linear experiments the results for O are almost as good as those for the LinReg estimator in the optimal setting.
However, for non-optimal cases OCmin can have considerably smaller variance and then seems to be a robust option, similar to AdjustXmin. Also for nonlinear experiments OCmin is more robust. The RF estimator (Figs. S12,S13) is again different. Here no method is clearly top-ranked: Omin and Adjustmin are slightly better for linear experiments and O for nonlinear experiments. OCmin and Omin are more robust regarding RMSE ratios (similar to AdjustXmin). Finally, the DML estimator (Fig. S14) was here applied only to linear experiments since its model assumption does not allow for fully nonlinear settings. For optimal settings here O is top-ranked in a majority of cases, but closely followed by OCmin and AdjustXmin. In non-optimal cases for higher O-set cardinalities these two seem like a better choice. Quantitatively, OCmin and AdjustXmin are the most robust choices. Overall, the O-set and its variants seem to outperform or match the Adjust-variants, and whether higher cardinality of the O-set reduces performance depends strongly on the estimator and data.

4 Discussion and Conclusions

The proposed adjustment information formalizes the common intuition to choose adjustment sets that maximally constrain the effect variable and minimally constrain the cause variable. The main theoretical contributions are a necessary and sufficient graphical criterion for the existence of an optimal adjustment set in the hidden variables case and a definition and algorithm to construct it. To emphasize, graphical optimality implies that the O-set is optimal for any distribution consistent with the graph. Note that in cases where graphical optimality does not hold, there will still be distributions for which the O-set has maximal adjustment information. Further, the optimal set is valid if and only if a valid adjustment set exists and has smaller (or equal) asymptotic variance compared to the Adjust-set proposed in Perković et al. [2018] for any graph, whether graphical optimality holds or not. This makes the O-set a natural choice in automated causal inference analyses. Practical contributions comprise Python code to construct adjustment sets and check optimality, as well as extensive numerical experiments that demonstrate that the theoretical results also hold for relatively small sample sizes. The theoretical optimality results are limited to estimators for which the asymptotic variance becomes minimal for adjustment sets with maximal adjustment information (relation (8)). This is fulfilled for least-squares estimators, where even the direct relation (9) holds, but it is unclear whether this also holds for more general classes. The numerical results show that the O-set or minimized variants thereof often yield smaller variance also in non-optimal settings and beyond that estimator class. I speculate that further theoretical properties of maximizing adjustment information can be shown because relation (9) with f(·) = (1/√n) e^(HY|XZS − HX|ZS) seems related to the lower bound of the estimation variance counterpart to Fano’s inequality (Theorem 8.6.6 in Cover and Thomas [2006]). For estimators sensitive to high-dimensionality one may consider data-driven criteria or penalties to minimize the O-set stepwise. However, estimating, for example, the adjustment information from a potentially small sample size carries considerable errors itself. Another current limitation is that relation (9) only holds for univariate singleton cause variables X.
The information-theoretical results, however, also hold for multivariate X and preliminary results indicate that, while relation (9) does not hold for multivariate X, the less restrictive relation (8) still seems to hold. The proposed information-theoretic approach can guide further research, for example, to theoretically study relations (8),(9) for other estimators and to address other types of graphs as they emerge from the output of causal discovery algorithms and the setting where the graph is unknown [Witte et al., 2020, Maathuis et al., 2009, 2010]. At present, the approach only applies to ADMGs and Maximal Ancestral Graphs (MAG) [Richardson and Spirtes, 2002] without selection variables. Last, it remains an open problem to identify optimal adjustment estimands for the hidden variables case based on other criteria such as the front-door formula and Pearl’s general do-calculus [Pearl, 2009]. The results may carry considerable practical impact since, surprisingly, among the randomly created configurations more than 90% fulfill the optimality conditions, indicating that also in many real-world scenarios graphical optimality may hold. Code is available in the python package https://github.com/jakobrunge/tigramite.

Acknowledgments and Disclosure of Funding

I thank Andreas Gerhardus for very helpful comments. This work was funded by the ERC Starting Grant CausalEarth (grant no. 948112).
1. What is the focus of the paper regarding adjustment sets and estimation variance? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and advancement over prior works? 3. Are there any concerns or limitations regarding the assumptions and applications of the proposed method? 4. How does the reviewer assess the clarity, quality, and impact of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
The paper defines the optimality of a (valid) adjustment set in terms of a notion called 'adjustment information.' Then, it focuses on estimators for which maximizing adjustment information equates to minimizing asymptotic estimation variance. The authors provide a sufficient and necessary graphical criterion for the existence of an optimal adjustment set under this notion. They also give an explicit and efficient construction of an adjustment set called the O set, which is proven to be optimal whenever graphical optimality holds (the constraints in the graph are sufficient to determine an optimal adjustment set).
Review
The results presented by the authors are novel and advance previous work by considering the hidden variable setting and characterizing the existence of optimal adjustment sets for estimators satisfying certain information-theoretic properties. It appears relevant work has been appropriately cited and discussed. As far as I can tell, the results and claims presented are supported both theoretically and experimentally. The authors are very clear about the assumptions needed for their results to hold and also consider their limitations. Experiments consider both scenarios where the assumptions hold and where they don't, making sense of the observed results. I find the paper well written. Relevant previous work, definitions, and results are introduced clearly, making good use of several examples. The experimental section and further discussion help understand the benefits of using the proposed result and its limitations. I was wondering why the experimental section only considers the case where S = ∅. Overall, the paper makes important theoretical and practical contributions to the problem of identifying causal effects, using covariate adjustment, in settings where the graphical structure is not fully known. In particular, the results advance the state of the art for choosing and constructing optimal adjustment sets for certain estimators.
NIPS
Title Necessary and sufficient graphical conditions for optimal adjustment sets in causal graphical models with hidden variables
Abstract The problem of selecting optimal backdoor adjustment sets to estimate causal effects in graphical models with hidden and conditioned variables is addressed. Previous work has defined optimality as achieving the smallest asymptotic estimation variance and derived an optimal set for the case without hidden variables. For the case with hidden variables there can be settings where no optimal set exists and currently only a sufficient graphical optimality criterion of limited applicability has been derived. In the present work optimality is characterized as maximizing a certain adjustment information which allows one to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set and a definition and algorithm to construct it. Further, the optimal set is valid if and only if a valid adjustment set exists and has higher (or equal) adjustment information than the Adjust-set proposed in Perković et al. [Journal of Machine Learning Research, 18: 1–62, 2018] for any graph. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation. Numerical experiments indicate that the asymptotic results also hold for relatively small sample sizes and that the optimal adjustment set or minimized variants thereof often yield better variance also beyond that estimator class. Surprisingly, among the randomly created setups more than 90% fulfill the optimality conditions indicating that also in many real-world scenarios graphical optimality may hold.
1 Introduction
A standard problem setting in causal inference is to estimate the causal effect between two variables given a causal graphical model that specifies qualitative causal relations among observed variables [Pearl, 2009], including the possible presence of hidden confounding variables. The graphical model then allows one to employ graphical criteria to identify valid adjustment sets, the most well-known being the backdoor criterion [Pearl, 1993] and the generalized adjustment criterion [Shpitser et al., 2010, Perković et al., 2015, 2018], providing a complete identification of all valid adjustment sets. Estimators of causal effects based on such a valid adjustment set as a covariate are then unbiased, but for different adjustment sets the estimation variance may strongly vary. An optimal adjustment set may be characterized as one that has minimal asymptotic estimation variance. In current work, following Kuroki and Cai [2004] and Kuroki and Miyakawa [2003], Henckel et al. [2019] (abbreviated HPM19 in the following) showed that graphical optimality always holds for linear models in the causally sufficient case where all relevant variables are observed. In Witte et al. [2020] an alternative characterization of the optimal adjustment set is discussed and the approach was integrated into the IDA algorithm [Maathuis et al., 2009, 2010] that does not require the causal graph to be known. Rotnitzky and Smucler [2019] extended the results in HPM19 to asymptotically linear non-parametric graphical models.
HPM19’s optimal adjustment set holds for the causally sufficient case (no hidden variables) and the authors gave an example with hidden variables where optimality does not hold in general, i.e., the optimal adjustment set depends on the coefficients and noise terms (more generally, the distribution), rather than just the graph. Most recently, Smucler et al. [2021] (SSR20) partially extended these results to the non-parametric hidden variables case together with dynamic treatment regimes, i.e., conditional causal effects. SSR20 provide a sufficient criterion for an optimal set to exist and a definition based on a certain undirected graph-construction using a result by van der Zander et al. [2019]. However, their sufficient criterion is very restrictive and a current major open problem is a necessary and sufficient condition for an optimal adjustment set to exist in the hidden variable case and a corresponding definition of an optimal set. My main theoretical contribution is a solution to this problem. Optimality for conditional causal effects in the hidden variables case is fully characterized by an information-theoretic approach involving a certain difference of conditional mutual informations among the observed variables termed the adjustment information. Maximizing the adjustment information formalizes the common intuition to choose adjustment sets that maximally constrain the effect variable and minimally constrain the cause variable. This allows to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set. The derived optimal adjustment set also has the property of minimum cardinality, i.e., no node can be removed without sacrificing optimality. Further, the optimal set is valid if and only if a valid adjustment set exists and has higher (or equal) adjustment information than the Adjust-set proposed in Perković et al. [2018] for any graph, whether graphical optimality holds or not. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation that, at present, I could only verify theoretically for the linear case. As practical contributions the paper provides extensive numerical experiments that corroborate the theoretical results and show that the optimal adjustment set or minimized variants thereof often yield better variance also beyond the theoretically analyzed estimator class. Code is available in the python package https://github.com/jakobrunge/tigramite. More detailed preliminaries, proofs, algorithms, and further numerical experiments are given in the Supplementary Material. 1.1 Preliminaries and problem setting We consider causal effects in causal graphical models over a set of variables V with a joint distribution P = P(V) that is consistent with an acyclic directed mixed graph (ADMG) G = (V, E). Two nodes can have possibly more than one edge which can be directed (←) or bi-directed (↔). See Fig. 1A for an example. Kinships are defined as usual: parents pa(X) for “•→X”, spouses sp(X) for “X↔•”, children ch(X) for “X→•”. These sets all exclude X . Correspondingly descendants des(X) and ancestors an(X) are defined, which, on the other hand, both include X . The mediator nodes on causal paths from X to Y are denoted M = M(X,Y ) and exclude X and Y . For detailed preliminaries, including the definition of open and blocked paths, see Supplementary Section A. 
In this work we only consider a univariate intervention variable X and effect variable Y . We simplify set notation and denote unions of variables as {W} ∪M ∪A = WMA. A (possibly empty) set of adjustment variables Z for the total causal effect of X on Y in an ADMG is called valid relative to (X,Y ) if the interventional distribution for setting do(X = x) [Pearl, 2009] factorizes as p(Y |do(X = x)) = ∫ p(Y |x, z)p(z)dz for non-empty Z and as p(Y |do(X = x)) = p(Y |x) for empty Z. Valid adjustment sets, the set of which is here denoted Z , can be read off from a given causal graph using the generalized adjustment criterion [Perković et al., 2015, 2018] which generalizes Pearl’s back-door criterion [Pearl, 2009]. To this end define forb(X,Y ) = X ∪ des(YM) (1) (henceforth just denoted as forb). A set Z is valid if both of the following conditions hold: (i) Z ∩ forb = ∅, and (ii) all non-causal paths from X to Y are blocked by Z. An adjustment set is called minimal if no strict subset of Z is still valid. The validity conditions can in principle be manually checked directly from the graph, but, more conveniently, Perković et al. [2018] define an adjustment set called ‘Adjust’ that is valid if and only if a valid adjustment set exist. In our setting including conditioning variables S we call this set the valid ancestors defined as vancs(X,Y,S) = an(XY S) \ forb (2) and refer to this set as vancs or Adjust-set. Our quantity of interest is the average total causal effect of an intervention to set X to x vs. x′ on the effect variable Y given a set of selected (conditioned) variables S = s ∆yxx′|s = E(Y |do(x), s)− E(Y |do(x′), s) . (3) We denote an estimator given a valid adjustment set Z as ∆̂yxx′|s.z. In the linear case ∆yxx′|s for x = x′ + 1 corresponds to the regression coefficient βY X·ZS in the regression of Y on X , Z, and S. The ordinary least squares (OLS) estimator β̂Y X·ZS is a consistent estimator of βY X·ZS. Figure 1A illustrates the problem setting: We are interested in the total causal effect of (here univariate) X on Y (conditioned on S), which is here due to a direct link and an indirect causal path through a mediator M . There are six valid backdoor adjustment sets Z = {Z1, Z2, Z1Z2, Z2Z3, Z1Z3, Z1Z2Z3}. Z4 ∈ forb cannot be included in any set because it is a descendant of YM. Here vancs = Z1Z2S. All valid adjustment sets remove the bias due to confounding by their definition. The question is which of these valid adjustment sets is statistically optimal in that it minimizes the asymptotic estimation variance? More formally, the task is, given a graph G and (X,Y,S), to chose a valid optimal set Zoptimal ∈ Z such that the causal effect estimator’s asymptotic variance Var(∆̂yxx′|s.z) = E[(∆yxx′|s − ∆̂yxx′|s.z)2] is minimal: Zoptimal ∈ argminZ∈ZVar(∆̂yxx′|s.z) . (4) My proposed approach to optimal adjustment sets is based on information theory [Cover and Thomas, 2006]. The main quantity of interest there is the conditional mutual information (CMI) defined as a difference IX;Y |Z = HY |Z − HY |ZX of two (conditional) Shannon entropies HY |X = − ∫ x,y p(x, y) ln p(y|x)dxdy. Its main properties are non-negativity, IX;Y |Z = 0 if and only if X ⊥⊥ Y |Z, and the chain rule IXW ;Y |Z = IX;Y |Z + IW ;Y |ZX . All random variables in a CMI can be multivariate. Throughout the present paper we will assume the following. Assumptions 1 (General setting and assumptions). 
We assume a causal graphical model over a set of variables V with a joint distribution P = P(V) that is consistent with an ADMG G = (V, E). We assume a non-zero causal effect from X on Y , potentially through a set of mediators M, and given selected conditioned variables S, where S ∩ forb = ∅. We assume that at least one valid adjustment set (given S) exists and, hence, the causal effect is identifiable (except when stated otherwise). Finally, we assume the usual Causal Markov Condition (implicit in semi-Markovian models) and Faithfulness. 2 Optimal adjustment sets 2.1 Information-theoretic characterization Figure 1B illustrates two causal effect estimates for a linear Gaussian model consistent with the graph in Fig. 1A. With Z = Z1Z2 (blue) the error is much larger than with O = Z2Z3 (orange) for two reasons: Z constrains the residual variance V ar(Y |ZS) of the effect variable Y less than O and, on the other hand, Z constrains the residual variance V ar(X|ZS) of the cause variable X more than O. Smaller estimator variance also holds for O compared to any other valid set in Z here. We information-theoretically formalize the resulting intuition to choose an adjustment set Z that maximally constrains the effect variable Y and minimally constrains the cause variable X . In terms of CMIs and given selected fixed conditions S the quantity to maximize can be stated as follows. Definition 1 (Adjustment information). Consider a causal effect of X on Y for an adjustment set Z given a condition set S. The (conditional) adjustment (set) information, abbreviated JZ, is defined as JXY |S.Z ≡ IZ;Y |XS − IX;Z|S (5) = HY |XS −HX|S︸ ︷︷ ︸ not related to Z − (HY |XZS −HX|ZS)︸ ︷︷ ︸ adjustment entropy (6) JZ is not necessarily positive if the dependence between X and Z (given S) is larger than that between Z and Y given XS. Equation (6) follows from the CMI definition. Fig. 1C illustrates the two CMIs in Eq. (5) in a Venn diagram. Before discussing the range of estimators for which maximizing the adjustment information JZ leads to a minimal asymptotic estimation variance in Sect. 2.2, we characterize graphical optimality in an information-theoretic framework. Our goal is to provide graphical criteria for optimal adjustment sets, i.e., criteria that depend only on the structure of the graph G and not on the distribution. Definition 2 (Information-theoretical graphical optimality). Given Assumptions 1 we say that (information-theoretical) graphical optimality holds if there is a Z ∈ Z such that either there is no other Z′ ̸= Z ∈ Z or for all other Z′ ̸= Z ∈ Z and all distributions P consistent with G we have JZ ≥ JZ′ . My main result builds on the following lemma which relates graphical optimality to informationtheoretic inequalities in a necessary and sufficient comparison condition for an optimal set to exist. Lemma 1 (Necessary and sufficient comparison criterion for existence of an optimal set). Given Assumptions 1, if and only if there is a Z ∈ Z such that either there is no other Z′ ̸= Z ∈ Z or for all other Z′ ̸= Z ∈ Z and all distributions P consistent with G it holds that IZ\Z′;Y |Z′XS︸ ︷︷ ︸ (i) ≥ IZ′\Z;Y |ZXS︸ ︷︷ ︸ (iii) and IX;Z′\Z|ZS︸ ︷︷ ︸ (ii) ≥ IX;Z\Z′|Z′S︸ ︷︷ ︸ (iv) , (7) then graphical optimality holds and Z is optimal implying JZ ≥ JZ′ . In SSR20 and HPM19 the corresponding conditional independence statements to the terms (iii) and (iv) in the inequalities (7) are used as a sufficient pairwise comparison criterion. 
However, Lemma 1 shows that for graphical optimality it is not necessary that terms (iii) and (iv) vanish, they just need to fulfill the inequalities (7) for a necessary and sufficient criterion. In principle, Lemma 1 can be used to cross-compare all pairs of sets, but firstly, it is difficult to explicitly evaluate (7) for all distributions P consistent with G and, secondly, iterating through all valid adjustment sets is computationally prohibitive even for small graph sizes. As an example, consider a confounding path consisting of 5 nodes. Then this path can be blocked by 25 − 1 different subsets. In the main result of this work (Thm. 3) a necessary and sufficient criterion based purely on graphical properties is given. 2.2 Applicable estimator class The above characterization only relates optimality of adjustment sets to the adjustment information JZ defined in Eq. (5), but not to any particular estimator. Now the question is for which class of causal effect estimators ∆̂yxx′|s.z the intuition of maximizing the adjustment information JZ leads to a minimal asymptotic estimation variance. In its most general form this class is characterized as fulfilling Zoptimal ∈ argmaxZ∈ZJZ ⇔ Var(∆̂yxx′|s.zoptimal) = min Z∈Z Var(∆̂yxx′|s.z) , (8) where we assume that ∆̂yxx′|s.z is consistent due to a valid adjustment set and correct functional model specification. One can also further restrict the class to estimators whose (square-root of the) asymptotic variance can be expressed as√ Var(∆̂yxx′|s.z) = f(HY |XZS −HX|ZS) , (9) for a real-valued, strictly monotonously increasing function of the adjustment entropy. Minimizing the adjustment entropy is by Eq. (6) equivalent to maximizing the adjustment information. The following assumption and lemma then relates JZ ≥ JZ′ to the corresponding asymptotic variances of a given estimator. Assumptions 2 (Estimator class assumption). The model class of the estimator for the causal effect (3) is correctly specified and its asymptotic variance can be expressed as in relation (9). Lemma 2 (Asymptotic variance and adjustment information). Given Assumptions 1 and an estimator fulfilling Assumptions 2, if and only if for two different adjustment sets Z,Z′ ∈ Z we have JZ ≥ JZ′ , then the adjustment set Z has a smaller or equal asymptotic variance compared to Z′. Proof. By Equations (6) and (9) JZ ≥ JZ′ (for fixed X,Y,S) is directly related to a smaller or equal asymptotic variance for Z compared to Z′, and vice versa. □ The paper’s theoretical results currently hold for estimators fulfilling relation (9), but at least the main result on graphical optimality in Thm. 3 can also be relaxed to estimators fulfilling the less restrictive relation (8). In this work, we leave the question of which general classes of estimators fulfill either relation (8) or the more restricted relation (9) to further research and only show that it holds for the OLS estimator β̂Y X·ZS for Gaussian distributions. For Gaussians the entropies in (9) are given by H(Y |XZS) = 12+ 1 2 ln(2πσ 2 Y |XZS) and H(X|ZS) = 1 2 + 1 2 ln(2πσ 2 X|ZS) where σ(·|·) denotes the square-root of the conditional variance. Then√ Var(∆̂yxx′|s.z) = 1√ n eHY |XZS−HX|ZS = 1√ n σY |XZS σX|ZS . (10) This relation is also the basis of the results for the causally sufficient case in Henckel et al. [2019] where it is shown that it holds more generally for causal linear models that do not require the noise terms to be Gaussian. 
2.3 Definition of O-set The optimal adjustment set for the causally sufficient case is simply P = pa(YM) \ forb and was derived in HPM19 and Rotnitzky and Smucler [2019]. In Section B.2 the derivation is discussed from an information-theoretic perspective. In the case with hidden variables we need to account for bidirected edges “↔” which considerably complicate the situation. Then the parents of YM are not sufficient to block all non-causal paths. Further, just like conditioning on parents of YM leads to optimality in the sufficient case since parents constrain information in YM, in the hidden variables case also conditioning on spouses of YM constrains information about YM. Example A. A simple graph (ADMG) to illustrate this is X→Y↔Z1 (shown with an additional S in Fig. 2A below, or Fig. 4 in SSR20). Here Z′ = ∅ = vancs is a valid set, but it is not optimal. Consider O = Z1, then term (iii) = 0 in the inequalities (7) since Z′ \O = ∅. Even though not needed to block non-causal paths (there is none), Z1 still constrains information in Y while being independent of X (hence, term (iv) = 0) which leads to JO > J∅ according to the inequalities (7). Not only direct spouses can constrain information in Y as Fig. 2B below illustrates. Since for W ∈ YM the motif “W↔ C1 ←∗C2” (“∗” denotes either edge mark) is open, it holds that I(C1C2;W ) = I(C1;Y )+I(C2;W |C1) ≥ I(C1;Y ) and we can even further increase the first term in the adjustment information by conditioning also on subsequent spouses. This chain of colliders only ends if we reach a tail or there is no further adjacency. However, we have to make sure that conditioning on colliders does not open non-causal paths. This leads to the notion of a valid collider path (related to the notion of a district in Evans and Richardson [2014]). Definition 3 (Valid collider paths). Given a graph G, a collider path of W for k ≥ 1 is defined by a sequence of edges W↔C1↔· · ·↔Ck. We denote the set of path nodes (excluding W ) along a path indexed by i as πiW . Using the set of valid ancestors vancs = an(XY S) \ forb for the causal effect of X on Y given S we call a collider path node set πiW for W ∈ YM valid wrt. to (X,Y,S) if for each path node C ∈ πiW both of the following conditions are fulfilled: (1) C /∈ forb, and (2a) C ∈ vancs or (2b) C ⊥⊥ X | vancs . (11) Condition (1) is required for any valid adjustment set. If jointly (2a) and (2b) are not fulfilled, i.e. C /∈ vancs and C ⊥⊥X | vancs, then the collider path stops before C. Our candidate optimal adjustment set is now constructed based on the parents of YM, valid collider path nodes of YM, and their parents to ‘close’ these collider paths. Definition 4 (O-set). Given Assumptions 1 and the definition of valid colliders in Def. 3, define the set O(X,Y,S) = P ∪C ∪PC where P = pa(YM) \ forb, C = ⋓W∈YM ⋓i {πiW : πiW is valid wrt. to (X,Y,S)}, PC = pa(C) . In the following we will abbreviate O = O(X,Y,S). Algorithm C.1 states efficient pseudo-code to construct the O-set and detect whether a valid adjustment set exists. Since none of the conditions of Def. 3 for adding collider nodes depends on previously added nodes, the algorithm is orderindependent. The statement occurring in lines 11 and 21 (“No valid adjustment set exists.”) is proven in Thm. 1. If the graph is a DAG, then lines 4-22 can be omitted. The algorithm is of low complexity and the most time-consuming part is checking for a path in line 12, Def. 
3(2b) C ⊥⊥ X | vancs, which can be implemented with (bi-directional) breadth-first search as proposed in van der Zander et al. [2019]. Numerical experiments in Section 3 will show that further interesting adjustment sets are the minimized O-set Omin, where O is minimized such that no subset can be removed without making Omin invalid, and the collider-minimized O-set OCmin where only CPC \ P ⊆ O is minimized such that no collider-subset can be removed without making OCmin invalid. Both adjustment sets can be constructed with Alg. C.2 similar to the efficient algorithms in van der Zander et al. [2019]. Also the minimized sets are order-independent since the nodes are removed only after the for-loops. Based on the idea in OCmin, in the numerical experiments we also consider AdjustXmin, where only Adjust \ pa(YM) is minimized and pa(YM) is always included. Finally, we also evaluate Adjustmin where Adjust is fully minimized. Before discussing the optimality of the O-set, we need to assure that it is a valid adjustment set. Similar to the proof given in Perković et al. [2018] for the validity of the vancs-set (for the case without S), we can state that the O-set is valid if and only if a valid adjustment set exists. Theorem 1 (Validity of O-set). Given Assumptions 1 but without a priori assuming that a valid adjustment set exists (apart from the requirement S ∩ forb = ∅). If and only if a valid backdoor adjustment set exists, then O is a valid adjustment set. 2.4 Graphical optimality We now move to the question of optimality. It is known that there are graphs where no graphical criterion exists to determine optimality. Examples, discussed later, are the graphs in Figs. 2E,F. Before stating necessary and sufficient conditions for graphical optimality, I mention that next to the O-set defined above and the Adjust set vancs [Perković et al., 2018], I am not aware of any other systematically constructed set that will yield a valid adjustment set for the case with hidden variables. van der Zander et al. [2019] provide algorithms to list all valid adjustment sets, but the question is which of these a user should choose. As mentioned above, Lemma 1 can be used to cross-compare all pairs of sets, but this is not really feasible. Hence, for automated causal effect estimation, rather than the question of whether graphical optimality holds, it is crucial to have a set with better properties than other systematically constructable sets. The following theorem states that the adjustment informations follow JO ≥ Jvancs for any graph (whether graphical optimality holds or not). Theorem 2 (O-set vs. Adjust-set ). Given Assumptions 1 with O defined in Def. 4 and the Adjust-set defined in Eq. (2), it holds that JO ≥ Jvancs for any graph G. We have JO = Jvancs only if (1) O = vancs, or (2) O ⊆ vancs and X ⊥⊥ vancs \O |OS. In the following the O-set is illustrated and conditions for graphical optimality are explored. SSR20 provide a sufficient condition for optimality, which states that either all nodes are observed (no bidirected edges exist) or for all observed nodes V ⊂ vancs. This is a very strict assumption and not fulfilled for any of the examples (except for Example G) discussed in the following. Example B. Figure 2B depicts a larger example to illustrate the O-set O = PCPC with P = Z1Z2Z3Z4 (blue boxes) and CPC \ P = Z5Z6Z7Z8 (green boxes). We also have a conditioned variable S. Among P, only Z1Z2 are needed to block non-causal paths to X , Z3Z4 are only there to constrain information in Y . 
Here the same holds for the whole set CPC \ P which was constructed from the paths Y↔Z5↔Z6←Z7 and Y↔Z5↔Z8 which does not include Z9 since it is a descendant of YM. Including an independent variable like Z12 in O would not decrease the adjustment information JO, but then O would not be of minimum cardinality anymore (proven in Cor. B.1). Here, again, the condition of SSR20 does not hold (e.g., Z5 is not an ancestor of XY S). O is optimal here which can be seen as follows: For term (iii) in the inequalities (7) to even be non-zero, we would need a valid Z such that Z \O has a path to Y given OSX . But these are all blocked. Note that while Z10 or Z9 ∈ Z would open a path to Y , both of these are descendants of M or Y and, hence, cannot be in a valid Z. For term (iv) to even be non-zero O \ Z would need to have a path to X given ZS. But since any valid Z has to contain Z1 and Z2 (or Z11), the only nodes in O with a path to X are parents of YM and paths from these parents to X all need to be blocked for a valid Z. Hence, O is optimal here. Example C. In Fig. 2C a case is shown where O = Z1Z5. Z2 is not part of O because none of the conditions in Def. 3(2) is fulfilled: Z2 /∈ vancs = Z1Z4Z5 and Z2 ⊥⊥X | vancs. Hence, we call Z2 an N-node. But Z2 cannot be part of any valid Z because it has a collider path to X through Z1 which is always open because it is part of vancs. Hence, term (iii) is always zero. Term (iv) is zero because O \ Z is empty for any valid Z here. Here even JO > JZ since O is minimal and term (ii) IX;Z\O|O > 0 for any Z ̸= O (generally proven in Corollary B.1). Example D. The example in Fig. 2D depicts a case with O = Z2 where Z1 is an N-node. Next to Z = ∅ another valid set is Z = Z1. Then term (iii) is non-zero and in the same way term (iv) is non-zero. The sufficient pairwise comparison criterion in SSR20 and HPM19 is, hence, not applicable. However, it holds that always term (iii) ≤ (i) because the dependence between Z1 and Y given X is always smaller than the dependence between Z2 and Y given X and correspondingly term (iv) ≤ (ii). Hence, O is optimal here. If a link Y→Z1 exists, then the only other valid set is Z = ∅ and both terms are strictly zero. Example E. The example in Fig. 2E (Fig. 3 in SSR20 and also discussed in HPM19) is not graphically optimal. Here O = Z1Z2. Other valid adjustment sets are Z1 or the empty set. From using Z1 ⊥⊥ Y |X and X ⊥⊥ Z2|Z1 in the inequalities (7) one can derive in information-theoretic terms that both Z1Z2 and ∅ are better than vancs = Z1, but since JZ1Z2 = J∅ + IZ2;Y |XZ1 − IX;Z1 , a superior adjustment set depends on how strong the link Z1→X vs. Z2↔Y is. The graph stays non-optimal also with a link Z1↔Z2. Example F. The example in Fig. 2F is also not graphically optimal. Here O = ∅ and Z2 is an N-node with a non-collider path to X . Other valid adjustment sets are Z1 and Z1Z2. Higher adjustment information here depends on the distribution. Also the same graph with the link Z1↔X is non-optimal. If, however, there is another link Z1→Y , then O = ∅ is optimal (then Z1 is a mediator). Example G. The example in Fig. 2G is only a slight modification of Example E with an added selected condition S. Then Z1, Z2 ∈ vancs. We still get O = Z1Z2 and this is now optimal since Z2 is always open and any valid set has to contain Z1. 
The main result of this work is a set of necessary and sufficient conditions for the existence of graphical optimality and the proof of optimality of the O-set which is based on the intuition gained in the preceding examples. Theorem 3 (Necessary and sufficient graphical conditions for optimality and optimality of O-set). Given Assumptions 1 and with O = PCPC defined in Def. 4. Denote the set of N-nodes by N = sp(YMC)\ (forbOS). Finally, given an N ∈ N and a collider path N↔· · ·↔C↔· · ·↔W (including N↔W ) for C ∈ C and W ∈ YM (indexed by i) with the collider path nodes denoted by πNi (excluding N and W ), denote by OπNi = O(X,Y,S ′ = SNπNi ) the O-set for the causal effect of X on Y given S′ = S ∪ {N} ∪ πNi . If and only if exactly one valid adjustment set exists, or both of the following conditions are fulfilled, then graphical optimality holds and O is optimal: (I) For all N ∈ N and all its collider paths i to W ∈ YM that are inside C it holds that OπNi does not block all non-causal paths from X to Y , i.e., OπNi is non-valid, and (II) for all E ∈ O \P with an open path to X given SO \ {E} there is a link E↔W or an extended collider path E∗→C↔· · ·↔W inside C for W ∈ YM where all colliders C ∈ vancs. Condition (I) and (II) essentially rule out the two canonical cases in Examples F and E, respectively, on which non-optimality in any graph is based. Applied to the examples, we obtain that in Example A Cond. (I) holds since no N-node exists and Cond. (II) holds since X ⊥⊥ Z1 | S. In Example B also no N-node exists and Cond. (II) holds as X ⊥⊥ E | SO \ {E} for every E ∈ O \ P. In example C Z2 is an N-node, but there is a collider path to X through Z1 which is in vancs such that Cond. I is fulfilled. Further, while X ⊥⊥ Z5 | SO \ {Z5}, there is a link Z5↔Y such that Cond. II holds. In example D Z1 is an N-node, but it has a bidirected link with X and Cond. (II) holds since X ⊥⊥ Z2 | SO \ {Z2}. In Example E optimality does not hold, but Cond. (I) actually holds since there is no N-node. Cond. (II) is not fulfilled for E = Z1, which has a path to X given O and on the extended collider path Z1→Z2↔Y Z2 /∈ vancs. For Z′ = ∅ and a distribution P ′ where the link Z2↔Y almost vanishes we then have JO < JZ′ . Example F has an N-node Z2 and OπNi = O(X,Y,S ′ = Z2) = Z1Z2 is valid implying that Cond. (I) does not hold, while Cond. (II) is actually fulfilled with O = ∅. For Z′ = OπNi = Z1Z2 and a distribution P ′ where the link X→Z1 almost vanishes we then have JO < JZ′ . Example G is optimal since there are no N-nodes and Z2 ∈ vancs. Similar to SSR20, HPM19, and Witte et al. [2020], I also provide results regarding minimality and minimum cardinality for the hidden variables case in the Supplement. 3 Numerical experiments We now investigate graphical optimality empirically to answer three questions: Firstly, whether for a linear estimator under Assumptions 2 the asymptotically optimal variance also translates into better finite-sample variance. Secondly, how the O-set performs in non-optimal settings (according to Thm. 3). Thirdly, how the O-set and variants thereof perform for estimators not captured by the class for which the theoretical results were derived (Assumptions 2). To this end, we compare the performance of O, Adjust, OCmin, Omin, AdjustXmin, and Adjustmin (see definitions in Section 2.3) together with linear least squares estimation (LinReg) on linear models. 
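The minimized variants compared here (Omin, OCmin, Adjustmin, AdjustXmin) all follow the same pattern: starting from a given valid set, remove nodes as long as the set stays valid, optionally protecting a subset such as pa(YM) that should never be removed. A rough sketch of that idea follows; note that the paper's Alg. C.2 performs the removals in an order-independent way, whereas this simple sequential loop may return a different (but still minimal) set, and the `is_valid` oracle below is only a toy stand-in for a proper graphical adjustment-set check.

```python
# Greedy backward deletion to obtain a minimal valid subset of an adjustment set.
# `is_valid` is a toy stand-in; in practice it would be a graphical validity check
# (e.g., along the lines of van der Zander et al. [2019]). Names are illustrative.
def minimize_set(candidate, is_valid, protected=frozenset()):
    """Drop nodes outside `protected` as long as the set stays valid."""
    current = set(candidate)
    changed = True
    while changed:
        changed = False
        for node in sorted(current - set(protected)):
            trial = current - {node}
            if is_valid(trial):
                current, changed = trial, True
    return current

# Toy oracle: pretend a set is valid iff it contains the confounder "Z1".
toy_is_valid = lambda Z: "Z1" in Z

O = {"Z1", "Z3", "Z4", "Z5"}       # e.g. a full O-set
P = {"Z1", "Z3"}                   # its parent part pa(YM) \ forb
print("Omin-style:", minimize_set(O, toy_is_valid))                 # only Z1 survives
print("OCmin-style:", minimize_set(O, toy_is_valid, protected=P))   # keeps P = {Z1, Z3}
```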
In the Supplement we also investigate nonlinear models using nearest neighbor regression (kNN), a multilayer perceptron (MLP), random forest regression, and double machine learning for partially linear regression models (DML) [Chernozhukov et al., 2018]. The experiments are based on a generalized additive model and described in detail in Section D. Among these 12,000 randomly created configurations 93% fulfill the optimality conditions in Thm. 3. The results in Fig. 3 confirm our first hypothesis that for linear experiments with an estimator fulfilling Assumptions 2 and in settings where graphical optimality is fulfilled (Thm. 3) the O-set either has similar RMSE or significantly outperforms all other tested variants. In particular, Omin and Adjustmin are bad choices for this setting. Adjust is intermediate and OCmin and AdjustXmin come closest to O, but may still yield significantly higher variance. Secondly, in non-optimal settings (only 7% of configurations) the O-set still outperforms Adjust (as expected by Thm. 2). Compared to OCmin and AdjustXmin the O-set leads to worse results for about half of the studied configurations, while Omin and Adjustmin are still bad choices. Cardinality is slightly higher for O compared to all other sets. In Fig. S7 we further differentiate the results by the cardinality of the O-set and find that for small cardinalities (up to 4) the O-set has the lowest variance in a majority of cases, but for higher cardinalities either OCmin or again O have the lowest variance (slightly beating AdjustXmin). Hence, either O or OCmin performs best in non-optimal configurations. For very small sample sizes n = 30 (see Fig. S2) that become comparable to the adjustment set cardinality, there tends to be a trade-off and smaller cardinality helps. Then OCmin tends to be better than O for high cardinalities also in optimal settings, but here this effect is only present for n = 30 and for n = 50 already negligible compared to the gain in JO. In Appendix D.2 are RMSE ratios for all combinations of adjustment approaches considered here and it is shown that, in general, results are very similar for other sample sizes. Thirdly, we investigate non-parametric estimators on linear as well as nonlinear models (implementations described in Section D, results in the figures of Section D.3) The different classes of estimators exhibit quite different behavior. For kNN (Figs. S8,S9) the O-set has the lowest variance in around 50% of the configurations followed by OCmin and Omin. More specifically (Figs. S15,S16), for small O-set cardinalities up to 2 the O-set and for higher either Omin or OCmin (the latter only in non-optimal configurations) perform best. For nonlinear experiments the results are less clear for O-set cardinalities greater than 2, but Omin is still a good choice. Regarding RMSE ratios, we see that, for the cases where O is not the best, the O-set can have considerably higher variance, while Omin seems to be most robust and may be a better choice if O is too large. MLP (Figs. S10,S11) behaves much differently. Here in optimal cases neither method outperforms any other for small O-set cardinalities, but for higher cardinalities (Figs. S15,S16) the O-set is best in more than 50% of configurations (slightly less for nonlinear experiments) and the others share the rest (except Adjustmin). For non-optimal cases O, OCmin and AdjustXmin share the ranks. Regarding RMSE, for linear experiments the O-results are almost as optimal as for the LinReg estimator in the optimal setting. 
However, for non-optimal cases OCmin can have considerably smaller variance and seems to be a robust option then, similarly to AdjustXmin. Also for nonlinear experiments OCmin is more robust. The RF estimator (Figs. S12,S13) is again different. Here no method clearly is top-ranked, Omin and Adjustmin are slightly better for linear experiments and O for nonlinear experiments. OCmin and Omin are more robust regarding RMSE ratios (similar to AdjustXmin). Finally, the DML estimator (Fig. S14) was here applied only to linear experiments since its model assumption does not allow for fully nonlinear settings. For optimal settings here O is top-ranked in a majority of cases, but closely followed by OCmin and AdjustXmin. In non-optimal cases for higher O-set cardinalities these two seem like a better choice. Quantitatively, OCmin and AdjustXmin are the most robust choices. Overall, the O-set and its variants seem to outperform or match the Adjust-variants and whether higher cardinality of the O-set reduces performance depends strongly on the estimator and data. 4 Discussion and Conclusions The proposed adjustment information formalizes the common intuition to choose adjustment sets that maximally constrain the effect variable and minimally constrain the cause variable. The main theoretical contributions are a necessary and sufficient graphical criterion for the existence of an optimal adjustment set in the hidden variables case and a definition and algorithm to construct it. To emphasize, graphical optimality implies that the O-set is optimal for any distribution consistent with the graph. Note that in cases where graphical optimality does not hold, there will still be distributions for which the O-set has maximal adjustment information. Further, the optimal set is valid if and only if a valid adjustment set exists and has smaller (or equal) asymptotic variance compared to the Adjust-set proposed in Perković et al. [2018] for any graph, whether graphical optimality holds or not. This makes the O-set a natural choice in automated causal inference analyses. Practical contributions comprise Python code to construct adjustment sets and check optimality, as well as extensive numerical experiments that demonstrate that the theoretical results also hold for relatively small sample sizes. The theoretical optimality results are limited to estimators for which the asymptotic variance becomes minimal for adjustment sets with maximal adjustment information (relation (8)). This is fulfilled for least-squares estimators, where even the direct relation (9) holds, but it is unclear whether this also holds for more general classes. The numerical results show that the O-set or minimized variants thereof often yield smaller variance also in non-optimal settings and beyond that estimator class. I speculate that further theoretical properties of maximizing adjustment information can be shown because relation (9) for f(·) = 1√ n eHY |XZS−HX|ZS seems related to the lower bound of the estimation variance counterpart to Fano’s inequality (Theorem 8.6.6 in Cover and Thomas [2006]). For estimators sensitive to high-dimensionality one may consider data-driven criteria or penalties to step-wisely minimize the O-set. However, estimating, for example, the adjustment information from a potentially small sample size carries considerable errors itself. Another current limitation is that relation (9) only holds for univariate singleton cause variables X . 
The information-theoretical results, however, also hold for multivariate X and preliminary results indicate that, while relation (9) does not hold for multivariate X, the less restrictive relation (8) still seems to hold. The proposed information-theoretic approach can guide further research, for example, to theoretically study relations (8),(9) for other estimators and to address other types of graphs, such as those that emerge from the output of causal discovery algorithms, and the setting where the graph is unknown [Witte et al., 2020, Maathuis et al., 2009, 2010]. At present, the approach only applies to ADMGs and Maximal Ancestral Graphs (MAG) [Richardson and Spirtes, 2002] without selection variables. Last, it remains an open problem to identify optimal adjustment estimands for the hidden variables case based on other criteria such as the front-door formula and Pearl’s general do-calculus [Pearl, 2009]. The results may carry considerable practical impact since, surprisingly, among the randomly created configurations more than 90% fulfill the optimality conditions, indicating that graphical optimality may also hold in many real-world scenarios. Code is available in the Python package https://github.com/jakobrunge/tigramite. Acknowledgments and Disclosure of Funding I thank Andreas Gerhardus for very helpful comments. This work was funded by the ERC Starting Grant CausalEarth (grant no. 948112).
1. What is the main contribution of the paper regarding selecting optimal backdoor adjustment sets in graphical models with hidden and conditioned variables?
2. Do you have any questions about the distinction between SSR20 and the results of this paper?
3. Why did the authors consider conditioned variables S in Equation (3)?
4. What are the specific benefits of invoking information theory according to the authors' intuition in Line 109-110?
5. How does the paper improve readability by providing explicit definitions, assumptions, and overviews at the beginning of each section?
6. Is there any confusion regarding the introduction of MAG, PAG, Markov equivalence, and "visible edge" concepts?
7. What is the research question posed in Line 106, and how could it be rephrased more explicitly?
8. Is there any inconsistency between the stated goal of minimizing asymptotic estimation variance and the usage of mean-square error in Line 122 and Equation (7)?
9. Would moving lines 125-126 after Assumption 2 improve comprehensibility?
10. Could you clarify the meaning of "no other Z'" or "for all other Z'" in Def. 2?
11. How does Definition 4 lack clarity regarding the notation used for defining C?
12. Does Theorem 3 provide special cases where O-set is the optimal backdoor admissible set, and what happens when the conditions in Thm. 3 fail?
Summary Of The Paper Review
Summary Of The Paper The problem of selecting optimal backdoor adjustment sets to estimate causal effects in graphical models with hidden and conditioned variables is addressed. Review This paper solves the problem of selecting the optimal backdoor adjustment sets by leveraging the theory of information theories. This paper is motivated by the interesting observation that the optimal admissible set is the set providing the most information (adjustment information, Def. 1). However, I have to confess that I couldn't capture the main claim of the paper. Does this paper answer that O-set in Def. 4 is the optimal backdoor admissible set whenever there exists a valid admissible set? It seems that the main claim in Thm.3 only guarantees the optimality of O-set for some special cases. Despite its interesting and novel approaches, I also had to agree that it is hard to understand this paper. I will discuss those in detail in Readability paragraphs. Here are some main questions regarding the contribution/novelty of this work: I am not sure about the distinction between SSR20 and the result of this paper. Up to my understanding, Thm 2 of SSR20 provides an (statistically-) optimal adjustment set for the nonparametric setting, and the only condition is the existence of the back-door admissible set. That is, I don't understand which particular points have never been addressed by SSR20. In Eq. (3), why do you consider the conditioned variables S? Why did you prevent the possibility of S being arbitrary? (what's the role of the assumption S \cap des(X) = \emptyset in Line 87?) Line 109-110, why is this a valid intuition? That is, what would be the specific benefits of invoking the information theory? Readability Here are comments discussing which particular parts I felt difficult to understand. I also added my comments and recommendations to help to improve comprehensibility of the paper. In general, it is not desirable to put the essential contents in the Appendix, because readers must go back and forth to parse the main and appendix at the same time. For example, it's hard for me to understand Thm. 3 and Line 78. Please write the assumption more explicitly. I think one of the crucial assumption is the linear SCM setting (where all the variables are in the linear relation), as you wrote in line 125-126. I'd like to add some overview or guideline for every starts of the section, Definitions, Theorems, and Lemma. For example, I couldn't guess what authors will discuss in Section 2.2 even after reading the first paragraph. In Line 62-63, I'd recommend to give a detailed definition of ADMG, because readers might not want to look for the definition of ADMG from other sources. Without the definition, it's hard to parse the graph in Fig. 1; For example, what does the bidirected edge mean? In Line 62-70, what is the definition of An(X), De(X)? Do they include the variable X? Please write it more explicitly. In Line 62-70, I am not sure why you wanted to introduce the notion of MAG, given that this notion has never been used. Since MAG, PAG, Markov equivalence, "visible edge" are difficult concepts, I recommend to drop those in your work unless it's essential. In Section 1.1, {X,Y} are singleton? Otherwise, I recommend to use the bold letters. In Figure 1, what is the meaning of "-" and "+"? I think the question in Line 106 is misleading and unclear what the research question is. Why did you give vancs (in line 105) before giving the research question? 
It distracts readers who want to focus on the research question. Also, the goal is supposed to minimize asymptotic estimation variance, but the word "asymptotic" is absent. Finally, since the line 106 is the main statement giving the research question, I think this deserves more than a line. Please highlight it more explicitly, because, as a reader, this is the very sentence that I want to know from Section 1. In Line 122 and Eq. (7), I think this term is a mean-square error, not asymptotic variance. I'd recommend to move line 125-126 after Assumption 2, given that this gives a concrete picture what you want to study within the assumption 2. Without concrete examples for Assumption 2, it's hard to parse this paper. I don't parse Def. 2 (what is the meaning of "no other Z' or for all other Z'"?). Does it mean that Z is optimal if Jz is maximized among all Z' that is a valid adjustment set? It's hard to parse Def. 4, because there is no definition for the notation that you used for defining C. It seems that Thm 3 provides special cases when O-set is the optimal. It seems that there must be the optimal set even when the conditions in Thm. 3 fails (for sure, O-set is not the optimal one in this case). Can we have a result that we always can have a O-set whenever there are valid back-door admissible set?
NIPS
Title Necessary and sufficient graphical conditions for optimal adjustment sets in causal graphical models with hidden variables Abstract The problem of selecting optimal backdoor adjustment sets to estimate causal effects in graphical models with hidden and conditioned variables is addressed. Previous work has defined optimality as achieving the smallest asymptotic estimation variance and derived an optimal set for the case without hidden variables. For the case with hidden variables there can be settings where no optimal set exists and currently only a sufficient graphical optimality criterion of limited applicability has been derived. In the present work optimality is characterized as maximizing a certain adjustment information which allows to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set and a definition and algorithm to construct it. Further, the optimal set is valid if and only if a valid adjustment set exists and has higher (or equal) adjustment information than the Adjust-set proposed in Perković et al. [Journal of Machine Learning Research, 18: 1–62, 2018] for any graph. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation. Numerical experiments indicate that the asymptotic results also hold for relatively small sample sizes and that the optimal adjustment set or minimized variants thereof often yield better variance also beyond that estimator class. Surprisingly, among the randomly created setups more than 90% fulfill the optimality conditions indicating that also in many real-world scenarios graphical optimality may hold. 1 Introduction A standard problem setting in causal inference is to estimate the causal effect between two variables given a causal graphical model that specifies qualitative causal relations among observed variables [Pearl, 2009], including a possible presence of hidden confounding variables. The graphical model then allows to employ graphical criteria to identify valid adjustment sets, the most well-known being the backdoor criterion [Pearl, 1993] and the generalized adjustment criterion [Shpitser et al., 2010, Perković et al., 2015, 2018], providing a complete identification of all valid adjustment sets. Estimators of causal effects based on such a valid adjustment set as a covariate are then unbiased, but for different adjustment sets the estimation variance may strongly vary. An optimal adjustment set may be characterized as one that has minimal asymptotic estimation variance. In current work, following Kuroki and Cai [2004] and Kuroki and Miyakawa [2003], Henckel et al. [2019] (abbreviated HPM19 in the following) showed that graphical optimality always holds for linear models in the causally sufficient case where all relevant variables are observed. In Witte et al. [2020] an alternative characterization of the optimal adjustment set is discussed and the approach was integrated into the IDA algorithm [Maathuis et al., 2009, 2010] that does not require the causal graph to be known. Rotnitzky and Smucler [2019] extended the results in HPM19 to asymptotically linear non-parametric graphical models.
HPM19’s optimal adjustment set holds for the causally sufficient case (no hidden variables) and the authors gave an example with hidden variables where optimality does not hold in general, i.e., the optimal adjustment set depends on the coefficients and noise terms (more generally, the distribution), rather than just the graph. Most recently, Smucler et al. [2021] (SSR20) partially extended these results to the non-parametric hidden variables case together with dynamic treatment regimes, i.e., conditional causal effects. SSR20 provide a sufficient criterion for an optimal set to exist and a definition based on a certain undirected graph-construction using a result by van der Zander et al. [2019]. However, their sufficient criterion is very restrictive and a current major open problem is a necessary and sufficient condition for an optimal adjustment set to exist in the hidden variable case and a corresponding definition of an optimal set. My main theoretical contribution is a solution to this problem. Optimality for conditional causal effects in the hidden variables case is fully characterized by an information-theoretic approach involving a certain difference of conditional mutual informations among the observed variables termed the adjustment information. Maximizing the adjustment information formalizes the common intuition to choose adjustment sets that maximally constrain the effect variable and minimally constrain the cause variable. This allows to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set. The derived optimal adjustment set also has the property of minimum cardinality, i.e., no node can be removed without sacrificing optimality. Further, the optimal set is valid if and only if a valid adjustment set exists and has higher (or equal) adjustment information than the Adjust-set proposed in Perković et al. [2018] for any graph, whether graphical optimality holds or not. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation that, at present, I could only verify theoretically for the linear case. As practical contributions the paper provides extensive numerical experiments that corroborate the theoretical results and show that the optimal adjustment set or minimized variants thereof often yield better variance also beyond the theoretically analyzed estimator class. Code is available in the python package https://github.com/jakobrunge/tigramite. More detailed preliminaries, proofs, algorithms, and further numerical experiments are given in the Supplementary Material. 1.1 Preliminaries and problem setting We consider causal effects in causal graphical models over a set of variables V with a joint distribution P = P(V) that is consistent with an acyclic directed mixed graph (ADMG) G = (V, E). Two nodes can have possibly more than one edge which can be directed (←) or bi-directed (↔). See Fig. 1A for an example. Kinships are defined as usual: parents pa(X) for “•→X”, spouses sp(X) for “X↔•”, children ch(X) for “X→•”. These sets all exclude X . Correspondingly descendants des(X) and ancestors an(X) are defined, which, on the other hand, both include X . The mediator nodes on causal paths from X to Y are denoted M = M(X,Y ) and exclude X and Y . For detailed preliminaries, including the definition of open and blocked paths, see Supplementary Section A. 
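To make the graph notation concrete, here is a minimal sketch of an ADMG as two edge sets together with the kinship helpers just introduced (pa, ch, sp, an, des, mediators) and the derived sets forb and vancs used below. The class, the helper names, and the small example graph are illustrative assumptions — this is neither the paper's code nor the tigramite API.

```python
# Minimal ADMG sketch: directed edges (a -> b) and bidirected edges (a <-> b).
# Helper names and the example graph are illustrative (not the paper's code).
from itertools import chain

class ADMG:
    def __init__(self, directed, bidirected):
        self.directed = set(directed)                       # set of (parent, child)
        self.bidirected = {frozenset(e) for e in bidirected}
        self.nodes = set(chain.from_iterable(self.directed)) | set(
            chain.from_iterable(self.bidirected))

    def pa(self, nodes):   # parents "• -> X" (excluding the nodes themselves)
        return {a for (a, b) in self.directed if b in nodes} - set(nodes)

    def ch(self, nodes):   # children "X -> •"
        return {b for (a, b) in self.directed if a in nodes} - set(nodes)

    def sp(self, nodes):   # spouses "X <-> •"
        out = set()
        for e in self.bidirected:
            u, v = tuple(e)
            if u in nodes: out.add(v)
            if v in nodes: out.add(u)
        return out - set(nodes)

    def an(self, nodes):   # ancestors, including the nodes themselves
        result, frontier = set(nodes), set(nodes)
        while frontier:
            frontier = self.pa(result) - result
            result |= frontier
        return result

    def de(self, nodes):   # descendants, including the nodes themselves
        result, frontier = set(nodes), set(nodes)
        while frontier:
            frontier = self.ch(result) - result
            result |= frontier
        return result

    def mediators(self, X, Y):   # nodes on directed (causal) paths from X to Y
        return (self.de({X}) & self.an({Y})) - {X, Y}

# Small illustrative graph (not Fig. 1A): X -> M -> Y, X -> Y, Z1 -> X, Z2 -> Y, Z1 <-> Z2
G = ADMG(directed=[("X", "M"), ("M", "Y"), ("X", "Y"), ("Z1", "X"), ("Z2", "Y")],
         bidirected=[("Z1", "Z2")])
YM = {"Y"} | G.mediators("X", "Y")
forb = {"X"} | G.de(YM)                  # forb(X, Y) = X ∪ des(YM), Eq. (1) below
vancs = G.an({"X", "Y"}) - forb          # Adjust-set of Eq. (2) below, with S = ∅ here
print("M:", G.mediators("X", "Y"), " forb:", forb, " vancs:", vancs)
```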
In this work we only consider a univariate intervention variable X and effect variable Y . We simplify set notation and denote unions of variables as {W} ∪M ∪A = WMA. A (possibly empty) set of adjustment variables Z for the total causal effect of X on Y in an ADMG is called valid relative to (X,Y ) if the interventional distribution for setting do(X = x) [Pearl, 2009] factorizes as p(Y |do(X = x)) = ∫ p(Y |x, z)p(z)dz for non-empty Z and as p(Y |do(X = x)) = p(Y |x) for empty Z. Valid adjustment sets, the set of which is here denoted Z, can be read off from a given causal graph using the generalized adjustment criterion [Perković et al., 2015, 2018] which generalizes Pearl’s back-door criterion [Pearl, 2009]. To this end define forb(X,Y ) = X ∪ des(YM) (1) (henceforth just denoted as forb). A set Z is valid if both of the following conditions hold: (i) Z ∩ forb = ∅, and (ii) all non-causal paths from X to Y are blocked by Z. An adjustment set is called minimal if no strict subset of Z is still valid. The validity conditions can in principle be manually checked directly from the graph, but, more conveniently, Perković et al. [2018] define an adjustment set called ‘Adjust’ that is valid if and only if a valid adjustment set exists. In our setting including conditioning variables S we call this set the valid ancestors defined as vancs(X,Y,S) = an(XY S) \ forb (2) and refer to this set as vancs or Adjust-set. Our quantity of interest is the average total causal effect of an intervention to set X to x vs. x′ on the effect variable Y given a set of selected (conditioned) variables S = s: ∆yxx′|s = E(Y |do(x), s) − E(Y |do(x′), s) . (3) We denote an estimator given a valid adjustment set Z as ∆̂yxx′|s.z. In the linear case ∆yxx′|s for x = x′ + 1 corresponds to the regression coefficient βY X·ZS in the regression of Y on X , Z, and S. The ordinary least squares (OLS) estimator β̂Y X·ZS is a consistent estimator of βY X·ZS. Figure 1A illustrates the problem setting: We are interested in the total causal effect of (here univariate) X on Y (conditioned on S), which is here due to a direct link and an indirect causal path through a mediator M . There are six valid backdoor adjustment sets Z = {Z1, Z2, Z1Z2, Z2Z3, Z1Z3, Z1Z2Z3}. Z4 ∈ forb cannot be included in any set because it is a descendant of YM. Here vancs = Z1Z2S. All valid adjustment sets remove the bias due to confounding by their definition. The question is: which of these valid adjustment sets is statistically optimal in that it minimizes the asymptotic estimation variance? More formally, the task is, given a graph G and (X,Y,S), to choose a valid optimal set Zoptimal ∈ Z such that the causal effect estimator’s asymptotic variance Var(∆̂yxx′|s.z) = E[(∆yxx′|s − ∆̂yxx′|s.z)²] is minimal: Zoptimal ∈ argminZ∈Z Var(∆̂yxx′|s.z) . (4) My proposed approach to optimal adjustment sets is based on information theory [Cover and Thomas, 2006]. The main quantity of interest there is the conditional mutual information (CMI) defined as a difference IX;Y |Z = HY |Z − HY |ZX of two (conditional) Shannon entropies, with HY |X = −∫ p(x, y) ln p(y|x) dx dy. Its main properties are non-negativity, IX;Y |Z = 0 if and only if X ⊥⊥ Y |Z, and the chain rule IXW ;Y |Z = IX;Y |Z + IW ;Y |ZX . All random variables in a CMI can be multivariate. Throughout the present paper we will assume the following. Assumptions 1 (General setting and assumptions).
We assume a causal graphical model over a set of variables V with a joint distribution P = P(V) that is consistent with an ADMG G = (V, E). We assume a non-zero causal effect from X on Y , potentially through a set of mediators M, and given selected conditioned variables S, where S ∩ forb = ∅. We assume that at least one valid adjustment set (given S) exists and, hence, the causal effect is identifiable (except when stated otherwise). Finally, we assume the usual Causal Markov Condition (implicit in semi-Markovian models) and Faithfulness. 2 Optimal adjustment sets 2.1 Information-theoretic characterization Figure 1B illustrates two causal effect estimates for a linear Gaussian model consistent with the graph in Fig. 1A. With Z = Z1Z2 (blue) the error is much larger than with O = Z2Z3 (orange) for two reasons: Z constrains the residual variance Var(Y |ZS) of the effect variable Y less than O and, on the other hand, Z constrains the residual variance Var(X|ZS) of the cause variable X more than O. Smaller estimator variance also holds for O compared to any other valid set in Z here. We information-theoretically formalize the resulting intuition to choose an adjustment set Z that maximally constrains the effect variable Y and minimally constrains the cause variable X . In terms of CMIs and given selected fixed conditions S the quantity to maximize can be stated as follows. Definition 1 (Adjustment information). Consider a causal effect of X on Y for an adjustment set Z given a condition set S. The (conditional) adjustment (set) information, abbreviated JZ, is defined as JXY |S.Z ≡ IZ;Y |XS − IX;Z|S (5) = (HY |XS − HX|S) − (HY |XZS − HX|ZS) (6), where the first bracket is not related to Z and the second bracket (HY |XZS − HX|ZS) is called the adjustment entropy. JZ is not necessarily positive if the dependence between X and Z (given S) is larger than that between Z and Y given XS. Equation (6) follows from the CMI definition. Fig. 1C illustrates the two CMIs in Eq. (5) in a Venn diagram. Before discussing the range of estimators for which maximizing the adjustment information JZ leads to a minimal asymptotic estimation variance in Sect. 2.2, we characterize graphical optimality in an information-theoretic framework. Our goal is to provide graphical criteria for optimal adjustment sets, i.e., criteria that depend only on the structure of the graph G and not on the distribution. Definition 2 (Information-theoretical graphical optimality). Given Assumptions 1 we say that (information-theoretical) graphical optimality holds if there is a Z ∈ Z such that either there is no other Z′ ≠ Z ∈ Z or for all other Z′ ≠ Z ∈ Z and all distributions P consistent with G we have JZ ≥ JZ′ . My main result builds on the following lemma which relates graphical optimality to information-theoretic inequalities in a necessary and sufficient comparison condition for an optimal set to exist. Lemma 1 (Necessary and sufficient comparison criterion for existence of an optimal set). Given Assumptions 1, if and only if there is a Z ∈ Z such that either there is no other Z′ ≠ Z ∈ Z or for all other Z′ ≠ Z ∈ Z and all distributions P consistent with G it holds that IZ\Z′;Y |Z′XS [term (i)] ≥ IZ′\Z;Y |ZXS [term (iii)] and IX;Z′\Z|ZS [term (ii)] ≥ IX;Z\Z′|Z′S [term (iv)] , (7) then graphical optimality holds and Z is optimal implying JZ ≥ JZ′ . In SSR20 and HPM19 the corresponding conditional independence statements to the terms (iii) and (iv) in the inequalities (7) are used as a sufficient pairwise comparison criterion.
However, Lemma 1 shows that for graphical optimality it is not necessary that terms (iii) and (iv) vanish, they just need to fulfill the inequalities (7) for a necessary and sufficient criterion. In principle, Lemma 1 can be used to cross-compare all pairs of sets, but firstly, it is difficult to explicitly evaluate (7) for all distributions P consistent with G and, secondly, iterating through all valid adjustment sets is computationally prohibitive even for small graph sizes. As an example, consider a confounding path consisting of 5 nodes. Then this path can be blocked by 2^5 − 1 = 31 different subsets. In the main result of this work (Thm. 3) a necessary and sufficient criterion based purely on graphical properties is given. 2.2 Applicable estimator class The above characterization only relates optimality of adjustment sets to the adjustment information JZ defined in Eq. (5), but not to any particular estimator. Now the question is for which class of causal effect estimators ∆̂yxx′|s.z the intuition of maximizing the adjustment information JZ leads to a minimal asymptotic estimation variance. In its most general form this class is characterized as fulfilling Zoptimal ∈ argmaxZ∈Z JZ ⇔ Var(∆̂yxx′|s.zoptimal) = minZ∈Z Var(∆̂yxx′|s.z) , (8) where we assume that ∆̂yxx′|s.z is consistent due to a valid adjustment set and correct functional model specification. One can also further restrict the class to estimators whose (square-root of the) asymptotic variance can be expressed as √Var(∆̂yxx′|s.z) = f(HY |XZS − HX|ZS) , (9) for a real-valued, strictly monotonically increasing function f of the adjustment entropy. Minimizing the adjustment entropy is by Eq. (6) equivalent to maximizing the adjustment information. The following assumption and lemma then relate JZ ≥ JZ′ to the corresponding asymptotic variances of a given estimator. Assumptions 2 (Estimator class assumption). The model class of the estimator for the causal effect (3) is correctly specified and its asymptotic variance can be expressed as in relation (9). Lemma 2 (Asymptotic variance and adjustment information). Given Assumptions 1 and an estimator fulfilling Assumptions 2, if and only if for two different adjustment sets Z,Z′ ∈ Z we have JZ ≥ JZ′ , then the adjustment set Z has a smaller or equal asymptotic variance compared to Z′. Proof. By Equations (6) and (9) JZ ≥ JZ′ (for fixed X,Y,S) is directly related to a smaller or equal asymptotic variance for Z compared to Z′, and vice versa. □ The paper’s theoretical results currently hold for estimators fulfilling relation (9), but at least the main result on graphical optimality in Thm. 3 can also be relaxed to estimators fulfilling the less restrictive relation (8). In this work, we leave the question of which general classes of estimators fulfill either relation (8) or the more restricted relation (9) to further research and only show that it holds for the OLS estimator β̂Y X·ZS for Gaussian distributions. For Gaussians the entropies in (9) are given by H(Y |XZS) = 1/2 + (1/2) ln(2πσ²Y |XZS) and H(X|ZS) = 1/2 + (1/2) ln(2πσ²X|ZS), where σ(·|·) denotes the square-root of the conditional variance. Then √Var(∆̂yxx′|s.z) = (1/√n) e^(HY |XZS − HX|ZS) = (1/√n) σY |XZS/σX|ZS . (10) This relation is also the basis of the results for the causally sufficient case in Henckel et al. [2019] where it is shown that it holds more generally for causal linear models that do not require the noise terms to be Gaussian.
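Relation (10) is easy to check numerically. The sketch below simulates a simple linear Gaussian model (an arbitrary illustration with a confounder Z1 and a precision variable Z2, not the model behind Fig. 1) and compares the empirical standard deviation of the OLS coefficient of X across repetitions with the right-hand side of Eq. (10) estimated from one large sample. Adding Z2 lowers σY |XZ while leaving σX|Z unchanged, i.e., it increases JZ and shrinks the estimator's standard deviation.

```python
# Numerical check of relation (10): sd(beta_hat) ≈ (1/sqrt(n)) * sigma_{Y|XZ} / sigma_{X|Z}.
# Illustrative linear Gaussian model with empty S; coefficients are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 3000

def simulate(m):
    Z1 = rng.normal(size=m)                    # confounder: Z1 -> X, Z1 -> Y
    Z2 = rng.normal(size=m)                    # precision variable: Z2 -> Y only
    X = 0.8 * Z1 + rng.normal(size=m)
    Y = 1.0 * X + 0.5 * Z1 + 0.9 * Z2 + rng.normal(size=m)
    return {"X": X, "Y": Y, "Z1": Z1, "Z2": Z2}

def ols_x_coef(d, adj):
    """Coefficient of X when regressing Y on (X, adj, intercept)."""
    A = np.column_stack([d["X"]] + [d[z] for z in adj] + [np.ones(len(d["X"]))])
    return np.linalg.lstsq(A, d["Y"], rcond=None)[0][0]

def resid_sd(d, target, predictors):
    """Residual standard deviation of `target` given `predictors` (plus intercept)."""
    if not predictors:
        return float(np.std(d[target]))
    A = np.column_stack([d[p] for p in predictors] + [np.ones(len(d[target]))])
    coef = np.linalg.lstsq(A, d[target], rcond=None)[0]
    return float(np.std(d[target] - A @ coef))

big = simulate(500_000)                        # plug-in estimate of the population quantities
for adj in [("Z1",), ("Z1", "Z2")]:
    emp_sd = float(np.std([ols_x_coef(simulate(n), adj) for _ in range(reps)]))
    rel10 = resid_sd(big, "Y", ("X",) + adj) / (np.sqrt(n) * resid_sd(big, "X", adj))
    print(adj, "empirical sd:", round(emp_sd, 4), " relation (10):", round(rel10, 4))
```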
2.3 Definition of O-set The optimal adjustment set for the causally sufficient case is simply P = pa(YM) \ forb and was derived in HPM19 and Rotnitzky and Smucler [2019]. In Section B.2 the derivation is discussed from an information-theoretic perspective. In the case with hidden variables we need to account for bidirected edges “↔” which considerably complicate the situation. Then the parents of YM are not sufficient to block all non-causal paths. Further, just like conditioning on parents of YM leads to optimality in the sufficient case since parents constrain information in YM, in the hidden variables case also conditioning on spouses of YM constrains information about YM. Example A. A simple graph (ADMG) to illustrate this is X→Y↔Z1 (shown with an additional S in Fig. 2A below, or Fig. 4 in SSR20). Here Z′ = ∅ = vancs is a valid set, but it is not optimal. Consider O = Z1, then term (iii) = 0 in the inequalities (7) since Z′ \O = ∅. Even though not needed to block non-causal paths (there are none), Z1 still constrains information in Y while being independent of X (hence, term (iv) = 0), which leads to JO > J∅ according to the inequalities (7). Not only direct spouses can constrain information in Y as Fig. 2B below illustrates. Since for W ∈ YM the motif “W↔ C1 ←∗C2” (“∗” denotes either edge mark) is open, it holds that I(C1C2;W ) = I(C1;W ) + I(C2;W |C1) ≥ I(C1;W ) and we can even further increase the first term in the adjustment information by conditioning also on subsequent spouses. This chain of colliders only ends if we reach a tail or there is no further adjacency. However, we have to make sure that conditioning on colliders does not open non-causal paths. This leads to the notion of a valid collider path (related to the notion of a district in Evans and Richardson [2014]). Definition 3 (Valid collider paths). Given a graph G, a collider path of W for k ≥ 1 is defined by a sequence of edges W↔C1↔· · ·↔Ck. We denote the set of path nodes (excluding W ) along a path indexed by i as πiW . Using the set of valid ancestors vancs = an(XY S) \ forb for the causal effect of X on Y given S we call a collider path node set πiW for W ∈ YM valid wrt. (X,Y,S) if for each path node C ∈ πiW both of the following conditions are fulfilled: (1) C /∈ forb, and (2a) C ∈ vancs or (2b) C ⊥⊥ X | vancs . (11) Condition (1) is required for any valid adjustment set. If jointly (2a) and (2b) are not fulfilled, i.e., C /∈ vancs and C is not independent of X given vancs, then the collider path stops before C. Our candidate optimal adjustment set is now constructed based on the parents of YM, valid collider path nodes of YM, and their parents to ‘close’ these collider paths. Definition 4 (O-set). Given Assumptions 1 and the definition of valid colliders in Def. 3, define the set O(X,Y,S) = P ∪ C ∪ PC where P = pa(YM) \ forb, C = ∪W∈YM ∪i {πiW : πiW is valid wrt. (X,Y,S)}, PC = pa(C) . In the following we will abbreviate O = O(X,Y,S). Algorithm C.1 states efficient pseudo-code to construct the O-set and detect whether a valid adjustment set exists. Since none of the conditions of Def. 3 for adding collider nodes depends on previously added nodes, the algorithm is order-independent. The statement occurring in lines 11 and 21 (“No valid adjustment set exists.”) is proven in Thm. 1. If the graph is a DAG, then lines 4-22 can be omitted. The algorithm is of low complexity and the most time-consuming part is checking for a path in line 12, Def.
3(2b) C ⊥⊥ X | vancs, which can be implemented with (bi-directional) breadth-first search as proposed in van der Zander et al. [2019]. Numerical experiments in Section 3 will show that further interesting adjustment sets are the minimized O-set Omin, where O is minimized such that no subset can be removed without making Omin invalid, and the collider-minimized O-set OCmin where only CPC \ P ⊆ O is minimized such that no collider-subset can be removed without making OCmin invalid. Both adjustment sets can be constructed with Alg. C.2 similar to the efficient algorithms in van der Zander et al. [2019]. Also the minimized sets are order-independent since the nodes are removed only after the for-loops. Based on the idea in OCmin, in the numerical experiments we also consider AdjustXmin, where only Adjust \ pa(YM) is minimized and pa(YM) is always included. Finally, we also evaluate Adjustmin where Adjust is fully minimized. Before discussing the optimality of the O-set, we need to assure that it is a valid adjustment set. Similar to the proof given in Perković et al. [2018] for the validity of the vancs-set (for the case without S), we can state that the O-set is valid if and only if a valid adjustment set exists. Theorem 1 (Validity of O-set). Given Assumptions 1 but without a priori assuming that a valid adjustment set exists (apart from the requirement S ∩ forb = ∅). If and only if a valid backdoor adjustment set exists, then O is a valid adjustment set. 2.4 Graphical optimality We now move to the question of optimality. It is known that there are graphs where no graphical criterion exists to determine optimality. Examples, discussed later, are the graphs in Figs. 2E,F. Before stating necessary and sufficient conditions for graphical optimality, I mention that next to the O-set defined above and the Adjust set vancs [Perković et al., 2018], I am not aware of any other systematically constructed set that will yield a valid adjustment set for the case with hidden variables. van der Zander et al. [2019] provide algorithms to list all valid adjustment sets, but the question is which of these a user should choose. As mentioned above, Lemma 1 can be used to cross-compare all pairs of sets, but this is not really feasible. Hence, for automated causal effect estimation, rather than the question of whether graphical optimality holds, it is crucial to have a set with better properties than other systematically constructable sets. The following theorem states that the adjustment informations follow JO ≥ Jvancs for any graph (whether graphical optimality holds or not). Theorem 2 (O-set vs. Adjust-set ). Given Assumptions 1 with O defined in Def. 4 and the Adjust-set defined in Eq. (2), it holds that JO ≥ Jvancs for any graph G. We have JO = Jvancs only if (1) O = vancs, or (2) O ⊆ vancs and X ⊥⊥ vancs \O |OS. In the following the O-set is illustrated and conditions for graphical optimality are explored. SSR20 provide a sufficient condition for optimality, which states that either all nodes are observed (no bidirected edges exist) or for all observed nodes V ⊂ vancs. This is a very strict assumption and not fulfilled for any of the examples (except for Example G) discussed in the following. Example B. Figure 2B depicts a larger example to illustrate the O-set O = PCPC with P = Z1Z2Z3Z4 (blue boxes) and CPC \ P = Z5Z6Z7Z8 (green boxes). We also have a conditioned variable S. Among P, only Z1Z2 are needed to block non-causal paths to X , Z3Z4 are only there to constrain information in Y . 
Here the same holds for the whole set CPC \ P which was constructed from the paths Y↔Z5↔Z6←Z7 and Y↔Z5↔Z8 which does not include Z9 since it is a descendant of YM. Including an independent variable like Z12 in O would not decrease the adjustment information JO, but then O would not be of minimum cardinality anymore (proven in Cor. B.1). Here, again, the condition of SSR20 does not hold (e.g., Z5 is not an ancestor of XY S). O is optimal here which can be seen as follows: For term (iii) in the inequalities (7) to even be non-zero, we would need a valid Z such that Z \O has a path to Y given OSX . But these are all blocked. Note that while Z10 or Z9 ∈ Z would open a path to Y , both of these are descendants of M or Y and, hence, cannot be in a valid Z. For term (iv) to even be non-zero O \ Z would need to have a path to X given ZS. But since any valid Z has to contain Z1 and Z2 (or Z11), the only nodes in O with a path to X are parents of YM and paths from these parents to X all need to be blocked for a valid Z. Hence, O is optimal here. Example C. In Fig. 2C a case is shown where O = Z1Z5. Z2 is not part of O because none of the conditions in Def. 3(2) is fulfilled: Z2 /∈ vancs = Z1Z4Z5 and Z2 ⊥⊥X | vancs. Hence, we call Z2 an N-node. But Z2 cannot be part of any valid Z because it has a collider path to X through Z1 which is always open because it is part of vancs. Hence, term (iii) is always zero. Term (iv) is zero because O \ Z is empty for any valid Z here. Here even JO > JZ since O is minimal and term (ii) IX;Z\O|O > 0 for any Z ̸= O (generally proven in Corollary B.1). Example D. The example in Fig. 2D depicts a case with O = Z2 where Z1 is an N-node. Next to Z = ∅ another valid set is Z = Z1. Then term (iii) is non-zero and in the same way term (iv) is non-zero. The sufficient pairwise comparison criterion in SSR20 and HPM19 is, hence, not applicable. However, it holds that always term (iii) ≤ (i) because the dependence between Z1 and Y given X is always smaller than the dependence between Z2 and Y given X and correspondingly term (iv) ≤ (ii). Hence, O is optimal here. If a link Y→Z1 exists, then the only other valid set is Z = ∅ and both terms are strictly zero. Example E. The example in Fig. 2E (Fig. 3 in SSR20 and also discussed in HPM19) is not graphically optimal. Here O = Z1Z2. Other valid adjustment sets are Z1 or the empty set. From using Z1 ⊥⊥ Y |X and X ⊥⊥ Z2|Z1 in the inequalities (7) one can derive in information-theoretic terms that both Z1Z2 and ∅ are better than vancs = Z1, but since JZ1Z2 = J∅ + IZ2;Y |XZ1 − IX;Z1 , a superior adjustment set depends on how strong the link Z1→X vs. Z2↔Y is. The graph stays non-optimal also with a link Z1↔Z2. Example F. The example in Fig. 2F is also not graphically optimal. Here O = ∅ and Z2 is an N-node with a non-collider path to X . Other valid adjustment sets are Z1 and Z1Z2. Higher adjustment information here depends on the distribution. Also the same graph with the link Z1↔X is non-optimal. If, however, there is another link Z1→Y , then O = ∅ is optimal (then Z1 is a mediator). Example G. The example in Fig. 2G is only a slight modification of Example E with an added selected condition S. Then Z1, Z2 ∈ vancs. We still get O = Z1Z2 and this is now optimal since Z2 is always open and any valid set has to contain Z1. 
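In graphs such as Examples E and F, where no set is graphically optimal, one can still compare candidate sets for the distribution at hand by estimating the adjustment information JZ of Eq. (5) from data. For jointly Gaussian variables each CMI reduces to a log-ratio of residual variances. The sketch below does this for a toy confounded model with a hidden variable L; the model, the names and the coefficients are my own illustrations, not taken from the paper.

```python
# Estimate J_Z = I(Z;Y|XS) - I(X;Z|S) from data in the linear Gaussian case, using
# I(A;B|C) = 0.5 * ln( var(A|C) / var(A|BC) ) for univariate A. S is empty here.
# The toy data-generating model below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

L = rng.normal(size=n)                         # hidden common cause of Z2 and Y
Z1 = rng.normal(size=n)
Z2 = 0.7 * Z1 + 1.2 * L + rng.normal(size=n)
X = 0.6 * Z1 + rng.normal(size=n)
Y = 1.0 * X + 1.2 * L + rng.normal(size=n)
data = {"X": X, "Y": Y, "Z1": Z1, "Z2": Z2}

def resid_var(target, predictors):
    if not predictors:
        return float(np.var(data[target]))
    A = np.column_stack([data[p] for p in predictors] + [np.ones(n)])
    coef = np.linalg.lstsq(A, data[target], rcond=None)[0]
    return float(np.var(data[target] - A @ coef))

def J(Z, S=()):
    """Adjustment information of Eq. (5), estimated for Gaussian data."""
    i_zy_xs = 0.5 * np.log(resid_var("Y", ("X",) + S) / resid_var("Y", ("X",) + S + Z))
    i_xz_s = 0.5 * np.log(resid_var("X", S) / resid_var("X", S + Z))
    return i_zy_xs - i_xz_s

for Z in [(), ("Z1",), ("Z1", "Z2")]:
    print("J for Z =", Z or "{}", ":", round(float(J(Z)), 3))
# Changing the coefficients above (e.g. the strength of Z1 -> X vs. L -> Y) changes
# which candidate set attains the highest estimated J.
```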
The main result of this work is a set of necessary and sufficient conditions for the existence of graphical optimality and the proof of optimality of the O-set which is based on the intuition gained in the preceding examples. Theorem 3 (Necessary and sufficient graphical conditions for optimality and optimality of O-set). Given Assumptions 1 and with O = PCPC defined in Def. 4. Denote the set of N-nodes by N = sp(YMC)\ (forbOS). Finally, given an N ∈ N and a collider path N↔· · ·↔C↔· · ·↔W (including N↔W ) for C ∈ C and W ∈ YM (indexed by i) with the collider path nodes denoted by πNi (excluding N and W ), denote by OπNi = O(X,Y,S ′ = SNπNi ) the O-set for the causal effect of X on Y given S′ = S ∪ {N} ∪ πNi . If and only if exactly one valid adjustment set exists, or both of the following conditions are fulfilled, then graphical optimality holds and O is optimal: (I) For all N ∈ N and all its collider paths i to W ∈ YM that are inside C it holds that OπNi does not block all non-causal paths from X to Y , i.e., OπNi is non-valid, and (II) for all E ∈ O \P with an open path to X given SO \ {E} there is a link E↔W or an extended collider path E∗→C↔· · ·↔W inside C for W ∈ YM where all colliders C ∈ vancs. Condition (I) and (II) essentially rule out the two canonical cases in Examples F and E, respectively, on which non-optimality in any graph is based. Applied to the examples, we obtain that in Example A Cond. (I) holds since no N-node exists and Cond. (II) holds since X ⊥⊥ Z1 | S. In Example B also no N-node exists and Cond. (II) holds as X ⊥⊥ E | SO \ {E} for every E ∈ O \ P. In example C Z2 is an N-node, but there is a collider path to X through Z1 which is in vancs such that Cond. I is fulfilled. Further, while X ⊥⊥ Z5 | SO \ {Z5}, there is a link Z5↔Y such that Cond. II holds. In example D Z1 is an N-node, but it has a bidirected link with X and Cond. (II) holds since X ⊥⊥ Z2 | SO \ {Z2}. In Example E optimality does not hold, but Cond. (I) actually holds since there is no N-node. Cond. (II) is not fulfilled for E = Z1, which has a path to X given O and on the extended collider path Z1→Z2↔Y Z2 /∈ vancs. For Z′ = ∅ and a distribution P ′ where the link Z2↔Y almost vanishes we then have JO < JZ′ . Example F has an N-node Z2 and OπNi = O(X,Y,S ′ = Z2) = Z1Z2 is valid implying that Cond. (I) does not hold, while Cond. (II) is actually fulfilled with O = ∅. For Z′ = OπNi = Z1Z2 and a distribution P ′ where the link X→Z1 almost vanishes we then have JO < JZ′ . Example G is optimal since there are no N-nodes and Z2 ∈ vancs. Similar to SSR20, HPM19, and Witte et al. [2020], I also provide results regarding minimality and minimum cardinality for the hidden variables case in the Supplement. 3 Numerical experiments We now investigate graphical optimality empirically to answer three questions: Firstly, whether for a linear estimator under Assumptions 2 the asymptotically optimal variance also translates into better finite-sample variance. Secondly, how the O-set performs in non-optimal settings (according to Thm. 3). Thirdly, how the O-set and variants thereof perform for estimators not captured by the class for which the theoretical results were derived (Assumptions 2). To this end, we compare the performance of O, Adjust, OCmin, Omin, AdjustXmin, and Adjustmin (see definitions in Section 2.3) together with linear least squares estimation (LinReg) on linear models. 
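The O-set entering this comparison can be constructed directly from Defs. 3–4. The following is a rough, self-contained sketch of my reading of that construction — it is not the paper's Alg. C.1 and not the tigramite implementation: the validity detection and all edge cases are omitted, bidirected edges are emulated by one explicit latent parent per edge, and the independence condition (2b) is tested with the classical moral-graph m-separation criterion. All function names and the small test graph are illustrative assumptions.

```python
# Rough sketch of the O-set of Defs. 3-4 (omits the validity checks of Alg. C.1).
# A graph is given as a dict of directed parents plus a list of bidirected edges.
from itertools import combinations

def ancestors(pa, nodes):
    """Ancestors under directed edges, including the nodes themselves."""
    result, frontier = set(nodes), set(nodes)
    while frontier:
        frontier = {p for v in frontier for p in pa.get(v, set())} - result
        result |= frontier
    return result

def descendants(pa, nodes):
    ch = {}
    for v, ps in pa.items():
        for p in ps:
            ch.setdefault(p, set()).add(v)
    result, frontier = set(nodes), set(nodes)
    while frontier:
        frontier = {c for v in frontier for c in ch.get(v, set())} - result
        result |= frontier
    return result

def with_latents(pa, bidirected):
    """Replace every bidirected edge u <-> v by an explicit latent parent L -> u, v."""
    pa = {v: set(ps) for v, ps in pa.items()}
    for u, v in bidirected:
        lat = f"L_{u}_{v}"
        pa.setdefault(u, set()).add(lat)
        pa.setdefault(v, set()).add(lat)
        pa.setdefault(lat, set())
    return pa

def d_separated(pa, A, B, Z):
    """Moral-graph criterion: restrict to an(A,B,Z), moralize, remove Z, check connectivity."""
    keep = ancestors(pa, set(A) | set(B) | set(Z))
    adj = {v: set() for v in keep}
    for v in keep:
        ps = pa.get(v, set()) & keep
        for p in ps:
            adj[v].add(p); adj[p].add(v)
        for p, q in combinations(ps, 2):          # marry parents of a common child
            adj[p].add(q); adj[q].add(p)
    seen = frontier = set(A) - set(Z)
    while frontier:
        frontier = {w for v in frontier for w in adj[v]} - seen - set(Z)
        seen = seen | frontier
    return not (seen & set(B))

def o_set(pa, bidirected, X, Y, S=frozenset()):
    dag = with_latents(pa, bidirected)
    M = (descendants(pa, {X}) & ancestors(pa, {Y})) - {X, Y}     # mediators
    YM = {Y} | M
    forb = {X} | descendants(pa, YM)
    vancs = ancestors(pa, {X, Y} | set(S)) - forb                 # Adjust-set, Eq. (2)
    P = {p for w in YM for p in pa.get(w, set())} - forb
    sp = {}                                                       # bidirected adjacencies
    for u, v in bidirected:
        sp.setdefault(u, set()).add(v); sp.setdefault(v, set()).add(u)
    def ok(c):          # Def. 3: c not in forb, and c in vancs or c independent of X given vancs
        return c not in forb and (c in vancs or d_separated(dag, {c}, {X}, vancs))
    C, frontier = set(), set(YM)
    while frontier:     # grow collider paths W <-> C1 <-> C2 <-> ... over valid nodes only
        frontier = {c for v in frontier for c in sp.get(v, set()) - C - YM if ok(c)}
        C |= frontier
    PC = {p for c in C for p in pa.get(c, set())}                 # parents 'closing' the paths
    # Note: unlike Alg. C.1 this sketch does not verify that the returned set is valid.
    return P | C | PC

# Example A without the extra S: X -> Y, Y <-> Z1; the O-set should be {Z1}.
print(o_set({"Y": {"X"}}, bidirected=[("Y", "Z1")], X="X", Y="Y"))
```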
In the Supplement we also investigate nonlinear models using nearest neighbor regression (kNN), a multilayer perceptron (MLP), random forest regression, and double machine learning for partially linear regression models (DML) [Chernozhukov et al., 2018]. The experiments are based on a generalized additive model and described in detail in Section D. Among these 12,000 randomly created configurations 93% fulfill the optimality conditions in Thm. 3. The results in Fig. 3 confirm our first hypothesis that for linear experiments with an estimator fulfilling Assumptions 2 and in settings where graphical optimality is fulfilled (Thm. 3) the O-set either has similar RMSE or significantly outperforms all other tested variants. In particular, Omin and Adjustmin are bad choices for this setting. Adjust is intermediate and OCmin and AdjustXmin come closest to O, but may still yield significantly higher variance. Secondly, in non-optimal settings (only 7% of configurations) the O-set still outperforms Adjust (as expected by Thm. 2). Compared to OCmin and AdjustXmin the O-set leads to worse results for about half of the studied configurations, while Omin and Adjustmin are still bad choices. Cardinality is slightly higher for O compared to all other sets. In Fig. S7 we further differentiate the results by the cardinality of the O-set and find that for small cardinalities (up to 4) the O-set has the lowest variance in a majority of cases, but for higher cardinalities either OCmin or again O have the lowest variance (slightly beating AdjustXmin). Hence, either O or OCmin performs best in non-optimal configurations. For very small sample sizes n = 30 (see Fig. S2) that become comparable to the adjustment set cardinality, there tends to be a trade-off and smaller cardinality helps. Then OCmin tends to be better than O for high cardinalities also in optimal settings, but here this effect is only present for n = 30 and for n = 50 already negligible compared to the gain in JO. In Appendix D.2 are RMSE ratios for all combinations of adjustment approaches considered here and it is shown that, in general, results are very similar for other sample sizes. Thirdly, we investigate non-parametric estimators on linear as well as nonlinear models (implementations described in Section D, results in the figures of Section D.3) The different classes of estimators exhibit quite different behavior. For kNN (Figs. S8,S9) the O-set has the lowest variance in around 50% of the configurations followed by OCmin and Omin. More specifically (Figs. S15,S16), for small O-set cardinalities up to 2 the O-set and for higher either Omin or OCmin (the latter only in non-optimal configurations) perform best. For nonlinear experiments the results are less clear for O-set cardinalities greater than 2, but Omin is still a good choice. Regarding RMSE ratios, we see that, for the cases where O is not the best, the O-set can have considerably higher variance, while Omin seems to be most robust and may be a better choice if O is too large. MLP (Figs. S10,S11) behaves much differently. Here in optimal cases neither method outperforms any other for small O-set cardinalities, but for higher cardinalities (Figs. S15,S16) the O-set is best in more than 50% of configurations (slightly less for nonlinear experiments) and the others share the rest (except Adjustmin). For non-optimal cases O, OCmin and AdjustXmin share the ranks. Regarding RMSE, for linear experiments the O-results are almost as optimal as for the LinReg estimator in the optimal setting. 
However, for non-optimal cases OCmin can have considerably smaller variance and seems to be a robust option then, similarly to AdjustXmin. Also for nonlinear experiments OCmin is more robust. The RF estimator (Figs. S12,S13) is again different. Here no method clearly is top-ranked, Omin and Adjustmin are slightly better for linear experiments and O for nonlinear experiments. OCmin and Omin are more robust regarding RMSE ratios (similar to AdjustXmin). Finally, the DML estimator (Fig. S14) was here applied only to linear experiments since its model assumption does not allow for fully nonlinear settings. For optimal settings here O is top-ranked in a majority of cases, but closely followed by OCmin and AdjustXmin. In non-optimal cases for higher O-set cardinalities these two seem like a better choice. Quantitatively, OCmin and AdjustXmin are the most robust choices. Overall, the O-set and its variants seem to outperform or match the Adjust-variants and whether higher cardinality of the O-set reduces performance depends strongly on the estimator and data. 4 Discussion and Conclusions The proposed adjustment information formalizes the common intuition to choose adjustment sets that maximally constrain the effect variable and minimally constrain the cause variable. The main theoretical contributions are a necessary and sufficient graphical criterion for the existence of an optimal adjustment set in the hidden variables case and a definition and algorithm to construct it. To emphasize, graphical optimality implies that the O-set is optimal for any distribution consistent with the graph. Note that in cases where graphical optimality does not hold, there will still be distributions for which the O-set has maximal adjustment information. Further, the optimal set is valid if and only if a valid adjustment set exists and has smaller (or equal) asymptotic variance compared to the Adjust-set proposed in Perković et al. [2018] for any graph, whether graphical optimality holds or not. This makes the O-set a natural choice in automated causal inference analyses. Practical contributions comprise Python code to construct adjustment sets and check optimality, as well as extensive numerical experiments that demonstrate that the theoretical results also hold for relatively small sample sizes. The theoretical optimality results are limited to estimators for which the asymptotic variance becomes minimal for adjustment sets with maximal adjustment information (relation (8)). This is fulfilled for least-squares estimators, where even the direct relation (9) holds, but it is unclear whether this also holds for more general classes. The numerical results show that the O-set or minimized variants thereof often yield smaller variance also in non-optimal settings and beyond that estimator class. I speculate that further theoretical properties of maximizing adjustment information can be shown because relation (9) for f(·) = 1√ n eHY |XZS−HX|ZS seems related to the lower bound of the estimation variance counterpart to Fano’s inequality (Theorem 8.6.6 in Cover and Thomas [2006]). For estimators sensitive to high-dimensionality one may consider data-driven criteria or penalties to step-wisely minimize the O-set. However, estimating, for example, the adjustment information from a potentially small sample size carries considerable errors itself. Another current limitation is that relation (9) only holds for univariate singleton cause variables X . 
The information-theoretical results, however, also hold for multivariate X and preliminary results indicate that, while relation (9) does not hold for multivariate X, the less restrictive relation (8) still seems to hold. The proposed information-theoretic approach can guide further research, for example, to theoretically study relations (8),(9) for other estimators and to address other types of graphs, such as those that emerge from the output of causal discovery algorithms, and the setting where the graph is unknown [Witte et al., 2020, Maathuis et al., 2009, 2010]. At present, the approach only applies to ADMGs and Maximal Ancestral Graphs (MAG) [Richardson and Spirtes, 2002] without selection variables. Last, it remains an open problem to identify optimal adjustment estimands for the hidden variables case based on other criteria such as the front-door formula and Pearl’s general do-calculus [Pearl, 2009]. The results may carry considerable practical impact since, surprisingly, among the randomly created configurations more than 90% fulfill the optimality conditions, indicating that graphical optimality may also hold in many real-world scenarios. Code is available in the Python package https://github.com/jakobrunge/tigramite. Acknowledgments and Disclosure of Funding I thank Andreas Gerhardus for very helpful comments. This work was funded by the ERC Starting Grant CausalEarth (grant no. 948112).
1. What is the main contribution of the paper regarding selecting the optimal adjustment set? 2. What are the strengths of the proposed method, particularly in its technical aspect and experimental results? 3. Do you have any concerns or questions regarding the motivation behind providing a criterion independent of the distribution? 4. How would you explain the computational complexity of Lemma 2, and how does it relate to finding the optimal set? 5. Can you provide further clarification on the phrase "maximally constrains the effect variable Y and minimally constrains the cause variable X"? 6. Is there anything else you would like to suggest for improving the paper, such as rephrasing Equation 124 more clearly?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors propose a criterion to select the optimal adjustment set. The criterion (Eq. 4 and Def. 2) is based on information theory and selects which adjustment set is best. They also build the connection between this criterion and the commonly used minimized asymptotic variance (Eq. 7) for linear Gaussian causal models. Based on these, they propose a necessary and sufficient criterion for the existence of an optimal set. However, it is inefficient due to the large number of comparisons. Hence, the authors present an algorithm to find the O-set, which can serve as the optimal set under the conditions of Thm. 3. Review After rebuttal: The authors have addressed my questions. I think this paper tackles an interesting problem. I am happy to see it accepted. Although I tried my best to read this paper, it is hard for me to understand all the details. Hence, if some parts I summarized in the ``Summary'' part are inconsistent with the paper, I will be grateful if the authors could point them out. The proposed method is technically solid, and the experimental results are convincing. My main concern is about the motivation to provide a criterion that does not depend on the distribution, as stated by ``Our goal is to provide graphical criteria for optimal adjustment sets, i.e., criteria that depend only on the structure of the graph G and not on the distribution'' on Line 143. I do not think this target is well motivated. In reality, we often have observational data. In that case, the distribution is known (although it is estimated), and which set is optimal, for example in Example F, seems not to be agnostic of the distribution. Hence I look forward to an illustration of the motivation for providing a criterion that does not depend on the distribution. I am curious about the computational complexity of Lemma 2. It seems that this step will not cost much because there are usually not so many valid adjustment sets. If it does not cost much in practice, it is feasible to directly find the set by Lemma 2. What's more, if we know the (estimated) distribution of all the observed variables, we could just find the optimal set by Lemma 2. That seems more sensible than the O-set. I am a bit puzzled by what the sentence ``Z that maximally constrains the effect variable Y and minimally constrains the cause variable X'' on Line 110 implies. Could the authors please provide a detailed illustration? What does ``constrain'' mean here? A suggestion: I understand the equation on line 124 as f(H_{Y|X,Z,S}-H_{X|Z,S})=\frac{1}{\sqrt{n}}e^{H_{Y|X,Z,S}-H_{X|Z,S}}, is that right? The current format is not quite rigorous.
NIPS
Title Understanding the Role of Training Regimes in Continual Learning Abstract Catastrophic forgetting affects the training of neural networks, limiting their ability to learn multiple tasks sequentially. From the perspective of the well established plasticity-stability dilemma, neural networks tend to be overly plastic, lacking the stability necessary to prevent the forgetting of previous knowledge, which means that as learning progresses, networks tend to forget previously seen tasks. This phenomenon coined in the continual learning literature, has attracted much attention lately, and several families of approaches have been proposed with different degrees of success. However, there has been limited prior work extensively analyzing the impact that different training regimes – learning rate, batch size, regularization method– can have on forgetting. In this work, we depart from the typical approach of altering the learning algorithm to improve stability. Instead, we hypothesize that the geometrical properties of the local minima found for each task play an important role in the overall degree of forgetting. In particular, we study the effect of dropout, learning rate decay, and batch size, on forming training regimes that widen the tasks’ local minima and consequently, on helping it not to forget catastrophically. Our study provides practical insights to improve stability via simple yet effective techniques that outperform alternative baselines. 1 Introduction We study the continual learning problem, where a neural network model should learn a sequence of tasks rather than a single one. A significant challenge in continual learning (CL) is that during training on each task, the data from previous ones are unavailable. One consequence of applying typical learning algorithms under such a scenario is that as the model learns newer tasks, the performance of the model on older ones degrades. This phenomenon is known as “catastrophic forgetting” [52]. This forgetting problem is closely related to the “stability-plasticity dilemma” [53], which is a common challenge for both biological and artificial neural networks. Ideally, a model needs plasticity to obtain new knowledge and adapt to new environments, while it also requires stability to prevent forgetting the knowledge from previous environments. If the model is very plastic but not stable, it can learn fast, but it also forgets quickly. Without further modifications in training, a naively trained neural network tends to be plastic but not stable. Note that plasticity in this scenario does not necessarily imply that neural nets can learn new tasks efficiently. In fact, they tend to be extremely data inefficient. By being plastic, we mean a single update can change the function considerably. 1The code is available at: https://github.com/imirzadeh/stable-continual-learning 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. With the recent advances in the deep learning field, continual learning has gained more attention since the catastrophic forgetting problem poses a critical challenge for various applications [43, 37]. A growing body of research has attempted to tackle this problem in recent years [58, 72, 57, 29]. Despite the tangible improvements in the continual learning field, the core problem of catastrophic forgetting is still under-studied. 
In particular, a variety of neural network models and training approaches have been proposed, however, to the best of our knowledge, there has been little work on systematically understanding the effect of common training regimes created by varying dropout regularization, batch size, and learning rate on overcoming catastrophic forgetting2. Fig. 1 shows how significantly these techniques can overcome catastrophic forgetting. In this work, we explore the catastrophic forgetting problem from an optimization and loss landscape perspective (Section 3) and hypothesize that the geometry of the local minima found for the different learned tasks correlates with the ability of the model to not catastrophically forget. Empirically we show how a few well-known techniques, such as dropout and large learning rate with decay and shrinking batch size, can create a training regime to affect the stability of neural networks (Section 4). Some of them, like dropout, had been previously proposed to help continual learning [22, 54]. However, in this work, we provide an alternative justification of why these techniques are effective. Crucially, we empirically show that jointly with a carefully tuned learning rate schedule and batch size, these simple techniques can outperform considerably more complex algorithms meant to deal with continual learning (Section 5). Our analysis can be applied to any other training technique that widens the tasks’ local minima or shrinks the distance between them. Our work shows that plain neural networks can be much stronger baselines for continual learning than previously thought, provided that we use the right hyperparameters. Moreover, the choice for the hyperparameters is orthogonal to other continual learning methods and can be integrated with these methods, as we show in Appendix C.8. 2 Related work Several continual learning methods have been proposed to tackle catastrophic forgetting. Following [43], we categorize these algorithms into three general groups. The first group consists of replay based methods that build and store a memory of the knowledge learned from old tasks [48, 62, 82, 68, 63], known as experience replay. iCaRL [61] learns in a class-incremental way by having a fixed memory that stores samples that are close to the center of each class. Averaged Gradient Episodic Memory (A-GEM) [7] is another example of these methods which build a dynamic episodic memory of parameter gradients during the learning process while ER-Reservoir [9] uses a Reservoir sampling method as its selection strategy. The methods in the second group use explicit regularization techniques to supervise the learning algorithm such that the network parameters are consistent during the learning process [41, 80, 44, 1, 42]. As a notable work, Elastic weight consolidation (EWC) [41], uses the Fisher information matrix as a proxy for weights’ importance and guides the gradient updates. They are usually inspired by a Bayesian perspective [56, 71, 67, 13, 64]. With a frequentist view, some other regularization based methods have utilized gradient information to protect previous knowledge [14, 24, 79]. For example, Orthogonal Gradient Descent (OGD) [14] uses the projection of the prediction gradients from new tasks on the subspace of previous tasks’ gradients to maintain the learned knowledge. Finally, in parameter isolation methods, in addition to potentially a shared part, different subsets of the model parameters are dedicated to each task [65, 78, 35, 60, 46]. 
(Footnote 2: Potential exceptions being the early work of Goodfellow et al. [22] and a recent one by Mirzadeh et al. [54].) This approach can be viewed as a flexible gating mechanism, which enhances stability and controls the plasticity by activating different gates for each task. [50] proposes a neuroscience-inspired method for a context-dependent gating signal, such that only sparse, mostly non-overlapping patterns of units are active for any one task. PackNet [49] implements a controlled version of gating by using network pruning techniques to free up parameters after finishing each task. While continual learning is broader than just solving catastrophic forgetting, in this work we will squarely focus on catastrophic forgetting, which has been an important aspect, if not the most important one, of research for Continual Learning in the last few years. Continual Learning as an emerging field in Artificial Intelligence is connected to many other areas such as Meta Learning [4, 35, 24, 62], Few Shot Learning [75, 20], Multi-task and Transfer Learning [24, 35], and the closely related problem of exploring task boundaries or task detection [60, 2, 25, 36]. Very recently, Mirzadeh et al. [55] have studied the mode connectivity of continual learning and multitask learning minima. Moreover, Wallingford et al. [73] proposed a framework for integration of solutions across supervised learning, few-shot learning, continual learning, and efficient machine learning to facilitate research in the intersection of these fields. 3 Forgetting during training Let us begin this section by introducing some notation to express the effect of forgetting during the sequential learning of tasks. For simplicity, let us consider the supervised learning case, which will be the focus of this work. We consider a sequence of K tasks $T_k$ for $k \in \{1, 2, \ldots, K\}$. Let $W \subseteq \mathbb{R}^d$ be the parameter space for our model. The total loss on the training set for task k is denoted by $L_k(w) = \mathbb{E}[\ell_k(w; x, y)] \approx \frac{1}{|T_k|} \sum_{(x,y) \in T_k} \ell_k(w; x, y)$ (1), where the expectation is over the data distribution of task k and $\ell_k$ is a differentiable non-negative loss function associated with data point (x, y) for task k. In the continual learning setting, the model learns sequentially, without access to examples of previously seen tasks. For simplicity and brevity, let us focus on measuring the forgetting in continual learning with two tasks. It is easy to extend these findings to more tasks. Let $w^*_1$ and $w^*_2$ be the convergent or optimum parameters after training has been finished for the first and second task sequentially. We formally define the forgetting (of the first task) as: $F_1 \triangleq L_1(w^*_2) - L_1(w^*_1)$. (2) We hypothesize that $F_1$ strongly correlates with properties of the curvature of $L_1$ around $w^*_1$ and of $L_2$ around $w^*_2$. In what follows, we will formalize this hypothesis. One important assumption that we rely on throughout this section is that we can use a second-order Taylor expansion of our losses to understand the learning dynamics during the training of the model. While this might seem like a crude approximation in general (for a nonlinear function and non-infinitesimal displacement the approximation can be arbitrarily bad), we argue that the approximation has merit for our setting. In particular, we rely on the wealth of observations for overparametrized models where the loss tends to be very well behaved and almost convex in a reasonable neighborhood around their local minima. E.g. for deep linear models this property has been studied in [66].
Works such as [11, 23] make similar claims for generic models. [31] also corroborates that within the NTK regime learning is well behaved. In continual learning, similar strong assumptions are made by most approaches that rely on approximating the posterior on the weights by a Gaussian [41] or a first-order approximation of the loss surface around the optimum [14]. Armed with this analytical tool, to compute the forgetting, we can approximate $L_1(w^*_2)$ around $w^*_1$: $L_1(w^*_2) \approx L_1(w^*_1) + (w^*_2 - w^*_1)^\top \nabla L_1(w^*_1) + \frac{1}{2}(w^*_2 - w^*_1)^\top \nabla^2 L_1(w^*_1)(w^*_2 - w^*_1)$ (3) $\approx L_1(w^*_1) + \frac{1}{2}(w^*_2 - w^*_1)^\top \nabla^2 L_1(w^*_1)(w^*_2 - w^*_1)$, (4) where $\nabla^2 L_1(w^*_1)$ is the Hessian of the loss $L_1$ at $w^*_1$, and the last approximation holds because the model is assumed to converge to a stationary point where the gradient's norm vanishes, thus $\nabla L_1(w^*_1) \approx 0$. Under the assumption that the critical point is a minimum (or that the plateau we get stuck in surrounds a minimum), we know that the Hessian needs to be positive semi-definite. Defining $\Delta w = w^*_2 - w^*_1$ as the relocation vector, we can bound the forgetting $F_1$ as follows: $F_1 = L_1(w^*_2) - L_1(w^*_1) \approx \frac{1}{2}\Delta w^\top \nabla^2 L_1(w^*_1)\Delta w \le \frac{1}{2}\lambda^{\max}_1 \|\Delta w\|^2$, (5) where $\lambda^{\max}_1$ is the maximum eigenvalue of $\nabla^2 L_1(w^*_1)$. Fig. 2a shows how a wider $L_1$ (lower $\lambda^{\max}_1$) leads to less forgetting, both in terms of an illustrative example and in terms of empirical evidence for this relationship on Rotated MNIST and Permuted MNIST. A few notes on this bound. First, the bound is achieved when $\Delta w$ is co-aligned with the eigenvector corresponding to $\lambda^{\max}_1$. Given that the displacement is generated by the learning process on task 2, we can think of it as a random vector with respect to the eigenvectors of the Hessian of the first task. We know, therefore, that the tightness of the bound correlates with the dimensionality of w. In the extreme one-dimensional case, the Hessian matrix becomes a scalar given by $\lambda^{\max}_1$, and the bound is exact. As we increase the number of dimensions, the probability of two vectors being perpendicular goes to 1. Hence, in high-dimensional spaces it is more likely for the bound to be relatively loose and for the entire spectrum of eigenvalues to play a much more important role in defining $F_1$. Namely, as the number of eigenvalues with low magnitude increases, it becomes more likely for $F_1$ to be small. Assuming that the displacement vector is equally distributed over all the eigenvectors, the trace of the Hessian will correlate more strongly with $F_1$ than the largest eigenvalue. However, reasoning in terms of the spectrum can be impractical (note, for example, that one cannot trivially rewrite the bound in terms of the trace). So we believe it is useful to think about $\lambda^{\max}_1$ as long as any conclusion is contextualized correctly and the training regime we consider is, implicitly, aimed at lowering the entire spectrum, not just the largest eigenvalue. We also want to highlight that $\lambda^{\max}_1$ has been used previously to describe the width of a local minimum [28, 39], with similar notes regarding the role of the entire spectrum [12, 39]. This property is central to the wide/narrow minima hypothesis for why neural networks generalize well. Our hypothesis is not tied to the generalization of wide minima, but we rely on the same or at least a very related concept of width. Therefore, to reduce forgetting, each task should push its learning towards wider minima and can employ the same techniques used to widen the minima to improve generalization. Resuming from Eq.
5, controlling the Hessian spectrum without controlling the norm of the displacement, however, might not ensure that F1 is minimized. ‖∆w‖ is technically controlled by the subsequent tasks. We first notice, empirically, that enforcing widening the minima of the next task (for the same reason of reducing forgetting on itself) inhibits additionally forgetting for the first task (see Table 1; not stable/plastic means relying on training regimes that encourage/do not encourage wide minima. We empirically estimate the width of minima as well, see appendix C for details). We make the observation that the width of the minima (norm of the eigenvalues) correlates with the norm of the weights. Hence the solutions in the stable learning regime tend to be closer to 0, which automatically decrease ‖∆w‖. Additionally, ‖∆w‖ relates to λmax2 also due to typical learning terminating near a minima, rather than at the minima. Refer to fig. 2b for an illustration. Specifically, the convergence criterion is usually satisfied in the ǫ-neighborhood of w∗2 . If we write the second order Taylor approximation of L2 around w ∗ 2 , we get: L2(ŵ2)− L2(w ∗ 2) ≈ 1 2 (ŵ2 − w ∗ 2) ⊤∇2L2(ŵ2)(ŵ2 − w ∗ 2) ≤ 1 2 λmax2 ‖w ∗ 2 − ŵ2‖ 2 ≤ ǫ, (6) where, the first equality holds since ∇L2(w ∗ 2) = 0. Thus, by decreasing λ max 2 , ŵ2 can be reached further from w∗2 since the ǫ-neighborhood is larger, and closer to w ∗ 1 . A more formal analysis is given in the appendix. Note that as we enforce the error on task 2 to be lower, the argument above weakens. In the limit, if we assume you converged on task 2, the distance does not depend on curvature, just ‖w∗1 − w ∗ 2‖. However, the choice of which minima w ∗ 2 learning prefers will still affect the distance, and as argued above, if wider minima tend to be closer to 0, then they tend to be closer to each other too. Collating all of these observations together we propose the following hypothesis: Hypothesis. The amount of forgetting that a neural network exhibits from learning the tasks sequentially, correlates with the geometrical properties of the convergent points. In particular, the wider these minima are, the less forgetting happens. We empirically verify the relationship between forgetting and the upper bound derived in E.q. (5). We approximate the Hessian with the largest eigenvalue of the loss function. The results in two common continual learning benchmarks is shown in Figures 2c and 2d. In the figure, the dots represent different neural network training regimes with different settings (e.g., with and without dropout, with and without learning rate decay, different initial learning rates, different batch sizes, different random initialization). See section 4 to find out how these techniques can lead to different loss geometries. All of the models have roughly 90% accuracy on task 2. We can see that our derived measure has high correlation with the forgetting. 4 Training Regimes: techniques affecting stability and forgetting In this section, we describe a set of widely used techniques that are known to affect the width of the minima (eigenspectrum of Hessian) as well as the length of the path taken by learning (‖∆w‖). These observations had been generally made with respect to improving generalization, following the wide/narrow minima hypothesis. Based on the argumentation of the previous section, we believe these techniques can have an important role in affecting forgetting as well. 
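To make the quantities in Eq. (5) concrete, here is a minimal PyTorch sketch of how one could estimate the largest Hessian eigenvalue with power iteration on Hessian-vector products and then evaluate the bound $\frac{1}{2}\lambda^{\max}_1 \|\Delta w\|^2$. This is a simplified illustration, not the authors' released code: the helper names, the fixed number of power iterations, and the assumption that the task-1 loss is a freshly computed mini-batch loss are choices made here for the example.

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Hessian-vector product via double backprop (Pearlmutter's trick).
    `loss` must have been computed from `params` with its graph still alive."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_dot_v = torch.dot(flat_grad, vec)
    hv = torch.autograd.grad(grad_dot_v, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv]).detach()

def top_hessian_eigenvalue(loss, params, iters=50):
    """Power iteration for the largest eigenvalue of the Hessian of `loss`."""
    n = sum(p.numel() for p in params)
    v = torch.randn(n, device=params[0].device)
    v = v / v.norm()
    eig = 0.0
    for _ in range(iters):
        hv = hessian_vector_product(loss, params, v)
        eig = torch.dot(v, hv).item()   # Rayleigh quotient with the current unit vector
        v = hv / (hv.norm() + 1e-12)
    return eig

def forgetting_bound(task1_loss, model_after_task1, model_after_task2):
    """Right-hand side of Eq. (5): 0.5 * lambda_max * ||w*_2 - w*_1||^2."""
    w1 = torch.cat([p.reshape(-1) for p in model_after_task1.parameters()]).detach()
    w2 = torch.cat([p.reshape(-1) for p in model_after_task2.parameters()]).detach()
    lam = top_hessian_eigenvalue(task1_loss, list(model_after_task1.parameters()))
    return 0.5 * lam * (w2 - w1).norm().item() ** 2
```

Feeding in the task-1 training loss evaluated at the task-1 solution together with the two trained models yields the kind of upper bound whose correlation with measured forgetting is examined in Figures 2c and 2d.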
In the following section, we will validate this through solid empirical evidence that agrees with our stated hypothesis. 4.1 Optimization setting: learning rate, batch size, and optimizer There has been a large body of prior work studying the relationship between the learning rate, batch size, and generalization. One common technique of analysis in this direction is to measure the largest eigenvalues of the Hessian of the loss function, which quantifies the local curvature around minima [39, 16, 33, 32, 33]. Followed by the work by Keskar et al. [39], several other papers studied the correlation between minima wideness and generalization [32, 51, 33]. The learning rate and batch size influence both the endpoint curvature and the whole trajectory [77, 17, 45]. A high learning rate or a small batch size limits the maximum spectral norm along the path found by SGD from the beginning of training [33]. This is further studied by Jastrzebski et al. [34], showing that using a large learning rate in the initial phase of training reduces the variance of the gradient, and improves the conditioning of the covariance of gradients which itself is close to the Hessian in terms of the largest eigenvalues [81]. Although having a higher learning rate tends to be helpful since it increases the probability of converging to a wider minima [34], considering a continual optimization problem, we can see it has another consequence: it contributes to the rate of change (i.e., ∆w in (5)). Using a higher learning rate means applying a higher update to the neural network weights. Therefore, since the objective function changes thorough time, having a high learning rate is a double-edged sword. Intuitively speaking, decreasing the learning rate across tasks prevents the parameters from going far from the current optimum, which helps reduce forgetting. One natural solution could be to start with a high initial learning rate for the first task to obtain a wide and stable minima. Then, for each subsequent task, slightly decrease the learning rate but also decrease the batch-size instead, as suggested in [69]. Regarding the choice of an optimizer, we argue for the effectiveness of SGD in continual learning setting compared to adaptive optimizers. Although the adaptive gradient methods such as Adam [40] tend to perform well in the initial phase of training, they are outperformed by SGD at later stages of training due to generalization issues [38, 10]. Wilson et al. [76] show that even for a toy quadratic problem, Adam generalizes provably worse than SGD. Moreover, Ge et al. [19] study the effectiveness of exponentially decaying learning rate and show that in its final iterate, it can achieve a near-optimal error in stochastic optimization problems. Connection to continual learning. The effect of learning rate and batch-size has not been directly studied in the context of continual learning, to the extent of our knowledge. However, we find that the reported hyper-parameters in several works match our findings. Using a small batch size is very common across the continual learning methods. OGD [14] uses a small batch size of 10, similar to several other works [9, 7, 8]. EWC [41] uses a batch size of 32 and also the learning rate decay of 0.95. iCaRL [61] starts from a large learning rate and decays the learning at certain epochs exponentially by a factor of 0.2, while it uses a larger batch size of 128. Finally, PackNet [49] also reports using a learning rate decay by a factor of 0.1. 
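As a small illustration of the training regime sketched above (a large initial learning rate decayed at each new task, together with a shrinking batch size, optimized with plain SGD), the following Python sketch shows where these choices enter a continual-learning loop. The constants, function names, and loop skeleton are illustrative assumptions, not the paper's tuned hyperparameters.

```python
import torch

def stable_regime_hparams(task_id, lr0=0.1, lr_decay=0.8,
                          batch0=64, batch_decay=0.8, min_batch=10):
    """Per-task hyperparameters: decay the learning rate and shrink the
    batch size from one task to the next (illustrative constants)."""
    lr = lr0 * (lr_decay ** task_id)
    batch_size = max(min_batch, int(batch0 * (batch_decay ** task_id)))
    return lr, batch_size

def make_optimizer(model, lr):
    # Plain SGD (here with momentum) rather than an adaptive optimizer,
    # following the argument for SGD in this setting.
    return torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

# Skeleton of a continual-learning loop; `tasks`, `make_loader`, `model`,
# and `train_one_task` are assumed to be provided by the surrounding code,
# and `model` is assumed to contain dropout layers.
# for task_id in range(num_tasks):
#     lr, batch_size = stable_regime_hparams(task_id)
#     loader = make_loader(tasks[task_id], batch_size=batch_size)
#     optimizer = make_optimizer(model, lr)
#     train_one_task(model, loader, optimizer)
```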
When it comes to choosing the optimizer, the literature mostly prefers SGD with momentum over the adaptive gradient methods. With the exception of VCL [56], which uses Adam, several other algorithms such as A-GEM, OGD, EWC, iCaRL, and PackNet use the standard SGD. 4.2 Regularization: dropout and weight decay We relate the theoretical insights on dropout and L2 regularization (weight decay) to our analysis in the previous section. We first argue for the effectiveness of dropout, and then we discuss why L2 regularization might hurt the performance in a continual learning setting. Dropout [27] is a well-established technique in deep learning, which is also well-studied theoretically [3, 70, 26, 18, 74]. Wei et al. [74] showed that dropout has both implicit and explicit but entangled regularization effects: More specifically, they provided an approximation for the explicit effect on the i-th hidden layer (denoted by hi) under input x by: (p/p−1)[∇ 2 hi L]T [diag(h2i )], where p is the dropout probability, L is the loss function, and diag(v) is a diagonal matrix of vector v. This term encourages the flatness of the minima since it minimizes the second derivative of the loss with respect to the hidden activation (that is tightly correlated with the curvature with respect to weights). Thus, dropout regularization reduces the R.H.S of Eq. (5). Note that by regularizing the activations norm, it also pushes down the norm of the weights, hence encouraging to find a minima close to 0, which in turn could reduce the norm of ∆w. Intuitively, we can understand this effect also from the tendency of dropout to create redundant features. This will reduce the effective number of dimensions of the representation, increasing the number of small-magnitude eigenvalues for the Hessian. As a consequence, gradient updates of the new tasks are less likely to lie on the space spanned by significant eigendirections of previous losses, which results in lesser forgetting. With respect to L2 regularization, while intuitively it should help, we make two observations. First, dropout is data-dependent, while L2 is not. That means balancing the effect of regularization with learning is harder, and in practice, it seems to work worse (both for the currently learned task and for reducing forgetting while maintaining good performance on the new task). Secondly, when combined with Batch Normalization [30], L2 regularization leads to an Exponential Learning Rate Schedule [47], meaning that in each iteration, multiplying the initial learning rate by (1 + α) for some α > 0 that depends on the momentum and weight decay rate. Hence, using L2 regularization is equivalent to increasing the learning rate overall, potentially leading to larger displacements ∆w. Connection to continual learning. To the best of our knowledge, the work by Goodfellow et al. [22] is the first to empirically study the importance of the dropout technique in the continual learning setting. They hypothesize that dropout increases the optimal size of the network by regularizing and constraining the capacity to be just barely sufficient to perform the first task. However, by observing some inconsistent results on dissimilar tasks, they suggested dropout may have other beneficial effects too. Very recently, [54] studied the effectiveness of dropout for continual learning from the gating perspective. Our work extends their analysis in a more general setting by studying the regularization effect of dropout and its connection to loss landscape. 
Finally, [43] conducted a comprehensive empirical study on the effect of weight decay and dropout on the continual learning performance and reported that the model consistently benefits from dropout regularization as opposed to weight decay which results in increased forgetting and lower performance on the final model. 5 Experiments and results In this section, after explaining our experimental setup, we show the relationship between the curvature of the loss function and the amount of forgetting. We use the terms Stable and Plastic (Naive) to distinguish two different training regimes. The stable network (or stable SGD) exploits the dropout regularization, large initial learning rate with exponential decay schedule at the end of each task, and small batch size, as explained in Sec. 4. In contrast, the plastic (naive) SGD model does not exploit these techniques. In the second experiment, we challenge the stable network and compare it to various state of the art methods on a large number of tasks and more difficult benchmarks. In Appendix C.8, we will show that the stable training regime can be integrated into other continual learning methods and improve their performance significantly. 5.1 Experimental setup Here, we discuss our experimental methodologies. The decisions regarding the datasets, network architectures, continual learning setup (e.g., number of tasks, training epochs per task), hyperparameters, and evaluation metrics are chosen to be consistent with several other studies [7, 9, 8, 14], making it easy to compare our results. For all experiments, we report the average and standard deviation over five runs, each with a different random seed. For brevity, we include the detailed hyper-parameters, the code, and instructions for reproducing the results in the supplementary file. Datasets. We perform our experiments on three standard continual learning benchmarks: Permuted MNIST [22], Rotated MNIST, and Split CIFAR-100. While we agree with [15] regarding the drawbacks of Permuted MNIST in continual learning settings, we believe for consistency with other studies, it is essential to report the results on this dataset as well. Moreover, we report our results on Rotated MNIST and CIFAR-100 that are more challenging and realistic datasets for continual learning benchmarks, once the number of tasks is large. Each task of permuted MNIST is generated by random shuffling of the pixels of images such that the permutation is the same for the images of the same task, but different across different tasks. Rotated MNIST is generated by the continual rotation of the MNIST images where each task applies a fixed random image rotation (between 0 and 180 degrees) to the original dataset. Split CIFAR-100 is a variant of the CIFAR-100 where each task contains the data from 5 random classes (without replacement) out of the total 100 classes. Models. In our first experiment (Sec. 5.2), we evaluate the continual learning performance over five sequential tasks to provide fine-grained metrics for each task. For this experiment, we use a feed-forward neural network with two hidden layers, each with 100 ReLU neurons and use the deflated power iteration for computing eigenvalues [21]. For the second experiment (Sec. 5.3), we scale the experiments to 20 tasks and use a two-layer network with 256 ReLU neurons in each layer for MNIST datasets, and a ResNet18, with three times fewer feature maps across all layers for CIFAR experiments. 
These architectures have been previously chosen in several studies [7, 9, 14, 8]. Evaluation. We use two metrics from [6, 7, 9] to evaluate continual learning algorithms when the number of tasks is large. (1) Average Accuracy: The average validation accuracy after the model has been trained sequentially up to task t, defined by: At = 1 t t∑ i=1 at,i (7) where, at,i is the validation accuracy on dataset i when the model finished learning task t. (2) Average Forgetting: The average forgetting after the model has been trained sequentially on all tasks. Forgetting is defined as the decrease in performance at each of the tasks between their peak accuracy and their accuracy after the continual learning experience has finished. For a continual learning dataset with T sequential tasks, it is defined by: F = 1 T − 1 T−1∑ i=1 maxt∈{1,...,T−1} (at,i − aT,i) (8) To prevent confusion, we note that this definition of forgetting is different from what we studied in Section 3. Here, the average forgetting is an evaluation metric that is computed from validation accuracy of the model. 5.2 Stable versus Plastic networks Here, we verify the significance of the training regime in continual learning performance (Sec 3.) and demonstrate the effectiveness of the stability techniques (Sec 4.) in reducing the forgetting. Each row of Figure 3, represents one of three related concepts for each training regime on each dataset. First, the top row shows the evolution of accuracy on validation sets of each task during the continual learning experience. For instance, the blue lines in this row show the validation accuracy of task 1 throughout the learning experience. In the Middle row, we show the twenty sharpest eigenvalues of the curvatures of each task. In the bottom row, we measure the ℓ2 distance of network parameters between the parameters learned for each task, and the parameters learned for subsequent tasks. Aligned with our analysis in Section 3, we show that in contrast to the plastic regime, the stable training reduces the catastrophic forgetting (Fig. 3 (Top)) thanks to (1) decreasing the curvature (Fig. 3 (Middle)) and (2) shrinking the change of parameters (Fig. 3 (Bottom)). 5.3 Comparison with other methods In this experiment, we show that the stable network is a strong competitor for various continual learning algorithms. In this scaled experiment, we increase the number of tasks from 5 to 20, and provide results for Split CIFAR-100, which is a challenging benchmark for continual learning algorithms. The episodic memory size for A-GEM and ER-Reservoir is limited to be one example per class per task (i.e., 200 examples for MNIST experiments and 100 for CIFAR-100), similar to [9, 8]. To have a consistent naming with other studies, in this section, we use the word “Naive” to describe a plastic network in our paper. To evaluate each algorithm, we measure the average accuracy and forgetting (i.e., At and F in Sec. 5.1). Table 2 compares these metrics for each method once the continual learning experience is finished (i.e., after learning task 20). Moreover, Fig. 4 provides a more detailed picture of the average accuracy during the continual learning experience. To show that stable networks suffer less from catastrophic forgetting, we provide a comparison of the first task’s accuracy in the appendix. 
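Since the two evaluation metrics are defined precisely by Eqs. (7) and (8), a short, self-contained implementation may be helpful. The array layout (a T x T accuracy matrix, 0-indexed) and the function names here are assumptions made for the example, not code from the paper.

```python
import numpy as np

def average_accuracy(acc, t):
    """Eq. (7): A_t, the mean validation accuracy over tasks 1..t after the
    model has been trained on task t. `acc[t, i]` is the accuracy on task i
    after learning task t (both 0-indexed here)."""
    return float(np.mean(acc[t, : t + 1]))

def average_forgetting(acc):
    """Eq. (8): the average drop between each task's best accuracy during
    the first T-1 learning steps and its accuracy after the final task."""
    T = acc.shape[0]
    drops = [np.max(acc[: T - 1, i]) - acc[T - 1, i] for i in range(T - 1)]
    return float(np.mean(drops))

# Example: a 3-task run in which task 0 is partially forgotten.
acc = np.array([[0.95, 0.00, 0.00],
                [0.90, 0.94, 0.00],
                [0.85, 0.92, 0.93]])
print(average_accuracy(acc, t=2))   # mean of the last row: 0.90
print(average_forgetting(acc))      # ((0.95-0.85) + (0.94-0.92)) / 2 = 0.06
```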
While our stable network performs consistently better than other algorithms, we note that our proposed techniques are orthogonal to other works and can also be incorporated in them, as we show in Appendix C.8 6 Conclusion In this work, we have revisited the catastrophic forgetting problem from loss landscapes and optimization perspective and identify learning regimes and training techniques that contribute to the forgetting. The analytical insights yielded a series of effective and practical techniques that can reduce forgetting and increase the stability of neural networks in maintaining previous knowledge. We have studied these techniques through the lens of optimization by studied the wideness of the loss surfaces around the local minima. However, they might have other confounding factors for reducing catastrophic forgetting as well. We call for more theoretical research to further their role in demystifying trading off the stability plasticity dilemma and its effect on continual learning. Finally, we have empirically observed that these simple techniques proved to be more effective than some of the recent approaches (e.g., regularization based methods, or memory-based methods) but are orthogonal to them in the sense that our practical recommendations and provided insights on loss perspective can be incorporated to them. Broader Impact Continual Learning aims for effectively training a model from sequential tasks while making sure the model maintains a reasonable performance on the previous ones. It’s an integral part of Artificial General Intelligence (AGI) that reduces the cost of retraining (time, computation, resources, energy) and mitigates the need for storing all previous data to respect users’ privacy concerns better. Reducing catastrophic forgetting may potentially risk privacy for data that are explicitly wanted to be forgotten. This calls for more future research into formalizing and proposing continual learning agents that allow the identifiable parts of data to be forgotten, but the general knowledge is maintained. The research presented in this paper can be used for many different application areas and a particular use may have both positive or negative implications. Besides those, we are not aware of any immediate short term negative impact. Acknowledgment SIM and HG acknowledge support from the United States National Science Foundation through grant CNS-1750679. The authors thank Anonymous Reviewers, Jonathan Schwarz, Sepehr Sameni, Hooman Shahrokhi, and Mohammad Sadegh Jazayeri for their valuable comments and feedback.
1. What is the main contribution of the paper regarding catastrophic forgetting in neural networks? 2. How does the proposed measure of forgetting relate to the geometric properties of local minima on the loss surface? 3. What are the suggested techniques to reduce the spectral norm of the Hessian around the loss minimum, and how do they impact continual learning tasks? 4. Why does the reviewer remain concerned about the performance of some baselines used in the paper's experiments? 5. How does the paper synthesize a broad range of results from the literature, and what are the implications for choosing hyperparameters in continual learning problems? 6. What are the two distinct contributions of the paper, according to the reviewer, and how do they relate to each other?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper hypothesizes that geometric properties around local minima on the loss surface of neural networks, which have previously been linked in the literature to generalization, affect catastrophic forgetting. Specifically, it proposes the product of the maximum eigenvalue of the Hessian at the minimum times the L2 norm of the difference between the weights as an (approximate) upper bound on the difference in loss on a task after training on the task itself and a consecutive one as a measure of forgetting. This bound suggests the use of techniques that reduce the spectral norm of the Hessian around the loss of the minimum that training converges to, of which there are plenty in the literature. The paper makes specific suggestions regarding batch size, initial learning rate and learning rate decay, which match choices made in previous works that tuned them as hyperparameters. Further it argues for the use of dropout. It empirically demonstrates that a neural network trained using only these hyperaparameter settings can perform well on continual learning tasks, even without the use of dedicated methods for preventing forgetting. ************** POST REBUTTAL UPDATE: I remain somewhat concerned about the performance of some baselines. E.g. since EWC is just SGD with an additional L2-like regularizer from the 2nd task on, I find it hard to imagine that Stable-SGD would outperform it. Better performing baselines would not change the conclusions from the paper (how to set hyperparameters for continual learning problems and that SGD can perform surprisingly well if tuned properly), so I'm still happy to argue in favor of acceptance, but not willing to give a higher score. Strengths The paper is overall well-argued and -motivated and clearly written. It does a good job of synthesizing a broad range of results from the literature. While previous works on continual learning (at least the ones that I'm more familiar with) had tuned hyperparameters like learning rate, batch size etc relatively "blindly", this paper makes reasonable suggestions for how to choose them (or at least their range) in a more informed manner. Weaknesses The technical novelty of the paper is fairly low, but given that it builds on a rather broad selection of results from the literature and synthesizes them well, I don't find this too problematic. The presentation of the experiments is somewhat unsatisfying to me. All the techniques that the paper suggests are options that would be perfectly reasonable to consider for a continual learning method. So reporting that the "vanilla" neural network with the right hyperparameters outperforms continual learning methods leaves me with the impression that the baselines were not tuned properly. Indeed the grid search over hyperparameters for the baselines as per the appendix is not exactly exhaustive (and it would be better to do a random search over the range anyway). This has been mitigated to a degree by the additional results using the insights from the paper to improve the baselines. My suggestion would be to emphasize more clearly that the paper makes two distinct contributions: (1) plain neural networks can be much stronger baselines for continual learning than previously thought if using the right hyperparameters and (2) those hyperparameters choices are also useful improving continual learning techniques. 
I personally would expect the latter point to be much more relevant for the community, however this is unfortunately the point that does not come across as clearly from the way the paper is currently written. All things considered, I would see this as a borderline paper tending towards an accept. The observations are interesting and both write-up and experimental design are polished enough, although I think that the paper could be strengthened by restructuring the emphasis in the presentation of its contributions as discussed above, so I would not mind seeing an improved version at a future conference either.
NIPS
Title Understanding the Role of Training Regimes in Continual Learning Abstract Catastrophic forgetting affects the training of neural networks, limiting their ability to learn multiple tasks sequentially. From the perspective of the well established plasticity-stability dilemma, neural networks tend to be overly plastic, lacking the stability necessary to prevent the forgetting of previous knowledge, which means that as learning progresses, networks tend to forget previously seen tasks. This phenomenon coined in the continual learning literature, has attracted much attention lately, and several families of approaches have been proposed with different degrees of success. However, there has been limited prior work extensively analyzing the impact that different training regimes – learning rate, batch size, regularization method– can have on forgetting. In this work, we depart from the typical approach of altering the learning algorithm to improve stability. Instead, we hypothesize that the geometrical properties of the local minima found for each task play an important role in the overall degree of forgetting. In particular, we study the effect of dropout, learning rate decay, and batch size, on forming training regimes that widen the tasks’ local minima and consequently, on helping it not to forget catastrophically. Our study provides practical insights to improve stability via simple yet effective techniques that outperform alternative baselines. 1 Introduction We study the continual learning problem, where a neural network model should learn a sequence of tasks rather than a single one. A significant challenge in continual learning (CL) is that during training on each task, the data from previous ones are unavailable. One consequence of applying typical learning algorithms under such a scenario is that as the model learns newer tasks, the performance of the model on older ones degrades. This phenomenon is known as “catastrophic forgetting” [52]. This forgetting problem is closely related to the “stability-plasticity dilemma” [53], which is a common challenge for both biological and artificial neural networks. Ideally, a model needs plasticity to obtain new knowledge and adapt to new environments, while it also requires stability to prevent forgetting the knowledge from previous environments. If the model is very plastic but not stable, it can learn fast, but it also forgets quickly. Without further modifications in training, a naively trained neural network tends to be plastic but not stable. Note that plasticity in this scenario does not necessarily imply that neural nets can learn new tasks efficiently. In fact, they tend to be extremely data inefficient. By being plastic, we mean a single update can change the function considerably. 1The code is available at: https://github.com/imirzadeh/stable-continual-learning 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. With the recent advances in the deep learning field, continual learning has gained more attention since the catastrophic forgetting problem poses a critical challenge for various applications [43, 37]. A growing body of research has attempted to tackle this problem in recent years [58, 72, 57, 29]. Despite the tangible improvements in the continual learning field, the core problem of catastrophic forgetting is still under-studied. 
In particular, a variety of neural network models and training approaches have been proposed, however, to the best of our knowledge, there has been little work on systematically understanding the effect of common training regimes created by varying dropout regularization, batch size, and learning rate on overcoming catastrophic forgetting2. Fig. 1 shows how significantly these techniques can overcome catastrophic forgetting. In this work, we explore the catastrophic forgetting problem from an optimization and loss landscape perspective (Section 3) and hypothesize that the geometry of the local minima found for the different learned tasks correlates with the ability of the model to not catastrophically forget. Empirically we show how a few well-known techniques, such as dropout and large learning rate with decay and shrinking batch size, can create a training regime to affect the stability of neural networks (Section 4). Some of them, like dropout, had been previously proposed to help continual learning [22, 54]. However, in this work, we provide an alternative justification of why these techniques are effective. Crucially, we empirically show that jointly with a carefully tuned learning rate schedule and batch size, these simple techniques can outperform considerably more complex algorithms meant to deal with continual learning (Section 5). Our analysis can be applied to any other training technique that widens the tasks’ local minima or shrinks the distance between them. Our work shows that plain neural networks can be much stronger baselines for continual learning than previously thought, provided that we use the right hyperparameters. Moreover, the choice for the hyperparameters is orthogonal to other continual learning methods and can be integrated with these methods, as we show in Appendix C.8. 2 Related work Several continual learning methods have been proposed to tackle catastrophic forgetting. Following [43], we categorize these algorithms into three general groups. The first group consists of replay based methods that build and store a memory of the knowledge learned from old tasks [48, 62, 82, 68, 63], known as experience replay. iCaRL [61] learns in a class-incremental way by having a fixed memory that stores samples that are close to the center of each class. Averaged Gradient Episodic Memory (A-GEM) [7] is another example of these methods which build a dynamic episodic memory of parameter gradients during the learning process while ER-Reservoir [9] uses a Reservoir sampling method as its selection strategy. The methods in the second group use explicit regularization techniques to supervise the learning algorithm such that the network parameters are consistent during the learning process [41, 80, 44, 1, 42]. As a notable work, Elastic weight consolidation (EWC) [41], uses the Fisher information matrix as a proxy for weights’ importance and guides the gradient updates. They are usually inspired by a Bayesian perspective [56, 71, 67, 13, 64]. With a frequentist view, some other regularization based methods have utilized gradient information to protect previous knowledge [14, 24, 79]. For example, Orthogonal Gradient Descent (OGD) [14] uses the projection of the prediction gradients from new tasks on the subspace of previous tasks’ gradients to maintain the learned knowledge. Finally, in parameter isolation methods, in addition to potentially a shared part, different subsets of the model parameters are dedicated to each task [65, 78, 35, 60, 46]. 
This approach can be viewed 2Potential exceptions being the early work of Goodefellow et. al [22] and a recent one by Mirzadeh et. al [54] as a flexible gating mechanism, which enhances stability and controls the plasticity by activating different gates for each task. [50] proposes a neuroscience-inspired method for a context-dependent gating signal, such that only sparse, mostly non-overlapping patterns of units are active for any one task. PackNet [49] implements a controlled version of gating by using network pruning techniques to free up parameters after finishing each task. While continual learning is broader than just solving the catastrophic forgetting, in this work we will squarely focus on catastrophic forgetting, which has been an important aspect, if not the most important one, of research for Continual Learning in the last few years. Continual Learning as an emerging field in Artificial Intelligence is connected to many other areas such as Meta Learning [4, 35, 24, 62], Few Shot Learning [75, 20], Multi-task and Transfer Learning [24, 35] and the closely related problem of exploring task boundary or task detection [60, 2, 25, 36]. Very recently, Mirzadeh et al. [55] have studied the mode connectivity of continual learning and multitask learning minima. Moreover, Wallingford et al. [73] proposed a framework for integration of solutions across supervised learning, few-shot learning, continual learning, and efficient machine learning to facilitate the research in the intersection of these fields. 3 Forgetting during training Let us begin this section by introducing some notation to express the effect of forgetting during the sequential learning of tasks. For simplicity, let us consider the supervised learning case, which will be the focus of this work. We consider a sequence of K tasks Tk for k ∈ {1, 2, . . . ,K}. Let W ∈ R d be the parameter space for our model. The total loss on the training set for task k is denoted by Lk(w) = E[ℓk(w;x, y)] ≈ 1 |Tk| ∑ (x,y)∈Tk , ℓk(w;x, y) (1) where, the expectation is over the data distribution of task k and ℓk is a differentiable non-negative loss function associated with data point (x, y) for task k. In the continual learning setting, the model learns sequentially, without access to examples of previously seen tasks. For simplicity and brevity, let us focus on measuring the forgetting in continual learning with two tasks. It is easy to extend these findings to more tasks. Let w∗1 and w ∗ 2 be the convergent or optimum parameters after training has been finished for the first and second task sequentially. We formally define the forgetting (of the first task) as: F1 , L1(w ∗ 2)− L1(w ∗ 1). (2) We hypothesize that F1 strongly correlates with properties of the curvature of L1 around w ∗ 1 and L2 around w∗2 . In what follows, we will formalize this hypothesis. One important assumption that we rely on throughout this section is that we can use a secondorder Taylor expansion of our losses to understand the learning dynamics during the training of the model. While this might seem as a crude approximation in general — for a nonlinear function and non-infinitesimal displacement this approximation can be arbitrarily bad — we argue that the approximation has merit for our setting. In particular, we rely on the wealth of observations for overparametrized models where the loss tends to be very well behaved and almost convex in a reasonable neighborhood around their local minima. E.g. for deep linear models this property has been studied in [66]. 
Works as [11, 23] make similar claims for generic models. [31] also corroborates that within the NTK regime learning is well behaved. In continual learning, similar strong assumptions are made by most approaches that rely on approximating the posterior on the weights by a Gaussian [41] or a first-order approximation of the loss surface around the optimum [14]. Armed with this analytical tool, to compute the forgetting, we can approximate L1(w ∗ 2) around w ∗ 1 : L1(w ∗ 2) ≈ L1(w ∗ 1) + (w ∗ 2 − w ∗ 1) ⊤∇L1(w ∗ 1) + 1 2 (w∗2 − w ∗ 1) ⊤∇2L1(w ∗ 1)(w ∗ 2 − w ∗ 1) (3) ≈ L1(w ∗ 1) + 1 2 (w∗2 − w ∗ 1) ⊤∇2L1(w ∗ 1)(w ∗ 2 − w ∗ 1), (4) where, ∇2L1(w ∗ 1) is the Hessian for loss L1 at w ∗ 1 and the last equality holds because the model is assumed to converge to a stationary point where gradient’s norm vanishes, thus ∇L1(w ∗ 1) ≈ 0. Under the assumption that the critical point is a minimum (or the plateau we get stuck in surrounds a minimum), we know that the Hessian needs to be positive semi-definite. Defining ∆w = w∗2 −w ∗ 1 as the relocation vector, we can bound the forgetting F1 as follows: F1 = L1(w ∗ 2)− L1(w ∗ 1) ≈ 1 2 ∆w⊤∇2L1(w ∗ 1)∆w ≤ 1 2 λmax1 ‖∆w‖ 2 , (5) where λmax1 is the maximum eigenvalue of ∇ 2L1(w ∗ 1). Fig. 2a shows how wider L1 (lower λ max 1 ) leads to less forgetting, both in terms of an illustrative example as well as showing empirical evidence for this relationship on Rotated MNIST and Permuted MNIST. A few notes on this bound. First, the bound is achieved when ∆w is co-aligned to the eigenvector corresponding to λ1. Given that the displacement is generated by the learning process on task 2, we can think of it as a random vector with respect to the eigenvectors of the Hessian of the first task. We know, therefore, that the tightness of the bound correlates with the dimensionality of w. In the extreme one-dimensional case, the Hessian matrix becomes a scalar given by λmax1 , and the bound is exact. As we increase the number of dimensions, the probability of two vectors to be perpendicular goes to 1. Hence in high-dimensional spaces is more likely for the bound to be relatively loose and for the entire spectrum of eigenvalues to play a much more important role in defining F1. Namely, as the number of eigenvalues with low magnitude increases, the more likely it is for F1 to be small. Assuming that the displacement vector is equally distributed over all the eigenvectors, then the trace of the Hessian will correlate stronger with F1 than the largest eigenvalue. However, reasoning in terms of the spectrum can be impractical (note, for example, that one can not trivially re-write the bound in terms of the trace). So we believe it is useful to think about λmax1 as long as any conclusion is contextualized correctly and the training regime we consider, implicitly, is aimed at lowering the entire spectrum, not just the largest eigenvalue. We also want to highlight λmax1 has been used previously to describe the width of a local minima [28, 39], with similar notes regarding the role of the entire spectrum [12, 39]. This property is central to the wide/narrow minima hypothesis for why neural networks generalize well. Our hypothesis is not tied to the generalization of wide minima, but we rely on the same or at least a very related concept of width. Therefore, to reduce forgetting, each task should push its learning towards wider minima and can employ the same techniques used to widen the minima to improve generalization. Resuming from Eq. 
5, controlling the Hessian spectrum without controlling the norm of the displacement, however, might not ensure that F1 is minimized. ‖∆w‖ is technically controlled by the subsequent tasks. We first notice, empirically, that enforcing widening the minima of the next task (for the same reason of reducing forgetting on itself) inhibits additionally forgetting for the first task (see Table 1; not stable/plastic means relying on training regimes that encourage/do not encourage wide minima. We empirically estimate the width of minima as well, see appendix C for details). We make the observation that the width of the minima (norm of the eigenvalues) correlates with the norm of the weights. Hence the solutions in the stable learning regime tend to be closer to 0, which automatically decrease ‖∆w‖. Additionally, ‖∆w‖ relates to λmax2 also due to typical learning terminating near a minima, rather than at the minima. Refer to fig. 2b for an illustration. Specifically, the convergence criterion is usually satisfied in the ǫ-neighborhood of w∗2 . If we write the second order Taylor approximation of L2 around w ∗ 2 , we get: L2(ŵ2)− L2(w ∗ 2) ≈ 1 2 (ŵ2 − w ∗ 2) ⊤∇2L2(ŵ2)(ŵ2 − w ∗ 2) ≤ 1 2 λmax2 ‖w ∗ 2 − ŵ2‖ 2 ≤ ǫ, (6) where, the first equality holds since ∇L2(w ∗ 2) = 0. Thus, by decreasing λ max 2 , ŵ2 can be reached further from w∗2 since the ǫ-neighborhood is larger, and closer to w ∗ 1 . A more formal analysis is given in the appendix. Note that as we enforce the error on task 2 to be lower, the argument above weakens. In the limit, if we assume you converged on task 2, the distance does not depend on curvature, just ‖w∗1 − w ∗ 2‖. However, the choice of which minima w ∗ 2 learning prefers will still affect the distance, and as argued above, if wider minima tend to be closer to 0, then they tend to be closer to each other too. Collating all of these observations together we propose the following hypothesis: Hypothesis. The amount of forgetting that a neural network exhibits from learning the tasks sequentially, correlates with the geometrical properties of the convergent points. In particular, the wider these minima are, the less forgetting happens. We empirically verify the relationship between forgetting and the upper bound derived in E.q. (5). We approximate the Hessian with the largest eigenvalue of the loss function. The results in two common continual learning benchmarks is shown in Figures 2c and 2d. In the figure, the dots represent different neural network training regimes with different settings (e.g., with and without dropout, with and without learning rate decay, different initial learning rates, different batch sizes, different random initialization). See section 4 to find out how these techniques can lead to different loss geometries. All of the models have roughly 90% accuracy on task 2. We can see that our derived measure has high correlation with the forgetting. 4 Training Regimes: techniques affecting stability and forgetting In this section, we describe a set of widely used techniques that are known to affect the width of the minima (eigenspectrum of Hessian) as well as the length of the path taken by learning (‖∆w‖). These observations had been generally made with respect to improving generalization, following the wide/narrow minima hypothesis. Based on the argumentation of the previous section, we believe these techniques can have an important role in affecting forgetting as well. 
In the following section, we will validate this through solid empirical evidence that agrees with our stated hypothesis. 4.1 Optimization setting: learning rate, batch size, and optimizer There has been a large body of prior work studying the relationship between the learning rate, batch size, and generalization. One common technique of analysis in this direction is to measure the largest eigenvalues of the Hessian of the loss function, which quantifies the local curvature around minima [39, 16, 33, 32, 33]. Followed by the work by Keskar et al. [39], several other papers studied the correlation between minima wideness and generalization [32, 51, 33]. The learning rate and batch size influence both the endpoint curvature and the whole trajectory [77, 17, 45]. A high learning rate or a small batch size limits the maximum spectral norm along the path found by SGD from the beginning of training [33]. This is further studied by Jastrzebski et al. [34], showing that using a large learning rate in the initial phase of training reduces the variance of the gradient, and improves the conditioning of the covariance of gradients which itself is close to the Hessian in terms of the largest eigenvalues [81]. Although having a higher learning rate tends to be helpful since it increases the probability of converging to a wider minima [34], considering a continual optimization problem, we can see it has another consequence: it contributes to the rate of change (i.e., ∆w in (5)). Using a higher learning rate means applying a higher update to the neural network weights. Therefore, since the objective function changes thorough time, having a high learning rate is a double-edged sword. Intuitively speaking, decreasing the learning rate across tasks prevents the parameters from going far from the current optimum, which helps reduce forgetting. One natural solution could be to start with a high initial learning rate for the first task to obtain a wide and stable minima. Then, for each subsequent task, slightly decrease the learning rate but also decrease the batch-size instead, as suggested in [69]. Regarding the choice of an optimizer, we argue for the effectiveness of SGD in continual learning setting compared to adaptive optimizers. Although the adaptive gradient methods such as Adam [40] tend to perform well in the initial phase of training, they are outperformed by SGD at later stages of training due to generalization issues [38, 10]. Wilson et al. [76] show that even for a toy quadratic problem, Adam generalizes provably worse than SGD. Moreover, Ge et al. [19] study the effectiveness of exponentially decaying learning rate and show that in its final iterate, it can achieve a near-optimal error in stochastic optimization problems. Connection to continual learning. The effect of learning rate and batch-size has not been directly studied in the context of continual learning, to the extent of our knowledge. However, we find that the reported hyper-parameters in several works match our findings. Using a small batch size is very common across the continual learning methods. OGD [14] uses a small batch size of 10, similar to several other works [9, 7, 8]. EWC [41] uses a batch size of 32 and also the learning rate decay of 0.95. iCaRL [61] starts from a large learning rate and decays the learning at certain epochs exponentially by a factor of 0.2, while it uses a larger batch size of 128. Finally, PackNet [49] also reports using a learning rate decay by a factor of 0.1. 
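Putting the learning-rate and batch-size recommendations above into code, a per-task schedule could look as follows; this is a minimal sketch, and the constants (initial learning rate 0.1, the per-task decay factors, the batch-size floor) are illustrative placeholders rather than the tuned values reported in the paper's appendix.

import torch

def make_task_optimizer(model, task_id, base_lr=0.1, task_lr_decay=0.8,
                        base_batch_size=64, task_batch_decay=0.9):
    # Illustrative "stable" regime: a large learning rate for the first task, then a
    # slightly smaller learning rate and batch size for every subsequent task.
    lr = base_lr * (task_lr_decay ** task_id)
    batch_size = max(10, int(base_batch_size * (task_batch_decay ** task_id)))
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    return optimizer, batch_size

def within_task_scheduler(optimizer, gamma=0.95):
    # Exponential learning-rate decay, stepped once per epoch inside a task.
    return torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

Within each task, within_task_scheduler would be stepped once per epoch, while make_task_optimizer is called once at the start of every new task.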
When it comes to choosing the optimizer, the literature mostly prefers SGD with momentum over the adaptive gradient methods. With the exception of VCL [56], which uses Adam, several other algorithms such as A-GEM, OGD, EWC, iCaRL, and PackNet use standard SGD.

4.2 Regularization: dropout and weight decay

We relate the theoretical insights on dropout and L2 regularization (weight decay) to our analysis in the previous section. We first argue for the effectiveness of dropout, and then we discuss why L2 regularization might hurt the performance in a continual learning setting. Dropout [27] is a well-established technique in deep learning, which is also well studied theoretically [3, 70, 26, 18, 74]. Wei et al. [74] showed that dropout has both implicit and explicit, but entangled, regularization effects. More specifically, they provided an approximation for the explicit effect on the i-th hidden layer (denoted by h_i) under input x as (p/(1−p)) [∇²_{h_i} L]^⊤ [diag(h_i²)], where p is the dropout probability, L is the loss function, and diag(v) is a diagonal matrix of vector v. This term encourages the flatness of the minima since it minimizes the second derivative of the loss with respect to the hidden activations (which is tightly correlated with the curvature with respect to the weights). Thus, dropout regularization reduces the R.H.S. of Eq. (5). Note that by regularizing the activations' norm, it also pushes down the norm of the weights, hence encouraging learning to find a minimum close to 0, which in turn could reduce the norm of Δw. Intuitively, we can understand this effect also from the tendency of dropout to create redundant features. This will reduce the effective number of dimensions of the representation, increasing the number of small-magnitude eigenvalues of the Hessian. As a consequence, gradient updates of the new tasks are less likely to lie in the space spanned by significant eigendirections of previous losses, which results in less forgetting. With respect to L2 regularization, while intuitively it should help, we make two observations. First, dropout is data-dependent, while L2 is not. That means balancing the effect of regularization with learning is harder, and in practice, it seems to work worse (both for the currently learned task and for reducing forgetting while maintaining good performance on the new task). Secondly, when combined with Batch Normalization [30], L2 regularization leads to an Exponential Learning Rate Schedule [47], meaning that in each iteration the effective learning rate is multiplied by (1 + α) for some α > 0 that depends on the momentum and weight decay rate. Hence, using L2 regularization is equivalent to increasing the learning rate overall, potentially leading to larger displacements Δw.

Connection to continual learning. To the best of our knowledge, the work by Goodfellow et al. [22] is the first to empirically study the importance of the dropout technique in the continual learning setting. They hypothesize that dropout increases the optimal size of the network by regularizing and constraining the capacity to be just barely sufficient to perform the first task. However, by observing some inconsistent results on dissimilar tasks, they suggested dropout may have other beneficial effects too. Very recently, [54] studied the effectiveness of dropout for continual learning from the gating perspective. Our work extends their analysis to a more general setting by studying the regularization effect of dropout and its connection to the loss landscape.
Finally, [43] conducted a comprehensive empirical study on the effect of weight decay and dropout on the continual learning performance and reported that the model consistently benefits from dropout regularization as opposed to weight decay which results in increased forgetting and lower performance on the final model. 5 Experiments and results In this section, after explaining our experimental setup, we show the relationship between the curvature of the loss function and the amount of forgetting. We use the terms Stable and Plastic (Naive) to distinguish two different training regimes. The stable network (or stable SGD) exploits the dropout regularization, large initial learning rate with exponential decay schedule at the end of each task, and small batch size, as explained in Sec. 4. In contrast, the plastic (naive) SGD model does not exploit these techniques. In the second experiment, we challenge the stable network and compare it to various state of the art methods on a large number of tasks and more difficult benchmarks. In Appendix C.8, we will show that the stable training regime can be integrated into other continual learning methods and improve their performance significantly. 5.1 Experimental setup Here, we discuss our experimental methodologies. The decisions regarding the datasets, network architectures, continual learning setup (e.g., number of tasks, training epochs per task), hyperparameters, and evaluation metrics are chosen to be consistent with several other studies [7, 9, 8, 14], making it easy to compare our results. For all experiments, we report the average and standard deviation over five runs, each with a different random seed. For brevity, we include the detailed hyper-parameters, the code, and instructions for reproducing the results in the supplementary file. Datasets. We perform our experiments on three standard continual learning benchmarks: Permuted MNIST [22], Rotated MNIST, and Split CIFAR-100. While we agree with [15] regarding the drawbacks of Permuted MNIST in continual learning settings, we believe for consistency with other studies, it is essential to report the results on this dataset as well. Moreover, we report our results on Rotated MNIST and CIFAR-100 that are more challenging and realistic datasets for continual learning benchmarks, once the number of tasks is large. Each task of permuted MNIST is generated by random shuffling of the pixels of images such that the permutation is the same for the images of the same task, but different across different tasks. Rotated MNIST is generated by the continual rotation of the MNIST images where each task applies a fixed random image rotation (between 0 and 180 degrees) to the original dataset. Split CIFAR-100 is a variant of the CIFAR-100 where each task contains the data from 5 random classes (without replacement) out of the total 100 classes. Models. In our first experiment (Sec. 5.2), we evaluate the continual learning performance over five sequential tasks to provide fine-grained metrics for each task. For this experiment, we use a feed-forward neural network with two hidden layers, each with 100 ReLU neurons and use the deflated power iteration for computing eigenvalues [21]. For the second experiment (Sec. 5.3), we scale the experiments to 20 tasks and use a two-layer network with 256 ReLU neurons in each layer for MNIST datasets, and a ResNet18, with three times fewer feature maps across all layers for CIFAR experiments. 
These architectures have been previously chosen in several studies [7, 9, 14, 8].

Evaluation. We use two metrics from [6, 7, 9] to evaluate continual learning algorithms when the number of tasks is large. (1) Average Accuracy: the average validation accuracy after the model has been trained sequentially up to task t, defined by:

A_t = (1/t) Σ_{i=1}^{t} a_{t,i},    (7)

where a_{t,i} is the validation accuracy on dataset i when the model finished learning task t. (2) Average Forgetting: the average forgetting after the model has been trained sequentially on all tasks. Forgetting is defined as the decrease in performance at each of the tasks between their peak accuracy and their accuracy after the continual learning experience has finished. For a continual learning dataset with T sequential tasks, it is defined by:

F = (1/(T−1)) Σ_{i=1}^{T−1} max_{t∈{1,...,T−1}} (a_{t,i} − a_{T,i}).    (8)

To prevent confusion, we note that this definition of forgetting is different from what we studied in Section 3. Here, the average forgetting is an evaluation metric that is computed from the validation accuracy of the model.

5.2 Stable versus Plastic networks

Here, we verify the significance of the training regime in continual learning performance (Sec. 3) and demonstrate the effectiveness of the stability techniques (Sec. 4) in reducing the forgetting. Each row of Figure 3 represents one of three related concepts for each training regime on each dataset. First, the top row shows the evolution of accuracy on the validation sets of each task during the continual learning experience. For instance, the blue lines in this row show the validation accuracy of task 1 throughout the learning experience. In the middle row, we show the twenty sharpest eigenvalues of the curvature of each task. In the bottom row, we measure the ℓ2 distance between the parameters learned for each task and the parameters learned for subsequent tasks. Aligned with our analysis in Section 3, we show that, in contrast to the plastic regime, the stable training reduces catastrophic forgetting (Fig. 3 (Top)) thanks to (1) decreasing the curvature (Fig. 3 (Middle)) and (2) shrinking the change of parameters (Fig. 3 (Bottom)).

5.3 Comparison with other methods

In this experiment, we show that the stable network is a strong competitor for various continual learning algorithms. In this scaled experiment, we increase the number of tasks from 5 to 20, and provide results for Split CIFAR-100, which is a challenging benchmark for continual learning algorithms. The episodic memory size for A-GEM and ER-Reservoir is limited to one example per class per task (i.e., 200 examples for the MNIST experiments and 100 for CIFAR-100), similar to [9, 8]. To have a consistent naming with other studies, in this section we use the word “Naive” to describe a plastic network in our paper. To evaluate each algorithm, we measure the average accuracy and forgetting (i.e., A_t and F in Sec. 5.1). Table 2 compares these metrics for each method once the continual learning experience is finished (i.e., after learning task 20). Moreover, Fig. 4 provides a more detailed picture of the average accuracy during the continual learning experience. To show that stable networks suffer less from catastrophic forgetting, we provide a comparison of the first task’s accuracy in the appendix.
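The two metrics above can be computed directly from a matrix of per-task validation accuracies; a minimal sketch (the 0-indexed accuracy matrix acc, with acc[t, i] the accuracy on task i after learning task t, is an assumed input, not an artifact of the paper):

import numpy as np

def average_accuracy(acc, t):
    # Eq. (7): A_t, the mean validation accuracy over tasks 1..t after learning task t.
    # acc is a T x T matrix with acc[t, i] the accuracy on task i after learning task t.
    return float(acc[t, : t + 1].mean())

def average_forgetting(acc):
    # Eq. (8): for each of the first T-1 tasks, the gap between its best accuracy over
    # tasks 1..T-1 and its accuracy after the final task, averaged over those tasks.
    T = acc.shape[0]
    gaps = [acc[: T - 1, i].max() - acc[T - 1, i] for i in range(T - 1)]
    return float(np.mean(gaps))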
While our stable network performs consistently better than the other algorithms, we note that our proposed techniques are orthogonal to other works and can also be incorporated into them, as we show in Appendix C.8.

6 Conclusion

In this work, we have revisited the catastrophic forgetting problem from a loss landscape and optimization perspective and identified learning regimes and training techniques that contribute to forgetting. The analytical insights yielded a series of effective and practical techniques that can reduce forgetting and increase the stability of neural networks in maintaining previous knowledge. We have studied these techniques through the lens of optimization by studying the wideness of the loss surfaces around the local minima. However, they might have other confounding factors for reducing catastrophic forgetting as well. We call for more theoretical research to further clarify their role in demystifying the stability-plasticity trade-off and its effect on continual learning. Finally, we have empirically observed that these simple techniques proved to be more effective than some of the recent approaches (e.g., regularization-based methods or memory-based methods) but are orthogonal to them in the sense that our practical recommendations and the provided insights from the loss-landscape perspective can be incorporated into them.

Broader Impact

Continual learning aims to effectively train a model on sequential tasks while making sure the model maintains a reasonable performance on the previous ones. It is an integral part of Artificial General Intelligence (AGI) that reduces the cost of retraining (time, computation, resources, energy) and mitigates the need for storing all previous data, which better respects users' privacy concerns. Reducing catastrophic forgetting may potentially risk privacy for data that is explicitly meant to be forgotten. This calls for more future research into formalizing and proposing continual learning agents that allow the identifiable parts of the data to be forgotten while the general knowledge is maintained. The research presented in this paper can be used for many different application areas, and a particular use may have positive or negative implications. Besides those, we are not aware of any immediate short-term negative impact.

Acknowledgment

SIM and HG acknowledge support from the United States National Science Foundation through grant CNS-1750679. The authors thank the Anonymous Reviewers, Jonathan Schwarz, Sepehr Sameni, Hooman Shahrokhi, and Mohammad Sadegh Jazayeri for their valuable comments and feedback.
1. What is the main contribution of the paper on continual learning? 2. What are the strengths of the proposed approach, particularly in its theoretical grounding and empirical validation? 3. Do you have any concerns or suggestions regarding the paper's weaknesses, such as the choice of terminology or the motivation of the new forgetting measure? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors examine a novel quantity for continual learning which they call forgetting---a loss distance measure---which they use to motivate an examination of the geometry of minima to avoid catastrophic forgetting. They then show that a number of well-studied methods for encouraging wide minima also avoid catastrophic forgetting. Strengths I really like this paper. I have to admit that on reading the title and abstract I expected little more than a hyperparameter search but quickly realized what a substantive contribution this paper represents! The authors' observations line up with my own experiences on effective hyperparameters for 'naive' CL baselines, with a rigorous empirical grounding and a plausible theoretical justification. As the authors note, these observations can be used to inform and improve a wide range of other approaches as well as setting a higher bar for 'naive' baselines for other methods to compare to. I would hope that all papers will soon compare to SGD baselines inspired by the results of this paper. Weaknesses -Later in the paper, and in other work e.g., Chaudhry [5] which you cite, "forgetting" is a measure that looks at relative accuracy (I may be misremembering, but I think their definition is slightly different again from your definition in 5.1). But in eq (2) you define something related to loss. It would probably be better to pick a new word for this. -I think you could say a little more to motivate why your new forgetting measure is an important quantity for CL. I think it is quite important, but it would be good to explain more. Otherwise you are arguing that wide minima are good for reducing F_1, but not arguing why we care about reducing F_1. (If it weren't for the fact that your provide empirical evidence suggesting this leads to improvement of more standard metrics for CL performance, I would judge this more harshly, but it should still be addressed.) -Your analysis in l163 seems to point towards early stopping as an important tool for CL, perhaps that could get more attention.
NIPS
Title Understanding the Role of Training Regimes in Continual Learning Abstract Catastrophic forgetting affects the training of neural networks, limiting their ability to learn multiple tasks sequentially. From the perspective of the well established plasticity-stability dilemma, neural networks tend to be overly plastic, lacking the stability necessary to prevent the forgetting of previous knowledge, which means that as learning progresses, networks tend to forget previously seen tasks. This phenomenon coined in the continual learning literature, has attracted much attention lately, and several families of approaches have been proposed with different degrees of success. However, there has been limited prior work extensively analyzing the impact that different training regimes – learning rate, batch size, regularization method– can have on forgetting. In this work, we depart from the typical approach of altering the learning algorithm to improve stability. Instead, we hypothesize that the geometrical properties of the local minima found for each task play an important role in the overall degree of forgetting. In particular, we study the effect of dropout, learning rate decay, and batch size, on forming training regimes that widen the tasks’ local minima and consequently, on helping it not to forget catastrophically. Our study provides practical insights to improve stability via simple yet effective techniques that outperform alternative baselines. 1 Introduction We study the continual learning problem, where a neural network model should learn a sequence of tasks rather than a single one. A significant challenge in continual learning (CL) is that during training on each task, the data from previous ones are unavailable. One consequence of applying typical learning algorithms under such a scenario is that as the model learns newer tasks, the performance of the model on older ones degrades. This phenomenon is known as “catastrophic forgetting” [52]. This forgetting problem is closely related to the “stability-plasticity dilemma” [53], which is a common challenge for both biological and artificial neural networks. Ideally, a model needs plasticity to obtain new knowledge and adapt to new environments, while it also requires stability to prevent forgetting the knowledge from previous environments. If the model is very plastic but not stable, it can learn fast, but it also forgets quickly. Without further modifications in training, a naively trained neural network tends to be plastic but not stable. Note that plasticity in this scenario does not necessarily imply that neural nets can learn new tasks efficiently. In fact, they tend to be extremely data inefficient. By being plastic, we mean a single update can change the function considerably. 1The code is available at: https://github.com/imirzadeh/stable-continual-learning 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. With the recent advances in the deep learning field, continual learning has gained more attention since the catastrophic forgetting problem poses a critical challenge for various applications [43, 37]. A growing body of research has attempted to tackle this problem in recent years [58, 72, 57, 29]. Despite the tangible improvements in the continual learning field, the core problem of catastrophic forgetting is still under-studied. 
In particular, a variety of neural network models and training approaches have been proposed, however, to the best of our knowledge, there has been little work on systematically understanding the effect of common training regimes created by varying dropout regularization, batch size, and learning rate on overcoming catastrophic forgetting2. Fig. 1 shows how significantly these techniques can overcome catastrophic forgetting. In this work, we explore the catastrophic forgetting problem from an optimization and loss landscape perspective (Section 3) and hypothesize that the geometry of the local minima found for the different learned tasks correlates with the ability of the model to not catastrophically forget. Empirically we show how a few well-known techniques, such as dropout and large learning rate with decay and shrinking batch size, can create a training regime to affect the stability of neural networks (Section 4). Some of them, like dropout, had been previously proposed to help continual learning [22, 54]. However, in this work, we provide an alternative justification of why these techniques are effective. Crucially, we empirically show that jointly with a carefully tuned learning rate schedule and batch size, these simple techniques can outperform considerably more complex algorithms meant to deal with continual learning (Section 5). Our analysis can be applied to any other training technique that widens the tasks’ local minima or shrinks the distance between them. Our work shows that plain neural networks can be much stronger baselines for continual learning than previously thought, provided that we use the right hyperparameters. Moreover, the choice for the hyperparameters is orthogonal to other continual learning methods and can be integrated with these methods, as we show in Appendix C.8. 2 Related work Several continual learning methods have been proposed to tackle catastrophic forgetting. Following [43], we categorize these algorithms into three general groups. The first group consists of replay based methods that build and store a memory of the knowledge learned from old tasks [48, 62, 82, 68, 63], known as experience replay. iCaRL [61] learns in a class-incremental way by having a fixed memory that stores samples that are close to the center of each class. Averaged Gradient Episodic Memory (A-GEM) [7] is another example of these methods which build a dynamic episodic memory of parameter gradients during the learning process while ER-Reservoir [9] uses a Reservoir sampling method as its selection strategy. The methods in the second group use explicit regularization techniques to supervise the learning algorithm such that the network parameters are consistent during the learning process [41, 80, 44, 1, 42]. As a notable work, Elastic weight consolidation (EWC) [41], uses the Fisher information matrix as a proxy for weights’ importance and guides the gradient updates. They are usually inspired by a Bayesian perspective [56, 71, 67, 13, 64]. With a frequentist view, some other regularization based methods have utilized gradient information to protect previous knowledge [14, 24, 79]. For example, Orthogonal Gradient Descent (OGD) [14] uses the projection of the prediction gradients from new tasks on the subspace of previous tasks’ gradients to maintain the learned knowledge. Finally, in parameter isolation methods, in addition to potentially a shared part, different subsets of the model parameters are dedicated to each task [65, 78, 35, 60, 46]. 
² Potential exceptions being the early work of Goodfellow et al. [22] and a recent one by Mirzadeh et al. [54].

This approach can be viewed as a flexible gating mechanism, which enhances stability and controls the plasticity by activating different gates for each task. [50] proposes a neuroscience-inspired method for a context-dependent gating signal, such that only sparse, mostly non-overlapping patterns of units are active for any one task. PackNet [49] implements a controlled version of gating by using network pruning techniques to free up parameters after finishing each task. While continual learning is broader than just solving catastrophic forgetting, in this work we squarely focus on catastrophic forgetting, which has been an important aspect, if not the most important one, of research on continual learning in the last few years. Continual learning, as an emerging field in Artificial Intelligence, is connected to many other areas such as meta learning [4, 35, 24, 62], few-shot learning [75, 20], multi-task and transfer learning [24, 35], and the closely related problem of exploring task boundaries or task detection [60, 2, 25, 36]. Very recently, Mirzadeh et al. [55] have studied the mode connectivity of continual learning and multitask learning minima. Moreover, Wallingford et al. [73] proposed a framework for the integration of solutions across supervised learning, few-shot learning, continual learning, and efficient machine learning to facilitate research at the intersection of these fields.

3 Forgetting during training

Let us begin this section by introducing some notation to express the effect of forgetting during the sequential learning of tasks. For simplicity, let us consider the supervised learning case, which will be the focus of this work. We consider a sequence of K tasks T_k for k ∈ {1, 2, . . . , K}. Let W ⊆ ℝ^d be the parameter space for our model. The total loss on the training set for task k is denoted by

L_k(w) = E[ℓ_k(w; x, y)] ≈ (1/|T_k|) Σ_{(x,y)∈T_k} ℓ_k(w; x, y),    (1)

where the expectation is over the data distribution of task k and ℓ_k is a differentiable non-negative loss function associated with data point (x, y) for task k. In the continual learning setting, the model learns sequentially, without access to examples of previously seen tasks. For simplicity and brevity, let us focus on measuring the forgetting in continual learning with two tasks. It is easy to extend these findings to more tasks. Let w_1^* and w_2^* be the convergent or optimum parameters after training has finished for the first and second task sequentially. We formally define the forgetting (of the first task) as:

F_1 ≜ L_1(w_2^*) − L_1(w_1^*).    (2)

We hypothesize that F_1 strongly correlates with properties of the curvature of L_1 around w_1^* and of L_2 around w_2^*. In what follows, we will formalize this hypothesis. One important assumption that we rely on throughout this section is that we can use a second-order Taylor expansion of our losses to understand the learning dynamics during the training of the model. While this might seem a crude approximation in general (for a nonlinear function and a non-infinitesimal displacement the approximation can be arbitrarily bad), we argue that the approximation has merit for our setting. In particular, we rely on the wealth of observations for overparametrized models, where the loss tends to be very well behaved and almost convex in a reasonable neighborhood around their local minima. E.g., for deep linear models this property has been studied in [66].
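To make the quantity in Eq. (2) concrete, a minimal evaluation sketch (not from the paper; task1_loader, the loss function, and the two saved state dicts are assumed to be available) computes L_1 at the two parameter snapshots and takes their difference:

import copy
import torch

def task_loss(model, loader, loss_fn):
    # Empirical loss of Eq. (1): per-example loss averaged over one task's dataset
    # (loss_fn is assumed to use mean reduction, e.g. nn.CrossEntropyLoss()).
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += loss_fn(model(x), y).item() * y.size(0)
            n += y.size(0)
    return total / n

def forgetting_of_task1(model, task1_loader, loss_fn, w1_state, w2_state):
    # Eq. (2): F_1 = L_1(w_2^*) - L_1(w_1^*), using two saved parameter snapshots.
    model = copy.deepcopy(model)
    model.load_state_dict(w1_state)
    loss_at_w1 = task_loss(model, task1_loader, loss_fn)
    model.load_state_dict(w2_state)
    loss_at_w2 = task_loss(model, task1_loader, loss_fn)
    return loss_at_w2 - loss_at_w1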
Works as [11, 23] make similar claims for generic models. [31] also corroborates that within the NTK regime learning is well behaved. In continual learning, similar strong assumptions are made by most approaches that rely on approximating the posterior on the weights by a Gaussian [41] or a first-order approximation of the loss surface around the optimum [14]. Armed with this analytical tool, to compute the forgetting, we can approximate L1(w ∗ 2) around w ∗ 1 : L1(w ∗ 2) ≈ L1(w ∗ 1) + (w ∗ 2 − w ∗ 1) ⊤∇L1(w ∗ 1) + 1 2 (w∗2 − w ∗ 1) ⊤∇2L1(w ∗ 1)(w ∗ 2 − w ∗ 1) (3) ≈ L1(w ∗ 1) + 1 2 (w∗2 − w ∗ 1) ⊤∇2L1(w ∗ 1)(w ∗ 2 − w ∗ 1), (4) where, ∇2L1(w ∗ 1) is the Hessian for loss L1 at w ∗ 1 and the last equality holds because the model is assumed to converge to a stationary point where gradient’s norm vanishes, thus ∇L1(w ∗ 1) ≈ 0. Under the assumption that the critical point is a minimum (or the plateau we get stuck in surrounds a minimum), we know that the Hessian needs to be positive semi-definite. Defining ∆w = w∗2 −w ∗ 1 as the relocation vector, we can bound the forgetting F1 as follows: F1 = L1(w ∗ 2)− L1(w ∗ 1) ≈ 1 2 ∆w⊤∇2L1(w ∗ 1)∆w ≤ 1 2 λmax1 ‖∆w‖ 2 , (5) where λmax1 is the maximum eigenvalue of ∇ 2L1(w ∗ 1). Fig. 2a shows how wider L1 (lower λ max 1 ) leads to less forgetting, both in terms of an illustrative example as well as showing empirical evidence for this relationship on Rotated MNIST and Permuted MNIST. A few notes on this bound. First, the bound is achieved when ∆w is co-aligned to the eigenvector corresponding to λ1. Given that the displacement is generated by the learning process on task 2, we can think of it as a random vector with respect to the eigenvectors of the Hessian of the first task. We know, therefore, that the tightness of the bound correlates with the dimensionality of w. In the extreme one-dimensional case, the Hessian matrix becomes a scalar given by λmax1 , and the bound is exact. As we increase the number of dimensions, the probability of two vectors to be perpendicular goes to 1. Hence in high-dimensional spaces is more likely for the bound to be relatively loose and for the entire spectrum of eigenvalues to play a much more important role in defining F1. Namely, as the number of eigenvalues with low magnitude increases, the more likely it is for F1 to be small. Assuming that the displacement vector is equally distributed over all the eigenvectors, then the trace of the Hessian will correlate stronger with F1 than the largest eigenvalue. However, reasoning in terms of the spectrum can be impractical (note, for example, that one can not trivially re-write the bound in terms of the trace). So we believe it is useful to think about λmax1 as long as any conclusion is contextualized correctly and the training regime we consider, implicitly, is aimed at lowering the entire spectrum, not just the largest eigenvalue. We also want to highlight λmax1 has been used previously to describe the width of a local minima [28, 39], with similar notes regarding the role of the entire spectrum [12, 39]. This property is central to the wide/narrow minima hypothesis for why neural networks generalize well. Our hypothesis is not tied to the generalization of wide minima, but we rely on the same or at least a very related concept of width. Therefore, to reduce forgetting, each task should push its learning towards wider minima and can employ the same techniques used to widen the minima to improve generalization. Resuming from Eq. 
5, controlling the Hessian spectrum without controlling the norm of the displacement, however, might not ensure that F1 is minimized. ‖∆w‖ is technically controlled by the subsequent tasks. We first notice, empirically, that enforcing widening the minima of the next task (for the same reason of reducing forgetting on itself) inhibits additionally forgetting for the first task (see Table 1; not stable/plastic means relying on training regimes that encourage/do not encourage wide minima. We empirically estimate the width of minima as well, see appendix C for details). We make the observation that the width of the minima (norm of the eigenvalues) correlates with the norm of the weights. Hence the solutions in the stable learning regime tend to be closer to 0, which automatically decrease ‖∆w‖. Additionally, ‖∆w‖ relates to λmax2 also due to typical learning terminating near a minima, rather than at the minima. Refer to fig. 2b for an illustration. Specifically, the convergence criterion is usually satisfied in the ǫ-neighborhood of w∗2 . If we write the second order Taylor approximation of L2 around w ∗ 2 , we get: L2(ŵ2)− L2(w ∗ 2) ≈ 1 2 (ŵ2 − w ∗ 2) ⊤∇2L2(ŵ2)(ŵ2 − w ∗ 2) ≤ 1 2 λmax2 ‖w ∗ 2 − ŵ2‖ 2 ≤ ǫ, (6) where, the first equality holds since ∇L2(w ∗ 2) = 0. Thus, by decreasing λ max 2 , ŵ2 can be reached further from w∗2 since the ǫ-neighborhood is larger, and closer to w ∗ 1 . A more formal analysis is given in the appendix. Note that as we enforce the error on task 2 to be lower, the argument above weakens. In the limit, if we assume you converged on task 2, the distance does not depend on curvature, just ‖w∗1 − w ∗ 2‖. However, the choice of which minima w ∗ 2 learning prefers will still affect the distance, and as argued above, if wider minima tend to be closer to 0, then they tend to be closer to each other too. Collating all of these observations together we propose the following hypothesis: Hypothesis. The amount of forgetting that a neural network exhibits from learning the tasks sequentially, correlates with the geometrical properties of the convergent points. In particular, the wider these minima are, the less forgetting happens. We empirically verify the relationship between forgetting and the upper bound derived in E.q. (5). We approximate the Hessian with the largest eigenvalue of the loss function. The results in two common continual learning benchmarks is shown in Figures 2c and 2d. In the figure, the dots represent different neural network training regimes with different settings (e.g., with and without dropout, with and without learning rate decay, different initial learning rates, different batch sizes, different random initialization). See section 4 to find out how these techniques can lead to different loss geometries. All of the models have roughly 90% accuracy on task 2. We can see that our derived measure has high correlation with the forgetting. 4 Training Regimes: techniques affecting stability and forgetting In this section, we describe a set of widely used techniques that are known to affect the width of the minima (eigenspectrum of Hessian) as well as the length of the path taken by learning (‖∆w‖). These observations had been generally made with respect to improving generalization, following the wide/narrow minima hypothesis. Based on the argumentation of the previous section, we believe these techniques can have an important role in affecting forgetting as well. 
In the following section, we will validate this through solid empirical evidence that agrees with our stated hypothesis. 4.1 Optimization setting: learning rate, batch size, and optimizer There has been a large body of prior work studying the relationship between the learning rate, batch size, and generalization. One common technique of analysis in this direction is to measure the largest eigenvalues of the Hessian of the loss function, which quantifies the local curvature around minima [39, 16, 33, 32, 33]. Followed by the work by Keskar et al. [39], several other papers studied the correlation between minima wideness and generalization [32, 51, 33]. The learning rate and batch size influence both the endpoint curvature and the whole trajectory [77, 17, 45]. A high learning rate or a small batch size limits the maximum spectral norm along the path found by SGD from the beginning of training [33]. This is further studied by Jastrzebski et al. [34], showing that using a large learning rate in the initial phase of training reduces the variance of the gradient, and improves the conditioning of the covariance of gradients which itself is close to the Hessian in terms of the largest eigenvalues [81]. Although having a higher learning rate tends to be helpful since it increases the probability of converging to a wider minima [34], considering a continual optimization problem, we can see it has another consequence: it contributes to the rate of change (i.e., ∆w in (5)). Using a higher learning rate means applying a higher update to the neural network weights. Therefore, since the objective function changes thorough time, having a high learning rate is a double-edged sword. Intuitively speaking, decreasing the learning rate across tasks prevents the parameters from going far from the current optimum, which helps reduce forgetting. One natural solution could be to start with a high initial learning rate for the first task to obtain a wide and stable minima. Then, for each subsequent task, slightly decrease the learning rate but also decrease the batch-size instead, as suggested in [69]. Regarding the choice of an optimizer, we argue for the effectiveness of SGD in continual learning setting compared to adaptive optimizers. Although the adaptive gradient methods such as Adam [40] tend to perform well in the initial phase of training, they are outperformed by SGD at later stages of training due to generalization issues [38, 10]. Wilson et al. [76] show that even for a toy quadratic problem, Adam generalizes provably worse than SGD. Moreover, Ge et al. [19] study the effectiveness of exponentially decaying learning rate and show that in its final iterate, it can achieve a near-optimal error in stochastic optimization problems. Connection to continual learning. The effect of learning rate and batch-size has not been directly studied in the context of continual learning, to the extent of our knowledge. However, we find that the reported hyper-parameters in several works match our findings. Using a small batch size is very common across the continual learning methods. OGD [14] uses a small batch size of 10, similar to several other works [9, 7, 8]. EWC [41] uses a batch size of 32 and also the learning rate decay of 0.95. iCaRL [61] starts from a large learning rate and decays the learning at certain epochs exponentially by a factor of 0.2, while it uses a larger batch size of 128. Finally, PackNet [49] also reports using a learning rate decay by a factor of 0.1. 
When it comes to choosing the optimizer, the literature mostly prefers SGD with momentum over the adaptive gradient methods. With the exception of VCL [56], which uses Adam, several other algorithms such as A-GEM, OGD, EWC, iCaRL, and PackNet use the standard SGD. 4.2 Regularization: dropout and weight decay We relate the theoretical insights on dropout and L2 regularization (weight decay) to our analysis in the previous section. We first argue for the effectiveness of dropout, and then we discuss why L2 regularization might hurt the performance in a continual learning setting. Dropout [27] is a well-established technique in deep learning, which is also well-studied theoretically [3, 70, 26, 18, 74]. Wei et al. [74] showed that dropout has both implicit and explicit but entangled regularization effects: More specifically, they provided an approximation for the explicit effect on the i-th hidden layer (denoted by hi) under input x by: (p/p−1)[∇ 2 hi L]T [diag(h2i )], where p is the dropout probability, L is the loss function, and diag(v) is a diagonal matrix of vector v. This term encourages the flatness of the minima since it minimizes the second derivative of the loss with respect to the hidden activation (that is tightly correlated with the curvature with respect to weights). Thus, dropout regularization reduces the R.H.S of Eq. (5). Note that by regularizing the activations norm, it also pushes down the norm of the weights, hence encouraging to find a minima close to 0, which in turn could reduce the norm of ∆w. Intuitively, we can understand this effect also from the tendency of dropout to create redundant features. This will reduce the effective number of dimensions of the representation, increasing the number of small-magnitude eigenvalues for the Hessian. As a consequence, gradient updates of the new tasks are less likely to lie on the space spanned by significant eigendirections of previous losses, which results in lesser forgetting. With respect to L2 regularization, while intuitively it should help, we make two observations. First, dropout is data-dependent, while L2 is not. That means balancing the effect of regularization with learning is harder, and in practice, it seems to work worse (both for the currently learned task and for reducing forgetting while maintaining good performance on the new task). Secondly, when combined with Batch Normalization [30], L2 regularization leads to an Exponential Learning Rate Schedule [47], meaning that in each iteration, multiplying the initial learning rate by (1 + α) for some α > 0 that depends on the momentum and weight decay rate. Hence, using L2 regularization is equivalent to increasing the learning rate overall, potentially leading to larger displacements ∆w. Connection to continual learning. To the best of our knowledge, the work by Goodfellow et al. [22] is the first to empirically study the importance of the dropout technique in the continual learning setting. They hypothesize that dropout increases the optimal size of the network by regularizing and constraining the capacity to be just barely sufficient to perform the first task. However, by observing some inconsistent results on dissimilar tasks, they suggested dropout may have other beneficial effects too. Very recently, [54] studied the effectiveness of dropout for continual learning from the gating perspective. Our work extends their analysis in a more general setting by studying the regularization effect of dropout and its connection to loss landscape. 
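As a concrete illustration of where the dropout knob sits in the experiments described later, a minimal two-hidden-layer network of the kind used for the MNIST benchmarks could be defined as below; the hidden width matches the 20-task MNIST setup, while the dropout probability p = 0.25 is an illustrative placeholder rather than the paper's tuned value.

import torch.nn as nn

class StableMLP(nn.Module):
    # Two-hidden-layer ReLU network with dropout on the hidden activations;
    # p is the dropout probability discussed above.
    def __init__(self, in_dim=784, hidden=256, n_classes=10, p=0.25):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))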
Finally, [43] conducted a comprehensive empirical study on the effect of weight decay and dropout on the continual learning performance and reported that the model consistently benefits from dropout regularization as opposed to weight decay which results in increased forgetting and lower performance on the final model. 5 Experiments and results In this section, after explaining our experimental setup, we show the relationship between the curvature of the loss function and the amount of forgetting. We use the terms Stable and Plastic (Naive) to distinguish two different training regimes. The stable network (or stable SGD) exploits the dropout regularization, large initial learning rate with exponential decay schedule at the end of each task, and small batch size, as explained in Sec. 4. In contrast, the plastic (naive) SGD model does not exploit these techniques. In the second experiment, we challenge the stable network and compare it to various state of the art methods on a large number of tasks and more difficult benchmarks. In Appendix C.8, we will show that the stable training regime can be integrated into other continual learning methods and improve their performance significantly. 5.1 Experimental setup Here, we discuss our experimental methodologies. The decisions regarding the datasets, network architectures, continual learning setup (e.g., number of tasks, training epochs per task), hyperparameters, and evaluation metrics are chosen to be consistent with several other studies [7, 9, 8, 14], making it easy to compare our results. For all experiments, we report the average and standard deviation over five runs, each with a different random seed. For brevity, we include the detailed hyper-parameters, the code, and instructions for reproducing the results in the supplementary file. Datasets. We perform our experiments on three standard continual learning benchmarks: Permuted MNIST [22], Rotated MNIST, and Split CIFAR-100. While we agree with [15] regarding the drawbacks of Permuted MNIST in continual learning settings, we believe for consistency with other studies, it is essential to report the results on this dataset as well. Moreover, we report our results on Rotated MNIST and CIFAR-100 that are more challenging and realistic datasets for continual learning benchmarks, once the number of tasks is large. Each task of permuted MNIST is generated by random shuffling of the pixels of images such that the permutation is the same for the images of the same task, but different across different tasks. Rotated MNIST is generated by the continual rotation of the MNIST images where each task applies a fixed random image rotation (between 0 and 180 degrees) to the original dataset. Split CIFAR-100 is a variant of the CIFAR-100 where each task contains the data from 5 random classes (without replacement) out of the total 100 classes. Models. In our first experiment (Sec. 5.2), we evaluate the continual learning performance over five sequential tasks to provide fine-grained metrics for each task. For this experiment, we use a feed-forward neural network with two hidden layers, each with 100 ReLU neurons and use the deflated power iteration for computing eigenvalues [21]. For the second experiment (Sec. 5.3), we scale the experiments to 20 tasks and use a two-layer network with 256 ReLU neurons in each layer for MNIST datasets, and a ResNet18, with three times fewer feature maps across all layers for CIFAR experiments. 
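The Permuted and Rotated MNIST tasks described above can be generated with standard torchvision transforms; the following sketch is illustrative (the download path and the seed-based task construction are assumptions), not the exact data pipeline used in the paper.

import numpy as np
import torch
from torchvision import datasets, transforms
from torchvision.transforms import functional as TF

def permuted_mnist_task(seed, root="./data", train=True):
    # One Permuted-MNIST task: a fixed random pixel permutation, shared by all images
    # of the task and different across tasks (controlled by the seed).
    perm = torch.from_numpy(np.random.RandomState(seed).permutation(28 * 28))
    tfm = transforms.Compose([
        transforms.ToTensor(),
        transforms.Lambda(lambda x: x.view(-1)[perm].view(1, 28, 28)),
    ])
    return datasets.MNIST(root, train=train, download=True, transform=tfm)

def rotated_mnist_task(seed, root="./data", train=True):
    # One Rotated-MNIST task: a fixed rotation angle drawn from [0, 180) degrees.
    angle = float(np.random.RandomState(seed).uniform(0, 180))
    tfm = transforms.Compose([
        transforms.Lambda(lambda img: TF.rotate(img, angle)),
        transforms.ToTensor(),
    ])
    return datasets.MNIST(root, train=train, download=True, transform=tfm)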
These architectures have been previously chosen in several studies [7, 9, 14, 8]. Evaluation. We use two metrics from [6, 7, 9] to evaluate continual learning algorithms when the number of tasks is large. (1) Average Accuracy: The average validation accuracy after the model has been trained sequentially up to task t, defined by: At = 1 t t∑ i=1 at,i (7) where, at,i is the validation accuracy on dataset i when the model finished learning task t. (2) Average Forgetting: The average forgetting after the model has been trained sequentially on all tasks. Forgetting is defined as the decrease in performance at each of the tasks between their peak accuracy and their accuracy after the continual learning experience has finished. For a continual learning dataset with T sequential tasks, it is defined by: F = 1 T − 1 T−1∑ i=1 maxt∈{1,...,T−1} (at,i − aT,i) (8) To prevent confusion, we note that this definition of forgetting is different from what we studied in Section 3. Here, the average forgetting is an evaluation metric that is computed from validation accuracy of the model. 5.2 Stable versus Plastic networks Here, we verify the significance of the training regime in continual learning performance (Sec 3.) and demonstrate the effectiveness of the stability techniques (Sec 4.) in reducing the forgetting. Each row of Figure 3, represents one of three related concepts for each training regime on each dataset. First, the top row shows the evolution of accuracy on validation sets of each task during the continual learning experience. For instance, the blue lines in this row show the validation accuracy of task 1 throughout the learning experience. In the Middle row, we show the twenty sharpest eigenvalues of the curvatures of each task. In the bottom row, we measure the ℓ2 distance of network parameters between the parameters learned for each task, and the parameters learned for subsequent tasks. Aligned with our analysis in Section 3, we show that in contrast to the plastic regime, the stable training reduces the catastrophic forgetting (Fig. 3 (Top)) thanks to (1) decreasing the curvature (Fig. 3 (Middle)) and (2) shrinking the change of parameters (Fig. 3 (Bottom)). 5.3 Comparison with other methods In this experiment, we show that the stable network is a strong competitor for various continual learning algorithms. In this scaled experiment, we increase the number of tasks from 5 to 20, and provide results for Split CIFAR-100, which is a challenging benchmark for continual learning algorithms. The episodic memory size for A-GEM and ER-Reservoir is limited to be one example per class per task (i.e., 200 examples for MNIST experiments and 100 for CIFAR-100), similar to [9, 8]. To have a consistent naming with other studies, in this section, we use the word “Naive” to describe a plastic network in our paper. To evaluate each algorithm, we measure the average accuracy and forgetting (i.e., At and F in Sec. 5.1). Table 2 compares these metrics for each method once the continual learning experience is finished (i.e., after learning task 20). Moreover, Fig. 4 provides a more detailed picture of the average accuracy during the continual learning experience. To show that stable networks suffer less from catastrophic forgetting, we provide a comparison of the first task’s accuracy in the appendix. 
While our stable network performs consistently better than other algorithms, we note that our proposed techniques are orthogonal to other works and can also be incorporated in them, as we show in Appendix C.8 6 Conclusion In this work, we have revisited the catastrophic forgetting problem from loss landscapes and optimization perspective and identify learning regimes and training techniques that contribute to the forgetting. The analytical insights yielded a series of effective and practical techniques that can reduce forgetting and increase the stability of neural networks in maintaining previous knowledge. We have studied these techniques through the lens of optimization by studied the wideness of the loss surfaces around the local minima. However, they might have other confounding factors for reducing catastrophic forgetting as well. We call for more theoretical research to further their role in demystifying trading off the stability plasticity dilemma and its effect on continual learning. Finally, we have empirically observed that these simple techniques proved to be more effective than some of the recent approaches (e.g., regularization based methods, or memory-based methods) but are orthogonal to them in the sense that our practical recommendations and provided insights on loss perspective can be incorporated to them. Broader Impact Continual Learning aims for effectively training a model from sequential tasks while making sure the model maintains a reasonable performance on the previous ones. It’s an integral part of Artificial General Intelligence (AGI) that reduces the cost of retraining (time, computation, resources, energy) and mitigates the need for storing all previous data to respect users’ privacy concerns better. Reducing catastrophic forgetting may potentially risk privacy for data that are explicitly wanted to be forgotten. This calls for more future research into formalizing and proposing continual learning agents that allow the identifiable parts of data to be forgotten, but the general knowledge is maintained. The research presented in this paper can be used for many different application areas and a particular use may have both positive or negative implications. Besides those, we are not aware of any immediate short term negative impact. Acknowledgment SIM and HG acknowledge support from the United States National Science Foundation through grant CNS-1750679. The authors thank Anonymous Reviewers, Jonathan Schwarz, Sepehr Sameni, Hooman Shahrokhi, and Mohammad Sadegh Jazayeri for their valuable comments and feedback.
1. What is the main contribution of the paper regarding continual learning? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical foundation? 3. What are the weaknesses of the paper, especially regarding its experimental setting and results? 4. Do you have any concerns or questions about the methodology used in the experiments? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors provide a theoretical study on how some known training techniques affect training in continual learning. First, they set up a mathematical framework to explain the importance of the geometry of the reached minima. Second, they explain how learning rate, batch size, optimizer, dropout and weight decay fit in this framework. Finally, they present a series of experiments, comparing a simple SGD baseline (with carefully tuned training regime) with other SOTA methods. Strengths The paper is well written and revolves around clear and intuitive ideas, consistently motivated by theoretical insights. The mathematical explanations that are provided in section 3 are especially interesting and easy to follow. It is interesting to see a paper that explicitly addresses how the aspects of training regimes affect continual learning. Evaluations in this area of research typically vary meaningfully across different works, so it is very important to have work that stops and reconsiders how training should be done. Weaknesses The experimental setting does not make clear whether task-labels are provided at test time. This is especially unclear w.r.t. To the evaluation on CIFAR-100. By reading the provided source it seems that task labels are given, but this should be clearly specified in the paper. I found the choice of only performing 1 epoch on Split-CIFAR100 questionable. I agree with the authors claiming that this is a challenging dataset and, for this reason, I would like to see the results when more epochs on it are performed. Indeed, the authors of [5] increase the number of epochs to 30 for a fair comparison with EWC. In general, it is common to perform even more epochs-per-task on this dataset: [1] performs 70, [2] and [3] perform 250 and [4] performs 160. The results of Table 2 could not be correctly reproduced by using the provided source code. By running replicate_experiment_2.sh I obtained the following accuracies for Stable SGD: 27.87 for perm-mnist; 35.95 for rot-mnist; 49.61 for cifar100 (seemingly worse than the SGD baseline on MNIST-based datasets). This is in contrast with Table 2 showing SGD outperforming all other methods and impacts majorly on section 5.3. If this is due to a bug in the provided code, it is paramount to fix it. Also, by running replicate_experiment_1.sh, I obtained 87.00 for rot-mnist (stable) and 60.65 for perm-mnist (stable) as values of mean accuracy. Looking at Fig. 3, it seems to me that these numbers should be over 90. [ 1 ] Rebuffi et al., iCaRL: Incremental Classifier and Representation Learning [ 2 ] Wu et al., Large Scale Incremental Learning [ 3 ] Abati et al., Conditional Channel Gated Networks for Task-Aware Continual Learning [ 4 ] Hou et al., Learning a Unified Classifier Incrementally via Rebalancing [ 5 ] Chaudhry et al., Efficient Lifelong Learning with A-GEM
NIPS
Title Sound and Complete Causal Identification with Latent Variables Given Local Background Knowledge

Abstract Great efforts have been devoted to causal discovery from observational data, and it is well known that introducing some background knowledge attained from experiments or human expertise can be very helpful. However, it remains unknown what causal relations are identifiable given background knowledge in the presence of latent confounders. In this paper, we solve the problem with sound and complete orientation rules when the background knowledge is given in a local form. Furthermore, based on the solution to the problem, this paper proposes a general active learning framework for causal discovery in the presence of latent confounders, with its effectiveness and efficiency validated by experiments.

1 Introduction

Causality has attracted tremendous attention in recent years, for its applications to explainability [1], fairness [2, 3, 4], decision making [5, 6, 7, 8, 9], and so on. In Pearl’s causality framework [10], one important problem is causal discovery, i.e., learning a causal graph to denote the causal relations among all the variables [11, 12, 13, 14, 15, 16]. Generally, we cannot identify all the causal relations from observational data, unless we make some additional functional assumptions [17, 18, 19] or exploit the abundant information in multiple or dynamic environments [20, 21]. In light of the uncertainty of the causal relations, a common practice to reveal them is introducing background knowledge, which is called BK for short. BK can be attained from experiments or human expertise. When experiments are available, we can collect interventional data to learn additional causal relations [22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. And if the causal discovery task involves some variables familiar to humans, human expertise can also be helpful [32]. For example, if we study the causal relations among some variables including sales and prices, causal relations such as price causes sales can be obtained directly from human expertise. When BK is available in addition to observational data, a fundamental problem is: what causal relations are identifiable in the presence of latent variables? This problem is fundamental for its implication on the maximal causal knowledge identifiable from the observational data and BK. Its difficulty results from the fact that, in addition to the BK itself, some other causal relations can also be learned when incorporating BK. For example, they can be identified on the basis of some restrictions, such as the requirement that the causal relations are acyclic. It is quite challenging to find the complete characterization of such additional causal knowledge in the presence of latent variables, and the complete characterization must be accompanied by a theoretical guarantee of the existence of causal graphs that are consistent with the observational data and local BK but have different causal relations exactly for the unidentifiable ones. Unfortunately, the problem remains open. In this paper, we solve the problem with sound and complete orientation rules when the background knowledge is given in a local form. In the presence of latent variables, a partial ancestral graph (PAG) can be learned by the FCI algorithm from observational data [33, 34, 35].
PAG can imply the existence of causal relation between any two variables but not necessarily imply the causal direction. We say BK is local, if when the BK contains the causal information with respect to a variable X , for each variable adjacent to X in the PAG, the BK implies whether X causes it or not. The local BK is common in real tasks no matter it is from experiments or human expertise. For example, when we make experiments and collect the data under intervention on X , for each variable V that has a causal relation with X , the interventional data can tell whether X causes V ; and businessman often has enough domain knowledge about price, thus they usually know whether price causes other variables or not, such as price causes sales and number of customers, and price is not caused by stocks. Given a PAG and local BK, we propose a set of orientation rules to determine some causal directions in the PAG. We prove that the rules are sound and complete, which state that all the causal relations that are identifiable given available information are exactly those determined by the proposed rules, thus closing the problem given local BK. The establishment of orientation rules compatible with local BK makes causal discovery by interventions possible in the presence of latent variables. We propose the first general active learning framework for causal discovery, with the target of identifying a maximal ancestral graph (MAG), which implies the causal relations when there are latent variables. Considering that intervention is expensive in reality, we hope to achieve the target with as few interventions as possible. Hence we present a baseline maximal entropy criterion, equipped with Metropolis-Hastings sampling, to select the intervention variable such that we can learn more causal relations by each intervention. Our contributions in this paper are twofold: (1) We show what causal relations are identifiable given local background knowledge in the presence of latent confounders with sound and complete orientation rules. (2) We give the first active learning framework for causal discovery that is applicable when latent variables exist, where maximal entropy criterion equipped with Metropolis-Hastings sampling is introduced to select intervention variables. Related works. In the literature, Meek [36] established sound and complete rules, generally called Meek rules, for causal identification given BK under the causal sufficiency assumption. The assumption requires that there are no latent variables that cause more than one observed variable simultaneously. However, causal sufficiency is untestable in practice. When we apply causality in subjects such as biology, sociology, and economics, it is quite often that there are latent variables. For example, the macroeconomic policy influences purchase price, the population of customers, and advertising cost, but it is hard to evaluate it, thereby a latent confounder. Andrews et al. [37] showed that FCI algorithm is complete given tiered BK, where all variables are partitioned into disjoint sets with explicit causal order. While in many cases, e.g., when BK is revealed by interventions, BK is not tiered. And Jaber et al. [28] investigated the complete algorithm to learn a graph when there are additional interventional distribution, while such knowledge is not needed in our paper. 2 Preliminary A graph G = (V,E) consists of a set of vertices V = {V1, · · · , Vp} and a set of edges E. 
For any subset V′ ⊆ V, the subgraph induced by V′ is GV′ = (V′,EV′), where EV′ is the set of edges in E whose both endpoints are in V′. For a graph G, V(G) denotes the set of vertices in G. G is a complete graph if there is an edge between any two vertices. The subgraph induced by an empty set is also a complete graph. G[−V′] denotes the subgraph GV\V′ induced by V\V′. Usually, bold letter (e.g., V) denotes a set of vertices and normal letter (e.g., V ) denotes a vertex. A graph is chordal if any cycle of length four or more has a chord, which is an edge joining two vertices that are not consecutive in the cycle. If G = (V,E) is chordal, the subgraph of G induced by V′ ⊆ V is chordal. A graph G is mixed if the edges in G are either directed → or bi-directed ↔. The two ends of an edge are called marks and have two types arrowhead or tail. A graph is a partial mixed graph (PMG) if it contains directed edges, bi-directed edges, and edges with circles (◦). The circle implies that the mark here could be either arrowhead or tail but is indefinite. Vi is adjacent to Vj in G if there is an edge between Vi and Vj . A path in a graph G is a sequence of distinct vertices 〈V0, · · · , Vn〉 such that for 0 ≤ i ≤ n − 1, Vi and Vi+1 are adjacent in G. An edge in the form of Vi ◦−◦ Vj is a circle edge. The circle component in G is the subgraph consisting of all the ◦−◦ edges in G. Denote the set of vertices adjacent to Vi in G by Adj(Vi, G). A vertex Vi is a parent of a vertex Vj if there is Vi → Vj . A directed path from Vi to Vj is a path comprised of directed edges pointing to the direction of Vj . A possible directed path from Vi to Vj is a path without an arrowhead at the mark near Vi on every edge in the path. Vi is an ancestor/possible ancestor of Vj if there is a directed path/possible directed path from Vi to Vj or Vi = Vj . Vi is a descendant/possible descendant of Vj if there is a directed path/possible directed path from Vj to Vi or Vj = Vi. Denote the set of parent/ancestor/possible ancestor/descendant/possible descendant of Vi in G by Pa(Vi, G)/Anc(Vi, G)/PossAn(Vi, G)/De(Vi, G)/PossDe(Vi, G). If Vi ∈ Anc(Vj , G) and Vi ← Vj /Vi ↔ Vj , it forms a directed cycle/almost directed cycle. ∗ is a wildcard that denotes any of the marks (arrowhead, tail, and circle). We make a convention that when an edge is in the form of ◦−∗, the ∗ here cannot be a tail since in this case the circle can be replaced by an arrowhead due to the assumption of no selection bias. A non-endpoint vertex Vi is a collider on a path if the path contains ∗→ Vi ←∗. A path p from Vi to Vj is a collider path if Vi and Vj are adjacent or all the passing vertices are colliders on p. p is a minimal path if there are no edges between any two non-consecutive vertices. A path p from Vi to Vj is a minimal collider path if p is a collider path and there is not a proper subset V′ of the vertices in p such that there is a collider path from Vi to Vj comprised of V′. A triple 〈Vi, Vj , Vk〉 on a path is unshielded if Vi and Vk are not adjacent. p is an uncovered path if every consecutive triple on p is unshielded. A path p is a minimal possible directed path if p is minimal and possible directed. A mixed graph is an ancestral graph if there is no directed or almost directed cycle (since we assume no selection bias, we do not consider undirected edges in this paper). 
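Before continuing with maximal ancestral graphs and PAGs, here is a minimal Python sketch of one way a partial mixed graph with the three endpoint marks above could be represented; the class and helper names are illustrative assumptions, not notation from the paper.

TAIL, ARROW, CIRCLE = "tail", "arrow", "circle"

class PMG:
    # A partial mixed graph: each edge {a, b} stores the mark at a and the mark at b.
    def __init__(self, vertices):
        self.vertices = set(vertices)
        self.marks = {}  # frozenset({a, b}) -> {a: mark at a, b: mark at b}

    def add_edge(self, a, b, mark_a, mark_b):
        self.marks[frozenset((a, b))] = {a: mark_a, b: mark_b}

    def mark_at(self, a, b):
        # Mark at endpoint a of the edge between a and b (None if not adjacent).
        edge = self.marks.get(frozenset((a, b)))
        return None if edge is None else edge[a]

    def adjacent(self, v):
        # Adj(v, G): all vertices sharing an edge with v.
        return {w for e in self.marks for w in e if v in e and w != v}

    def is_possible_directed_path(self, path):
        # A possible directed path has no arrowhead at the endpoint nearer the start
        # on any edge of the path (the path itself is assumed valid, with distinct vertices).
        return all(self.mark_at(a, b) != ARROW for a, b in zip(path, path[1:]))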
An ancestral graph is a maximal ancestral graph (MAG, denoted by M) if it is maximal, i.e., for any two non-adjacent vertices, there is a set of vertices that m-separates them [33]. A path p between X and Y in an ancestral graph G is an inducing path if every non-endpoint vertex on p is a collider and also an ancestor of either X or Y . An ancestral graph is maximal if and only if there is no inducing path between any two non-adjacent vertices [33]. In an MAG, a path p = 〈X, · · · ,W, V, Y 〉 is a discriminating path for V if (1) X and Y are not adjacent, and (2) every vertex between X and V on the path is a collider on p and a parent of Y . Two MAGs are Markov equivalent if they share the same m-separations. A class comprised of all Markov equivalent MAGs is a Markov equivalence class (MEC). We use a partial ancestral graph (PAG, denoted by P) to denote an MEC, where a tail/arrowhead occurs if the corresponding mark is a tail/arrowhead for all Markov equivalent MAGs, and a circle occurs otherwise. For a PMG M that is obtained from a PAG P by orienting some circles to either arrowheads or tails, an MAG is consistent to the PMG M if (1) the non-circle marks in M also appear in the MAG, and (2) the MAG is in the MEC represented by P . Sometimes we will omit the PAG P and just directly say a PMG M (obtained from the PAG P), since in this paper we study the rules to incorporate local BK into a PAG. We say an MAG is consistent to the BK if it has the orientations dictated by the BK. 3 Sound and Complete Rules In this section, we present the sound and complete orientation rules to orient a PAG P with local background knowledge (BK), where P is learned from observational data [11, 35] and V(P) = {V1, V2, · · · , Vd}. The local BK regarding X means that the BK directly implies, and only directly implies, all the true marks at X, denoted by BK(X). We assume the absence of selection bias and that the BK is correct. The correctness indicates that there exists an MAG consistent to P and the BK. Without loss of generality, we suppose the local BK is regarding V1, V2, · · · , Vk, 1 ≤ k ≤ d. That is, for any vertex X ∈ V1, V2, · · · , Vk, all the marks at X are known according to the local BK; and for any vertex X ∈ Vk+1, · · · , Vd, the local BK does not directly imply any marks at X. First, we show the orientation rules to incorporate local BK. They follow the rules of Zhang [35] for learning a PAG but with one replacement and one addition. Due to the page limit, we do not list all of them here, only the replaced and additional ones. See Appendix A for the rules proposed by Zhang [35]. R′4: If 〈K, · · · , A,B,R〉 is a discriminating path between K and R for B, and B ◦−∗ R, then orient B ◦−∗R as B → R. R11: If A−∗B, then A→ B.
Algorithm 1: Update a PMG with local background knowledge.
Input: A PMG Mi, BK(X). Output: Updated graph Mi+1.
1 For any K ∈ PossDe(X,Mi[−C]) and any T ∈ C such that K ◦−∗ T in Mi, orient K ←∗T (the mark at T remains); for all K ∈ PossDe(X,Mi[−C]) such that X ◦−∗K, orient X → K;
2 Orient the subgraph Mi[PossDe(X,Mi[−C])\{X}] as follows until no feasible updates: for any two vertices Vl and Vj such that Vl ◦−◦ Vj , orient it as Vl → Vj if (i) FVl\FVj ≠ ∅ or (ii) FVl = FVj and there is a vertex Vm ∈ PossDe(X,Mi[−C])\{X} not adjacent to Vj such that Vm → Vl ◦−◦ Vj ;
3 Apply the orientation rules until the graph is closed under the orientation rules.
Prop. 1 implies the soundness of R′4 to orient a PAG P or a PMG obtained from P with local BK. See Appendix A for the proof.
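The third step of Alg. 1 ("apply the orientation rules until the graph is closed") is a fixed-point computation. Building on the PMG sketch above, the following illustrates that pattern with R11 and, as one example of the reused rules, R1 as stated by Zhang [35] (Appendix A); R′4 and the remaining rules would be added analogously. This is an illustrative sketch, not the authors' code, and it assumes the mark constants and accessors introduced earlier.

def apply_R11(g):
    # R11: if A -* B (tail at A), orient the edge as A -> B (arrowhead at B).
    changed = False
    for marks in g.marks.values():
        u, v = tuple(marks)
        for a, b in ((u, v), (v, u)):
            if marks[a] == TAIL and marks[b] != ARROW:
                marks[b] = ARROW
                changed = True
    return changed

def apply_R1(g):
    # R1 (Zhang [35]): if A *-> B o-* C and A, C are not adjacent, orient B o-* C as B -> C.
    changed = False
    for b in g.vertices:
        for a in g.adjacent(b):
            for c in g.adjacent(b):
                if c == a or c in g.adjacent(a):
                    continue
                if g.mark_at(b, a) == ARROW and g.mark_at(b, c) == CIRCLE:
                    g.marks[frozenset((b, c))][b] = TAIL
                    g.marks[frozenset((b, c))][c] = ARROW
                    changed = True
    return changed

def close_under_rules(g, rules=(apply_R11, apply_R1)):
    # Step 3 of Alg. 1: keep applying rules until no rule changes any mark.
    while any(rule(g) for rule in rules):
        pass
    return g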
R11 is immediate due to the no-selection-bias assumption. In the following, we make a convention that when we say the orientation rules, they refer to R1 − R3, R8 − R10 of Zhang [35] and R′4, R11. A PMG is closed under the orientation rules if the PMG cannot be oriented further by the orientation rules. Proposition 1. Given a PAG P , for any PMG M that is obtained from P by orienting some circles in P (or M = P), R′4 is sound to orient M with local background knowledge. Proof sketch: If there is B ←∗R in an MAG consistent with the case of R′4, there must be a minimal collider path between K and R across B, in which case B ←∗R should have been identified in the PAG according to Zhao et al. [38], Zhang [35], a contradiction. Next, we will prove the completeness of the proposed orientation rules. It is somewhat complicated. We first give a roadmap for the proof process. There are mainly two parts. In the first, we present a complete algorithm to orient P with the local BK regarding V1, V2, · · · , Vk. The second part is to prove that the algorithm orients the same marks as the proposed orientation rules. Combining these two parts, we conclude that the orientation rules are complete to orient a PAG. The construction of the algorithm and the corresponding proof of its completeness in the first part are the most difficult. To achieve the construction, we divide the whole process of orienting a PAG with BK regarding V1, V2, · · · , Vk into k steps. Beginning from the PAG P (P is also denoted by M0), in the (i+ 1)-th (0 ≤ i ≤ k − 1) step we obtain a PMG Mi+1 from Mi by incorporating BK(Vi+1) and orienting some other circles further. To obtain the updated graph in each step, we propose an algorithm that orients a PMG with the local BK incorporated in this step. Repeating this process by incorporating BK(V1), BK(V2), . . . , BK(Vk) sequentially, we obtain the PMG with the BK regarding V1, · · · , Vk incorporated. We will prove that the k-step algorithm to orient the PAG with local BK regarding V1, · · · , Vk is complete, by an induction step that if the first i-step algorithm is complete to update the PAG P with BK regarding V1, · · · , Vi, then the (i+ 1)-step algorithm is complete to update P with BK regarding V1, · · · , Vi+1. Hence the proof of the first part completes. In the second part, we show that the k-step algorithm orients the same marks as the proposed orientation rules. We thus conclude that the orientation rules are sound and complete for causal identification in the presence of latent variables given local BK. We present Alg. 1 to obtain Mi+1 from Mi by incorporating BK(Vi+1). For brevity, we denote Vi+1 by X, and introduce a set of vertices C defined as C = {V ∈ V(P) | V ∗→ X ∈ BK(X)} to denote the vertices whose edges with X will be oriented to ones with arrowheads at X directly according to BK(X). In Mi+1, there is X ←∗V for V ∈ C and X−∗V for V ∈ {V ∈ V(P) | V ∗−◦X in Mi}\C oriented directly according to BK(X). We define F^{Mi}_{Vl} = {V ∈ C ∪ {X} | V ∗−◦ Vl in Mi} for any Vl ∈ PossDe(X,Mi[−C])\{X}, which is denoted by FVl for short. FVl denotes the vertices in C ∪ {X} whose edges with Vl are oriented to ones with arrowheads at Vl in the first step of Alg. 1. In the first step of Alg. 1, the orientation at X follows BK(X), and the orientation at the vertices apart from X is motivated as a necessary condition for the ancestral property.
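Before turning to the motivation behind these orientations, here is a small illustrative sketch of the first step of Alg. 1 in the same hypothetical PMG representation used above; possible_descendants and the encoding of BK(X) as a map from the neighbors of X to the true mark at X are simplifying assumptions for exposition, not the authors' implementation.

def possible_descendants(g, x, excluded):
    # PossDe(x, g[-excluded]): vertices reachable from x along a possible directed
    # path that avoids `excluded`; x itself is always included.
    reached, frontier = {x}, [x]
    while frontier:
        v = frontier.pop()
        for w in g.adjacent(v):
            if w in excluded or w in reached:
                continue
            # An edge can lie on a possible directed path away from x
            # only if it has no arrowhead at the end nearer x.
            if g.mark_at(v, w) != ARROW:
                reached.add(w)
                frontier.append(w)
    return reached

def alg1_step1(g, x, bk_marks_at_x):
    # Step 1 of Alg. 1; bk_marks_at_x[v] is the true mark at x on the edge x *-* v.
    C = {v for v, mark in bk_marks_at_x.items() if mark == ARROW}
    # Orient the marks at x directly according to BK(x); a tail at x gives x -> v by R11.
    for v, mark in bk_marks_at_x.items():
        g.marks[frozenset((x, v))][x] = mark
        if mark == TAIL:
            g.marks[frozenset((x, v))][v] = ARROW
    # For K in PossDe(x, g[-C]) and T in C with K o-* T, orient K <-* T.
    poss_de = possible_descendants(g, x, C)
    for k in poss_de - {x}:
        for t in C:
            if t in g.adjacent(k) and g.mark_at(k, t) == CIRCLE:
                g.marks[frozenset((k, t))][k] = ARROW
    return C, poss_de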
Speaking roughly, if there is an oriented edge K → T in the case of the first step, then no matter how we orient the other circles, there will be a directed or almost directed cycle, unless we introduce new unshielded colliders (which takes new conditional independences relative to those in P), both of which are evidently invalid to obtain an MAG in the MEC represented by P . And the orientation in the second step is motivated as the necessary condition for that there are no new unshielded colliders in the oriented graph relative to the PAG P . If there is an MAG where there is an inconsistent edge with the edge oriented in this step, then there must be new unshielded colliders relative to P , which implies that the MAG is not consistent to P . The third step orients some other circles based on the updated structure. Example 1. See the example in Fig. 1. Suppose the input PMG Mi in Alg. 1 is the graph shown in Fig. 1(a). And there is local BK regardingX = V1, which is in the form of V1 ←∗V2, V1−∗V5, V1−∗V4. Hence C = {V2}. In this case, PossDe(X,Mi[−C]) = PossDe(V1,Mi[−V2]) = {V1, V3, V4, V5}. And FV3 = {V2}, FV4 = {V1}, FV5 = {V1, V2}. When we implement Alg. 1, in the first step, the edges denoted by red dashed lines in Fig. 1(b) are oriented. Among them, V1◦−◦V2/V1◦−◦V5/V1◦→ V4 is transformed to V1 ←◦V2/V1 → V5/V1 → V4 due to V1 = X,V2 ∈ C, V4, V5 ∈ {V ∈ V(P) | V ∗−◦ X in Mi}\C; and V2◦→ V5/V2◦→ V3 is oriented due to V2 ∈ C and V3, V5 ∈ PossDe(X,Mi[−C]). In the second step of Alg. 1, the edge denoted by red dashed line in Fig. 1(c) is oriented due to (1) a circle edge V3 ◦−◦ V5 after the first step, where V3, V5 ∈ PossDe(X,Mi[−C]); (2) FV3 = {V2} ⊂ {V1, V2} = FV5 . In the third step of Alg. 1, the edges denoted by red dashed lines in Fig. 1(d) is oriented byR1 of the orientation rules. Then, we present the key induction result in Thm. 1 for the graph obtained by Alg. 1 in each step. Due to the page limit, we only show a proof sketch, with a detailed version in Appendix B. Then with Thm. 1, we directly conclude that k-step algorithm is complete to orient the PAG with the local BK regarding V1, . . . , Vk in Cor. 1. Theorem 1. Given i, suppose Ms,∀s ∈ {0, 1, . . . , i} satisfies the five following properties: (Closed) Ms is closed under the orientation rules. (Invariant) The arrowheads and tails in Ms are invariant in all the MAGs consistent to P and BK regarding V1, . . . , Vs. (Chordal) The circle component in Ms is chordal. (Balanced) For any three vertices A,B,C in Ms, if A∗→ B ◦−∗C, then there is an edge between A and C with an arrowhead at C, namely, A∗→ C. Furthermore, if the edge between A and B is A→ B, then the edge between A and C is either A→ C or A◦→ C (i.e., it is not A↔ C). (Complete) For each circle at vertex A on any edge A◦−∗B in Ms, there exist MAGsM1 andM2 consistent to P and BK regarding V1, . . . , Vs with A←∗B ∈ E(M1) and A→ B ∈ E(M2). Then the PMG Mi+1 obtained from Mi with BK(Vi+1) by Alg. 1 also satisfies the five properties. Proof sketch: For brevity, we denote Vi+1 by X . (A) The closed property holds due to the third step of Alg. 1.(B) The invariant property holds because all the orientations in Alg. 1 either follow BK(X) or are motivated as the necessary condition for the ancestral property and the fact that there cannot be new unshielded colliders introduced relative to Mi. (C) The chordal property is proved based on the fact that only the first two steps of Alg. 
1 possibly introduce new arrowheads, while the third step will only transform the edges as A◦→ B to A→ B, which is proved in Lemma 12 in Appendix B. With this fact, it suffices to prove that the circle component in the graph obtained after the first two steps is chordal. Denote the graph after the first two steps by M̄i+1. We can prove that the circle components in M̄i+1[PossDe(X,Mi[−C])] and in M̄i+1[−PossDe(X,Mi[−C])] are chordal, respectively. Since there are no circle edges connecting PossDe(X,Mi[−C]) and V\PossDe(X,Mi[−C]) (otherwise it has been oriented in the first step of Alg. 1), we conclude the desired result. (D) The balanced property of Mi+1 is proved based on three facts that (1) in Alg. 1, if we transform a circle to arrowhead at V , then V ∈ PossDe(X,Mi[−C]); (2) if there is A ∈ PossDe(X,Mi[−C]) and A ◦−∗ B, B 6∈ C, in Mi+1, then B ∈ PossDe(X,Mi[−C]); (3) Mi satisfies the balanced property. We can prove that it is impossible that there is a sub-structure Vi∗→ Vj ◦−∗ Vk where Vi is not adjacent to Vk or there is Vi ∗−◦ Vk in Mi+1 by discussing whether Vi, Vj , Vk belongs to PossDe(X,Mi[−C]). (E) The completeness property is proved by showing two results: (1) for edge circle edge A ◦−◦B and C◦→ D in Mi+1, C◦→ D can be transformed to C → D and the circle edge can be oriented as both A→ B andA← B in the MAGs consistent to P and local BK regarding V1, · · · , Vi+1; (2) in Mi+1, each edge A◦→ B can be oriented as A ↔ B in an MAG consistent to P and local BK regarding V1, · · · , Vi+1. In this part, the most difficult part is to prove the first result, with which the second result can be proved directly following the proof process of Thm. 3 of Zhang [35]. In the proof for the first result, we show that any MAG obtained from Mi+1 by transforming the edges as A◦→ B to A→ B and the circle component into a DAG without new unshielded colliders is consistent to P and local BK regarding V1, . . . , Vi+1. If not, we can always find an MAG obtained from Mi by transforming the edges as A◦→ B to A → B and the circle component into a DAG without new unshielded colliders that is not consistent to P and local BK regarding V1, . . . , Vi. By induction, there is an MAG obtained from P by transforming the edges as A◦→ B to A → B and the circle component into a DAG without new unshielded colliders that is not consistent to P , contradiction with Thm. 2 of Zhang [35]. We conclude the first result. Corollary 1. The k-step algorithm from M0(= P) to Mk is sound and complete. That is, the non-circle marks in Mk are invariant in all the MAGs consistent to P and BK regarding V1, . . . , Vk. And for each circle in Mk, there exist both MAGs with an arrowhead and MAGs with a tail here that are consistent to P and BK regarding V1, . . . , Vk. Proof. Previous studies [34, 35] show that the last four properties in Thm. 1 are fulfilled in PAG, the case inR′4 will never happen in P because such circles have been oriented byR4 in the process of learning P , and the case inR11 is never triggered by the rules to learn P . Hence P satisfies the five properties. With the induction step implied by Thm 1, we directly conclude that Mk satisfies the five properties, thereby satisfying the invariant and complete property. Theorem 2. The orientation rules are sound and complete to orient a PAG with the local background knowledge regarding V1, . . . , Vk. Proof. The soundness ofR′4 is shown by Prop. 1. The soundness of other rules immediately follows Thm. 4.1 of Ali et al. [34] and Thm. 1 of Zhang [35]. 
We do not show the details. Roughly speaking, the violation of these rules will lead to that there are new unshielded colliders or directed or almost directed cycles in the oriented graph relative to P . The main part is to prove the completeness. According to Cor. 1, it suffices to prove that in each step by Alg. 1 to incorporate BK(X) into a PMG Mi, the orientations in Alg. 1 either follow BK(X) directly, or can be achieved by the proposed orientation rules. The orientation in the second step of Alg. 1 can be achieved by R1, because no matter FVl\FVj 6= ∅ or Vm → Vl ◦−◦ Vj , there is F ∈ FVl\FVj or F = Vm respectively such that F∗→ Vl ◦−◦ Vj where F is not adjacent to Vj . The orientation in the third step naturally follows the orientation rules. For the orientation in the first step, X ←∗V for V ∈ C is dictated by BK(X), and X → V for V ∈ {V ∈ V(P) | X ◦−∗ V }\C is obtained from X −∗V dictated by BK(X) andR11. The remaining part is to prove for K ∈ PossDe(X,Mi[−C])\{X} and T ∈ C, if there is K ◦−∗ T in Mi, K ←∗T can be oriented by the proposed orientation rules when we incorporate BK(X). Due to K ∈ PossDe(X,Mi[−C])\{X}, there is a possible directed path from X to K that does not go through C. According to Lemma 2 in Appendix B, there is a minimal possible directed path p = 〈X(= F0), F1, . . . ,K(= Ft)〉, t ≥ 1 where each vertex does not belong to C. Hence X → F1 is oriented by BK(X) and R11 unless X → F1 has been in Mi. Hence, X → F1 → · · · → Ft can be oriented by R1 after incorporating BK(X) unless they have been in Mi. If t = 1, there is T∗→ X → K, thus K ←∗T can be oriented byR2. Next, we consider the case when t ≥ 2. We first prove that for any Fm ∈ F1, . . . , Ft, t ≥ 2, Fm is adjacent to T , and there is not Fm → T in Mi. Suppose Fm is not adjacent to T , there must be a sub-structure of Mi induced by Fm−s, Fm−s+1, . . . , Fm+l, T , 1 ≤ s ≤ m, 1 ≤ l ≤ t−m, such that T is only adjacent to Fm−s and Fm+l in this sub-structure. There are at least four vertices in this sub-structure. Hence there must be an unshielded collider (denoted by UC for short) in this sub-structure in P , otherwise no matter how we orient the circle there is either a new UC relative to P or a directed or almost directed cycle there. Since p is possibly directed, the UC is at either Fm+l or T (i.e., ∗→ Fm+l( or T )←∗). If there is a UC at Fm+l, T∗→ Fm+l and Fm+l−1∗→ Fm+l are identified in P . Thus Fm+l → Fm+l+1 · · · → Ft is identified in P . Due to the completeness of FCI algorithm to learn P , there is K ←∗T in P , because there is not an MAG with K → T (there has been T∗→ Fm+l → · · · → K in P). Hence there is K ←∗T in Mi, contradicting with K ◦−∗ T in Mi. If there is not a UC at Fm+l, UC can only be at T . Thus Fm−s∗→ T ←∗Fm+l is identified in P . Since p is possibly directed, Fm+l−1 is not adjacent to T , and there is not a UC at Fm+l in the sub-structure, there cannot be Fm+l ↔ T in P . Hence the path 〈Fm−s, Fm−s+1, . . . , Fm+l, T 〉 in P is an uncovered possible directed path, Fm−s → T is identified in P (otherwise R9 applies). When incorporating BK(X), there is a (almost) directed cycle T∗→ X → · · · → Fm−s → T , contradicting with the correctness of BK. Hence, Fm is adjacent to T . Similarly, if Fm → T in Mi, there is T∗→ X → · · · → Fm → T , impossibility. Finally, since F1 is adjacent to T , and T∗→ X → F1 is oriented according to BK(X), there is T∗→ F1 oriented by R2 unless T∗→ F1 has been in Mi. Hence there is always T∗→ F1 by the orientation rules. 
Consider T∗→ F1 → F2: there is T∗→ F2 oriented by R2 unless T∗→ F2 has been in Mi. Repeating the process for F3, F4, . . . , Ft(= K), we can prove that if there is Ft(= K) ◦−∗ T in Mi, then T∗→ Ft(= K) is oriented by R2. The rules thus orient the same marks as Alg. 1. Example 2. We give an example in Fig. 2. Suppose we obtain a PAG as in Fig. 2(a) with observational data and have the local BK regarding V1, V2. We divide the whole process of obtaining a PMG from P with the local BK into obtaining M1 from P with BK(V1) by Alg. 1 and then obtaining M2 from M1 with BK(V2) by Alg. 1. M1 and M2 are shown in Fig. 2(b) and 2(c), respectively. It is not hard to verify that all of P , M1, M2 satisfy the closed, chordal, and balanced properties defined in Thm. 1. Note that if we do not consider R′4, the edge colored red in Fig. 2(b) cannot be oriented. Fig. 2(a) also shows a case where BK(V1) is not tiered [37]. The reason is that the vertices V1, V4, V5 cannot be partitioned into disjoint subsets with an explicit causal order, because V1 and V4 belong to different subsets according to BK(V1) but V5 has an ancestor relation with neither V1 nor V4. 4 Active Causal Discovery Framework The establishment of the orientation rules for causal identification with local BK makes causal discovery by interventions possible in the presence of latent variables. Hence, on the basis of the theoretical results, we propose an active learning framework for causal discovery in the presence of latent variables, with the target of learning the MAG with as few interventions as possible. The framework is comprised of three stages. In Stage 1, we learn a PAG with observational data. In Stage 2, we select a singleton variable X ∈ V1, . . . , Vd to intervene on and collect the interventional data. In Stage 3, we learn causal relations with the data. For each edge X ◦−∗Vi, the circle at X can be learned by a two-sample test on whether the interventional distribution of Vi equals the observational one. There is X ←∗Vi learned if they are equal, and X −∗Vi otherwise. Hence, the knowledge carried by the interventional data is local.
Algorithm 2: Intervention variable selection based on the maximum entropy criterion with the MH algorithm.
Input: A PMG Mi oriented based on P and BK regarding V1, . . . , Vi; the number of sampled MAGs L′. Output: The selected intervention variable X.
1 Obtain an MAG M0 based on Mi by transforming ◦→ to → and the circle component into a DAG without new unshielded colliders;
2 for t = 1, 2, . . . , L′ do
3   Sample an MAG M′ from S(Mt−1);
4   ρ = min(1, |S(Mt−1)| / |S(M′)|);
5   Sample u from the uniform distribution U [0, 1];
6   if u ≤ ρ then Mt = M′ else Mt = Mt−1;
7 S = {Mt, 1 ≤ t ≤ L′ | Mt has the non-circle marks in Mi} (the set of MAGs consistent to Mi);
8 s ← 0, X ← ∅;
9 for Vj = Vi+1, . . . , Vd do
10   Denote V(Vj) = {V ∈ V(Mi) | Vj ◦−∗ V in Mi}, L = |S|;
11   For each possible local structure Lk of Vj , 1 ≤ k ≤ 2^|V(Vj)|, count the number Nk of appearances of Lk in the L MAGs from S;
12   s′ = − ∑_{k=1}^{2^|V(Vj)|} (Nk/L) log(Nk/L);
13   if s ≤ s′ then X ← Vj , s ← s′;
14 return X.
We repeat the second and third stages until we identify the MAG. Since the orientation rules are complete, the graph can be updated completely by each intervention. The only remaining problem is how to select the intervention variable in Stage 2. Considering that the whole process is sequential, we only focus on the intervention variable selection in one round. Without loss of generality, suppose we have obtained a PMG Mi by i interventions on V1, V2, . . . , Vi, and will select a variable from {Vi+1, . . . , Vd} to intervene on.
We adopt the maximum entropy criterion [22]. For Mi, we select the variable X that maximizes HX = −∑_{j=1}^{M} (lj/L) log(lj/L),  (1) where j is an index for a local structure of X (a local structure of X denotes a definite orientation of the marks at X), M denotes the number of different local structures, lj denotes the number of MAGs consistent to Mi which have the j-th local structure of X, and L denotes the total number of MAGs consistent to Mi. Intuitively, the maximum entropy criterion selects the intervention variable X such that the MAGs consistent to Mi are spread over as many local structures of X as possible, with a similar number of MAGs for each local structure. A justification for intervening on such a variable is that we hope to have a small space of MAGs after the intervention no matter what the true local structure of X is. However, it is hard to count the number of MAGs consistent to Mi with each definite local structure. Even in the causal sufficiency setting, implementing such an operation (generally called counting maximally oriented partial DAGs) is #P-complete [39]. Considering that a DAG is a special case of an MAG, counting MAGs is even harder. Hence, we adopt a sampling method based on the Metropolis-Hastings (MH) algorithm [40] to sample uniformly from the space of MAGs. The algorithm begins from an MAG consistent to Mi, and in each round we transform the MAG into a candidate MAG and decide to accept or reject it with some probability. Here, we introduce an important result of Zhang and Spirtes [41] for MAG transformations in Prop. 2. Proposition 2 (Zhang and Spirtes [41], Tian [42]). Let M be an arbitrary MAG, and A → B an arbitrary directed edge in M. Let M′ be the graph identical to M except that the edge between A and B is A ↔ B. M′ is an MAG Markov equivalent to M if and only if (1) there is no directed path from A to B other than A → B in M; (2) for any C → A in M, C → B is also in M; and for any D ↔ A in M, either D → B or D ↔ B is in M; (3) there is no discriminating path for A on which B is the endpoint adjacent to A in M. In the MAG sampling algorithm, in each step we transform the current MAG into a new MAG by converting a directed edge to a bi-directed edge or a bi-directed one to a directed one, where we use Prop. 2 to determine whether an MAG Markov equivalent to the current MAG can be obtained by the conversion. For the MH algorithm, a stationary distribution equal to the desired distribution can be obtained if any two states can be transformed into each other in a finite number of steps [43]. As implied by Theorem 3 of Zhang and Spirtes [41], any MAG can be transformed into another Markov equivalent MAG by a finite number of the transformations above. Hence, the MH algorithm is valid for sampling MAGs uniformly from the space of MAGs consistent to P . Then, we retain only the MAGs that have the same non-circle marks as Mi. In this way, we obtain a set of MAGs which are uniformly sampled from the space of MAGs consistent to Mi. Given an MAG M, let S(M) denote the set of MAGs that can be obtained from M by transforming one bi-directed edge to a directed edge or one directed edge to a bi-directed edge according to Prop. 2. Denote the cardinality of S(M) by |S(M)|. We set the probability Q(M′ | M) of an MAG M being transformed to another MAG M′ ∈ S(M) as 1/|S(M)|. Hence, the acceptance ratio ρ that is used to decide whether to accept or reject the candidate is ρ = min(1, [p(M′)Q(M | M′)] / [p(M)Q(M′ | M)]) = min(1, |S(M)| / |S(M′)|).
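As a rough illustration of the entropy criterion in Eq. (1) and the accept/reject step above, the following Python sketch assumes two hypothetical helpers that are not part of the paper: propose_neighbors(mag), returning the set S(mag) of MAGs reachable by one of the edge conversions allowed by Prop. 2, and local_structure(mag, v), returning a hashable description of the marks at v. It is a minimal sketch under these assumptions, not the authors' implementation.

import math
import random
from collections import Counter

def mh_step(current, propose_neighbors):
    # One Metropolis-Hastings move over Markov equivalent MAGs with a uniform target.
    neighbors = list(propose_neighbors(current))
    candidate = random.choice(neighbors)
    # Acceptance ratio for a uniform target: rho = min(1, |S(M)| / |S(M')|).
    rho = min(1.0, len(neighbors) / len(propose_neighbors(candidate)))
    return candidate if random.random() <= rho else current

def entropy_of_variable(mags, local_structure, v):
    # Estimate H_X of Eq. (1) for variable v from a sample of MAGs consistent to Mi.
    counts = Counter(local_structure(m, v) for m in mags)
    total = len(mags)
    return -sum((n / total) * math.log(n / total) for n in counts.values())

def select_intervention(mags, local_structure, candidates):
    # Pick the candidate variable with the largest estimated entropy (Lines 9-14 of Alg. 2).
    return max(candidates, key=lambda v: entropy_of_variable(mags, local_structure, v))

On ties, max keeps the first candidate, whereas Line 13 of Alg. 2 (s ≤ s′) keeps the last one; the difference only matters when two variables have exactly equal entropy.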
We propose Alg. 2 to select the intervention variable X. As shown by Lemma 15.1 in Appendix B, the graph M0 is an MAG consistent to Mi. In Lines 2–6, we execute the MH algorithm to sample L′ MAGs. Then, on Line 7, we keep the MAGs among them which are consistent to Mi. Finally, we estimate the entropy by (1) and select X in Lines 9–14. 5 Experiments In this section, we conduct a simple simulation of the three-stage active learning framework. We generate 100 Erdös-Rényi random DAGs for each setting, where the number of variables is d = 10 and the probability of including each edge is p ∈ {0.1, 0.15, 0.2, 0.25, 0.3}. The weight of each edge is drawn from U [1, 2]. We generate 10000 samples from the linear structural equations, and take three variables as latent variables and the others as observed ones. In the implementation of the MH algorithm in Alg. 2, we discard the first 500 sampled MAGs and collect the following 1000 MAGs. For each intervention variable X, we collect 10000 samples under do(X = 2), and learn the circles at X by a two-sample test with a significance level of 0.05. We compare the maximum entropy criterion with a baseline random criterion, where we randomly select one variable with circles to intervene on in each round. We show the results in Tab. 1. # int. denotes the number of interventions needed to achieve MAG identification. The effectiveness of the maximum entropy criterion is verified by noting that the number of interventions with the maximum entropy criterion is smaller than that with the random criterion. Further, we evaluate the three stages respectively. In Stage 1, we obtain a PAG by running the FCI algorithm with a significance level of 0.05. In Stage 2, we adopt the two criteria to select intervention variables. In Stage 3, we learn the marks with the corresponding interventional data and the orientation rules. We evaluate the performance of Stage 1 by # correct PAG/# wrong PAG, which denote the numbers of edges that are correctly/wrongly identified by FCI. An edge is correctly/wrongly identified by FCI if the edge learned by FCI is identical/not identical to the true PAG. The performance of Stage 2 is evaluated by # int. And we evaluate the performance of Stage 3 by # correct int./# wrong int., where # correct int./# wrong int. denote the numbers of edges whose directions are correctly/wrongly identified by interventions. An edge is correctly/wrongly identified by interventions if its existence is correctly identified in P but the direction is uncertain, and after interventions we learn its direction correctly/wrongly. We evaluate the performance of the whole process by Norm. SHD and F1. Norm. SHD denotes the normalized structural Hamming distance (SHD), which is calculated by dividing SHD by d(d − 1)/2. The F1 score is calculated from the confusion matrix indicating whether the edge between any two vertices is correctly learned. According to the SHD and F1 score, the active framework can learn the MAG accurately when p is not large. And as shown by the evaluations of Stage 1 and Stage 3, the marks are learned accurately in Stage 3, and most of the mistakes are generated in Stage 1. Hence, in the active learning framework, the PAG estimation in the first stage is the bottleneck for the overall performance.
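A minimal sketch of the Stage-3 mark-learning step and of the normalized SHD used above, assuming the observational and interventional (do(X)) samples of each neighbor are available as arrays; the function names and the choice of a Kolmogorov–Smirnov test are illustrative assumptions, since the paper only specifies a two-sample test at level 0.05, not a particular test.

from scipy.stats import ks_2samp

def learn_marks_at(x, circle_neighbors, obs_samples, int_samples, alpha=0.05):
    # For each neighbor v with a circle at x, compare the observational and
    # interventional distributions of v.  If they differ, x causes v and the
    # mark at x is a tail (x -* v); otherwise the mark at x is an arrowhead (x <-* v).
    marks = {}
    for v in circle_neighbors:
        _, p_value = ks_2samp(obs_samples[v], int_samples[v])
        marks[v] = "tail" if p_value < alpha else "arrowhead"
    return marks

def normalized_shd(shd, d):
    # Norm. SHD of the evaluation: SHD divided by d(d - 1)/2.
    return shd / (d * (d - 1) / 2)

Note that, as in the paper, failing to reject equality of the two distributions is treated as evidence that X does not cause Vi.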
6 Conclusion In this paper, we show what causal relations are identifiable in the presence of latent variables given local background knowledge, with sound and complete orientation rules. Based on the theoretical results, we give the first active learning framework for causal discovery in the presence of latent variables. In the future, we will investigate the identifiability of causal relations given general background knowledge. It is also worthwhile to study how our research may help some recent novel decision-making methodology [44]. Acknowledgment This research was supported by NSFC (61921006), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the program A for Outstanding PhD candidate of Nanjing University. We are grateful to the reviewers for their valuable comments.
1. What is the focus and contribution of the paper on incorporating local causal background knowledge to PAGs? 2. What are the strengths of the proposed approach, particularly in terms of its soundness and completeness? 3. Do you have any concerns regarding the assumption of local BK's correctness and its potential impact on the results? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any limitations or potential drawbacks of the proposed approach that the reviewer identifies?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper presents sound and complete orientation rules to incorporate local causal background knowledge into PAGs and applies these rules to active learning. Here, the local causal background knowledge regarding a variable X refers to the causal relations between X and any adjacent variable. Note that this paper assumes no selection bias. Strengths And Weaknesses This paper is clearly written and the contributions are novel. The authors prove sound and complete orientation rules to orient a PAG with local background knowledge. Although I did not check the proof, the sketch seems reasonable. I only have one suggestion: it would be better to explain the steps to construct the PAGs in the example presented in Figure 1 in more detail. Questions I also only have some questions for the authors. The authors assume the local BK is correct, meaning that the orientation rules can only be applied when the local BK is correct. When the local BK is not correct, will applying the orientation rules cause some mistakes (such as cycles or new colliders), or can we still get a PAG? Regarding the MH sampling, it seems that Proposition 2 only gives a method to construct an equivalent MAG from a given one. Can any two Markov equivalent DAGs be transformed into each other in a limited number of these transformations? Is it possible that a sampled MAG from S(Mt−1) is not consistent with the given BK (line 3, Algorithm 2)? What is the computational complexity of Algorithm 2? Limitations Not applicable.
Consider T∗→ F1 → F2: there is T∗→ F2 oriented by R2 unless T∗→ F2 is already in Mi. Repeating the process for F3, F4, . . . , Ft(= K), we can prove that if there is Ft(= K) ◦−∗ T in Mi, then T∗→ Ft(= K) is oriented by R2. The rules thus orient the same marks as Alg. 1. Example 2. We give an example in Fig. 2. Suppose we obtain the PAG in Fig. 2(a) from observational data and have local BK regarding V1, V2. We divide the whole process of obtaining a PMG from P with the local BK into obtaining M1 from P with BK(V1) by Alg. 1 and then obtaining M2 from M1 with BK(V2) by Alg. 1. M1 and M2 are shown in Fig. 2(b) and 2(c), respectively. It is not hard to verify that all of P, M1, M2 satisfy the closed, chordal, and balanced properties defined in Thm. 1. Note that if we do not consider R′4, the edge colored red in Fig. 2(b) cannot be oriented. Fig. 2(a) also shows a case where BK(V1) is not tiered [37]. The reason is that the vertices V1, V4, V5 cannot be partitioned into disjoint subsets with an explicit causal order, because V1 and V4 belong to different subsets according to BK(V1) while V5 has an ancestral relation with neither V1 nor V4.
4 Active Causal Discovery Framework
The establishment of the orientation rules for causal identification with local BK makes causal discovery by interventions possible in the presence of latent variables. Hence, on the basis of the theoretical results, we propose an active learning framework for causal discovery in the presence of latent variables, with the target of learning the MAG with as few interventions as possible. The framework comprises three stages. In Stage 1, we learn a PAG from observational data. In Stage 2, we select a single variable X ∈ {V1, . . . , Vd} to intervene on and collect the interventional data. In Stage 3, we learn causal relations from the data. For each edge X ◦−∗ Vi, the circle at X can be learned by a two-sample test on whether the interventional distribution of Vi equals the observational one: X ←∗ Vi is learned if they are equal, and X −∗ Vi otherwise. Hence, the knowledge provided by the interventional data is local.
Algorithm 2: Intervention variable selection based on the maximum entropy criterion with the MH algorithm.
Input: A PMG Mi oriented based on P and BK regarding V1, . . . , Vi; the number of sampled MAGs L′
Output: The selected intervention variable X
1 Obtain an MAG M0 based on Mi by transforming ◦→ to → and the circle component into a DAG without new unshielded colliders;
2 for t = 1, 2, . . . , L′ do
3   Sample an MAG M′ from S(Mt−1);
4   ρ = min(1, |S(Mt−1)| / |S(M′)|);
5   Sample u from the uniform distribution U[0, 1];
6   if u ≤ ρ then Mt = M′ else Mt = Mt−1;
7 S = {Mt, 1 ≤ t ≤ L′ | Mt has the non-circle marks in Mi} (the set of MAGs consistent to Mi);
8 s ← 0, X ← ∅;
9 for Vj = Vi+1, . . . , Vd do
10   Denote V(Vj) = {V ∈ V(Mi) | Vj ◦−∗ V in Mi} and L = |S|;
11   For each possible local structure Lk of Vj, 1 ≤ k ≤ 2^{|V(Vj)|}, count the number Nk of appearances of Lk in the L MAGs from S;
12   s′ = −∑_{k=1}^{2^{|V(Vj)|}} (Nk / L) log(Nk / L);
13   if s ≤ s′ then X ← Vj, s ← s′;
14 return X.
We repeat the second and third stages until we identify the MAG. Since the orientation rules are complete, the graph can be updated completely after each intervention. The only remaining problem is how to select the intervention variable in Stage 2. Considering that the whole process is sequential, we focus on intervention variable selection in a single round. Without loss of generality, suppose we have obtained a PMG Mi by i interventions on V1, V2, . . . , Vi, and will select a variable from {Vi+1, . . . , Vd} to intervene on.
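Before turning to the selection criterion, the Stage 3 update described above can be sketched as follows: for each neighbor Vi with a circle at X, a two-sample test compares the interventional and observational samples of Vi and decides the mark at X. The choice of the Kolmogorov-Smirnov test, the data layout, and the variable names are our own assumptions for illustration; the paper only requires some two-sample test.

    import numpy as np
    from scipy.stats import ks_2samp

    def learn_marks_at_x(circle_neighbors, obs_data, int_data, alpha=0.05):
        """Decide the mark at X on each edge X o-* Vi after intervening on X.
        obs_data / int_data map a variable name to a 1-D array of samples."""
        marks = {}
        for vi in circle_neighbors:
            # If do(X) leaves the distribution of Vi unchanged, X is not an
            # ancestor of Vi, so the mark at X is an arrowhead (X <-* Vi);
            # otherwise it is a tail (X -* Vi).
            _, p_value = ks_2samp(obs_data[vi], int_data[vi])
            marks[vi] = "arrowhead_at_X" if p_value > alpha else "tail_at_X"
        return marks

    # Tiny synthetic illustration: X -> V1, while V2 is unaffected by do(X).
    rng = np.random.default_rng(0)
    x_obs = rng.normal(size=5000)
    obs = {"V1": x_obs + rng.normal(size=5000), "V2": rng.normal(size=5000)}
    itv = {"V1": 2.0 + rng.normal(size=5000), "V2": rng.normal(size=5000)}
    print(learn_marks_at_x(["V1", "V2"], obs, itv))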
We adopt the maximum entropy criterion [22]. For Mi, we select the variable X that maximizes
HX = −∑_{j=1}^{M} (lj / L) log(lj / L),    (1)
where j indexes a local structure of X (a local structure of X denotes a definite orientation of the marks at X), M denotes the number of different local structures, lj denotes the number of MAGs consistent to Mi that have the j-th local structure of X, and L denotes the total number of MAGs consistent to Mi. Intuitively, the maximum entropy criterion selects the intervention variable X for which the consistent MAGs are spread as evenly as possible over as many local structures of X as possible. A justification for intervening on such a variable is that we hope to have a small space of MAGs after the intervention no matter what the true local structure of X is. However, it is hard to count the number of MAGs consistent to Mi with each definite local structure. Even in the causally sufficient setting, such an operation (generally called counting maximally oriented partial DAGs) is #P-complete [39]. Since DAGs are a special case of MAGs, counting MAGs is at least as hard. Hence, we adopt a sampling method based on the Metropolis-Hastings (MH) algorithm [40] to sample uniformly from the space of MAGs. The algorithm begins from an MAG consistent to Mi, and in each round we transform the current MAG into a candidate MAG and accept or reject it with some probability. Here, we introduce an important result of Zhang and Spirtes [41] on MAG transformations in Prop. 2. Proposition 2 (Zhang and Spirtes [41], Tian [42]). Let M be an arbitrary MAG, and A → B an arbitrary directed edge in M. Let M′ be the graph identical to M except that the edge between A and B is A ↔ B. M′ is an MAG Markov equivalent to M if and only if (1) there is no directed path from A to B other than A → B in M; (2) for any C → A in M, C → B is also in M; and for any D ↔ A in M, either D → B or D ↔ B is in M; (3) there is no discriminating path for A on which B is the endpoint adjacent to A in M. In the MAG sampling algorithm, at each step we transform the current MAG into a new MAG by converting a directed edge into a bi-directed edge or a bi-directed edge into a directed one, using Prop. 2 to determine whether the conversion yields an MAG Markov equivalent to the current one. For the MH algorithm, the stationary distribution equals the desired distribution if any two states can be transformed into each other in a finite number of steps [43]. As implied by Theorem 3 of Zhang and Spirtes [41], any MAG can be transformed into any Markov equivalent MAG by a finite number of the transformations above. Hence, the MH algorithm is valid for sampling MAGs uniformly from the space of MAGs consistent to P. Then, we retain only the MAGs that have the same non-circle marks as Mi. In this way, we obtain a set of MAGs uniformly sampled from the space of MAGs consistent to Mi. Given an MAG M, let S(M) denote the set of MAGs that can be obtained from M by transforming one bi-directed edge into a directed edge or one directed edge into a bi-directed edge according to Prop. 2. Denote the cardinality of S(M) by |S(M)|. We set the probability Q(M′ | M) of an MAG M being transformed into another MAG M′ ∈ S(M) as 1/|S(M)|. Hence, the acceptance ratio ρ used to decide whether to accept or reject the candidate is
ρ = min(1, [p(M′) Q(M | M′)] / [p(M) Q(M′ | M)]) = min(1, |S(M)| / |S(M′)|).
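As a minimal sketch of the two computational pieces just described, the following Python code implements one MH move with the acceptance ratio ρ = min(1, |S(M)|/|S(M′)|) and the entropy of Eq. (1) over the local structures of a candidate variable. The neighborhood function S(·), which must enumerate the Markov equivalent MAGs reachable by a single edge conversion legal under Prop. 2, is assumed to be supplied by the caller and is not implemented here; the MAG encoding in the toy demo is our own.

    import math
    import random
    from collections import Counter

    def mh_step(current, neighbors, rng=random):
        """One Metropolis-Hastings move over MAGs; `neighbors(mag)` returns the list S(mag)."""
        S_cur = neighbors(current)
        candidate = rng.choice(S_cur)
        rho = min(1.0, len(S_cur) / len(neighbors(candidate)))
        return candidate if rng.random() <= rho else current

    def local_structure_entropy(mags, v, circle_neighbors):
        """Entropy of Eq. (1): how the marks at v over its circle-neighbors are
        distributed across the sampled MAGs consistent to Mi.
        Each MAG is a dict marks[(a, b)] = mark at b on edge a-b (assumed encoding)."""
        counts = Counter(tuple(mag[(w, v)] for w in circle_neighbors) for mag in mags)
        total = len(mags)
        return -sum((n / total) * math.log(n / total) for n in counts.values())

    def select_intervention(mags, candidates, circle_neighbors_of):
        """Pick the candidate variable whose local-structure entropy is maximal."""
        return max(candidates,
                   key=lambda v: local_structure_entropy(mags, v, circle_neighbors_of[v]))

    # Toy demo of the entropy criterion on two hypothetical sampled MAGs.
    m1 = {("V2", "V1"): "arrow", ("V3", "V1"): "tail"}
    m2 = {("V2", "V1"): "tail", ("V3", "V1"): "tail"}
    print(local_structure_entropy([m1, m2], "V1", ["V2", "V3"]))  # log 2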
We propose Alg. 2 to select the intervention variable X. As shown by Lemma 15.1 in Appendix B, the graph M0 is an MAG consistent to Mi. In Lines 2-6, we execute the MH algorithm to sample L′ MAGs. Then, on Line 7, we keep the MAGs among them that are consistent to Mi. Finally, we estimate the entropy by (1) and select X in Lines 9-14.
5 Experiments
In this section, we conduct a simple simulation of the three-stage active learning framework. We generate 100 Erdős-Rényi random DAGs for each setting, where the number of variables is d = 10 and the probability of including each edge is p ∈ {0.1, 0.15, 0.2, 0.25, 0.3}. The weight of each edge is drawn from U[1, 2]. We generate 10000 samples from the linear structural equations, and take three variables as latent variables and the others as observed ones. In the implementation of the MH algorithm in Alg. 2, we discard the first 500 sampled MAGs and collect the following 1000 MAGs. For each intervention variable X, we collect 10000 samples under do(X = 2), and learn the circles at X by a two-sample test with a significance level of 0.05. We compare the maximum entropy criterion with a baseline random criterion in which we randomly select one variable with circles to intervene on in each round. We show the results in Tab. 1. # int. denotes the number of interventions needed to achieve MAG identification. The effectiveness of the maximum entropy criterion is verified by noting that the number of interventions it requires is smaller than that of the random criterion. Further, we evaluate the three stages separately. In Stage 1, we obtain a PAG by running the FCI algorithm with a significance level of 0.05. In Stage 2, we adopt the two criteria to select intervention variables. In Stage 3, we learn the marks with the corresponding interventional data and the orientation rules. We evaluate the performance of Stage 1 by # correct PAG / # wrong PAG, the number of edges that are correctly/wrongly identified by FCI; an edge is correctly/wrongly identified by FCI if the edge learned by FCI is identical/not identical to that in the true PAG. The performance of Stage 2 is evaluated by # int. We evaluate the performance of Stage 3 by # correct int. / # wrong int., the number of edges whose directions are correctly/wrongly identified by interventions; an edge is correctly/wrongly identified by interventions if its existence is correctly identified in P but its direction is uncertain, and after interventions its direction is learned correctly/wrongly. We evaluate the performance of the whole process by Norm. SHD and F1. Norm. SHD denotes the normalized structural Hamming distance (SHD), calculated by dividing the SHD by d(d − 1)/2. The F1 score is calculated from the confusion matrix indicating whether the edge between any two vertices is correctly learned. According to the SHD and F1 score, the active framework can learn the MAG accurately when p is not large. As shown by the evaluations of Stage 1 and Stage 3, the marks are learned accurately in Stage 3, and most of the mistakes are made in Stage 1. Hence, in the active learning framework, the PAG estimation in the first stage is the bottleneck for good performance.
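For concreteness, the following sketch reproduces the data-generating setup of this section: an Erdős-Rényi DAG with U[1, 2] edge weights, samples from the corresponding linear structural equations, and interventional samples under do(X = 2). The standard Gaussian noise and the random choice of the three latent variables are our own assumptions, since the paper does not specify them.

    import numpy as np

    def random_linear_dag(d=10, p=0.2, seed=0):
        """Erdos-Renyi DAG over d variables, edge probability p, weights from U[1, 2]."""
        rng = np.random.default_rng(seed)
        order = rng.permutation(d)              # a random causal order
        W = np.zeros((d, d))                    # W[i, j] is the weight of i -> j
        for a in range(d):
            for b in range(a + 1, d):
                if rng.random() < p:
                    W[order[a], order[b]] = rng.uniform(1.0, 2.0)
        return W, order

    def sample_sem(W, order, n=10000, do=None, seed=0):
        """Sample n points from X_j = sum_i W[i, j] * X_i + eps_j with eps_j ~ N(0, 1);
        `do` maps a variable index to its intervened value, e.g. {x: 2.0}."""
        rng = np.random.default_rng(seed)
        do = do or {}
        X = np.zeros((n, W.shape[0]))
        for j in order:                         # fill variables in causal order
            j = int(j)
            X[:, j] = do[j] if j in do else X @ W[:, j] + rng.normal(size=n)
        return X

    W, order = random_linear_dag()
    obs = sample_sem(W, order)
    itv = sample_sem(W, order, do={int(order[0]): 2.0}, seed=1)
    latent = np.random.default_rng(2).choice(10, size=3, replace=False)  # assumed random choice
    observed = np.setdiff1d(np.arange(10), latent)
    print(obs[:, observed].shape, itv[:, observed].shape)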
6 Conclusion
In this paper, we show what causal relations are identifiable in the presence of latent variables given local background knowledge, with sound and complete orientation rules. Based on the theoretical results, we give the first active learning framework for causal discovery in the presence of latent variables. In the future, we will investigate the identifiability of causal relations given general background knowledge. It is also worthwhile to study how our research may help some recent novel decision-making methodology [44].
Acknowledgment
This research was supported by NSFC (61921006), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the program A for Outstanding PhD candidate of Nanjing University. We are grateful to the reviewers for their valuable comments.
1. What is the focus and contribution of the paper on nonparametric causal discovery? 2. What are the strengths of the proposed approach, particularly in its technical quality and clarity? 3. Do you have any concerns regarding the definition of background knowledge (BK) and its incorporation into the framework? 4. How does the method handle latent variables, and what assumptions are made about their nature? 5. What are the potential limitations of the active learning framework used in the proposed method? 6. Are there any originality concerns regarding the use of maximal entropy criterion and rejection metropolis sampling in the method? 7. Can you explain the accuracy difference between MCMC and random orientations in Table 1? 8. How do the proposed orientation rules handle conflicts between the learned PAG and the given BK?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Nonparametric causal discovery can identify the true DAG only up to some uncertainty. The uncertainty can be eliminated only by incorporating background knowledge (BK). Under the causal sufficiency assumption, the Meek rules established a complete set of rules to orient the CPDAG toward the DAG with BK. In the presence of latent variables, the authors provide the first sound and complete rules to orient the PAG. The orientation rules are based on a local form of BK. With these rules, the authors propose an active learning framework, aiming to orient a given PAG with the minimum number of interventions (BKs). Since it is hard to count the number of MAGs in some consistent space, the authors further equip the method with adaptive rejection Metropolis-Hastings sampling. Experimental results demonstrate the effectiveness of the proposed framework. Strengths And Weaknesses Pros: Clarity. The paper is generally well written and organized. The problem definition is clearly set out, the preliminaries are described in a detailed and clean way, and the theoretical analyses are also well carried out. Technical quality of the proposed rules. The proposed orientation rules are technically dense. I haven't read the rules (Section 3) in detail and cannot ensure their correctness, but I tried them on some simple example graphs and the rules appear correct to me. Cons: Overall this is good to me. Currently I cannot give any specific weaknesses of this paper, only some questions. See the questions section. Questions Local BK: BK is now defined as all marks at a variable X, which seems expensive, and usually BK is given as some specific directed edges. Could the authors please give more motivation for why BK is defined over a node instead of an edge? If some BK is given in edge form, could it also be incorporated into this framework? Latent: by "with latent variables", is it assumed that all latent variables are exogenous (or mutually independent), similar to the assumptions in FCI, so that the Markov equivalence class can be represented as a PAG? And thus, in the experiments, L348 "take three nodes as latent variables and the others as observed ones" - are they taken randomly or by some criterion? Correctness of BK: L134: "The correctness indicates that there exists an MAG consistent to P and the BK." So the authors also assume that the P learned in the first stage is correct. What if P is incorrect (which is often the case) and the given BK conflicts with P - is there any correction rule to prevent cascading errors? About originality: the active learning framework (maximum entropy criterion, rejection Metropolis sampling, etc., Section 4) seems to be a classic paradigm in combinatorial search. Is this originally proposed by the authors, or is it very similar to some existing methods on orientation rules? Accuracy difference in Table 1: why is there an accuracy difference between MCMC and random? Since the orientation rules are deterministic, given the same PAG, the maximally oriented graph should be exactly the same, with the only difference being the number of interventions required. Where does the accuracy difference come from? What if the starting PAG is exactly the true PAG (asymptotic result, not by FCI)? L86: letters (V, E) should be boldface. Limitations The problem setup is quite clear, and the authors have adequately addressed the limitations. I only have some questions above, regarding the assumptions on latent variables and the assumptions on correct P and BK.
NIPS
Title Sound and Complete Causal Identification with Latent Variables Given Local Background Knowledge Abstract Great efforts have been devoted to causal discovery from observational data, and it is well known that introducing some background knowledge attained from experiments or human expertise can be very helpful. However, it remains unknown what causal relations are identifiable given background knowledge in the presence of latent confounders. In this paper, we solve the problem with sound and complete orientation rules when the background knowledge is given in a local form. Furthermore, based on the solution to the problem, this paper proposes a general active learning framework for causal discovery in the presence of latent confounders, with its effectiveness and efficiency validated by experiments. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 1 Introduction Causality has attracted tremendous attention in recent years, for its applications in explainability [1], fairness [2, 3, 4], decision making [5, 6, 7, 8, 9], and so on. In Pearl’s causality framework [10], one important problem is causal discovery, i.e., learning a causal graph to denote the causal relations among all the variables [11, 12, 13, 14, 15, 16]. Generally, we cannot identify all the causal relations from observational data, unless we make some additional functional assumptions [17, 18, 19] or exploit the abundant information in multiple or dynamic environments [20, 21]. In light of the uncertainty of the causal relations, a common practice to reveal them is to introduce background knowledge, called BK for short. BK can be attained from experiments or human expertise. When experiments are available, we can collect interventional data to learn additional causal relations [22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. And if some variables in the causal discovery task are familiar to humans, it is also possible that human expertise can be helpful [32]. For example, if we study the causal relations among some variables including sales and prices, causal relations such as price causes sales can be obtained directly from human expertise. When BK is available in addition to observational data, a fundamental problem is: what causal relations are identifiable in the presence of latent variables? This problem is fundamental for its implication on the maximal causal knowledge identifiable from the observational data and BK. Its difficulty results from the fact that, in addition to the BK itself, some other causal relations can also be learned when incorporating BK. For example, they can be identified on the basis of some restrictions, such as the restriction that the causal relations are acyclic. It is quite challenging to find the complete characterization of such additional causal knowledge in the presence of latent variables, and the complete characterization must be accompanied by a theoretical guarantee that, for each unidentifiable causal relation, there exist causal graphs consistent to the observational data and local BK that differ on that relation. Unfortunately, the problem remains open. In this paper, we solve the problem with sound and complete orientation rules when the background knowledge is given in a local form. In the presence of latent variables, a partial ancestral graph (PAG) can be learned by the FCI algorithm from observational data [33, 34, 35].
A PAG can imply the existence of a causal relation between two variables but does not necessarily imply the causal direction. We say BK is local if, when the BK contains the causal information with respect to a variable X, then for each variable adjacent to X in the PAG, the BK implies whether X causes it or not. Local BK is common in real tasks, whether it comes from experiments or from human expertise. For example, when we conduct experiments and collect data under an intervention on X, for each variable V that has a causal relation with X, the interventional data can tell whether X causes V; and business practitioners often have enough domain knowledge about price, so they usually know whether price causes other variables or not, e.g., price causes sales and the number of customers, and price is not caused by stocks. Given a PAG and local BK, we propose a set of orientation rules to determine some causal directions in the PAG. We prove that the rules are sound and complete, which states that the causal relations identifiable from the available information are exactly those determined by the proposed rules, thus settling the problem for local BK. The establishment of orientation rules compatible with local BK makes causal discovery by interventions possible in the presence of latent variables. We propose the first general active learning framework for causal discovery, with the target of identifying a maximal ancestral graph (MAG), which represents the causal relations when there are latent variables. Considering that interventions are expensive in reality, we hope to achieve the target with as few interventions as possible. Hence we present a baseline maximum entropy criterion, equipped with Metropolis-Hastings sampling, to select the intervention variable so that we can learn more causal relations with each intervention. Our contributions in this paper are twofold: (1) We show what causal relations are identifiable given local background knowledge in the presence of latent confounders, with sound and complete orientation rules. (2) We give the first active learning framework for causal discovery that is applicable when latent variables exist, where a maximum entropy criterion equipped with Metropolis-Hastings sampling is introduced to select intervention variables. Related works. In the literature, Meek [36] established sound and complete rules, generally called Meek rules, for causal identification given BK under the causal sufficiency assumption. The assumption requires that there are no latent variables that cause more than one observed variable simultaneously. However, causal sufficiency is untestable in practice. When we apply causal methods in subjects such as biology, sociology, and economics, latent variables are quite common. For example, macroeconomic policy influences purchase price, the population of customers, and advertising cost, but it is hard to measure, and is thus a latent confounder. Andrews et al. [37] showed that the FCI algorithm is complete given tiered BK, where all variables are partitioned into disjoint sets with an explicit causal order. However, in many cases, e.g., when BK is revealed by interventions, BK is not tiered. Jaber et al. [28] investigated a complete algorithm to learn a graph when additional interventional distributions are available, while such knowledge is not needed in our paper. 2 Preliminary A graph G = (V, E) consists of a set of vertices V = {V1, · · · , Vp} and a set of edges E.
For any subset V′ ⊆ V, the subgraph induced by V′ is GV′ = (V′,EV′), where EV′ is the set of edges in E whose both endpoints are in V′. For a graph G, V(G) denotes the set of vertices in G. G is a complete graph if there is an edge between any two vertices. The subgraph induced by an empty set is also a complete graph. G[−V′] denotes the subgraph GV\V′ induced by V\V′. Usually, bold letter (e.g., V) denotes a set of vertices and normal letter (e.g., V ) denotes a vertex. A graph is chordal if any cycle of length four or more has a chord, which is an edge joining two vertices that are not consecutive in the cycle. If G = (V,E) is chordal, the subgraph of G induced by V′ ⊆ V is chordal. A graph G is mixed if the edges in G are either directed → or bi-directed ↔. The two ends of an edge are called marks and have two types arrowhead or tail. A graph is a partial mixed graph (PMG) if it contains directed edges, bi-directed edges, and edges with circles (◦). The circle implies that the mark here could be either arrowhead or tail but is indefinite. Vi is adjacent to Vj in G if there is an edge between Vi and Vj . A path in a graph G is a sequence of distinct vertices 〈V0, · · · , Vn〉 such that for 0 ≤ i ≤ n − 1, Vi and Vi+1 are adjacent in G. An edge in the form of Vi ◦−◦ Vj is a circle edge. The circle component in G is the subgraph consisting of all the ◦−◦ edges in G. Denote the set of vertices adjacent to Vi in G by Adj(Vi, G). A vertex Vi is a parent of a vertex Vj if there is Vi → Vj . A directed path from Vi to Vj is a path comprised of directed edges pointing to the direction of Vj . A possible directed path from Vi to Vj is a path without an arrowhead at the mark near Vi on every edge in the path. Vi is an ancestor/possible ancestor of Vj if there is a directed path/possible directed path from Vi to Vj or Vi = Vj . Vi is a descendant/possible descendant of Vj if there is a directed path/possible directed path from Vj to Vi or Vj = Vi. Denote the set of parent/ancestor/possible ancestor/descendant/possible descendant of Vi in G by Pa(Vi, G)/Anc(Vi, G)/PossAn(Vi, G)/De(Vi, G)/PossDe(Vi, G). If Vi ∈ Anc(Vj , G) and Vi ← Vj /Vi ↔ Vj , it forms a directed cycle/almost directed cycle. ∗ is a wildcard that denotes any of the marks (arrowhead, tail, and circle). We make a convention that when an edge is in the form of ◦−∗, the ∗ here cannot be a tail since in this case the circle can be replaced by an arrowhead due to the assumption of no selection bias. A non-endpoint vertex Vi is a collider on a path if the path contains ∗→ Vi ←∗. A path p from Vi to Vj is a collider path if Vi and Vj are adjacent or all the passing vertices are colliders on p. p is a minimal path if there are no edges between any two non-consecutive vertices. A path p from Vi to Vj is a minimal collider path if p is a collider path and there is not a proper subset V′ of the vertices in p such that there is a collider path from Vi to Vj comprised of V′. A triple 〈Vi, Vj , Vk〉 on a path is unshielded if Vi and Vk are not adjacent. p is an uncovered path if every consecutive triple on p is unshielded. A path p is a minimal possible directed path if p is minimal and possible directed. A mixed graph is an ancestral graph if there is no directed or almost directed cycle (since we assume no selection bias, we do not consider undirected edges in this paper). 
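As a small check of the ancestral condition just defined, the following sketch tests a mixed graph for directed and almost directed cycles. The edge-set representation and function names are our own choices for illustration.

    def ancestors(directed, v):
        """Vertices with a directed path to v, given edges {(a, b)} meaning a -> b."""
        anc, frontier = set(), {v}
        while frontier:
            new = {a for (a, b) in directed if b in frontier} - anc
            anc |= new
            frontier = new
        return anc

    def is_ancestral(directed, bidirected):
        """No directed cycle and no almost directed cycle (A in Anc(B) with A <-> B)."""
        verts = {x for e in directed | bidirected for x in e}
        anc = {v: ancestors(directed, v) for v in verts}
        no_directed_cycle = all(v not in anc[v] for v in verts)
        no_almost_cycle = all(a not in anc[b] and b not in anc[a]
                              for (a, b) in bidirected)
        return no_directed_cycle and no_almost_cycle

    # A -> B -> C together with A <-> C forms an almost directed cycle.
    print(is_ancestral({("A", "B"), ("B", "C")}, {("A", "C")}))   # False
    print(is_ancestral({("A", "B"), ("B", "C")}, {("B", "D")}))   # True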
An ancestral graph is a maximal ancestral graph (MAG, denoted by M) if it is maximal, i.e., for any two non-adjacent vertices, there is a set of vertices that m-separates them [33]. A path p between X and Y in an ancestral graph G is an inducing path if every non-endpoint vertex on p is a collider and is an ancestor of either X or Y. An ancestral graph is maximal if and only if there is no inducing path between any two non-adjacent vertices [33]. In an MAG, a path p = 〈X, · · · , W, V, Y〉 is a discriminating path for V if (1) X and Y are not adjacent, and (2) every vertex between X and V on the path is a collider on p and a parent of Y. Two MAGs are Markov equivalent if they share the same m-separations. A class comprising all Markov equivalent MAGs is a Markov equivalence class (MEC). We use a partial ancestral graph (PAG, denoted by P) to denote an MEC, where a tail/arrowhead occurs if the corresponding mark is a tail/arrowhead in all Markov equivalent MAGs, and a circle occurs otherwise. For a PMG M that is obtained from a PAG P by orienting some circles to either arrowheads or tails, an MAG is consistent to the PMG M if (1) the non-circle marks in M also appear in the MAG, and (2) the MAG is in the MEC represented by P. Sometimes we omit the PAG P and simply say a PMG M (obtained from the PAG P), since in this paper we study the rules for incorporating local BK into a PAG. We say an MAG is consistent to the BK if it has the orientations dictated by the BK.
3 Sound and Complete Rules
In this section, we present the sound and complete orientation rules to orient a PAG P with local background knowledge (BK), where P is learned from observational data [11, 35] and V(P) = {V1, V2, · · · , Vd}. The local BK regarding X means that the BK directly implies, and only directly implies, all the true marks at X, denoted by BK(X). We assume the absence of selection bias and that the BK is correct. Correctness means that there exists an MAG consistent to P and the BK. Without loss of generality, we suppose the local BK is regarding V1, V2, · · · , Vk, 1 ≤ k ≤ d. That is, for any vertex X ∈ {V1, V2, · · · , Vk}, all the marks at X are known according to the local BK; and for any vertex X ∈ {Vk+1, · · · , Vd}, the local BK does not directly imply any marks at X. First, we show the orientation rules to incorporate local BK. They follow the rules of Zhang [35] for learning a PAG but with one replacement and one addition. Due to the page limit, we list here only the replaced and additional rules; see Appendix A for the rules proposed by Zhang [35].
R′4: If 〈K, · · · , A, B, R〉 is a discriminating path between K and R for B, and B ◦−∗ R, then orient B ◦−∗ R as B → R.
R11: If A −∗ B, then A → B.
Algorithm 1: Update a PMG with local background knowledge
Input: A PMG Mi, BK(X)
Output: Updated graph Mi+1
1 For any K ∈ PossDe(X, Mi[−C]) and any T ∈ C such that K ◦−∗ T in Mi, orient K ←∗ T (the mark at T remains); for all K ∈ PossDe(X, Mi[−C]) such that X ◦−∗ K, orient X → K;
2 Orient the subgraph Mi[PossDe(X, Mi[−C])\{X}] as follows until no feasible updates remain: for any two vertices Vl and Vj such that Vl ◦−◦ Vj, orient the edge as Vl → Vj if (i) FVl\FVj ≠ ∅, or (ii) FVl = FVj and there is a vertex Vm ∈ PossDe(X, Mi[−C])\{X} not adjacent to Vj such that Vm → Vl ◦−◦ Vj;
3 Apply the orientation rules until the graph is closed under the orientation rules.
Prop. 1 implies the soundness of R′4 to orient a PAG P or a PMG obtained from P with local BK. See Appendix A for the proof.
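The first step of Alg. 1 can be sketched directly on the same kind of edge-mark dictionary used earlier; the encoding and names below are our own illustrative assumptions, not the authors' implementation.

    from collections import deque

    ARROW, TAIL, CIRCLE = "arrow", "tail", "circle"   # assumed mark encoding

    def poss_de(marks, x, removed=frozenset()):
        """Possible descendants of x in the graph without `removed` (Sec. 2)."""
        seen, queue = {x}, deque([x])
        while queue:
            u = queue.popleft()
            for (a, b) in list(marks):
                if a == u and b not in removed and b not in seen and marks[(b, u)] != ARROW:
                    seen.add(b)
                    queue.append(b)
        return seen

    def alg1_step1(marks, x, bk_arrow_into_x):
        """Step 1 of Alg. 1 on a PMG encoded as marks[(a, b)] = mark at b on edge a-b."""
        new, C = dict(marks), set(bk_arrow_into_x)
        # Marks at X dictated by BK(X): arrowhead at X for V in C, tail otherwise.
        for v in {b for (a, b) in marks if a == x}:
            new[(v, x)] = ARROW if v in C else TAIL
        pd = poss_de(marks, x, removed=frozenset(C))    # PossDe(X, Mi[-C])
        for k in pd - {x}:
            for t in C:
                if (t, k) in marks and marks[(t, k)] == CIRCLE:
                    new[(t, k)] = ARROW                 # K <-* T: arrowhead at K
            if (k, x) in marks and marks[(k, x)] == CIRCLE:
                new[(x, k)] = ARROW                     # X o-* K becomes X -> K
        return new

Steps 2 and 3 would then repeatedly apply the FVl comparison and the orientation rules until closure, which we do not reproduce here.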
R11 is immediate due to no selection bias assumption. In the following, we make a convention that when we say the orientation rules, they refer to R1 − R3,R8 − R10 of Zhang [35] and R′4,R11. A PMG is closed under the orientation rules if the PMG cannot be oriented further by the orientation rules. Proposition 1. Given a PAG P , for any PMG M that is obtained from P by orienting some circles in P (or M = P),R′4 is sound to orient M with local background knowledge. Proof sketch: If there is B ←∗R in an MAG consistent with the case ofR′4, there must be a minimal collider path between K and R across B, in which case B ←∗R should have been identified in the PAG according to Zhao et al. [38], Zhang [35], contradiction. Next, we will prove the completeness of the proposed orientation rules. It is somewhat complicated. We first give a roadmap for the proof process. There are mainly two parts. The first is that we present a complete algorithm to orient P with the local BK regarding V1, V2, · · · , Vk. The second part is to prove that the algorithm orient the same marks as the proposed orientation rules. Combining these two parts, we conclude the orientation rules are complete to orient a PAG. The construction of the algorithm and the corresponding proof for the completeness of the algorithm in the first step is the most difficult part. To achieve the construction, we divide the whole process of orienting a PAG with BK regarding V1, V2, · · · , Vk into k steps. Beginning from the PAG P (P is also denoted by M0), in the (i+ 1)-th (0 ≤ i ≤ k − 1) step we obtain a PMG Mi+1 from Mi by incorporating BK(Vi+1) and orienting some other circles further. To obtain the updated graph in each step, we propose an algorithm orienting a PMG with local BK incorporated in this step. Repeat this process by incorporating BK(V1), BK(V2), . . . , BK(Vk) sequentially, we obtain the PMG with incorporated BK regarding V1, · · · , Vk. We will prove that the k-step algorithm to orient PAG with local BK regarding V1, · · · , Vk is complete, by an induction step that if the first i-step algorithm is complete to update the PAG P with BK regarding V1, · · · , Vi, then the (i+ 1)-step algorithm is complete to update P with BK regarding V1, · · · , Vi+1. Hence the proof in the first part completes. In the second part, we show that the k-step algorithm orients the same marks as the proposed orientation rules. We thus conclude that the orientation rules are sound and complete for causal identification in the presence of latent variables given local BK. We present Alg. 1 to obtain Mi+1 from Mi by incorporating BK(Vi+1). For brevity, we denote Vi+1 byX , and introduce a set of vertices C defined as C = {V ∈ V(P) | V ∗→ X ∈ BK(X)} to denote the vertices whose edges with X will be oriented to ones with arrowheads at X according to BK(X) directly. In Mi+1, there is X ←∗V for V ∈ C and X−∗V for V ∈ {V ∈ V(P) | V ∗−◦X in Mi}\C oriented directly according to BK(X). We define FMiVl = {V ∈ C ∪ {X} | V ∗−◦ Vl in Mi} for any Vl ∈ PossDe(X,Mi[−C])\{X}, which is denoted by FVl for short. FVl denotes the vertices in C ∪ {X} whose edges with Vl are oriented to ones with arrowheads at Vl in the first step of Alg. 1. In the first step of Alg. 1, the orientation at X follows BK(X), and the orientation at the vertices apart of X is motivated as the necessary condition for the ancestral property. 
Speaking roughly, if there is an oriented edge K → T in the case of the first step, then no matter how we orient the other circles, there will be a directed or almost directed cycle, unless we introduce new unshielded colliders (which takes new conditional independences relative to those in P), both of which are evidently invalid to obtain an MAG in the MEC represented by P . And the orientation in the second step is motivated as the necessary condition for that there are no new unshielded colliders in the oriented graph relative to the PAG P . If there is an MAG where there is an inconsistent edge with the edge oriented in this step, then there must be new unshielded colliders relative to P , which implies that the MAG is not consistent to P . The third step orients some other circles based on the updated structure. Example 1. See the example in Fig. 1. Suppose the input PMG Mi in Alg. 1 is the graph shown in Fig. 1(a). And there is local BK regardingX = V1, which is in the form of V1 ←∗V2, V1−∗V5, V1−∗V4. Hence C = {V2}. In this case, PossDe(X,Mi[−C]) = PossDe(V1,Mi[−V2]) = {V1, V3, V4, V5}. And FV3 = {V2}, FV4 = {V1}, FV5 = {V1, V2}. When we implement Alg. 1, in the first step, the edges denoted by red dashed lines in Fig. 1(b) are oriented. Among them, V1◦−◦V2/V1◦−◦V5/V1◦→ V4 is transformed to V1 ←◦V2/V1 → V5/V1 → V4 due to V1 = X,V2 ∈ C, V4, V5 ∈ {V ∈ V(P) | V ∗−◦ X in Mi}\C; and V2◦→ V5/V2◦→ V3 is oriented due to V2 ∈ C and V3, V5 ∈ PossDe(X,Mi[−C]). In the second step of Alg. 1, the edge denoted by red dashed line in Fig. 1(c) is oriented due to (1) a circle edge V3 ◦−◦ V5 after the first step, where V3, V5 ∈ PossDe(X,Mi[−C]); (2) FV3 = {V2} ⊂ {V1, V2} = FV5 . In the third step of Alg. 1, the edges denoted by red dashed lines in Fig. 1(d) is oriented byR1 of the orientation rules. Then, we present the key induction result in Thm. 1 for the graph obtained by Alg. 1 in each step. Due to the page limit, we only show a proof sketch, with a detailed version in Appendix B. Then with Thm. 1, we directly conclude that k-step algorithm is complete to orient the PAG with the local BK regarding V1, . . . , Vk in Cor. 1. Theorem 1. Given i, suppose Ms,∀s ∈ {0, 1, . . . , i} satisfies the five following properties: (Closed) Ms is closed under the orientation rules. (Invariant) The arrowheads and tails in Ms are invariant in all the MAGs consistent to P and BK regarding V1, . . . , Vs. (Chordal) The circle component in Ms is chordal. (Balanced) For any three vertices A,B,C in Ms, if A∗→ B ◦−∗C, then there is an edge between A and C with an arrowhead at C, namely, A∗→ C. Furthermore, if the edge between A and B is A→ B, then the edge between A and C is either A→ C or A◦→ C (i.e., it is not A↔ C). (Complete) For each circle at vertex A on any edge A◦−∗B in Ms, there exist MAGsM1 andM2 consistent to P and BK regarding V1, . . . , Vs with A←∗B ∈ E(M1) and A→ B ∈ E(M2). Then the PMG Mi+1 obtained from Mi with BK(Vi+1) by Alg. 1 also satisfies the five properties. Proof sketch: For brevity, we denote Vi+1 by X . (A) The closed property holds due to the third step of Alg. 1.(B) The invariant property holds because all the orientations in Alg. 1 either follow BK(X) or are motivated as the necessary condition for the ancestral property and the fact that there cannot be new unshielded colliders introduced relative to Mi. (C) The chordal property is proved based on the fact that only the first two steps of Alg. 
1 possibly introduce new arrowheads, while the third step will only transform the edges as A◦→ B to A→ B, which is proved in Lemma 12 in Appendix B. With this fact, it suffices to prove that the circle component in the graph obtained after the first two steps is chordal. Denote the graph after the first two steps by M̄i+1. We can prove that the circle components in M̄i+1[PossDe(X,Mi[−C])] and in M̄i+1[−PossDe(X,Mi[−C])] are chordal, respectively. Since there are no circle edges connecting PossDe(X,Mi[−C]) and V\PossDe(X,Mi[−C]) (otherwise it has been oriented in the first step of Alg. 1), we conclude the desired result. (D) The balanced property of Mi+1 is proved based on three facts that (1) in Alg. 1, if we transform a circle to arrowhead at V , then V ∈ PossDe(X,Mi[−C]); (2) if there is A ∈ PossDe(X,Mi[−C]) and A ◦−∗ B, B 6∈ C, in Mi+1, then B ∈ PossDe(X,Mi[−C]); (3) Mi satisfies the balanced property. We can prove that it is impossible that there is a sub-structure Vi∗→ Vj ◦−∗ Vk where Vi is not adjacent to Vk or there is Vi ∗−◦ Vk in Mi+1 by discussing whether Vi, Vj , Vk belongs to PossDe(X,Mi[−C]). (E) The completeness property is proved by showing two results: (1) for edge circle edge A ◦−◦B and C◦→ D in Mi+1, C◦→ D can be transformed to C → D and the circle edge can be oriented as both A→ B andA← B in the MAGs consistent to P and local BK regarding V1, · · · , Vi+1; (2) in Mi+1, each edge A◦→ B can be oriented as A ↔ B in an MAG consistent to P and local BK regarding V1, · · · , Vi+1. In this part, the most difficult part is to prove the first result, with which the second result can be proved directly following the proof process of Thm. 3 of Zhang [35]. In the proof for the first result, we show that any MAG obtained from Mi+1 by transforming the edges as A◦→ B to A→ B and the circle component into a DAG without new unshielded colliders is consistent to P and local BK regarding V1, . . . , Vi+1. If not, we can always find an MAG obtained from Mi by transforming the edges as A◦→ B to A → B and the circle component into a DAG without new unshielded colliders that is not consistent to P and local BK regarding V1, . . . , Vi. By induction, there is an MAG obtained from P by transforming the edges as A◦→ B to A → B and the circle component into a DAG without new unshielded colliders that is not consistent to P , contradiction with Thm. 2 of Zhang [35]. We conclude the first result. Corollary 1. The k-step algorithm from M0(= P) to Mk is sound and complete. That is, the non-circle marks in Mk are invariant in all the MAGs consistent to P and BK regarding V1, . . . , Vk. And for each circle in Mk, there exist both MAGs with an arrowhead and MAGs with a tail here that are consistent to P and BK regarding V1, . . . , Vk. Proof. Previous studies [34, 35] show that the last four properties in Thm. 1 are fulfilled in PAG, the case inR′4 will never happen in P because such circles have been oriented byR4 in the process of learning P , and the case inR11 is never triggered by the rules to learn P . Hence P satisfies the five properties. With the induction step implied by Thm 1, we directly conclude that Mk satisfies the five properties, thereby satisfying the invariant and complete property. Theorem 2. The orientation rules are sound and complete to orient a PAG with the local background knowledge regarding V1, . . . , Vk. Proof. The soundness ofR′4 is shown by Prop. 1. The soundness of other rules immediately follows Thm. 4.1 of Ali et al. [34] and Thm. 1 of Zhang [35]. 
We do not show the details. Roughly speaking, the violation of these rules will lead to that there are new unshielded colliders or directed or almost directed cycles in the oriented graph relative to P . The main part is to prove the completeness. According to Cor. 1, it suffices to prove that in each step by Alg. 1 to incorporate BK(X) into a PMG Mi, the orientations in Alg. 1 either follow BK(X) directly, or can be achieved by the proposed orientation rules. The orientation in the second step of Alg. 1 can be achieved by R1, because no matter FVl\FVj 6= ∅ or Vm → Vl ◦−◦ Vj , there is F ∈ FVl\FVj or F = Vm respectively such that F∗→ Vl ◦−◦ Vj where F is not adjacent to Vj . The orientation in the third step naturally follows the orientation rules. For the orientation in the first step, X ←∗V for V ∈ C is dictated by BK(X), and X → V for V ∈ {V ∈ V(P) | X ◦−∗ V }\C is obtained from X −∗V dictated by BK(X) andR11. The remaining part is to prove for K ∈ PossDe(X,Mi[−C])\{X} and T ∈ C, if there is K ◦−∗ T in Mi, K ←∗T can be oriented by the proposed orientation rules when we incorporate BK(X). Due to K ∈ PossDe(X,Mi[−C])\{X}, there is a possible directed path from X to K that does not go through C. According to Lemma 2 in Appendix B, there is a minimal possible directed path p = 〈X(= F0), F1, . . . ,K(= Ft)〉, t ≥ 1 where each vertex does not belong to C. Hence X → F1 is oriented by BK(X) and R11 unless X → F1 has been in Mi. Hence, X → F1 → · · · → Ft can be oriented by R1 after incorporating BK(X) unless they have been in Mi. If t = 1, there is T∗→ X → K, thus K ←∗T can be oriented byR2. Next, we consider the case when t ≥ 2. We first prove that for any Fm ∈ F1, . . . , Ft, t ≥ 2, Fm is adjacent to T , and there is not Fm → T in Mi. Suppose Fm is not adjacent to T , there must be a sub-structure of Mi induced by Fm−s, Fm−s+1, . . . , Fm+l, T , 1 ≤ s ≤ m, 1 ≤ l ≤ t−m, such that T is only adjacent to Fm−s and Fm+l in this sub-structure. There are at least four vertices in this sub-structure. Hence there must be an unshielded collider (denoted by UC for short) in this sub-structure in P , otherwise no matter how we orient the circle there is either a new UC relative to P or a directed or almost directed cycle there. Since p is possibly directed, the UC is at either Fm+l or T (i.e., ∗→ Fm+l( or T )←∗). If there is a UC at Fm+l, T∗→ Fm+l and Fm+l−1∗→ Fm+l are identified in P . Thus Fm+l → Fm+l+1 · · · → Ft is identified in P . Due to the completeness of FCI algorithm to learn P , there is K ←∗T in P , because there is not an MAG with K → T (there has been T∗→ Fm+l → · · · → K in P). Hence there is K ←∗T in Mi, contradicting with K ◦−∗ T in Mi. If there is not a UC at Fm+l, UC can only be at T . Thus Fm−s∗→ T ←∗Fm+l is identified in P . Since p is possibly directed, Fm+l−1 is not adjacent to T , and there is not a UC at Fm+l in the sub-structure, there cannot be Fm+l ↔ T in P . Hence the path 〈Fm−s, Fm−s+1, . . . , Fm+l, T 〉 in P is an uncovered possible directed path, Fm−s → T is identified in P (otherwise R9 applies). When incorporating BK(X), there is a (almost) directed cycle T∗→ X → · · · → Fm−s → T , contradicting with the correctness of BK. Hence, Fm is adjacent to T . Similarly, if Fm → T in Mi, there is T∗→ X → · · · → Fm → T , impossibility. Finally, since F1 is adjacent to T , and T∗→ X → F1 is oriented according to BK(X), there is T∗→ F1 oriented by R2 unless T∗→ F1 has been in Mi. Hence there is always T∗→ F1 by the orientation rules. 
Consider T∗→ F1 → F2, there is T∗→ F2 oriented by R2 unless T∗→ F2 has been in Mi. Repeat the process for F3, F4, . . . , Ft(= K), we can prove that if there is Ft(= K)◦−∗T in Mi, there is T∗→ Ft(= K) oriented byR2. The rules thus orient the same marks as Alg. 1. Example 2. We give an example in Fig. 2. Suppose we obtain a PAG as Fig. 2(a) with observational data and have the local BK regarding V1, V2. We divide the whole process of obtaining a PMG from P with the local BK into obtaining M1 from P with BK(V1) by Alg. 1 and then obtaining M2 from M1 with BK(V2) by Alg. 1. M1 and M2 are shown in Fig. 2(b) and 2(c), respectively. It is not hard to verify that all of P , M1, M2 satisfy the closed, chordal, and balanced properties defined in Thm 1. Note if we do not considerR′4, the edge colored red in Fig. 2(b) cannot be oriented. Fig. 2(a) also shows a case where BK(V1) is not tiered [37]. The reason is that the vertices V1, V4, V5 cannot partitioned into disjoint subsets with explicit causal order because V1 and V4 belong to different subsets according to BK(V1) but V5 has ancestor relation with neither V1 nor V4. 4 Active Causal Discovery Framework The establishment of the orientation rules for causal identification with local BK makes causal discovery by interventions possible in the presence of latent variables. Hence, on the basis of the theoretical results, we propose an active learning framework for causal discovery in the presence of latent variables, with the target of learning the MAG with as fewer interventions as possible. The framework is comprised of three stages. In Stage 1, we learn a PAG with observational data. In Stage 2, we select a singleton variable X ∈ V1, . . . , Vd to intervene and collect the interventional data. In Stage 3, we learn causal relations with the data. For each edge X ◦−∗Vi, the circle at X can be learned by a two-sample test on whether the interventional distribution of Vi equals to the observational one. There is X ←∗Vi learned if they are equal, and X −∗Vi otherwise. Hence, the knowledge taken by the Algorithm 2: Intervention variable selection based on maximum entropy criterion with MH alg. Input: A PMG Mi oriented based on P and BK regarding V1, . . . , Vi, number of MAGs L Output: The selected intervention variable X 1 Obtain an MAGM0 based on Mi by transforming ◦→ to→ and the circle component into a DAG without new unshielded colliders; 2 for t = 1, 2, . . . , L′ do 3 Sample an MAGM′ from S(Mt−1); 4 ρ = min(1, |S(Mt−1)||S(M′)| ); 5 Sample u from uniform distribution U [0, 1]; 6 if u ≤ ρ then Mt =M′ else Mt =Mt−1 ; 7 S = {Mt,1≤t≤L′ | Mt has the non-circle marks in Mi} . The set of MAGs consistent to Mi; 8 s← 0, X ← ∅; 9 for Vj = Vi+1, . . . , Vd do 10 Denote V(Vj) = {V ∈ V(Mi) | Vj ◦−∗ V in Mi}, L = |S|; 11 For each possible local structure Lk of Vj , 1 ≤ k ≤ 2|V(Vj)|, we count the number Nk of the appearance of Lk in the L MAGs from S; 12 s′ = − ∑2|V(Vj)| k=1 Nk L log Nk L ; 13 if s ≤ s′ then X ← Vj , s← s′; 14 return X . interventional data is local. We repeat the second and third stages until we identify the MAG. Since the orientation rules are complete, the graph can be updated completely by each intervention. The only remaining problem is how to select the intervention variable in Stage 2. Considering that the whole process is sequential, we only focus on the intervention variable selection in one round. Without loss of generality, suppose we have obtained a PMG Mi by i interventions on V1, V2, . . . 
, Vi, and will select a variable from {Vi+1, . . . , Vd} to intervene. We adopt the maximum entropy criterion [22]. For Mi, we select the variable X that maximizes HX = − M∑ j=1 lj L log lj L , (1) where j is an index for a local structure of X (a local structure of X denotes a definite orientation of the marks at X), M denotes the number of different local structures, lj denotes the number of MAGs consistent to Mi which has the j-th local structure of X , and L denotes the total number of MAGs consistent to Mi. Intuitively, the maximum entropy criterion is devoted to selecting the intervention variable X such that there is a similar number of MAGs with each local structure of X and as more as possible local structures of X . A justification for intervening on such a variable is that we hope to have a small space of MAGs after the intervention no matter what the true local structure of X is. However, it is hard to count the number of MAGs consistent to Mi with each definite local structure. Even in causal sufficiency setting, implementing such operation (generally called counting maximally oriented partial DAGs) is #P-complete [39]. Considering DAG is a special case for MAG, the counting of MAGs is harder. Hence, we adopt a sampling method based on Metropolis-Hastings (MH) algorithm [40], to uniformly sample from the space of MAGs. The algorithm begins from an MAG consistent to Mi, and in each round we transform the MAG to a candidate MAG and decide to accept or reject it with some probability. Here, we introduce an important result of Zhang and Spirtes [41] for MAGs transformation in Prop. 2. Proposition 2 (Zhang and Spirtes [41], Tian [42]). LetM be an arbitrary MAG, and A → B an arbitrary directed edge inM. LetM′ be the graph identical toM except that the edge between A and B is A↔ B.M′ is an MAG Markov equivalent toM if and only if (1) there is no directed path from A to B other than A→ B inM; (2) for any C → A inM, C → B is also inM; and for any D ↔ A inM, either D → B or D ↔ B is inM; (3) there is no discriminating path for A on which B is the endpoint adjacent to A inM. In the MAG sampling algorithm, in each step we transform the current MAG to a new MAG by converting a directed edge to bi-directed edge or a bi-directed one to directed one, where we use Prop. 2 to determine whether an MAG Markov equivalent to the current MAG can be obtained by the conversion. For MH algorithm, a stationary distribution equal to the desired distribution can be obtained if any two states can be transformed to each other in limited steps [43]. As implied by Theorem 3 of Zhang and Spirtes [41], any MAG can be transformed to another Markov equivalent MAG in a limited number of transformations above. Hence, MH algorithm is valid to sample MAGs uniformly from the space of MAGs consistent to P . Then, we only remain the MAGs that have the same non-circle marks as Mi. In this way, we obtain a set of MAGs which are uniformly sampled from the space of MAGs consistent to Mi. Given an MAGM, let S(M) denote the set of MAGs that can be obtained fromM by transforming one bi-directed edge to directed edge or one directed edge to bi-directed edge according to Prop. 2. Denote the cardinality of S(M) by |S(M)|. We set the probability Q(M′ | M) of an MAGM transformed to another MAGM′ ∈ S(M) as 1/|S(M)|. Hence, the acceptance ratio ρ that is used to decide whether to accept or reject the candidate is ρ = min ( 1, p(M′)Q(M |M′) p(M)Q(M′ | M) ) = min ( 1, |S(M)| |S(M′)| ) . We propose Alg. 
2 to select the intervention variable X . As shown by Lemma 15.1 in Appendix B, the graphM0 is an MAG consistent to Mi. From Line 2-Line 6, we execute MH algorithm to sample L′ MAGs. Then, we select the MAGs among them which are consistent to Mi on Line 7. Finally, we estimate the entropy by (1) and select X from Line 9-Line 14. 5 Experiments In this section, we conduct a simple simulation of the three-stage active learning framework. We generate 100 Erdös-Rényi random DAGs for each setting, where the number of variables d = 10 and the probability of including each edge p ∈ {0.1, 0.15, 0.2, 0.25, 0.3}. The weight of each edge is drawn from U [1, 2]. We generate 10000 samples from the linear structural equations, and take three variables as latent variables and the others as observed ones. In the implementation of the MH algorithm in Alg. 2, we discard the first 500 sampled MAGs and collect the following 1000 MAGs. For each intervention variable X , we collect 10000 samples under do(X = 2), and learn the circles at X by two-sample test with a significance level of 0.05. We compare the maximum entropy criterion with a baseline random criterion where we randomly select one variable with circles to intervene in each round. We show the results in Tab. 1. # int. denotes the number of interventions to achieve MAG identification. The effectiveness of the maximum entropy criterion is verified by noting that the number of interventions with maximum entropy criterion is fewer than that with random criterion. Further, we evaluate the three stages respectively. In Stage 1, we obtain a PAG by running FCI algorithm with a significance level of 0.05. In Stage 2, we adopt the two criteria to select intervention variables. In Stage 3, we learn the marks with corresponding interventional data and orientation rules. We evaluate the performance of Stage 1 by # correct PAG/# wrong PAG. # correct PAG/# wrong PAG denotes the number of edges that are correctly/wrongly identified by FCI. An edge is correctly/wrongly identified by FCI if the edge learned by FCI is identical/not identical to the true PAG. The performance of Stage 2 is evaluated by # int.. And we evaluate the performance of Stage 3 by # correct int./# wrong int., where # correct int./# wrong int. denotes the number of edges whose direction are correctly/wrongly identified by interventions. An edge is correctly/wrongly identified by interventions if its existence is correctly identified in P but the direction is uncertain, and after interventions we learn its direction correctly/wrongly. We evaluate the performance of the whole process by Norm. SHD and F1. Norm. SHD denotes the normalized structural hamming distance (SHD), which is calculated by dividing SHD by d(d− 1)/2. F1 score is calculated by the confusion matrix to indicate whether the edge between any two vertices is correctly learned. According to the SHD and F1 score, the active framework can learn the MAG accurately when p is not large. And as shown by the evaluations of Stage 1 and Stage 3, the marks are learned accurately in Stage 3, and most of the mistakes are generated in Stage 1. Hence, in the active learning framework, the PAG estimation in the first stage is the bottleneck of having a good performance. 6 Conclusion In this paper, we show what causal relations are identifiable in the presence of latent variables given local background knowledge with sound and complete orientation rules. 
Based on the theoretical results, we give the first active learning framework for causal discovery in the presence of latent variables. In the future, we will investigate the identifiability of causal relations given general background knowledge. It is also worthwhile to study how our research may help some recent novel decision-making methodology [44]. Acknowledgment This research was supported by NSFC (61921006), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and Program A for Outstanding PhD Candidates of Nanjing University. We are grateful to the reviewers for their valuable comments.
1. What is the focus and contribution of the paper regarding incorporating local background knowledge in causal graphs?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of originality, quality, significance, clarity, and comparison to other works?
3. Do you have any questions or concerns regarding the paper's content, such as the meaning of "algorithm can be achieved" or the relationship between BK and soft interventions?
4. Are there any limitations or areas for improvement in the proposed method or its application?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
For a given PAG, the paper presents a sound and complete procedure to incorporate local background knowledge (BK). For a given node X, the local BK the paper considers is knowledge of the edge marks at X, i.e., the circle marks at X in the given PAG are converted to an arrowhead or a tail using the BK. The procedure then orients further edges/nodes in the PAG. For a given set of nodes with BK, the procedure incorporates the BK for each node sequentially, one at a time. Next, the paper presents an algorithm to decide which nodes to intervene on (interventions are essentially equivalent to the BK) with the goal of learning the MAG with a small number of interventions.
Strengths And Weaknesses
Originality, quality, and significance: The primary contribution of the work is to propose two new orientation rules that allow for sound and complete incorporation of BK for a given node V_i (in Alg. 1). I think the work is significant, as it proposes a way to further narrow down the MEC beyond what can be learned using only observational data. Incorporating this specific kind of BK is also novel (although I think more comparison with the related literature is needed --- please see my remarks in the "Questions" section).
Clarity: Overall, I think the paper is hard to read. I would have appreciated some more examples to illustrate how the algorithm works. It would have been nice if the proof sketches were explained with concrete examples of why the two new rules are sound and complete (especially for Thm. 2).
Questions
Comparison to learning PAGs with interventional data: There is some work on incorporating interventional data into PAGs. As an example, see [1], which provides a characterization as well as a learning algorithm for causal discovery under unknown interventions (with known interventions being a special case). This algorithm is also sound and complete. BK is not necessarily the same as soft interventions (which can reveal more information). But to me, they seem very closely related, as interventions are one way of obtaining BK, so it seems to me that algorithms that are sound and complete in incorporating interventions can also be leveraged for BK. Can the authors comment on how their work is related to this line of work?
Line 156: What do you mean by "algorithm can be achieved"? What does it mean for an algorithm to be achieved?
[1] Jaber, A., Kocaoglu, M., Shanmugam, K., & Bareinboim, E. (2020). Causal discovery from soft interventions with unknown targets: Characterization and learning. Advances in Neural Information Processing Systems, 33, 9551-9561.
Limitations
I have addressed this in the previous sections.
NIPS
Title Adam Can Converge Without Any Modification On Update Rules
Abstract
Ever since Reddi et al. (2018) pointed out the divergence issue of Adam, many new variants have been designed to obtain convergence. However, vanilla Adam remains exceptionally popular and it works well in practice. Why is there a gap between theory and practice? We point out there is a mismatch between the settings of theory and practice: Reddi et al. (2018) pick the problem after picking the hyperparameters of Adam, i.e., (β1, β2); while practical applications often fix the problem first and then tune (β1, β2). Due to this observation, we conjecture that the empirical convergence can be theoretically justified, only if we change the order of picking the problem and the hyperparameters. In this work, we confirm this conjecture. We prove that, when the 2nd-order momentum parameter β2 is large and the 1st-order momentum parameter satisfies β1 < √β2 < 1, Adam converges to the neighborhood of critical points. The size of the neighborhood is proportional to the variance of stochastic gradients. Under an extra condition (strong growth condition), Adam converges to critical points. It is worth mentioning that our results cover a wide range of hyperparameters: as β2 increases, our convergence result can cover any β1 ∈ [0, 1) including β1 = 0.9, which is the default setting in deep learning libraries. To our knowledge, this is the first result showing that Adam can converge without any modification on its update rules. Further, our analysis does not require assumptions of bounded gradients or bounded 2nd-order momentum. When β2 is small, we further point out a large region of (β1, β2) combinations where Adam can diverge to infinity. Our divergence result considers the same setting (fixing the optimization problem ahead) as our convergence result, indicating that there is a phase transition from divergence to convergence when increasing β2. These positive and negative results provide suggestions on how to tune Adam hyperparameters: for instance, when Adam does not work well, we suggest tuning up β2 and trying β1 < √β2.
1 Introduction
Modern machine learning tasks often aim to solve the following finite-sum problem:
$\min_{x \in \mathbb{R}^d} f(x) = \sum_{i=0}^{n-1} f_i(x)$,   (1)
where n is the number of samples or mini-batches and x denotes the trainable parameters.
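As a toy illustration of the finite-sum structure in (1) (the quadratic components and the problem sizes below are purely illustrative choices of ours, not anything from the paper):

```python
import numpy as np

# Toy finite-sum objective f(x) = sum_i fi(x); the fi are illustrative quadratics.
rng = np.random.default_rng(0)
n, d = 8, 3
A = rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))

def f_i(x, i):
    return 0.5 * x @ (A[i].T @ A[i]) @ x + b[i] @ x

def f(x):
    return sum(f_i(x, i) for i in range(n))

x = rng.standard_normal(d)
print(f(x))  # the full objective is just the sum of the n component losses
```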
In deep learning, Adam (Kingma & Ba, 2014) is one of the most popular algorithms for solving (1). It has been applied to various machine learning domains such as natural language processing (NLP) (Vaswani et al., 2017; Brown et al., 2020; Devlin et al., 2018), generative adversarial networks (GANs) (Radford et al., 2015; Isola et al., 2017; Zhu et al., 2017) and computer vision (CV) (Dosovitskiy et al., 2021). Despite its prevalence, Reddi et al. (2018) point out that Adam can diverge with a wide range of hyperparameters. A main result in (Reddi et al., 2018) states that 2: ∗Correspondence author 2We formally re-state their results in Appendix D.2. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). For any β1, β2 s.t. 0 ≤ β1 < √ β2 < 1, there exists a problem such that Adam diverges. Here, β1 and β2 are the hyperparameter to control Adam’s 1st-order and 2nd-order momentum. More description of Adam can be seen in Algorithm 1 (presented later in Section 2.1). Ever since (Reddi et al., 2018) pointed out the divergence issue, many new variants have been designed. For instance, AMSGrad (Reddi et al., 2018) enforced the adaptor vt (defined later in Algorithm 1) to be non-decreasing; AdaBound (Luo et al., 2019) imposed constraint vt ∈ [Cl, Cu] to ensure the boundedness on effective stepsize. We introduce more variants in Appendix D.1. On the other hand, counter-intuitively, vanilla Adam remains exceptionally popular (see evidence at (Scholar)). Without any modification on its update rules, Adam works well in practice. Even more mysteriously, we find that the commonly reported hyperparameters actually satisfy the divergence condition stated earlier. For instance, Kingma & Ba (2014) claimed that (β1, β2) = (0.9, 0.999) is a “good choice for the tested machine learning problems" and it is indeed the default setting in deep learning libraries. In super-large models GPT-3 and Megatron (Brown et al., 2020; Smith et al., 2022), (β1, β2) is chosen to be (0.9, 0.95). GAN researchers (e.g. Radford et al. (2015); Isola et al. (2017)) use (β1, β2) = (0.5, 0.999). All these hyperparameters live in the divergence region β1 < √ β2. Surprisingly, instead of observing the divergence issue, these hyperparameters achieve good performances and they actually show the sign of convergence. Why does Adam work well despite its theoretical divergence issue? Is there any mismatch between deep learning problems and the divergent example? We take a closer look into the divergence example and find out the mismatch does exist. In particular, we notice an important (but often ignored) characteristic of the divergence example: (Reddi et al., 2018) picks (β1, β2) before picking the sample size n. Put in another way, to construct the divergence example, they change n for different (β1, β2). For instance, for (β1, β2) = (0, 0.99), they use one n to construct the divergent example; for (β1, β2) = (0, 0.9999), they use another n to construct another divergent example. On the other hand, in practical applications of Adam listed above, practitioners tune the hyperparameters (β1, β2) after the sample size n is fixed. So there is a gap between the setting of theory and practice: the order of picking n and (β1, β2) is different. Considering the good performance of Adam under fixed n, we conjecture that Adam can converge in this setting. Unfortunately, the behavior of vanilla Adam is far less studied than its variants (perhaps due to the criticism of divergence). 
To verify this conjecture, we run experiments for different choices of (β1, β2) on a few tasks. First, we run Adam for a convex function (2) with fixed n (see the definition in Section 3.2). Second, we run Adam for the classification problem on data MNIST and CIFAR-10 with fixed batchsize. We observe some interesting phenomena in Figure 1 (a), (b) and (c). First, when β2 is large, the optimization error is small for almost all values of β1. Second, when β1, β2 are both small, there is a red region with relatively large error. On MNIST, CIFAR-10, the error in the red region is increased by 1.4 times than that in the blue region. The situation is a lot worse on function (2) (defined later in Section 3.2): the error in the red region is 70 times higher. While Adam’s performances seem unstable in the red region, we find that Adam always performs well in the top blue region in Figure 1. This seems to suggest that Adam can converge without any algorithmic modification, as long as β1 and β2 are chosen properly. We ask the following question: Can Adam provably converge without any modification on its update rules? In this work, we theoretically explore this question. Our contributions are visualized in Figure 1 (d). We prove the following results when n is fixed (or more rigorously, when the function class is fixed): • We prove that when β2 is large enough and β1 < √ β2, Adam converges to the neighborhood of critical points. The size of the neighborhood is propositional to the variance of stochastic gradients. With an extra condition (so-called strong growth condition), we prove that Adam can converge to critical points. As β2 increases, these results can cover any momentum parameter β1 ∈ [0, 1) including the default setting β1 = 0.9. In particular, our analysis does not require bounded gradient assumption. • We study the divergence issue of small-β2 Adam. We prove that: for any fixed n (or more rigorously, for any fixed function class), there exists a function such that, Adam diverges to infinity when (β1, β2) is picked in the red region in Figure 1 (d). The size of the red region increases with n. The shape of the region follows the solution to our analytic conditions. • We emphasize a few characteristics of our results. (1) phase transition. The divergence result considers the same setting as our convergence result, indicating that there is a phase transition from divergence to convergence when changing β2. (2) problem-dependent bounds. Our convergence and divergence regions of (β1, β2) are problem-dependent, which is drastically different from (Reddi et al., 2018) which established the problem-independent worst-case choice of (β1, β2). (3) non-asymptotic characterization. the “divergence region” of (β1, β2) expands as n increases and converges to the whole region [0, 1)2 as n goes to infinity, which recovers (actually stronger than) the problem-independent divergence result of (Reddi et al., 2018) that requires β1 < √ β2. In this sense, we can view the divergence result of (Reddi et al., 2018) as an asymptotic characterization of the divergence region (as n→∞) and our divergence result as a non-asymptotic characterization (for any fixed n). We provide more discussion in Section 4. • Our positive and negative results can provide suggestions for tuning β1 and β2: for instance,when Adam does not work well, we suggest tuning up β2 and trying β1 < √ β2. We provide more tuning suggestions in Appendix C. We believe our results can boost new understandings for Adam. While Reddi et al. 
(2018) reveal that “Adam can diverge", our results show the other side of the coin: when n is fixed (or when function class is fixed), Adam can still converge without any modification on its update rules. Our results suggest that Adam is still a theoretically justified algorithm and practitioners can use it confidently. We further emphasize that our convergence results can cover any β1 ∈ [0, 1), which allows the algorithm to bring arbitrarily heavy momentum signals. It turns out that large-momentum Adam is not easy to analyze. Even with stronger assumptions like bounded gradient (‖∇f(x)‖ < C,∀x), its convergence is not well understood (see related works in Section 2.2). To our best knowledge, this is the first result that proves vanilla Adam with any β1 can converge without any assumption of bounded gradient or bounded 2nd-order momentum. The proof contains a new method to handle unbounded momentum in the stochastic non-linear dynamics system. We will highlight our technical novelties in Section 5. 2 Preliminaries 2.1 Review of Adam We consider finite-sum problem (1). We use x to denote the optimization variable. We denote∇fj as the gradient of fj and let ◦ be the component-wise product. The division and square-root operator are component-wise as well. We present randomly shuffled Adam in Algorithm 1. In Algorithm 1, m denotes the 1st-order momentum and v denotes the 2nd-order momentum. they are weighted averaged by hyperparameter β1, β2, respectively. Larger β1, β2 will adopt more history information. We denote xk,i,mk,i, vk,i ∈ Rd as the value of x,m, v at the k-th outer loop (epoch) and i-th inner loop (batch), respectively. We choose ηk = η1√nk as the stepsize. In practice, is adopted for numerical stability and it is often chosen to be 10−8. In our theory, we allow to be an arbitrary non-negative constant including 0. In the original version of Adam in (Kingma & Ba, 2014), it has an additional “bias correction” step. This “bias correction” step can be implemented by changing the stepsize ηk into η̂k = √ 1−βk2 1−βk1 ηk Algorithm 1 Adam Initialize x1,0 = x0, m1,−1 = ∇f(x0) and v1,−1 = maxi∇fi(x0) ◦ ∇fi(x0). for k = 1→∞ do Sample {τk,0, τk,1, · · · , τk,n−1} as a random permutation of {0, 1, 2, · · · , n− 1} for i = 0→ n− 1 do mk,i = β1mk,i−1 + (1− β1)∇fτk,i(xk,i) vk,i = β2vk,i−1 + (1− β2)∇fτk,i(xk,i) ◦ ∇fτk,i(xk,i) xk,i+1 = xk,i − ηk√vk,i+ ◦mk,i end for xk+1,0 = xk,n; vk+1,−1 = vk,n−1; mk+1,−1 = mk,n−1 end for and using zero initialization. In Algorithm 1, the “bias correction” step is replaced by a special initialization, which corrects the bias as well. Note that η̂k ∈ [ √ 1− β2ηk, 11−β1 ηk] is well-bounded near ηk, so ηk and η̂k brings the same convergence rate. In addition, as the effect of initialization becomes negligible when the training progresses, Adam with zero & our initialization will have the same asymptotic behavior. In the main body of our proof, we follow the form of Algorithm 1, which makes results cleaner. For completeness, we add the proof on the convergence of Adam with “bias correction” steps in Appendix G.11. In our analysis, we make the assumptions below. Assumption 2.1. we consider x ∈ Rd and fi(x) satisfies gradient Lipschitz continuous with constant L. We assume f(x) is lower bounded by a finite constant f∗. Assumption 2.2. fi(x) and f(x) satisfy: ∑n−1 i=0 ‖∇fi(x)‖ 2 2 ≤ D1‖∇f(x)‖22 +D0,∀x ∈ Rd. Assumption 2.2 is quite general. When D1 = 1/n, it becomes the “constant variance” with constant D0/n. 
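Before discussing the assumptions further, here is a minimal NumPy sketch of the randomly shuffled Adam of Algorithm 1, including its special initialization of m and v and the stepsize ηk = η1/√(nk); the least-squares components and all constants below are illustrative choices of ours, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 4                          # toy problem sizes (illustrative)
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def grad_fi(x, i):
    # Gradient of the toy component fi(x) = 0.5 * (a_i^T x - y_i)^2.
    return (A[i] @ x - y[i]) * A[i]

def grad_f(x):
    return sum(grad_fi(x, i) for i in range(n))

def shuffled_adam(x0, eta1=0.1, beta1=0.9, beta2=0.999, eps=1e-8, epochs=200):
    x = x0.copy()
    # Special initialization of Algorithm 1 (plays the role of bias correction).
    m = grad_f(x0)
    v = np.max([grad_fi(x0, i) ** 2 for i in range(n)], axis=0)
    for k in range(1, epochs + 1):
        eta_k = eta1 / np.sqrt(n * k)             # eta_k = eta_1 / sqrt(n k)
        for i in rng.permutation(n):              # random reshuffling each epoch
            g = grad_fi(x, i)
            m = beta1 * m + (1 - beta1) * g       # 1st-order momentum m_{k,i}
            v = beta2 * v + (1 - beta2) * g ** 2  # 2nd-order momentum v_{k,i}
            x = x - eta_k * m / (np.sqrt(v) + eps)
    return x

x_out = shuffled_adam(np.zeros(d))
print(np.linalg.norm(grad_f(x_out)))              # gradient norm after training
```

Fixing the problem (and hence n) and sweeping (β1, β2) with such a routine is the kind of fixed-n experiment behind the heatmaps in Figure 1.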
“constant variance" condition is commonly used in both SGD and Adam analysis (e.g. (Ghadimi et al., 2016; Zaheer et al., 2018; Huang et al., 2021)). Assumption 2.2 allows more flexible choices of D1 6= n and thus it is weaker than “constant variance”. When D0 > 0, the problem instance is sometimes called “non-realizable" (Shi et al., 2020). In this case, adaptive gradient methods are not guaranteed to reach the exact critical points. Instead, they only converge to a bounded region (near critical points) (Zaheer et al., 2018; Shi et al., 2020). This phenomenon indeed occurs for Adam in experiments, even with diminishing stepsize (see Figure 4 (a)). The behavior of SGD is similar: constant stepsize SGD converges to a bounded region with its size propositional to the noise level D0 (Yan et al., 2018; Yu et al., 2019; Liu et al., 2020b). When D0 = 0, Assumption 2.2 is often called “strong growth condition" (SGC) (Vaswani et al., 2019). When ‖∇f(x)‖ = 0, under SGC we have ‖∇fj(x)‖ = 0 for all j. SGC is increasingly popular recently e.g.(Schmidt & Roux, 2013; Vaswani et al., 2019). This condition is known to be reasonable in the overparameterized regime where neural networks can interpolate all data points (Vaswani et al., 2019). We will show that Adam can converge to critical points if SGC holds. When n, f∗, L,D0, D1 are fixed a priori, we use Fn,f ∗ L,D0,D1 (Rd) to denote the function class con- taining f(x) satisfying Assumption 2.1 and 2.2 with constant n, f∗, etc.. Since n is fixed when the function class Fn,f ∗ L,D0,D1 (Rd), we introduce this notation to clearly present the divergence result in Proposition 3.3. Without this pre-defined function class, the claim of divergence might be confusing. 2.2 Related Works Ever since Reddi et al. (2018) pointed out the divergence issue, there are many attempts on designing new variants of Adam. Since we focus on understanding Adam without modification on its update rules, we introduce more variants later in Appendix D.1. Compared with proposing new variants, the convergence of vanilla Adam is far less studied than its variants (perhaps due to the criticism of divergence). There are only a few works analyzing vanilla Adam and they require extra assumptions. Zhou et al. (2018b) analyze the counter-example in (Reddi et al., 2018) and find certain hyperparameter can work. However, their analysis is restricted to the counter-example. Zaheer et al. (2018) study the relation between mini-batch sizes and (non)convergence of Adam. However, this work require β1 = 0 and Adam is reduced to RMSProp (Hinton et al., 2012). De et al. (2018) analyze RMSProp and non-zero-β1 Adam, but they assume the sign of all stochastic gradients to keep the same. It seems unclear how to check this condition a priori. Additionally, they require β1 to be inversely related to the upper bound of gradient, which forces β1 to be small (as a side note, this result only applies to full-batch Adam). Défossez et al. (2020) analyze Adam with β1 < β2 and provide some insights on the momentum mechanisms. However, their bound is inversely proportional to (the hyperparameter for numerical stability) and the bound goes to infinity when goes to 0. This is different from practical application since small such as 10−8 often works well. Further, using large is against the nature of adaptive gradient methods because√ v no longer dominates in the choice of stepsize. In this case, Adam is essentially transformed back to SGD. 
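The last point can be made explicit with a one-line calculation (our restatement, not a formula quoted from (Défossez et al., 2020)): if ε dominates every coordinate of √vt, then
$\frac{\eta\, m_t}{\sqrt{v_t} + \epsilon} \;\approx\; \frac{\eta}{\epsilon}\, m_t$,
so the update reduces to SGD with heavy-ball momentum at an effective stepsize η/ε, and the coordinate-wise adaptivity carried by √vt is lost.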
Two recent works (Huang et al., 2021) and (Guo et al., 2021) propose novel and simple frameworks to analyze Adam-family with large β1. Yet, they require the effective stepsize of Adam to be bounded in certain interval, i.e., 1√vt+ ∈ [Cl, Cu] 3. This boundedness condition changes Adam into AdaBound (Luo et al., 2019) and thus they cannot explain the observations on original Adam in Section 1. To summarize, all these works require at least one strong assumption (e.g. large ). Additionally, they all (including those for new variants) require bounded gradient assumptions. A recent work (Shi et al., 2020) takes the first attempt to analyze RMSProp without bounded gradient assumption. They show that RMSProp can converge to the neighborhood of critical points. 4 We believe it is important to study Adam rather than RMSProp: Numerically, Adam often outperforms RMSProp on complicated tasks (e.g. on Atari games, the mean reward is improved from 88% to 110% (Agarwal et al., 2020)). Theoretically, literature on RMSProp cannot reveal the interaction between β1 and β2; or how these hyperparameters jointly affect (or jeopardize) the convergence of Adam. However, it is non-trivial to jointly analyze the effect of β1 and β2. We point out there are at least three challenges. First, it seems unclear how to control the massive momentum mt of Adam. Second, mt is multiplied by 1/ √ vt, causing non-linear perturbation. Third, mt and 1/ √ vt are statistically dependent and cannot be decoupled. We propose new methods to resolve these issues. We highlight our technical novelties in Section 5. 2.3 The Importance and Difficulties of Removing Bounded Gradient Assumptions Here, we emphasize the importance to remove bounded gradient assumption. First, unlike the assumptions in Section 2.1, bounded gradient is not common in SGD analysis. So it is of theoretical interests to remove this condition for Adam. Second, bounded gradient condition rules out the chances of gradient divergence a priori. However, there are numerical evidences showing that Adam’s gradient can diverge (see Section 6 and Appendix B). Removing the boundedness assumption helps us point out the divergence and convergence phase transition in the (β1, β2) diagram. However, it is often difficult to analyze convergence without bounded gradient assumption. First, it is non-trivial to control stochastic momentum. Even for SGD, this task is challenging. For instance, An early paper Bertsekas & Tsitsiklis (2000) analyzed SGD-type methods without any boundedness condition. But it is not until recently that Yu et al. (2019); Liu et al. (2020b); Jin et al. (2022) prove SGDM (SGD with momentum) converges without bounded gradient assumption. Such attempts of removing boundedness assumptions are often appreciated for general optimization problems where “bounded-assumption-free" is considered as a major contribution. Secondly, for Adam, the role of momentum mt is even more intricate since it is multiplied by 1/ √ vt. Combined with vt, the impact of previous signals not only affect the update direction, but also change the stepsize for each component. Further, both momentum mt and stepsize 1/ √ vt are random variables and they are highly correlated. Such statistical dependency causes trouble for analysis. In summary, the role of momentum in Adam could be much different from that in SGDM or GDM. Even with boundedness conditions, the convergence of large-β1 Adam is still not well understood (see related works in Section 2.2). 
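The statistical dependence between mt and 1/√vt noted above is easy to observe numerically; the scalar gradient stream below is a synthetic example of ours (not from the paper), used only to estimate the correlation between the two factors along one trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)
beta1, beta2 = 0.9, 0.999
g = 1.0 + 0.5 * rng.standard_normal(10_000)   # noisy scalar stochastic gradients
m, v = 0.0, 0.0
ms, inv_sqrt_v = [], []
for gt in g:
    m = beta1 * m + (1 - beta1) * gt
    v = beta2 * v + (1 - beta2) * gt ** 2
    ms.append(m)
    inv_sqrt_v.append(1.0 / np.sqrt(v))
# Both factors are driven by the same gradient stream, hence dependent;
# the estimated correlation below is generally nonzero (burn-in discarded).
print(np.corrcoef(ms[1000:], inv_sqrt_v[1000:])[0, 1])
```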
In this work, we propose new techniques to handle Adam’s momentum under any large β1, regardless of the gradient magnitude. These techniques are not revealed in any existing works. We introduce our technical contribution in Section 5. 3For completeness, we explain why they require this condition in Appendix D.1. 4We notice that they also provide a convergence result for Adam with β1 close enough to 0. However, a simple calculation by Zhang et al. (2022) shows that they require β1 < 10−7. Thus their result does not provide much extra information other than RMSProp. 3 Main Results 3.1 Convergence Results Here, we give the convergence results under large β2. Theorem 3.1. For any f(x) ∈ Fn,f ∗ L,D0,D1 (Rd), we assume the hyperparameters in Algorithm 1 satisfy: β1 < √ β2 < 1; β2 is greater or equal to a threshold γ1(n); and ηk = η1√nk . Let km ∈ N satisfies km ≥ 4 and β(km−1)n1 ≤ βn1√ km−1 , 5 we have the following results for any T > km: min k∈[km,T ] E { min [√ 2D1d D0 ‖∇f(xk,0)‖22, ‖∇f(xk,0)‖2 ]} = O ( log T√ T ) +O( √ D0). Remark 1: the choice of β2. Our theory suggests that large β2 should be used to ensure convergence. This message matches our experimental findings in Figure 1. We would like to point out that the requirement of “large β2" is neccessary, because small β2 will indeed lead to divergence (shown later in Section 3.2). We here comment a bit on the the threshold γ1(n). γ1(n) satisfies β2 ≥ 1−O ( 1−βn1 n2ρ ) (see inequality (34) and Remark G.7), where ρ is a constant that depends on the training trajectory. In worst cases, ρ is upper bounded by n2.5, but we find the practical ρ to be much smaller. In Appendix B, we estimate ρ on MNIST and CIFAR-10. In practical training process, we empirically observe that ρ ≈ O(n), thus the required γ1(n) ≈ 1−O ( n−3 ) . Note that our threshold of β2 is a sufficient condition for convergence, so there may be a gap between the practical choices and the theoretical bound of β2. Closing the gap will be an interesting future direction. We find that γ1(n) increases with n. This property suggests that larger β2 should be used when n is large. This phenomenon is also verified by our experiments in Appendix B. We also remark that γ1(n) slowly increases with β1. This property is visualized in Figure 1 (d) where the lower boundary of blue region slightly lifts up when β1 increases. Remark 2: the choice of β1. Theorem 3.1 requires β1 < √ β2. Since β2 is suggested to be large, our convergence result can cover flexible choice of β1 ∈ [0, 1). For instance, β2 = 0.999 brings the threshold of β1 < 0.9995, which covers basically all practical choices of β1 reported in the literature (see Section 1), including the default setting β1 = 0.9. This result is much stronger than those in the RMSProp literature (e.g. (Shi et al., 2020; Zaheer et al., 2018)). To our knowledge, we are the first to prove convergence of Adam under any β1 ∈ [0, 1) without bounded gradient assumption. Remark 3: convergence to a bounded region. When D0 > 0, Adam converges to a bounded region near critical points. As discussed in Section 2.1, converging to bounded region is common for stochastic methods including constant-stepsize SGD (Yan et al., 2018; Yu et al., 2019; Liu et al., 2020b) and diminishing-stepsize RMSProp (Zaheer et al., 2018; Shi et al., 2020). This phenomenon is also observed in practice: even for convex quadratic function with D0 > 0, Adam with diminishing stepsize cannot reach exactly zero gradient (see Figure 4 (a) in Section 6). 
This is because: even though ηk is decreasing, the effective stepsize ηk/ √ vk,i might not decay. The good news is that, the constant O( √ D0) vanishes to 0 as β2 goes to 1 (both in theory and experiments). The relation between β2 and constant O( √ D0) are introduced in Remark G.14 in Appendix G.9. The size shrinks to 0 because the movement of √vk,i shrinks as β2 increases. As a corollary of Theorem 3.1, we have the following result under SGC (i.e., D0 = 0). Corollary 3.2. Under the setting in Theorem 3.1. When D0 = 0 for Assumption 2.2, we have min k∈[km,T ] E‖∇f(xk,0)‖2 = O ( log T√ T ) . Under SGC (i.e. D0 = 0), Corollary 3.2 states that Adam can converge to critical points. This is indeed the case in practice. For instance, function (2) satisfies SGC and we observe 0 gradient norm after Adam converges (see Section 6 and Appendix B). The convergence rate in Corollary 3.2 is comparable to that of SGD under the same condition in (Vaswani et al., 2019). 5When β1 = 0.9, km = 15 for any n ≥ 1. 3.2 Divergence Results Theorem 3.1 shows that when β2 is large, any β1 < √ β2 ensures convergence. Now we consider the case where β2 is small. We will show that in this case, a wide range of β1 is facing the risk of diverging to infinity. The divergence of small-β2 Adam suggests that “large β2" is necessary in the convergence result Theorem 3.1. We construct a counter-example in Fn,f ∗ L,D0,D1 (Rd). Consider f(x) = ∑n−1 i=0 fi(x) for x ∈ R , we define fi(x) as: fi(x) = { nx, x ≥ −1 n 2 (x+ 2)2 − 3n 2 , x < −1 for i = 0, fi(x) = { −x, x ≥ −1 − 1 2 (x+ 2)2 + 3 2 , x < −1 for i > 0. (2) Summing up all the fi(x), we can see that f(x) = { x, x ≥ −1 1 2 (x+ 2)2 − 3 2 , x < −1 is a lower bounded convex smooth function with optimal solution x∗ = −2. Function (2) allows both iterates and gradients to diverge to infinity. As shown in Figure 1 (a), when running Adam on (2), there exists a red large-error region. This shows the sign of divergence. We further theoretically verify the conjecture in Proposition 3.3. Proposition 3.3. For any function class Fn,f ∗ L,D0,D1 (Rd), there exists a f(x) ∈ Fn,f ∗ L,D0,D1 (Rd), s.t. when (β1, β2) satisfies analytic condition (12), (13), (14) in Appendix E, Adam’s iterates and function values diverge to infinity. By solving these conditions in NumPy, we plot the orange region in Figure 2. The size of the region depends on n and it expands to the whole region when n goes to infinity. The proof can be seen in Appendix E. We find the “divergence region" always stays below the “convergence threshold" γ1(n) in Theorem 3.1, so the two results are self-consistent (see the remark in Appendix E). Proposition 3.3 states the divergence of iterates and function values. Consistently, our experiments also show the divergence of gradient (see Section 6 and Appendix B). These results characterize Adam’s divergence behavior both numerically and theoretically. We emphasize that the orange region is not discussed in (Reddi et al., 2018) because we consider n fixed while they allow n changing. When n is allowed to increase, our orange region will expand to the whole region and thus we can derive a similar (actually stronger) result as (Reddi et al., 2018). We provide more explanation in Section 4. Combining Theorem 3.1 and Proposition 3.3, we establish a clearer image on the relation between (β1, β2) and qualitative behavior of Adam. 4 Reconciling Our Results with (Reddi et al., 2018) We discuss more on the relation between (Reddi et al., 2018) and our results. 
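Before that discussion, and for concreteness, the divergence example (2) from Section 3.2 can be written out directly; the sketch below only encodes the component functions and checks that their sum matches the stated closed form of f (the specific value of n and the check points are our own choices).

```python
n = 5  # number of component functions in example (2); any n >= 1 works here

def f_i(x, i):
    # Component functions of example (2).
    if i == 0:
        return n * x if x >= -1 else 0.5 * n * (x + 2) ** 2 - 1.5 * n
    return -x if x >= -1 else -0.5 * (x + 2) ** 2 + 1.5

def f(x):
    # The sum simplifies to: x for x >= -1, and 0.5*(x+2)^2 - 1.5 for x < -1,
    # a lower-bounded convex smooth function minimized at x* = -2.
    return sum(f_i(x, i) for i in range(n))

for x in [-5.0, -2.0, -1.0, 0.0, 3.0]:            # a few illustrative check points
    closed_form = x if x >= -1 else 0.5 * (x + 2) ** 2 - 1.5
    assert abs(f(x) - closed_form) < 1e-12
print(f(-2.0))                                     # the minimum value f(x*) = -1.5
```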
The divergence result shown in Section 1 does not contradict with our convergence results in Theorem 3.1. Further, it is different from our divergence result in Proposition 3.3. The key difference lies in whether (β1, β2) is picked before or after picking the function class Fn,f ∗ L,D0,D1 (Rd). We discuss the following two cases. Case I: When (β1, β2) is picked before picking Fn,f ∗ L,D0,D1 (Rd). As discussed in Section 1, the divergence result requires different n for different (β1, β2). In this sense, the considered function class is constantly changing. It does not contradict with our Theorem 3.1 which considers a fixed function class with fixed n. For Case I, we illustrate Adam’s behavior in Figure 3. The red region is proved by (Reddi et al., 2018). For completeness, we remove the condition “β1 < √ β2" and further prove that Adam will diverge to infinity for any (β1, β2) ∈ [0, 1)2. The result is shown in the following Corollary 4.1. Corollary 4.1. For any (β1, β2) ∈ [0, 1)2, there exists a function satisfying Assumption 2.1 and 2.2 that the Adam’s iterates and function values diverge to infinity. Proof of Corollary 4.1 can be seen in the final paragraph in Appendix E. In the proof, we also require different n to cause divergence for different (β1, β2). So the function class is constantly changing. As a result, in Case I, we cannot prove any convergence result. Case II: When (β1, β2) is picked after picking Fn,f ∗ L,D0,D1 (Rd). When the function class is picked in advance, sample size n will also be fixed. This case is closer to most practical applications. In this case, we find that Adam’s behavior changes significantly in the different region of Figure 3. First, ∀f(x) ∈ Fn,f ∗ L,D0,D1 (Rd) will converge when β1 < √ β2 and β2 is large. Second, ∃f(x) ∈ Fn,f ∗ L,D0,D1 (Rd) will diverge to infinity when (β1, β2) are in the orange region in Figure 2. Since Case II is closer to practical scenarios, these results can provide better guidance for hyperparameter tuning for Adam users. We provide some suggestions for practitioners in Appendix C. For Case II, we summarize the possible behaviors of Adam in Table 1. We also illustrate our convergence and divergence results in Figure 1 (d). Note that there are some blanket areas where Adam’s behavior remains unknown, this part will be left as interesting future work. 5 Proof Ideas for the Convergence Result We now (informally) introduce our proof ideas for the convergence result in Theorem 3.1. Simply put, we want to control the update direction mk,i/ √ vk,i inside the dual cone of gradient direction. Namely: E〈∇f(xk,0), n−1∑ i=0 mk,i√ vk,i 〉 > 0. (3) However, directly proving (3) could be difficult because both mk,i and vk,i distort the trajectory. To start with, we try to control the movement of vk,i by increasing β2 (similar idea as (Shi et al., 2020; Zou et al., 2019; Chen et al., 2021)). Recall vk,i = (1−β2) ∑i j=1 β i−j 2 ∇fτk,j (xk,j)◦∇fτk,j (xk,j)+ βi2vk,0, we have vk,i ≈ vk,0 when β2 is large. In this case, we have: E 〈 ∇f(xk,0), n−1∑ i=0 mk,i√ vk,i 〉 ≈ E 〈 ∇f(xk,0)√ vk,0 , n−1∑ i=0 mk,i 〉 ≈ E 〈 ∇f(xk,0)√ vk,0 ,∇f(xk,0) 〉 > 0, where the first “≈" is due to the large β2 and the second “≈" is our goal. Now we need to show: E 〈 ∇f(xk,0)√ vk,0 , ( n−1∑ i=0 mk,i ) −∇f(xk,0) 〉 (∗) = E ( d∑ l=1 n−1∑ i=0 ∂lf(xk,0)√ vl,k,0 ( ml,k,i − ∂lfτk,i(xk,0) )) ≈ 0, (4) where ∂lf(xk,0) is the l-th component of∇f(xk,0), similarly for ml,k,0 and vl,k,0. (∗) is due to the finite-sum structure. However, it is not easy to prove (4). 
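As a purely numerical aside (ours, not part of the proof), the dual-cone quantity in (3) can be estimated on a toy least-squares problem by averaging one epoch of Algorithm 1 over random permutations; every constant below is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 16, 4
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)
grad_fi = lambda x, i: (A[i] @ x - y[i]) * A[i]
grad_f = lambda x: sum(grad_fi(x, i) for i in range(n))

def epoch_direction(x0, m0, v0, beta1=0.9, beta2=0.999, eps=1e-8, eta=0.01):
    # One epoch of Algorithm 1 from (x0, m0, v0); returns sum_i m_{k,i}/sqrt(v_{k,i}).
    x, m, v = x0.copy(), m0.copy(), v0.copy()
    total = np.zeros_like(x0)
    for i in rng.permutation(n):
        g = grad_fi(x, i)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        total += m / (np.sqrt(v) + eps)
        x = x - eta * m / (np.sqrt(v) + eps)
    return total

x0 = rng.standard_normal(d)
m0 = grad_f(x0)
v0 = np.max([grad_fi(x0, i) ** 2 for i in range(n)], axis=0)
vals = [grad_f(x0) @ epoch_direction(x0, m0, v0) for _ in range(200)]
print(np.mean(vals))   # a positive estimate is consistent with condition (3)
```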
We point out some technical issues below. Issue I: massive momentum. Directly proving (4) is still not easy. We need to first consider a simplified problem: for every l ∈ [d], assume we treat ∂lf(xk,0)/ √ vl,k,0 as a constant, how to bound E ∑n−1 i=0 ( ml,k,i − ∂lfτk,i(xk,0) ) ? It turns out that this simplified problem is still non-trivial. When β1 is large, ml,k,i contains heavy historical signals which significantly distort the trajectory from gradient direction. Existing literature (Zaheer et al., 2018; De et al., 2018; Shi et al., 2020) take a naive approach: they set β1 ≈ 0 so that ml,k,i ≈ ∂lfτk,i(xk,i). Then we get (4) ≈ 0. However, this method cannot be applied here since we are interested in practical cases where β1 is large in [0, 1). Issue II: stochastic non-linear dynamics. Even if we solve Issue I, it is still unclear how to prove (4). This is because: for every l ∈ [d], ∂lf(xk,0)/ √ vl,k,0 is a r.v. instead of a constant. With this term involved, we are facing with a stochastic non-linear dynamics, which could be difficult to analyze. Further, ∂lf(xk,0)/ √ vl,k,0 is statistically dependent with ( ml,k,i − ∂lfτk,i(xk,0) ) , so we are not allowed to handle the expectation E(∂lf(xk,0)/ √ vl,k,0) separately . Unfortunately, even with additional assumptions like bounded gradient, there is no general approach to tackle the above issues. In this work, we propose solutions regardless of gradient magnitude. Solution to Issue I. We prove the following Lemma to resolve Issue I. Lemma 5.1. (Informal) Consider Algorithm 1. For every l ∈ [d] and any β1 ∈ [0, 1), we have the following result under Assumption 2.1. δ(β1) := E n−1∑ i=0 ( ml,k,i − ∂lfτk,i(xk,0) ) = O ( 1√ k ) , where ∂lf(xk,0) is the l-th component of∇f(xk,0); ml,k,i = (1− β1)∂lfτk,i(xk,i) + β1ml,k,i−1. We present the proof idea in Appendix A. Simply put, we construct a simple toy example called “color-ball" model (of the 1st kind). This toy model shows a special property of δ(β1). We find out: for Algorithm 1, error terms from successive epochs can be canceled, which keeps the momentum roughly in the descent direction. This important property is not revealed in any existing work. Remark 4: When assuming bounded gradient ‖∇f(x)‖ ≤ G, a naive upper bound would be δ(β1) = O(G). However, such constant upper bound does not imply δ(β1) is close to 0. It will not help prove the convergence. This might be partially the reason why large-β1 Adam is hard to analyze even under bounded gradient (see related works in Section 2.2). We emphasize Lemma 5.1 holds true regardless of gradient norm, so it could be deployed in both bounded or unbounded gradient analysis. Solution to Issue II. We try to show (4) by adopting Lemma 5.1. However, the direct application cannot work since ∂lf(xk,0)√vl,k,0 is random. Despite its randomness, we find out that when β2 is large, the changes of ∂lf(xk,0)√ vl,k,0 shrinks along iteration. As such, although ∂lf(xk,0)√vl,k,0 brings extra perturbation, the quantity in (4) share the similar asymptotic behavior as δ(β1). We prove the following Lemma 5.2. Lemma 5.2. (Informal) Under Assumption 2.1 and 2.2, consider Algorithm 1 with large β2 and β1 < √ β2. For those l with gradient component larger than certain threshold, we have:∣∣∣∣∂lf(xk,0)√vk,0 − ∂lf(xk−1,0)√vk−1,0 ∣∣∣∣ = O( 1√k ) ; (5) E ( ∂lf(xk,0)√ vl,k,0 n−1∑ i=0 (ml,k,i − ∂lfτk,i(xk,0)) ) = O ( 1√ k ) . (6) In Appendix A, we introduce how to derive (6) from (5). 
To do so, we introduce a new type of “color-ball" model (we call it color-ball of the 2nd kind) which adopts the random perturbation of ∂lf(xk,0)√ vl,k,0 . Understanding color-ball model of the 2nd kind is crucial for proving Lemma 5.2. We conclude the proof of (4) by some additional analysis on “those l with small gradient component". This case is a bit easier since it reduces to bounded gradient case. For readers who wants to learn more about the idea of tackling Issue I and II, please refer to Appendix A where we formally introduce the 1st and 2nd kind of color-ball models. Since the whole proof is quite long, we provide a proof sketch in Appendix G.1. The whole proof is presented in Appendix G. 6 Experiments To support our theory, we provide more simulations and real-data experiments. All the experimental settings and hyperparameters are presented in Appendix B.1. We aim to show: (I). When β2 is large, a large range of β1 gives good performance, including all β1 < √ β2. (II). When β2 is small, a large range of β1 performs relatively badly. Convergence to bounded region when D0 > 0. In Figure 4, we run large-β2 Adam on function (9) (defined later in Appendix B). This function satisfies with D0 > 0. We find that even with diminishing stepsize ηk = 1/ √ k, Adam may not converge to an exact critical point. Instead, it converges to a bounded region. This is because: even though ηk is decreasing, the effective stepsize ηk/ √ vk,i might not decay. Further, the size of the region shrinks when β2 increases. This is because the movement of√vk,i shrinks as β2 increases. These phenomena match Remark 3 and claim (I). Convergence to critical points when D0 = 0 Since function (2) satisfies D0 = 0, we run more experiments on (2) with initialization x = −5 and n = 5, 10, 15, 20. We show the result of n = 20 in Figure 4 (a), (b); the rest are shown in Appendix B. We find that: when β2 is large, Adam converges to critical points for β1 < √ β2. These phenomena match claim (I). Gradient norm of iterates can be unbounded when β2 is small. On function (2), We further run Adam with small β2 at initialization x = −5. In this case, gradient norms of iterates increase dramatically. This emphasizes the importance of discarding bounded gradient assumptions. These phenomena match claim (II). MNIST and CIFAR-10. As shown in Figure 1 (b)& (c) in Section 1, the training results match both claim (I) and (II). In addition, there is a convex-shaped boundary on the transition from low loss to higher loss, this boundary roughly matches the condition in Theorem 3.1. NLP. We use Adam to train Transformer XL (Dai et al., 2019) on the WikiText-103 dataset (Merity et al., 2016). This architecture and dataset is widely used in NLP tasks (e.g. (Howard & Ruder, 2018; See et al., 2017)). As shown in Figure 4 (d), the training results match both claim (I) and (II). 7 Conclusions In this work, we explore the (non-)convergence of Adam. When β2 is large, we prove that Adam can converge with any β1 < √ β2. When β2 is small, we further show that Adam might diverge to infinity for a wide range of β1. One interesting question is to verify the advantage of Adam over SGD. In this work, we focus on the fundamental issue of convergence. Proving faster convergence of Adam would be our future work. Acknowledgments and Disclosure of Funding Yushun Zhang would like to thank Bohan Wang, Reviewer xyuf, Reviewer UR9H, Reviwer tmNN and Reviewer V9yg for the careful proof reading and helpful comments. 
Yushun Zhang would like to thank Bohan Wang for the valuable discussions around Lemma G.3. Yushun Zhang would also like to thank Reviewer UR9H for the valuable discussions around Lemma G.13. This work is supported by the Internal Project Fund from Shenzhen Research Institute of Big Data under Grant J00220220001. This work is also supported by NSFC-A10120170016, NSFC-617310018 and the Guangdong Provincial Key Laboratory of Big Data Computing.
1. What is the main contribution of the paper regarding Adam's convergence analysis?
2. What are the strengths and weaknesses of the paper compared to prior works?
3. How does the paper generalize the analysis of Adam's convergence to a larger class of functions?
4. How does the paper address the issue of gradient variance in Adam's divergence?
5. Can the authors provide more comparison and strengths over prior works?
6. How can one tune β1 and β2 in Adam to ensure convergence when gradient variance is large enough to diverge Adam?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper, the authors provide a solid convergence analysis which suggests that Adam can converge to stationary points by picking proper β1 and β2 after fixing n. The conclusion is proved without the bounded gradient assumption. The analysis can provide new insights for tuning Adam.
Strengths And Weaknesses
Strengths: The authors provide a thorough convergence analysis of the well-known non-convergence issue of Adam. By picking large β1 and β2 after fixing n, the authors prove that Adam can converge to a stationary point for the specific function class given by Assumption 2.1 and Assumption 2.2. Compared to the divergence counterexamples provided by [1], the target function is generalized to a larger class.
Weaknesses: However, some prior works [2, 3] have provided a similar conclusion that, after fixing the non-convergent counterexamples in [1], tuning larger β1 and β2 can make Adam converge. Meanwhile, they provide an equation to describe the relation between β1, β2 and C (the divergence critical point in [1]). Even so, their analysis is constrained to the convex finite-sum problem in [1] and is not generalized; the analysis in this paper would be more general. Can the authors explain the comparison and the strengths over these works in more detail?
Theorem 3.1 depends on Assumption 2.2, which constrains the variance of the stochastic gradients. However, the non-convergence of Adam emerges only when the variance is large [1]. Would Theorem 3.1 mean that Adam converges when the gradient variance is within the constraints of Assumption 2.2? Can the authors analyze the behavior of Adam and how to tune β1 and β2 when the gradient variance is large enough to make Adam diverge?
[1] On the Convergence of Adam and Beyond. ICLR 2018.
[2] AdaShift: Decorrelation and Convergence of Adaptive Learning Rate Methods. ICLR 2019.
[3] Adaptive Gradient Methods with Dynamic Bound of Learning Rate. ICLR 2019.
Questions
In the non-convergence counterexample provided by Reddi et al., 2018, the gradient variance is coupled with the number of target functions, i.e., n. Can the authors explain in more detail how n and the gradient variance are decoupled in your analysis?
Limitations
Some prior works have provided a similar conclusion that tuning large β1 and β2 can make Adam converge. Even so, this paper provides a more solid and thorough analysis on a generalized function class.
NIPS
Title Adam Can Converge Without Any Modification On Update Rules Abstract Ever since Reddi et al. (2018) pointed out the divergence issue of Adam, many new variants have been designed to obtain convergence. However, vanilla Adam remains exceptionally popular and it works well in practice. Why is there a gap between theory and practice? We point out there is a mismatch between the settings of theory and practice: Reddi et al. (2018) pick the problem after picking the hyperparameters of Adam, i.e., (β1, β2); while practical applications often fix the problem first and then tune (β1, β2). Due to this observation, we conjecture that the empirical convergence can be theoretically justified, only if we change the order of picking the problem and hyperparameter. In this work, we confirm this conjecture. We prove that, when the 2nd-order momentum parameter β2 is large and 1st-order momentum parameter β1 < √ β2 < 1, Adam converges to the neighborhood of critical points. The size of the neighborhood is propositional to the variance of stochastic gradients. Under an extra condition (strong growth condition), Adam converges to critical points. It is worth mentioning that our results cover a wide range of hyperparameters: as β2 increases, our convergence result can cover any β1 ∈ [0, 1) including β1 = 0.9, which is the default setting in deep learning libraries. To our knowledge, this is the first result showing that Adam can converge without any modification on its update rules. Further, our analysis does not require assumptions of bounded gradients or bounded 2nd-order momentum. When β2 is small, we further point out a large region of (β1, β2) combinations where Adam can diverge to infinity. Our divergence result considers the same setting (fixing the optimization problem ahead) as our convergence result, indicating that there is a phase transition from divergence to convergence when increasing β2. These positive and negative results provide suggestions on how to tune Adam hyperparameters: for instance, when Adam does not work well, we suggest tuning up β2 and trying β1 < √ β2. N/A √ β2 < 1, Adam converges to the neighborhood of critical points. The size of the neighborhood is propositional to the variance of stochastic gradients. Under an extra condition (strong growth condition), Adam converges to critical points. It is worth mentioning that our results cover a wide range of hyperparameters: as β2 increases, our convergence result can cover any β1 ∈ [0, 1) including β1 = 0.9, which is the default setting in deep learning libraries. To our knowledge, this is the first result showing that Adam can converge without any modification on its update rules. Further, our analysis does not require assumptions of bounded gradients or bounded 2nd-order momentum. When β2 is small, we further point out a large region of (β1, β2) combinations where Adam can diverge to infinity. Our divergence result considers the same setting (fixing the optimization problem ahead) as our convergence result, indicating that there is a phase transition from divergence to convergence when increasing β2. These positive and negative results provide suggestions on how to tune Adam hyperparameters: for instance, when Adam does not work well, we suggest tuning up β2 and trying β1 < √ β2. 1 Introduction Modern machine learning tasks often aim to solve the following finite-sum problem. min x∈Rd f(x) = n−1∑ i=0 fi(x), (1) where n is the number of samples or mini-batches and x denotes the trainable parameters. 
In deep learning, Adam (Kingma & Ba, 2014) is one of the most popular algorithms for solving (1). It has been applied to various machine learning domains such as natural language processing (NLP) (Vaswani et al., 2017; Brown et al., 2020; Devlin et al., 2018), generative adversarial networks (GANs) (Radford et al., 2015; Isola et al., 2017; Zhu et al., 2017) and computer vision (CV) (Dosovitskiy et al., 2021). Despite its prevalence, Reddi et al. (2018) point out that Adam can diverge with a wide range of hyperparameters. A main result in (Reddi et al., 2018) states that 2: ∗Correspondence author 2We formally re-state their results in Appendix D.2. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). For any β1, β2 s.t. 0 ≤ β1 < √ β2 < 1, there exists a problem such that Adam diverges. Here, β1 and β2 are the hyperparameter to control Adam’s 1st-order and 2nd-order momentum. More description of Adam can be seen in Algorithm 1 (presented later in Section 2.1). Ever since (Reddi et al., 2018) pointed out the divergence issue, many new variants have been designed. For instance, AMSGrad (Reddi et al., 2018) enforced the adaptor vt (defined later in Algorithm 1) to be non-decreasing; AdaBound (Luo et al., 2019) imposed constraint vt ∈ [Cl, Cu] to ensure the boundedness on effective stepsize. We introduce more variants in Appendix D.1. On the other hand, counter-intuitively, vanilla Adam remains exceptionally popular (see evidence at (Scholar)). Without any modification on its update rules, Adam works well in practice. Even more mysteriously, we find that the commonly reported hyperparameters actually satisfy the divergence condition stated earlier. For instance, Kingma & Ba (2014) claimed that (β1, β2) = (0.9, 0.999) is a “good choice for the tested machine learning problems" and it is indeed the default setting in deep learning libraries. In super-large models GPT-3 and Megatron (Brown et al., 2020; Smith et al., 2022), (β1, β2) is chosen to be (0.9, 0.95). GAN researchers (e.g. Radford et al. (2015); Isola et al. (2017)) use (β1, β2) = (0.5, 0.999). All these hyperparameters live in the divergence region β1 < √ β2. Surprisingly, instead of observing the divergence issue, these hyperparameters achieve good performances and they actually show the sign of convergence. Why does Adam work well despite its theoretical divergence issue? Is there any mismatch between deep learning problems and the divergent example? We take a closer look into the divergence example and find out the mismatch does exist. In particular, we notice an important (but often ignored) characteristic of the divergence example: (Reddi et al., 2018) picks (β1, β2) before picking the sample size n. Put in another way, to construct the divergence example, they change n for different (β1, β2). For instance, for (β1, β2) = (0, 0.99), they use one n to construct the divergent example; for (β1, β2) = (0, 0.9999), they use another n to construct another divergent example. On the other hand, in practical applications of Adam listed above, practitioners tune the hyperparameters (β1, β2) after the sample size n is fixed. So there is a gap between the setting of theory and practice: the order of picking n and (β1, β2) is different. Considering the good performance of Adam under fixed n, we conjecture that Adam can converge in this setting. Unfortunately, the behavior of vanilla Adam is far less studied than its variants (perhaps due to the criticism of divergence). 
To verify this conjecture, we run experiments for different choices of (β1, β2) on a few tasks. First, we run Adam for a convex function (2) with fixed n (see the definition in Section 3.2). Second, we run Adam for the classification problem on data MNIST and CIFAR-10 with fixed batchsize. We observe some interesting phenomena in Figure 1 (a), (b) and (c). First, when β2 is large, the optimization error is small for almost all values of β1. Second, when β1, β2 are both small, there is a red region with relatively large error. On MNIST, CIFAR-10, the error in the red region is increased by 1.4 times than that in the blue region. The situation is a lot worse on function (2) (defined later in Section 3.2): the error in the red region is 70 times higher. While Adam’s performances seem unstable in the red region, we find that Adam always performs well in the top blue region in Figure 1. This seems to suggest that Adam can converge without any algorithmic modification, as long as β1 and β2 are chosen properly. We ask the following question: Can Adam provably converge without any modification on its update rules? In this work, we theoretically explore this question. Our contributions are visualized in Figure 1 (d). We prove the following results when n is fixed (or more rigorously, when the function class is fixed): • We prove that when β2 is large enough and β1 < √ β2, Adam converges to the neighborhood of critical points. The size of the neighborhood is propositional to the variance of stochastic gradients. With an extra condition (so-called strong growth condition), we prove that Adam can converge to critical points. As β2 increases, these results can cover any momentum parameter β1 ∈ [0, 1) including the default setting β1 = 0.9. In particular, our analysis does not require bounded gradient assumption. • We study the divergence issue of small-β2 Adam. We prove that: for any fixed n (or more rigorously, for any fixed function class), there exists a function such that, Adam diverges to infinity when (β1, β2) is picked in the red region in Figure 1 (d). The size of the red region increases with n. The shape of the region follows the solution to our analytic conditions. • We emphasize a few characteristics of our results. (1) phase transition. The divergence result considers the same setting as our convergence result, indicating that there is a phase transition from divergence to convergence when changing β2. (2) problem-dependent bounds. Our convergence and divergence regions of (β1, β2) are problem-dependent, which is drastically different from (Reddi et al., 2018) which established the problem-independent worst-case choice of (β1, β2). (3) non-asymptotic characterization. the “divergence region” of (β1, β2) expands as n increases and converges to the whole region [0, 1)2 as n goes to infinity, which recovers (actually stronger than) the problem-independent divergence result of (Reddi et al., 2018) that requires β1 < √ β2. In this sense, we can view the divergence result of (Reddi et al., 2018) as an asymptotic characterization of the divergence region (as n→∞) and our divergence result as a non-asymptotic characterization (for any fixed n). We provide more discussion in Section 4. • Our positive and negative results can provide suggestions for tuning β1 and β2: for instance,when Adam does not work well, we suggest tuning up β2 and trying β1 < √ β2. We provide more tuning suggestions in Appendix C. We believe our results can boost new understandings for Adam. While Reddi et al. 
(2018) reveal that "Adam can diverge", our results show the other side of the coin: when n is fixed (or when the function class is fixed), Adam can still converge without any modification on its update rules. Our results suggest that Adam is still a theoretically justified algorithm and practitioners can use it confidently. We further emphasize that our convergence results cover any β1 ∈ [0, 1), which allows the algorithm to carry arbitrarily heavy momentum signals. It turns out that large-momentum Adam is not easy to analyze. Even with stronger assumptions like bounded gradient (‖∇f(x)‖ < C, ∀x), its convergence is not well understood (see related works in Section 2.2). To our best knowledge, this is the first result that proves vanilla Adam with any β1 can converge without any assumption of bounded gradient or bounded 2nd-order momentum. The proof contains a new method to handle unbounded momentum in a stochastic non-linear dynamical system. We highlight our technical novelties in Section 5.

2 Preliminaries

2.1 Review of Adam

We consider the finite-sum problem (1). We use x to denote the optimization variable. We denote ∇fj as the gradient of fj and let ◦ be the component-wise product. The division and square-root operators are component-wise as well. We present randomly shuffled Adam in Algorithm 1. In Algorithm 1, m denotes the 1st-order momentum and v denotes the 2nd-order momentum. They are exponentially weighted averages controlled by the hyperparameters β1 and β2, respectively; larger β1, β2 adopt more history information. We denote xk,i, mk,i, vk,i ∈ R^d as the values of x, m, v at the k-th outer loop (epoch) and i-th inner loop (batch), respectively. We choose ηk = η1/√(nk) as the stepsize. In practice, ε is adopted for numerical stability and it is often chosen to be 10^−8. In our theory, we allow ε to be an arbitrary non-negative constant, including 0.

Algorithm 1 Adam
  Initialize x1,0 = x0, m1,−1 = ∇f(x0) and v1,−1 = max_i ∇fi(x0) ◦ ∇fi(x0).
  for k = 1 → ∞ do
    Sample {τk,0, τk,1, · · · , τk,n−1} as a random permutation of {0, 1, 2, · · · , n − 1}
    for i = 0 → n − 1 do
      mk,i = β1 mk,i−1 + (1 − β1) ∇fτk,i(xk,i)
      vk,i = β2 vk,i−1 + (1 − β2) ∇fτk,i(xk,i) ◦ ∇fτk,i(xk,i)
      xk,i+1 = xk,i − ηk / (√vk,i + ε) ◦ mk,i
    end for
    xk+1,0 = xk,n; vk+1,−1 = vk,n−1; mk+1,−1 = mk,n−1
  end for

In the original version of Adam in (Kingma & Ba, 2014), there is an additional "bias correction" step. This "bias correction" step can be implemented by changing the stepsize ηk into η̂k = (√(1 − β2^k) / (1 − β1^k)) ηk and using zero initialization. In Algorithm 1, the "bias correction" step is replaced by a special initialization, which corrects the bias as well. Note that η̂k ∈ [√(1 − β2) ηk, ηk / (1 − β1)] is well-bounded near ηk, so ηk and η̂k bring the same convergence rate. In addition, as the effect of the initialization becomes negligible as training progresses, Adam with zero initialization and Adam with our initialization have the same asymptotic behavior. In the main body of our proof, we follow the form of Algorithm 1, which makes the results cleaner. For completeness, we add the proof of the convergence of Adam with "bias correction" steps in Appendix G.11.
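For concreteness, the update rules of Algorithm 1 can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the authors' code; the per-sample gradient oracles `grads` and all other names are assumptions of the example.

```python
import numpy as np

def shuffled_adam(grads, x0, n_epochs=100, eta1=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """Randomly shuffled Adam in the spirit of Algorithm 1.

    grads: list of n callables, grads[i](x) returns the gradient of f_i at x.
    Uses the special initialization of m and v (which plays the role of bias
    correction) and the diminishing stepsize eta_k = eta1 / sqrt(n * k).
    """
    n = len(grads)
    x = np.asarray(x0, dtype=float)
    g0 = [g(x) for g in grads]
    m = np.sum(g0, axis=0)                       # m_{1,-1} = grad f(x0) = sum_i grad f_i(x0)
    v = np.max([gi * gi for gi in g0], axis=0)   # v_{1,-1} = max_i grad f_i(x0) ∘ grad f_i(x0)
    for k in range(1, n_epochs + 1):
        eta_k = eta1 / np.sqrt(n * k)
        for i in np.random.permutation(n):       # tau_{k,0}, ..., tau_{k,n-1}
            g = grads[i](x)
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g * g
            x = x - eta_k * m / (np.sqrt(v) + eps)
    return x
```

The m / (√v + ε) update and the epoch-wise shuffling mirror the inner loop of Algorithm 1; the only deviation from common library implementations is the initialization, which follows the algorithm as stated above.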
In our analysis, we make the assumptions below.
Assumption 2.1. We consider x ∈ R^d and each fi(x) has gradient Lipschitz continuous with constant L. We assume f(x) is lower bounded by a finite constant f*.
Assumption 2.2. fi(x) and f(x) satisfy: Σ_{i=0}^{n−1} ‖∇fi(x)‖₂² ≤ D1 ‖∇f(x)‖₂² + D0, ∀x ∈ R^d.
Assumption 2.2 is quite general. When D1 = 1/n, it becomes the "constant variance" condition with constant D0/n. The "constant variance" condition is commonly used in both SGD and Adam analyses (e.g. (Ghadimi et al., 2016; Zaheer et al., 2018; Huang et al., 2021)). Assumption 2.2 allows more flexible choices of D1 ≠ 1/n and thus it is weaker than "constant variance". When D0 > 0, the problem instance is sometimes called "non-realizable" (Shi et al., 2020). In this case, adaptive gradient methods are not guaranteed to reach exact critical points. Instead, they only converge to a bounded region (near critical points) (Zaheer et al., 2018; Shi et al., 2020). This phenomenon indeed occurs for Adam in experiments, even with diminishing stepsize (see Figure 4 (a)). The behavior of SGD is similar: constant-stepsize SGD converges to a bounded region with its size proportional to the noise level D0 (Yan et al., 2018; Yu et al., 2019; Liu et al., 2020b). When D0 = 0, Assumption 2.2 is often called the "strong growth condition" (SGC) (Vaswani et al., 2019). When ‖∇f(x)‖ = 0, under SGC we have ‖∇fj(x)‖ = 0 for all j. SGC has become increasingly popular recently, e.g. (Schmidt & Roux, 2013; Vaswani et al., 2019). This condition is known to be reasonable in the overparameterized regime where neural networks can interpolate all data points (Vaswani et al., 2019). We will show that Adam can converge to critical points if SGC holds.
When n, f*, L, D0, D1 are fixed a priori, we use F^{n,f*}_{L,D0,D1}(R^d) to denote the function class containing the f(x) satisfying Assumptions 2.1 and 2.2 with constants n, f*, etc. Since n is fixed once the function class F^{n,f*}_{L,D0,D1}(R^d) is fixed, we introduce this notation to clearly present the divergence result in Proposition 3.3. Without this pre-defined function class, the claim of divergence might be confusing.

2.2 Related Works

Ever since Reddi et al. (2018) pointed out the divergence issue, there have been many attempts at designing new variants of Adam. Since we focus on understanding Adam without modification on its update rules, we introduce more variants later in Appendix D.1. Compared with proposing new variants, the convergence of vanilla Adam is far less studied than that of its variants (perhaps due to the criticism of divergence). There are only a few works analyzing vanilla Adam and they require extra assumptions. Zhou et al. (2018b) analyze the counter-example in (Reddi et al., 2018) and find that certain hyperparameters can work. However, their analysis is restricted to the counter-example. Zaheer et al. (2018) study the relation between mini-batch sizes and the (non-)convergence of Adam. However, this work requires β1 = 0, for which Adam reduces to RMSProp (Hinton et al., 2012). De et al. (2018) analyze RMSProp and non-zero-β1 Adam, but they assume the signs of all stochastic gradients stay the same. It seems unclear how to check this condition a priori. Additionally, they require β1 to be inversely related to the upper bound of the gradient, which forces β1 to be small (as a side note, this result only applies to full-batch Adam). Défossez et al. (2020) analyze Adam with β1 < β2 and provide some insights on the momentum mechanisms. However, their bound is inversely proportional to ε (the hyperparameter for numerical stability) and the bound goes to infinity when ε goes to 0. This is different from practical applications since a small ε such as 10^−8 often works well. Further, using a large ε is against the nature of adaptive gradient methods because √v no longer dominates the choice of stepsize. In this case, Adam is essentially transformed back into SGD.
Two recent works, (Huang et al., 2021) and (Guo et al., 2021), propose novel and simple frameworks to analyze the Adam family with large β1. Yet, they require the effective stepsize of Adam to be bounded in a certain interval, i.e., 1/(√vt + ε) ∈ [Cl, Cu] (for completeness, we explain why they require this condition in Appendix D.1). This boundedness condition changes Adam into AdaBound (Luo et al., 2019) and thus they cannot explain the observations on original Adam in Section 1. To summarize, all these works require at least one strong assumption (e.g. large ε). Additionally, they all (including those for new variants) require bounded gradient assumptions. A recent work (Shi et al., 2020) makes the first attempt to analyze RMSProp without the bounded gradient assumption. They show that RMSProp can converge to the neighborhood of critical points. (We notice that they also provide a convergence result for Adam with β1 close enough to 0. However, a simple calculation by Zhang et al. (2022) shows that they require β1 < 10^−7, so their result does not provide much extra information beyond RMSProp.) We believe it is important to study Adam rather than RMSProp: Numerically, Adam often outperforms RMSProp on complicated tasks (e.g. on Atari games, the mean reward is improved from 88% to 110% (Agarwal et al., 2020)). Theoretically, the literature on RMSProp cannot reveal the interaction between β1 and β2, or how these hyperparameters jointly affect (or jeopardize) the convergence of Adam. However, it is non-trivial to jointly analyze the effect of β1 and β2. We point out that there are at least three challenges. First, it seems unclear how to control the massive momentum mt of Adam. Second, mt is multiplied by 1/√vt, causing non-linear perturbation. Third, mt and 1/√vt are statistically dependent and cannot be decoupled. We propose new methods to resolve these issues. We highlight our technical novelties in Section 5.

2.3 The Importance and Difficulties of Removing Bounded Gradient Assumptions

Here, we emphasize the importance of removing the bounded gradient assumption. First, unlike the assumptions in Section 2.1, bounded gradient is not common in SGD analysis. So it is of theoretical interest to remove this condition for Adam. Second, the bounded gradient condition rules out the possibility of gradient divergence a priori. However, there is numerical evidence showing that Adam's gradient can diverge (see Section 6 and Appendix B). Removing the boundedness assumption helps us point out the divergence-convergence phase transition in the (β1, β2) diagram.
However, it is often difficult to analyze convergence without the bounded gradient assumption. First, it is non-trivial to control stochastic momentum. Even for SGD, this task is challenging. For instance, an early paper by Bertsekas & Tsitsiklis (2000) analyzed SGD-type methods without any boundedness condition, but it is not until recently that Yu et al. (2019); Liu et al. (2020b); Jin et al. (2022) proved that SGDM (SGD with momentum) converges without the bounded gradient assumption. Such attempts at removing boundedness assumptions are often appreciated for general optimization problems, where being "bounded-assumption-free" is considered a major contribution. Secondly, for Adam, the role of the momentum mt is even more intricate since it is multiplied by 1/√vt. Combined with vt, the impact of previous signals not only affects the update direction, but also changes the stepsize for each component. Further, both the momentum mt and the stepsize 1/√vt are random variables and they are highly correlated. Such statistical dependency causes trouble for the analysis. In summary, the role of momentum in Adam could be much different from that in SGDM or GDM. Even with boundedness conditions, the convergence of large-β1 Adam is still not well understood (see related works in Section 2.2).
In this work, we propose new techniques to handle Adam's momentum under any large β1, regardless of the gradient magnitude. These techniques are not revealed in any existing works. We introduce our technical contributions in Section 5.

3 Main Results

3.1 Convergence Results

Here, we give the convergence results under large β2.
Theorem 3.1. For any f(x) ∈ F^{n,f*}_{L,D0,D1}(R^d), we assume the hyperparameters in Algorithm 1 satisfy: β1 < √β2 < 1; β2 is greater than or equal to a threshold γ1(n); and ηk = η1/√(nk). Let km ∈ N satisfy km ≥ 4 and β1^{(km−1)n} ≤ β1^n / √(km − 1) (when β1 = 0.9, km = 15 for any n ≥ 1). Then we have the following result for any T > km:
  min_{k∈[km,T]} E { min [ √(2 D1 d / D0) ‖∇f(xk,0)‖₂², ‖∇f(xk,0)‖₂ ] } = O(log T / √T) + O(√D0).
Remark 1: the choice of β2. Our theory suggests that a large β2 should be used to ensure convergence. This message matches our experimental findings in Figure 1. We would like to point out that the requirement of "large β2" is necessary, because a small β2 will indeed lead to divergence (shown later in Section 3.2). We now comment a bit on the threshold γ1(n). γ1(n) satisfies β2 ≥ 1 − O((1 − β1^n) / (n² ρ)) (see inequality (34) and Remark G.7), where ρ is a constant that depends on the training trajectory. In the worst case, ρ is upper bounded by n^{2.5}, but we find the practical ρ to be much smaller. In Appendix B, we estimate ρ on MNIST and CIFAR-10. In practical training processes, we empirically observe ρ ≈ O(n), and thus the required γ1(n) ≈ 1 − O(n^{−3}). Note that our threshold on β2 is a sufficient condition for convergence, so there may be a gap between the practical choices and the theoretical bound on β2. Closing this gap will be an interesting future direction. We find that γ1(n) increases with n. This property suggests that a larger β2 should be used when n is large. This phenomenon is also verified by our experiments in Appendix B. We also remark that γ1(n) slowly increases with β1. This property is visualized in Figure 1 (d), where the lower boundary of the blue region slightly lifts up as β1 increases.
Remark 2: the choice of β1. Theorem 3.1 requires β1 < √β2. Since β2 is suggested to be large, our convergence result covers flexible choices of β1 ∈ [0, 1). For instance, β2 = 0.999 brings the threshold β1 < 0.9995, which covers basically all practical choices of β1 reported in the literature (see Section 1), including the default setting β1 = 0.9. This result is much stronger than those in the RMSProp literature (e.g. (Shi et al., 2020; Zaheer et al., 2018)). To our knowledge, we are the first to prove convergence of Adam under any β1 ∈ [0, 1) without the bounded gradient assumption.
Remark 3: convergence to a bounded region. When D0 > 0, Adam converges to a bounded region near critical points. As discussed in Section 2.1, converging to a bounded region is common for stochastic methods including constant-stepsize SGD (Yan et al., 2018; Yu et al., 2019; Liu et al., 2020b) and diminishing-stepsize RMSProp (Zaheer et al., 2018; Shi et al., 2020). This phenomenon is also observed in practice: even for a convex quadratic function with D0 > 0, Adam with diminishing stepsize cannot reach exactly zero gradient (see Figure 4 (a) in Section 6). This is because, even though ηk is decreasing, the effective stepsize ηk/√vk,i might not decay. The good news is that the constant O(√D0) vanishes to 0 as β2 goes to 1 (both in theory and in experiments). The relation between β2 and the constant O(√D0) is discussed in Remark G.14 in Appendix G.9. The size shrinks to 0 because the movement of √vk,i shrinks as β2 increases.
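Remark 3 can be probed numerically. The sketch below uses a simple quadratic finite-sum with D0 > 0 as a stand-in for the Appendix-B experiment (function (9) is not reproduced here), runs Algorithm 1 with the diminishing stepsize ηk = η1/√(nk), and reports the residual gradient norm for two choices of β2; the problem, constants and seeds are assumptions of this illustration, not the authors' exact setup.

```python
import numpy as np

# Stand-in problem: f_i(x) = 0.5 * (x - a_i)^2 with distinct a_i,
# so Assumption 2.2 holds with D0 > 0 (per-sample gradients do not vanish at the optimum).
rng = np.random.default_rng(0)
a = rng.normal(size=8)
grads = [lambda x, ai=ai: x - ai for ai in a]
grad_f = lambda x: sum(g(x) for g in grads)    # gradient of f = sum_i f_i

def run_adam(beta2, n_epochs=2000, eta1=0.5, beta1=0.9, eps=1e-8):
    n, x = len(grads), 5.0
    m = grad_f(x)                               # m_{1,-1} = grad f(x0)
    v = max(g(x) ** 2 for g in grads)           # v_{1,-1} = max_i grad f_i(x0)^2
    tail = []
    for k in range(1, n_epochs + 1):
        eta_k = eta1 / np.sqrt(n * k)           # diminishing stepsize, as in Algorithm 1
        for i in rng.permutation(n):
            g = grads[i](x)
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g * g
            x = x - eta_k * m / (np.sqrt(v) + eps)
        if k > n_epochs - 200:
            tail.append(abs(grad_f(x)))
    return float(np.mean(tail))

for beta2 in (0.99, 0.9999):
    print(f"beta2={beta2}: mean |grad f| over the last 200 epochs = {run_adam(beta2):.4f}")
```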
As a corollary of Theorem 3.1, we have the following result under SGC (i.e., D0 = 0).
Corollary 3.2. Under the setting of Theorem 3.1, when D0 = 0 in Assumption 2.2, we have
  min_{k∈[km,T]} E‖∇f(xk,0)‖₂ = O(log T / √T).
Under SGC (i.e. D0 = 0), Corollary 3.2 states that Adam can converge to critical points. This is indeed the case in practice. For instance, function (2) satisfies SGC and we observe zero gradient norm after Adam converges (see Section 6 and Appendix B). The convergence rate in Corollary 3.2 is comparable to that of SGD under the same condition in (Vaswani et al., 2019).

3.2 Divergence Results

Theorem 3.1 shows that when β2 is large, any β1 < √β2 ensures convergence. Now we consider the case where β2 is small. We will show that in this case, a wide range of β1 faces the risk of diverging to infinity. The divergence of small-β2 Adam suggests that "large β2" is necessary in the convergence result of Theorem 3.1.
We construct a counter-example in F^{n,f*}_{L,D0,D1}(R^d). Consider f(x) = Σ_{i=0}^{n−1} fi(x) for x ∈ R, where we define fi(x) as:
  f0(x) = nx for x ≥ −1, and f0(x) = (n/2)(x + 2)² − 3n/2 for x < −1;
  fi(x) = −x for x ≥ −1, and fi(x) = −(1/2)(x + 2)² + 3/2 for x < −1, for i > 0.   (2)
Summing up all the fi(x), we can see that f(x) = x for x ≥ −1 and f(x) = (1/2)(x + 2)² − 3/2 for x < −1, which is a lower-bounded convex smooth function with optimal solution x* = −2. Function (2) allows both the iterates and the gradients to diverge to infinity. As shown in Figure 1 (a), when running Adam on (2), there exists a red large-error region. This shows the sign of divergence. We further verify the conjecture theoretically in Proposition 3.3 (a code sketch of this construction is given at the end of this subsection).
Proposition 3.3. For any function class F^{n,f*}_{L,D0,D1}(R^d), there exists an f(x) ∈ F^{n,f*}_{L,D0,D1}(R^d) such that, when (β1, β2) satisfies the analytic conditions (12), (13), (14) in Appendix E, Adam's iterates and function values diverge to infinity.
By solving these conditions in NumPy, we plot the orange region in Figure 2. The size of the region depends on n and it expands to the whole region when n goes to infinity. The proof can be seen in Appendix E. We find that the "divergence region" always stays below the "convergence threshold" γ1(n) in Theorem 3.1, so the two results are self-consistent (see the remark in Appendix E). Proposition 3.3 states the divergence of the iterates and function values. Consistently, our experiments also show the divergence of the gradient (see Section 6 and Appendix B). These results characterize Adam's divergence behavior both numerically and theoretically. We emphasize that the orange region is not discussed in (Reddi et al., 2018) because we consider n fixed while they allow n to change. When n is allowed to increase, our orange region expands to the whole region and thus we can derive a similar (actually stronger) result as (Reddi et al., 2018). We provide more explanation in Section 4. Combining Theorem 3.1 and Proposition 3.3, we establish a clearer picture of the relation between (β1, β2) and the qualitative behavior of Adam.
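As referenced above, the counter-example (2) is easy to write down explicitly. A minimal sketch of the component functions and their gradients (for illustration only; the helper names are ours), together with a quick check that they sum to the stated f:

```python
import numpy as np

def make_counter_example(n):
    """Component functions f_0, ..., f_{n-1} of the counter-example (2) and their gradients."""
    def f0(x):
        return n * x if x >= -1 else 0.5 * n * (x + 2) ** 2 - 1.5 * n
    def g0(x):
        return n if x >= -1 else n * (x + 2)
    def fi(x):
        return -x if x >= -1 else -0.5 * (x + 2) ** 2 + 1.5
    def gi(x):
        return -1.0 if x >= -1 else -(x + 2)
    fs = [f0] + [fi] * (n - 1)
    gs = [g0] + [gi] * (n - 1)
    return fs, gs

fs, gs = make_counter_example(n=10)
f = lambda x: sum(fj(x) for fj in fs)
# The sum should equal x for x >= -1 and 0.5*(x+2)^2 - 1.5 for x < -1, minimized at x* = -2.
for x in (-5.0, -2.0, -1.0, 3.0):
    expected = x if x >= -1 else 0.5 * (x + 2) ** 2 - 1.5
    print(f"x={x:5.1f}  f(x)={f(x):8.3f}  expected={expected:8.3f}")
```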
4 Reconciling Our Results with (Reddi et al., 2018)

We discuss more on the relation between (Reddi et al., 2018) and our results. The divergence result shown in Section 1 does not contradict our convergence results in Theorem 3.1. Further, it is different from our divergence result in Proposition 3.3. The key difference lies in whether (β1, β2) is picked before or after picking the function class F^{n,f*}_{L,D0,D1}(R^d). We discuss the following two cases.
Case I: (β1, β2) is picked before picking F^{n,f*}_{L,D0,D1}(R^d). As discussed in Section 1, the divergence result requires a different n for different (β1, β2). In this sense, the considered function class is constantly changing. It does not contradict our Theorem 3.1, which considers a fixed function class with fixed n. For Case I, we illustrate Adam's behavior in Figure 3. The red region is proved by (Reddi et al., 2018). For completeness, we remove the condition "β1 < √β2" and further prove that Adam can diverge to infinity for any (β1, β2) ∈ [0, 1)². The result is shown in the following Corollary 4.1.
Corollary 4.1. For any (β1, β2) ∈ [0, 1)², there exists a function satisfying Assumptions 2.1 and 2.2 such that Adam's iterates and function values diverge to infinity.
The proof of Corollary 4.1 can be seen in the final paragraph of Appendix E. In the proof, we also require a different n to cause divergence for different (β1, β2). So the function class is constantly changing. As a result, in Case I, we cannot prove any convergence result.
Case II: (β1, β2) is picked after picking F^{n,f*}_{L,D0,D1}(R^d). When the function class is picked in advance, the sample size n is also fixed. This case is closer to most practical applications. In this case, we find that Adam's behavior changes significantly in the different regions of Figure 3. First, every f(x) ∈ F^{n,f*}_{L,D0,D1}(R^d) will converge when β1 < √β2 and β2 is large. Second, there exists an f(x) ∈ F^{n,f*}_{L,D0,D1}(R^d) that diverges to infinity when (β1, β2) is in the orange region in Figure 2. Since Case II is closer to practical scenarios, these results can provide better guidance for hyperparameter tuning for Adam users. We provide some suggestions for practitioners in Appendix C. For Case II, we summarize the possible behaviors of Adam in Table 1. We also illustrate our convergence and divergence results in Figure 1 (d). Note that there are some blank areas where Adam's behavior remains unknown; this part is left as interesting future work.

5 Proof Ideas for the Convergence Result

We now (informally) introduce our proof ideas for the convergence result in Theorem 3.1. Simply put, we want to control the update direction mk,i/√vk,i inside the dual cone of the gradient direction. Namely:
  E⟨∇f(xk,0), Σ_{i=0}^{n−1} mk,i/√vk,i⟩ > 0.   (3)
However, directly proving (3) could be difficult because both mk,i and vk,i distort the trajectory. To start with, we try to control the movement of vk,i by increasing β2 (a similar idea as in (Shi et al., 2020; Zou et al., 2019; Chen et al., 2021)). Recall vk,i = (1 − β2) Σ_{j=1}^{i} β2^{i−j} ∇fτk,j(xk,j) ◦ ∇fτk,j(xk,j) + β2^i vk,0; we have vk,i ≈ vk,0 when β2 is large. In this case, we have:
  E⟨∇f(xk,0), Σ_{i=0}^{n−1} mk,i/√vk,i⟩ ≈ E⟨∇f(xk,0)/√vk,0, Σ_{i=0}^{n−1} mk,i⟩ ≈ E⟨∇f(xk,0)/√vk,0, ∇f(xk,0)⟩ > 0,
where the first "≈" is due to the large β2 and the second "≈" is our goal. Now we need to show:
  E⟨∇f(xk,0)/√vk,0, (Σ_{i=0}^{n−1} mk,i) − ∇f(xk,0)⟩ (∗)= E( Σ_{l=1}^{d} Σ_{i=0}^{n−1} [∂lf(xk,0)/√vl,k,0] (ml,k,i − ∂lfτk,i(xk,0)) ) ≈ 0,   (4)
where ∂lf(xk,0) is the l-th component of ∇f(xk,0), and similarly for ml,k,i and vl,k,0. The equality (∗) is due to the finite-sum structure. However, it is not easy to prove (4).
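Although proving (4) is the hard part, the quantity itself is easy to monitor along a run of Algorithm 1. The sketch below does this on a stand-in quadratic finite-sum; the problem, constants, and the use of the epoch-start v as a proxy for vk,0 are all assumptions of this illustration.

```python
import numpy as np

# Tracking the residual inner product in (4) along a run of Algorithm 1.
rng = np.random.default_rng(1)
n, d = 8, 5
b = rng.normal(size=(n, d))
grads = [lambda x, i=i: x - b[i] for i in range(n)]        # grad f_i for f_i(x) = 0.5 * ||x - b_i||^2
grad_f = lambda x: sum(g(x) for g in grads)

beta1, beta2, eta1, eps = 0.9, 0.999, 0.3, 1e-8
x = rng.normal(size=d) * 3.0
m = grad_f(x)
v = np.max(np.stack([g(x) ** 2 for g in grads]), axis=0)
residuals = []
for k in range(1, 401):
    eta_k = eta1 / np.sqrt(n * k)
    g0, v0 = grad_f(x), v.copy()                            # grad f(x_{k,0}); epoch-start v as a proxy for v_{k,0}
    m_sum = np.zeros(d)
    for i in rng.permutation(n):
        g = grads[i](x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_sum += m                                          # accumulates sum_i m_{k,i}
        x = x - eta_k * m / (np.sqrt(v) + eps)
    residuals.append(float(np.dot(g0 / (np.sqrt(v0) + eps), m_sum - g0)))
print("mean |residual| over epochs   1-100:", np.mean(np.abs(residuals[:100])))
print("mean |residual| over epochs 301-400:", np.mean(np.abs(residuals[300:])))
```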
We point out some technical issues below.
Issue I: massive momentum. Directly proving (4) is not easy. We first consider a simplified problem: for every l ∈ [d], suppose we treat ∂lf(xk,0)/√vl,k,0 as a constant; how do we bound E Σ_{i=0}^{n−1} (ml,k,i − ∂lfτk,i(xk,0))? It turns out that this simplified problem is still non-trivial. When β1 is large, ml,k,i contains heavy historical signals which significantly distort the trajectory away from the gradient direction. The existing literature (Zaheer et al., 2018; De et al., 2018; Shi et al., 2020) takes a naive approach: they set β1 ≈ 0 so that ml,k,i ≈ ∂lfτk,i(xk,i). Then we get (4) ≈ 0. However, this method cannot be applied here since we are interested in practical cases where β1 is large in [0, 1).
Issue II: stochastic non-linear dynamics. Even if we solve Issue I, it is still unclear how to prove (4). This is because, for every l ∈ [d], ∂lf(xk,0)/√vl,k,0 is a random variable instead of a constant. With this term involved, we are faced with a stochastic non-linear dynamical system, which could be difficult to analyze. Further, ∂lf(xk,0)/√vl,k,0 is statistically dependent on (ml,k,i − ∂lfτk,i(xk,0)), so we are not allowed to handle the expectation E(∂lf(xk,0)/√vl,k,0) separately.
Unfortunately, even with additional assumptions like bounded gradient, there is no general approach to tackle the above issues. In this work, we propose solutions regardless of the gradient magnitude.
Solution to Issue I. We prove the following lemma to resolve Issue I.
Lemma 5.1. (Informal) Consider Algorithm 1. For every l ∈ [d] and any β1 ∈ [0, 1), we have the following result under Assumption 2.1:
  δ(β1) := E Σ_{i=0}^{n−1} (ml,k,i − ∂lfτk,i(xk,0)) = O(1/√k),
where ∂lf(xk,0) is the l-th component of ∇f(xk,0) and ml,k,i = (1 − β1) ∂lfτk,i(xk,i) + β1 ml,k,i−1.
We present the proof idea in Appendix A. Simply put, we construct a simple toy example called the "color-ball" model (of the 1st kind). This toy model shows a special property of δ(β1). We find that, for Algorithm 1, error terms from successive epochs can be canceled, which keeps the momentum roughly in the descent direction. This important property is not revealed in any existing work.
Remark 4: When assuming bounded gradient ‖∇f(x)‖ ≤ G, a naive upper bound would be δ(β1) = O(G). However, such a constant upper bound does not imply that δ(β1) is close to 0, so it does not help prove convergence. This might be partially the reason why large-β1 Adam is hard to analyze even under bounded gradient (see related works in Section 2.2). We emphasize that Lemma 5.1 holds true regardless of the gradient norm, so it can be deployed in both bounded- and unbounded-gradient analyses.
Solution to Issue II. We try to show (4) by adopting Lemma 5.1. However, a direct application does not work since ∂lf(xk,0)/√vl,k,0 is random. Despite its randomness, we find that when β2 is large, the change of ∂lf(xk,0)/√vl,k,0 shrinks along the iterations. As such, although ∂lf(xk,0)/√vl,k,0 brings extra perturbation, the quantity in (4) shares a similar asymptotic behavior with δ(β1). We prove the following Lemma 5.2.
Lemma 5.2. (Informal) Under Assumptions 2.1 and 2.2, consider Algorithm 1 with large β2 and β1 < √β2. For those l whose gradient component is larger than a certain threshold, we have:
  |∂lf(xk,0)/√vl,k,0 − ∂lf(xk−1,0)/√vl,k−1,0| = O(1/√k);   (5)
  E( [∂lf(xk,0)/√vl,k,0] Σ_{i=0}^{n−1} (ml,k,i − ∂lfτk,i(xk,0)) ) = O(1/√k).   (6)
In Appendix A, we introduce how to derive (6) from (5).
To do so, we introduce a new type of "color-ball" model (we call it the color-ball of the 2nd kind) which incorporates the random perturbation of ∂lf(xk,0)/√vl,k,0. Understanding the color-ball model of the 2nd kind is crucial for proving Lemma 5.2. We conclude the proof of (4) with some additional analysis on "those l with a small gradient component". This case is a bit easier since it reduces to the bounded gradient case. For readers who want to learn more about the ideas for tackling Issues I and II, please refer to Appendix A, where we formally introduce the 1st and 2nd kinds of color-ball models. Since the whole proof is quite long, we provide a proof sketch in Appendix G.1. The whole proof is presented in Appendix G.

6 Experiments

To support our theory, we provide more simulations and real-data experiments. All the experimental settings and hyperparameters are presented in Appendix B.1. We aim to show: (I) when β2 is large, a large range of β1 gives good performance, including all β1 < √β2; (II) when β2 is small, a large range of β1 performs relatively badly.
Convergence to a bounded region when D0 > 0. In Figure 4, we run large-β2 Adam on function (9) (defined later in Appendix B). This function satisfies Assumption 2.2 with D0 > 0. We find that even with the diminishing stepsize ηk = 1/√k, Adam may not converge to an exact critical point. Instead, it converges to a bounded region. This is because, even though ηk is decreasing, the effective stepsize ηk/√vk,i might not decay. Further, the size of the region shrinks when β2 increases. This is because the movement of √vk,i shrinks as β2 increases. These phenomena match Remark 3 and claim (I).
Convergence to critical points when D0 = 0. Since function (2) satisfies D0 = 0, we run more experiments on (2) with initialization x = −5 and n = 5, 10, 15, 20. We show the result of n = 20 in Figure 4 (a), (b); the rest are shown in Appendix B. We find that when β2 is large, Adam converges to critical points for β1 < √β2. These phenomena match claim (I).
Gradient norms of the iterates can be unbounded when β2 is small. On function (2), we further run Adam with small β2 at initialization x = −5. In this case, the gradient norms of the iterates increase dramatically. This emphasizes the importance of discarding bounded gradient assumptions. These phenomena match claim (II).
MNIST and CIFAR-10. As shown in Figure 1 (b) & (c) in Section 1, the training results match both claim (I) and claim (II). In addition, there is a convex-shaped boundary on the transition from low loss to higher loss; this boundary roughly matches the condition in Theorem 3.1.
NLP. We use Adam to train Transformer-XL (Dai et al., 2019) on the WikiText-103 dataset (Merity et al., 2016). This architecture and dataset are widely used in NLP tasks (e.g. (Howard & Ruder, 2018; See et al., 2017)). As shown in Figure 4 (d), the training results match both claim (I) and claim (II).

7 Conclusions

In this work, we explore the (non-)convergence of Adam. When β2 is large, we prove that Adam can converge with any β1 < √β2. When β2 is small, we further show that Adam might diverge to infinity for a wide range of β1. One interesting question is to verify the advantage of Adam over SGD. In this work, we focus on the fundamental issue of convergence; proving faster convergence of Adam is left for future work.

Acknowledgments and Disclosure of Funding

Yushun Zhang would like to thank Bohan Wang, Reviewer xyuf, Reviewer UR9H, Reviewer tmNN and Reviewer V9yg for the careful proofreading and helpful comments.
Yushun Zhang would like to thank Bohan Wang for the valuable discussions around Lemma G.3, and Reviewer UR9H for the valuable discussions around Lemma G.13. This work is supported by the Internal Project Fund from Shenzhen Research Institute of Big Data under Grant J00220220001, by NSFC-A10120170016 and NSFC-617310018, and by the Guangdong Provincial Key Laboratory of Big Data Computing.
1. What are the contributions and strengths of the paper regarding the Adam algorithm's convergence analysis?
2. Are there any weaknesses or areas that need improvement in the paper's analysis or presentation?
3. Do you have any questions or concerns about the paper's proof or theoretical analysis?
4. How does the reviewer assess the novelty and significance of the paper's findings in the context of prior works?
5. Are there any minor issues or typos that can be easily addressed in future versions of the paper?
Summary Of The Paper
This paper studies the Adam algorithm and proves that randomly shuffled Adam converges to the neighborhood of stationary points when the 1st- and 2nd-order momentum parameters satisfy β1 < β2 < 1 and β2 is sufficiently large. The analysis does not rely on the bounded gradient assumption. The divergence behavior of Adam is also studied, where it is shown that when β2 is small, Adam can diverge to infinity for a large region of (β1, β2).

Strengths And Weaknesses
The convergence analysis of Adam is an important topic, and the claim made in this paper advances the understanding in this regard. The paper is well-written and easy to follow, and the graphic illustrations are nice. Some discussions on the comparison with existing works look redundant, especially those repeatedly emphasizing the removal of the bounded gradient assumption and the difference between the current work and the result in Reddi et al. (2018). I think these should be more condensed. The theoretical analysis seems to be novel, though there might be some issues as will be discussed below. I'm willing to adjust the score if the questions are addressed.

Questions
Some comments and questions are as follows.
- The font of some figure legends is too small, e.g., Figure 1(a, b, c), Figure 2, and Figure 4. The axis labels like beta1 and beta2 should be replaced with β1 and β2.
- In Theorem 3.1, is T fixed? What is the dependence of β2 on T?
- Theorem 3.1 only shows that the loss gradient will become smaller in expectation, and there is still a gap between the convergence of E[‖∇f(xk,0)‖] and the actual convergence of xk,0. How to resolve this? As shown in Figure 4(c), the training loss does not seem to converge even for large β2?
- In the statement of Lemma G.6, α is a random variable, but the left-hand sides of the equations after line 996 are expectations, so why can they be bounded by a random variable?
- In Appendix G.3 for the proof of Lemma G.3, ∂ℓf(xk,0)/√vk,0 should be ∂ℓf(xk,0)/√vℓ,k,0 instead? Similarly for the subsequent proofs.
- When controlling term (a) in the equation after line 1156, in the first inequality where Lemma G.12 is applied, there is a problem with the second term. Due to the indicator Ĩk,k−1, we have max_i |∂ℓfi(xk,0)| ≥ Qk, which is a lower bound on the absolute value, and this does not lead to an upper bound on the absolute value of the second term, so the inequality is wrong here. Did I miss anything? If not, how to fix this?
- Minor issues: In the statement of Lemma F.3, it should be |∂ℓfτk,i(xk,i+1) − ∂ℓfτk,i(xk,i)|? Unify terms like 1 + β1 + β1² + ⋯ and 1 + β1 + ⋯ + β1^∞ (define this properly).

Limitations
The authors have adequately addressed the limitations and potential negative societal impact of their work.
In deep learning, Adam (Kingma & Ba, 2014) is one of the most popular algorithms for solving (1). It has been applied to various machine learning domains such as natural language processing (NLP) (Vaswani et al., 2017; Brown et al., 2020; Devlin et al., 2018), generative adversarial networks (GANs) (Radford et al., 2015; Isola et al., 2017; Zhu et al., 2017) and computer vision (CV) (Dosovitskiy et al., 2021). Despite its prevalence, Reddi et al. (2018) point out that Adam can diverge with a wide range of hyperparameters. A main result in (Reddi et al., 2018) states that 2: ∗Correspondence author 2We formally re-state their results in Appendix D.2. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). For any β1, β2 s.t. 0 ≤ β1 < √ β2 < 1, there exists a problem such that Adam diverges. Here, β1 and β2 are the hyperparameter to control Adam’s 1st-order and 2nd-order momentum. More description of Adam can be seen in Algorithm 1 (presented later in Section 2.1). Ever since (Reddi et al., 2018) pointed out the divergence issue, many new variants have been designed. For instance, AMSGrad (Reddi et al., 2018) enforced the adaptor vt (defined later in Algorithm 1) to be non-decreasing; AdaBound (Luo et al., 2019) imposed constraint vt ∈ [Cl, Cu] to ensure the boundedness on effective stepsize. We introduce more variants in Appendix D.1. On the other hand, counter-intuitively, vanilla Adam remains exceptionally popular (see evidence at (Scholar)). Without any modification on its update rules, Adam works well in practice. Even more mysteriously, we find that the commonly reported hyperparameters actually satisfy the divergence condition stated earlier. For instance, Kingma & Ba (2014) claimed that (β1, β2) = (0.9, 0.999) is a “good choice for the tested machine learning problems" and it is indeed the default setting in deep learning libraries. In super-large models GPT-3 and Megatron (Brown et al., 2020; Smith et al., 2022), (β1, β2) is chosen to be (0.9, 0.95). GAN researchers (e.g. Radford et al. (2015); Isola et al. (2017)) use (β1, β2) = (0.5, 0.999). All these hyperparameters live in the divergence region β1 < √ β2. Surprisingly, instead of observing the divergence issue, these hyperparameters achieve good performances and they actually show the sign of convergence. Why does Adam work well despite its theoretical divergence issue? Is there any mismatch between deep learning problems and the divergent example? We take a closer look into the divergence example and find out the mismatch does exist. In particular, we notice an important (but often ignored) characteristic of the divergence example: (Reddi et al., 2018) picks (β1, β2) before picking the sample size n. Put in another way, to construct the divergence example, they change n for different (β1, β2). For instance, for (β1, β2) = (0, 0.99), they use one n to construct the divergent example; for (β1, β2) = (0, 0.9999), they use another n to construct another divergent example. On the other hand, in practical applications of Adam listed above, practitioners tune the hyperparameters (β1, β2) after the sample size n is fixed. So there is a gap between the setting of theory and practice: the order of picking n and (β1, β2) is different. Considering the good performance of Adam under fixed n, we conjecture that Adam can converge in this setting. Unfortunately, the behavior of vanilla Adam is far less studied than its variants (perhaps due to the criticism of divergence). 
To verify this conjecture, we run experiments for different choices of (β1, β2) on a few tasks. First, we run Adam for a convex function (2) with fixed n (see the definition in Section 3.2). Second, we run Adam for the classification problem on data MNIST and CIFAR-10 with fixed batchsize. We observe some interesting phenomena in Figure 1 (a), (b) and (c). First, when β2 is large, the optimization error is small for almost all values of β1. Second, when β1, β2 are both small, there is a red region with relatively large error. On MNIST, CIFAR-10, the error in the red region is increased by 1.4 times than that in the blue region. The situation is a lot worse on function (2) (defined later in Section 3.2): the error in the red region is 70 times higher. While Adam’s performances seem unstable in the red region, we find that Adam always performs well in the top blue region in Figure 1. This seems to suggest that Adam can converge without any algorithmic modification, as long as β1 and β2 are chosen properly. We ask the following question: Can Adam provably converge without any modification on its update rules? In this work, we theoretically explore this question. Our contributions are visualized in Figure 1 (d). We prove the following results when n is fixed (or more rigorously, when the function class is fixed): • We prove that when β2 is large enough and β1 < √ β2, Adam converges to the neighborhood of critical points. The size of the neighborhood is propositional to the variance of stochastic gradients. With an extra condition (so-called strong growth condition), we prove that Adam can converge to critical points. As β2 increases, these results can cover any momentum parameter β1 ∈ [0, 1) including the default setting β1 = 0.9. In particular, our analysis does not require bounded gradient assumption. • We study the divergence issue of small-β2 Adam. We prove that: for any fixed n (or more rigorously, for any fixed function class), there exists a function such that, Adam diverges to infinity when (β1, β2) is picked in the red region in Figure 1 (d). The size of the red region increases with n. The shape of the region follows the solution to our analytic conditions. • We emphasize a few characteristics of our results. (1) phase transition. The divergence result considers the same setting as our convergence result, indicating that there is a phase transition from divergence to convergence when changing β2. (2) problem-dependent bounds. Our convergence and divergence regions of (β1, β2) are problem-dependent, which is drastically different from (Reddi et al., 2018) which established the problem-independent worst-case choice of (β1, β2). (3) non-asymptotic characterization. the “divergence region” of (β1, β2) expands as n increases and converges to the whole region [0, 1)2 as n goes to infinity, which recovers (actually stronger than) the problem-independent divergence result of (Reddi et al., 2018) that requires β1 < √ β2. In this sense, we can view the divergence result of (Reddi et al., 2018) as an asymptotic characterization of the divergence region (as n→∞) and our divergence result as a non-asymptotic characterization (for any fixed n). We provide more discussion in Section 4. • Our positive and negative results can provide suggestions for tuning β1 and β2: for instance,when Adam does not work well, we suggest tuning up β2 and trying β1 < √ β2. We provide more tuning suggestions in Appendix C. We believe our results can boost new understandings for Adam. While Reddi et al. 
(2018) reveal that “Adam can diverge", our results show the other side of the coin: when n is fixed (or when function class is fixed), Adam can still converge without any modification on its update rules. Our results suggest that Adam is still a theoretically justified algorithm and practitioners can use it confidently. We further emphasize that our convergence results can cover any β1 ∈ [0, 1), which allows the algorithm to bring arbitrarily heavy momentum signals. It turns out that large-momentum Adam is not easy to analyze. Even with stronger assumptions like bounded gradient (‖∇f(x)‖ < C,∀x), its convergence is not well understood (see related works in Section 2.2). To our best knowledge, this is the first result that proves vanilla Adam with any β1 can converge without any assumption of bounded gradient or bounded 2nd-order momentum. The proof contains a new method to handle unbounded momentum in the stochastic non-linear dynamics system. We will highlight our technical novelties in Section 5. 2 Preliminaries 2.1 Review of Adam We consider finite-sum problem (1). We use x to denote the optimization variable. We denote∇fj as the gradient of fj and let ◦ be the component-wise product. The division and square-root operator are component-wise as well. We present randomly shuffled Adam in Algorithm 1. In Algorithm 1, m denotes the 1st-order momentum and v denotes the 2nd-order momentum. they are weighted averaged by hyperparameter β1, β2, respectively. Larger β1, β2 will adopt more history information. We denote xk,i,mk,i, vk,i ∈ Rd as the value of x,m, v at the k-th outer loop (epoch) and i-th inner loop (batch), respectively. We choose ηk = η1√nk as the stepsize. In practice, is adopted for numerical stability and it is often chosen to be 10−8. In our theory, we allow to be an arbitrary non-negative constant including 0. In the original version of Adam in (Kingma & Ba, 2014), it has an additional “bias correction” step. This “bias correction” step can be implemented by changing the stepsize ηk into η̂k = √ 1−βk2 1−βk1 ηk Algorithm 1 Adam Initialize x1,0 = x0, m1,−1 = ∇f(x0) and v1,−1 = maxi∇fi(x0) ◦ ∇fi(x0). for k = 1→∞ do Sample {τk,0, τk,1, · · · , τk,n−1} as a random permutation of {0, 1, 2, · · · , n− 1} for i = 0→ n− 1 do mk,i = β1mk,i−1 + (1− β1)∇fτk,i(xk,i) vk,i = β2vk,i−1 + (1− β2)∇fτk,i(xk,i) ◦ ∇fτk,i(xk,i) xk,i+1 = xk,i − ηk√vk,i+ ◦mk,i end for xk+1,0 = xk,n; vk+1,−1 = vk,n−1; mk+1,−1 = mk,n−1 end for and using zero initialization. In Algorithm 1, the “bias correction” step is replaced by a special initialization, which corrects the bias as well. Note that η̂k ∈ [ √ 1− β2ηk, 11−β1 ηk] is well-bounded near ηk, so ηk and η̂k brings the same convergence rate. In addition, as the effect of initialization becomes negligible when the training progresses, Adam with zero & our initialization will have the same asymptotic behavior. In the main body of our proof, we follow the form of Algorithm 1, which makes results cleaner. For completeness, we add the proof on the convergence of Adam with “bias correction” steps in Appendix G.11. In our analysis, we make the assumptions below. Assumption 2.1. we consider x ∈ Rd and fi(x) satisfies gradient Lipschitz continuous with constant L. We assume f(x) is lower bounded by a finite constant f∗. Assumption 2.2. fi(x) and f(x) satisfy: ∑n−1 i=0 ‖∇fi(x)‖ 2 2 ≤ D1‖∇f(x)‖22 +D0,∀x ∈ Rd. Assumption 2.2 is quite general. When D1 = 1/n, it becomes the “constant variance” with constant D0/n. 
“constant variance" condition is commonly used in both SGD and Adam analysis (e.g. (Ghadimi et al., 2016; Zaheer et al., 2018; Huang et al., 2021)). Assumption 2.2 allows more flexible choices of D1 6= n and thus it is weaker than “constant variance”. When D0 > 0, the problem instance is sometimes called “non-realizable" (Shi et al., 2020). In this case, adaptive gradient methods are not guaranteed to reach the exact critical points. Instead, they only converge to a bounded region (near critical points) (Zaheer et al., 2018; Shi et al., 2020). This phenomenon indeed occurs for Adam in experiments, even with diminishing stepsize (see Figure 4 (a)). The behavior of SGD is similar: constant stepsize SGD converges to a bounded region with its size propositional to the noise level D0 (Yan et al., 2018; Yu et al., 2019; Liu et al., 2020b). When D0 = 0, Assumption 2.2 is often called “strong growth condition" (SGC) (Vaswani et al., 2019). When ‖∇f(x)‖ = 0, under SGC we have ‖∇fj(x)‖ = 0 for all j. SGC is increasingly popular recently e.g.(Schmidt & Roux, 2013; Vaswani et al., 2019). This condition is known to be reasonable in the overparameterized regime where neural networks can interpolate all data points (Vaswani et al., 2019). We will show that Adam can converge to critical points if SGC holds. When n, f∗, L,D0, D1 are fixed a priori, we use Fn,f ∗ L,D0,D1 (Rd) to denote the function class con- taining f(x) satisfying Assumption 2.1 and 2.2 with constant n, f∗, etc.. Since n is fixed when the function class Fn,f ∗ L,D0,D1 (Rd), we introduce this notation to clearly present the divergence result in Proposition 3.3. Without this pre-defined function class, the claim of divergence might be confusing. 2.2 Related Works Ever since Reddi et al. (2018) pointed out the divergence issue, there are many attempts on designing new variants of Adam. Since we focus on understanding Adam without modification on its update rules, we introduce more variants later in Appendix D.1. Compared with proposing new variants, the convergence of vanilla Adam is far less studied than its variants (perhaps due to the criticism of divergence). There are only a few works analyzing vanilla Adam and they require extra assumptions. Zhou et al. (2018b) analyze the counter-example in (Reddi et al., 2018) and find certain hyperparameter can work. However, their analysis is restricted to the counter-example. Zaheer et al. (2018) study the relation between mini-batch sizes and (non)convergence of Adam. However, this work require β1 = 0 and Adam is reduced to RMSProp (Hinton et al., 2012). De et al. (2018) analyze RMSProp and non-zero-β1 Adam, but they assume the sign of all stochastic gradients to keep the same. It seems unclear how to check this condition a priori. Additionally, they require β1 to be inversely related to the upper bound of gradient, which forces β1 to be small (as a side note, this result only applies to full-batch Adam). Défossez et al. (2020) analyze Adam with β1 < β2 and provide some insights on the momentum mechanisms. However, their bound is inversely proportional to (the hyperparameter for numerical stability) and the bound goes to infinity when goes to 0. This is different from practical application since small such as 10−8 often works well. Further, using large is against the nature of adaptive gradient methods because√ v no longer dominates in the choice of stepsize. In this case, Adam is essentially transformed back to SGD. 
Two recent works (Huang et al., 2021) and (Guo et al., 2021) propose novel and simple frameworks to analyze Adam-family with large β1. Yet, they require the effective stepsize of Adam to be bounded in certain interval, i.e., 1√vt+ ∈ [Cl, Cu] 3. This boundedness condition changes Adam into AdaBound (Luo et al., 2019) and thus they cannot explain the observations on original Adam in Section 1. To summarize, all these works require at least one strong assumption (e.g. large ). Additionally, they all (including those for new variants) require bounded gradient assumptions. A recent work (Shi et al., 2020) takes the first attempt to analyze RMSProp without bounded gradient assumption. They show that RMSProp can converge to the neighborhood of critical points. 4 We believe it is important to study Adam rather than RMSProp: Numerically, Adam often outperforms RMSProp on complicated tasks (e.g. on Atari games, the mean reward is improved from 88% to 110% (Agarwal et al., 2020)). Theoretically, literature on RMSProp cannot reveal the interaction between β1 and β2; or how these hyperparameters jointly affect (or jeopardize) the convergence of Adam. However, it is non-trivial to jointly analyze the effect of β1 and β2. We point out there are at least three challenges. First, it seems unclear how to control the massive momentum mt of Adam. Second, mt is multiplied by 1/ √ vt, causing non-linear perturbation. Third, mt and 1/ √ vt are statistically dependent and cannot be decoupled. We propose new methods to resolve these issues. We highlight our technical novelties in Section 5. 2.3 The Importance and Difficulties of Removing Bounded Gradient Assumptions Here, we emphasize the importance to remove bounded gradient assumption. First, unlike the assumptions in Section 2.1, bounded gradient is not common in SGD analysis. So it is of theoretical interests to remove this condition for Adam. Second, bounded gradient condition rules out the chances of gradient divergence a priori. However, there are numerical evidences showing that Adam’s gradient can diverge (see Section 6 and Appendix B). Removing the boundedness assumption helps us point out the divergence and convergence phase transition in the (β1, β2) diagram. However, it is often difficult to analyze convergence without bounded gradient assumption. First, it is non-trivial to control stochastic momentum. Even for SGD, this task is challenging. For instance, An early paper Bertsekas & Tsitsiklis (2000) analyzed SGD-type methods without any boundedness condition. But it is not until recently that Yu et al. (2019); Liu et al. (2020b); Jin et al. (2022) prove SGDM (SGD with momentum) converges without bounded gradient assumption. Such attempts of removing boundedness assumptions are often appreciated for general optimization problems where “bounded-assumption-free" is considered as a major contribution. Secondly, for Adam, the role of momentum mt is even more intricate since it is multiplied by 1/ √ vt. Combined with vt, the impact of previous signals not only affect the update direction, but also change the stepsize for each component. Further, both momentum mt and stepsize 1/ √ vt are random variables and they are highly correlated. Such statistical dependency causes trouble for analysis. In summary, the role of momentum in Adam could be much different from that in SGDM or GDM. Even with boundedness conditions, the convergence of large-β1 Adam is still not well understood (see related works in Section 2.2). 
In this work, we propose new techniques to handle Adam’s momentum under any large β1, regardless of the gradient magnitude. These techniques are not revealed in any existing works. We introduce our technical contribution in Section 5. 3For completeness, we explain why they require this condition in Appendix D.1. 4We notice that they also provide a convergence result for Adam with β1 close enough to 0. However, a simple calculation by Zhang et al. (2022) shows that they require β1 < 10−7. Thus their result does not provide much extra information other than RMSProp. 3 Main Results 3.1 Convergence Results Here, we give the convergence results under large β2. Theorem 3.1. For any f(x) ∈ Fn,f ∗ L,D0,D1 (Rd), we assume the hyperparameters in Algorithm 1 satisfy: β1 < √ β2 < 1; β2 is greater or equal to a threshold γ1(n); and ηk = η1√nk . Let km ∈ N satisfies km ≥ 4 and β(km−1)n1 ≤ βn1√ km−1 , 5 we have the following results for any T > km: min k∈[km,T ] E { min [√ 2D1d D0 ‖∇f(xk,0)‖22, ‖∇f(xk,0)‖2 ]} = O ( log T√ T ) +O( √ D0). Remark 1: the choice of β2. Our theory suggests that large β2 should be used to ensure convergence. This message matches our experimental findings in Figure 1. We would like to point out that the requirement of “large β2" is neccessary, because small β2 will indeed lead to divergence (shown later in Section 3.2). We here comment a bit on the the threshold γ1(n). γ1(n) satisfies β2 ≥ 1−O ( 1−βn1 n2ρ ) (see inequality (34) and Remark G.7), where ρ is a constant that depends on the training trajectory. In worst cases, ρ is upper bounded by n2.5, but we find the practical ρ to be much smaller. In Appendix B, we estimate ρ on MNIST and CIFAR-10. In practical training process, we empirically observe that ρ ≈ O(n), thus the required γ1(n) ≈ 1−O ( n−3 ) . Note that our threshold of β2 is a sufficient condition for convergence, so there may be a gap between the practical choices and the theoretical bound of β2. Closing the gap will be an interesting future direction. We find that γ1(n) increases with n. This property suggests that larger β2 should be used when n is large. This phenomenon is also verified by our experiments in Appendix B. We also remark that γ1(n) slowly increases with β1. This property is visualized in Figure 1 (d) where the lower boundary of blue region slightly lifts up when β1 increases. Remark 2: the choice of β1. Theorem 3.1 requires β1 < √ β2. Since β2 is suggested to be large, our convergence result can cover flexible choice of β1 ∈ [0, 1). For instance, β2 = 0.999 brings the threshold of β1 < 0.9995, which covers basically all practical choices of β1 reported in the literature (see Section 1), including the default setting β1 = 0.9. This result is much stronger than those in the RMSProp literature (e.g. (Shi et al., 2020; Zaheer et al., 2018)). To our knowledge, we are the first to prove convergence of Adam under any β1 ∈ [0, 1) without bounded gradient assumption. Remark 3: convergence to a bounded region. When D0 > 0, Adam converges to a bounded region near critical points. As discussed in Section 2.1, converging to bounded region is common for stochastic methods including constant-stepsize SGD (Yan et al., 2018; Yu et al., 2019; Liu et al., 2020b) and diminishing-stepsize RMSProp (Zaheer et al., 2018; Shi et al., 2020). This phenomenon is also observed in practice: even for convex quadratic function with D0 > 0, Adam with diminishing stepsize cannot reach exactly zero gradient (see Figure 4 (a) in Section 6). 
This is because: even though ηk is decreasing, the effective stepsize ηk/ √ vk,i might not decay. The good news is that, the constant O( √ D0) vanishes to 0 as β2 goes to 1 (both in theory and experiments). The relation between β2 and constant O( √ D0) are introduced in Remark G.14 in Appendix G.9. The size shrinks to 0 because the movement of √vk,i shrinks as β2 increases. As a corollary of Theorem 3.1, we have the following result under SGC (i.e., D0 = 0). Corollary 3.2. Under the setting in Theorem 3.1. When D0 = 0 for Assumption 2.2, we have min k∈[km,T ] E‖∇f(xk,0)‖2 = O ( log T√ T ) . Under SGC (i.e. D0 = 0), Corollary 3.2 states that Adam can converge to critical points. This is indeed the case in practice. For instance, function (2) satisfies SGC and we observe 0 gradient norm after Adam converges (see Section 6 and Appendix B). The convergence rate in Corollary 3.2 is comparable to that of SGD under the same condition in (Vaswani et al., 2019). 5When β1 = 0.9, km = 15 for any n ≥ 1. 3.2 Divergence Results Theorem 3.1 shows that when β2 is large, any β1 < √ β2 ensures convergence. Now we consider the case where β2 is small. We will show that in this case, a wide range of β1 is facing the risk of diverging to infinity. The divergence of small-β2 Adam suggests that “large β2" is necessary in the convergence result Theorem 3.1. We construct a counter-example in Fn,f ∗ L,D0,D1 (Rd). Consider f(x) = ∑n−1 i=0 fi(x) for x ∈ R , we define fi(x) as: fi(x) = { nx, x ≥ −1 n 2 (x+ 2)2 − 3n 2 , x < −1 for i = 0, fi(x) = { −x, x ≥ −1 − 1 2 (x+ 2)2 + 3 2 , x < −1 for i > 0. (2) Summing up all the fi(x), we can see that f(x) = { x, x ≥ −1 1 2 (x+ 2)2 − 3 2 , x < −1 is a lower bounded convex smooth function with optimal solution x∗ = −2. Function (2) allows both iterates and gradients to diverge to infinity. As shown in Figure 1 (a), when running Adam on (2), there exists a red large-error region. This shows the sign of divergence. We further theoretically verify the conjecture in Proposition 3.3. Proposition 3.3. For any function class Fn,f ∗ L,D0,D1 (Rd), there exists a f(x) ∈ Fn,f ∗ L,D0,D1 (Rd), s.t. when (β1, β2) satisfies analytic condition (12), (13), (14) in Appendix E, Adam’s iterates and function values diverge to infinity. By solving these conditions in NumPy, we plot the orange region in Figure 2. The size of the region depends on n and it expands to the whole region when n goes to infinity. The proof can be seen in Appendix E. We find the “divergence region" always stays below the “convergence threshold" γ1(n) in Theorem 3.1, so the two results are self-consistent (see the remark in Appendix E). Proposition 3.3 states the divergence of iterates and function values. Consistently, our experiments also show the divergence of gradient (see Section 6 and Appendix B). These results characterize Adam’s divergence behavior both numerically and theoretically. We emphasize that the orange region is not discussed in (Reddi et al., 2018) because we consider n fixed while they allow n changing. When n is allowed to increase, our orange region will expand to the whole region and thus we can derive a similar (actually stronger) result as (Reddi et al., 2018). We provide more explanation in Section 4. Combining Theorem 3.1 and Proposition 3.3, we establish a clearer image on the relation between (β1, β2) and qualitative behavior of Adam. 4 Reconciling Our Results with (Reddi et al., 2018) We discuss more on the relation between (Reddi et al., 2018) and our results. 
4 Reconciling Our Results with (Reddi et al., 2018)

We discuss in more detail the relation between (Reddi et al., 2018) and our results. The divergence result shown in Section 1 does not contradict our convergence result in Theorem 3.1. Further, it is different from our divergence result in Proposition 3.3. The key difference lies in whether (β1, β2) is picked before or after picking the function class F^{n,f*}_{L,D0,D1}(R^d). We discuss the following two cases.

Case I: (β1, β2) is picked before picking F^{n,f*}_{L,D0,D1}(R^d). As discussed in Section 1, the divergence result requires a different n for different (β1, β2). In this sense, the considered function class is constantly changing. This does not contradict our Theorem 3.1, which considers a fixed function class with fixed n. For Case I, we illustrate Adam's behavior in Figure 3. The red region is proved by (Reddi et al., 2018). For completeness, we remove the condition "β1 < √β2" and further prove that Adam can diverge to infinity for any (β1, β2) ∈ [0, 1)^2. The result is shown in the following Corollary 4.1.

Corollary 4.1. For any (β1, β2) ∈ [0, 1)^2, there exists a function satisfying Assumptions 2.1 and 2.2 such that Adam's iterates and function values diverge to infinity.

The proof of Corollary 4.1 is given in the final paragraph of Appendix E. In the proof, we also require a different n to cause divergence for different (β1, β2), so the function class is constantly changing. As a result, in Case I, we cannot prove any convergence result.

Case II: (β1, β2) is picked after picking F^{n,f*}_{L,D0,D1}(R^d). When the function class is picked in advance, the sample size n is also fixed. This case is closer to most practical applications. In this case, we find that Adam's behavior changes significantly across the different regions of Figure 3. First, for every f(x) ∈ F^{n,f*}_{L,D0,D1}(R^d), Adam converges when β1 < √β2 and β2 is large. Second, there exists f(x) ∈ F^{n,f*}_{L,D0,D1}(R^d) for which Adam diverges to infinity when (β1, β2) lies in the orange region of Figure 2. Since Case II is closer to practical scenarios, these results can provide better guidance for hyperparameter tuning for Adam users. We provide some suggestions for practitioners in Appendix C. For Case II, we summarize the possible behaviors of Adam in Table 1. We also illustrate our convergence and divergence results in Figure 1 (d). Note that there are some blank areas where Adam's behavior remains unknown; we leave this as interesting future work.

5 Proof Ideas for the Convergence Result

We now (informally) introduce our proof ideas for the convergence result in Theorem 3.1. Simply put, we want to keep the update direction mk,i/√vk,i inside the dual cone of the gradient direction, namely

\[ \mathbb{E}\Big\langle \nabla f(x_{k,0}),\ \sum_{i=0}^{n-1}\frac{m_{k,i}}{\sqrt{v_{k,i}}}\Big\rangle > 0. \tag{3} \]

However, directly proving (3) can be difficult because both mk,i and vk,i distort the trajectory. To start with, we try to control the movement of vk,i by increasing β2 (a similar idea to (Shi et al., 2020; Zou et al., 2019; Chen et al., 2021)). Recalling that v_{k,i} = (1 − β2) Σ_{j=1}^{i} β2^{i−j} ∇f_{τ_{k,j}}(x_{k,j}) ∘ ∇f_{τ_{k,j}}(x_{k,j}) + β2^{i} v_{k,0}, we have v_{k,i} ≈ v_{k,0} when β2 is large. In this case,

\[ \mathbb{E}\Big\langle \nabla f(x_{k,0}), \sum_{i=0}^{n-1}\frac{m_{k,i}}{\sqrt{v_{k,i}}}\Big\rangle \;\approx\; \mathbb{E}\Big\langle \frac{\nabla f(x_{k,0})}{\sqrt{v_{k,0}}}, \sum_{i=0}^{n-1} m_{k,i}\Big\rangle \;\approx\; \mathbb{E}\Big\langle \frac{\nabla f(x_{k,0})}{\sqrt{v_{k,0}}}, \nabla f(x_{k,0})\Big\rangle \;>\; 0, \]

where the first "≈" is due to the large β2 and the second "≈" is our goal. Now we need to show

\[ \mathbb{E}\Big\langle \frac{\nabla f(x_{k,0})}{\sqrt{v_{k,0}}},\ \Big(\sum_{i=0}^{n-1} m_{k,i}\Big)-\nabla f(x_{k,0})\Big\rangle \;\overset{(*)}{=}\; \mathbb{E}\Big(\sum_{l=1}^{d}\sum_{i=0}^{n-1}\frac{\partial_l f(x_{k,0})}{\sqrt{v_{l,k,0}}}\big(m_{l,k,i}-\partial_l f_{\tau_{k,i}}(x_{k,0})\big)\Big) \;\approx\; 0, \tag{4} \]

where ∂_l f(x_{k,0}) is the l-th component of ∇f(x_{k,0}), and similarly for m_{l,k,i} and v_{l,k,0}; (*) is due to the finite-sum structure. However, it is not easy to prove (4).
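To make the difficulty in (4) concrete, it helps to unroll the momentum recursion of Algorithm 1 within epoch k. This display is added for exposition (it follows directly from m_{k,i} = β1 m_{k,i−1} + (1 − β1) ∇f_{τ_{k,i}}(x_{k,i}) and is not quoted from the paper):

\[ m_{k,i} \;=\; (1-\beta_1)\sum_{j=0}^{i}\beta_1^{\,i-j}\,\nabla f_{\tau_{k,j}}(x_{k,j}) \;+\; \beta_1^{\,i+1}\, m_{k,-1}. \]

When β1 is close to 1, the weights (1 − β1)β1^{i−j} decay slowly and the carried-over term β1^{i+1} m_{k,−1} remains non-negligible, so m_{k,i} mixes many stale gradients evaluated at earlier iterates. This is exactly the "massive momentum" obstacle discussed next, and Lemma 5.1 below shows that, in expectation, these stale contributions largely cancel across an epoch.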
We point out some technical issues below.

Issue I: massive momentum. Directly proving (4) is still not easy. We first need to consider a simplified problem: for every l ∈ [d], suppose we treat ∂_l f(x_{k,0})/√v_{l,k,0} as a constant; how can we bound E Σ_{i=0}^{n−1} (m_{l,k,i} − ∂_l f_{τ_{k,i}}(x_{k,0}))? It turns out that even this simplified problem is non-trivial. When β1 is large, m_{l,k,i} contains heavy historical signals which significantly distort the trajectory away from the gradient direction. The existing literature (Zaheer et al., 2018; De et al., 2018; Shi et al., 2020) takes a simple approach: it sets β1 ≈ 0 so that m_{l,k,i} ≈ ∂_l f_{τ_{k,i}}(x_{k,i}), and then (4) ≈ 0 follows. However, this method cannot be applied here, since we are interested in the practical case where β1 is large in [0, 1).

Issue II: stochastic non-linear dynamics. Even if we solve Issue I, it is still unclear how to prove (4). This is because, for every l ∈ [d], ∂_l f(x_{k,0})/√v_{l,k,0} is a random variable rather than a constant. With this term involved, we are faced with stochastic non-linear dynamics, which are difficult to analyze. Further, ∂_l f(x_{k,0})/√v_{l,k,0} is statistically dependent on (m_{l,k,i} − ∂_l f_{τ_{k,i}}(x_{k,0})), so we cannot handle the expectation E(∂_l f(x_{k,0})/√v_{l,k,0}) separately. Unfortunately, even with additional assumptions like bounded gradients, there is no general approach to tackle the above issues. In this work, we propose solutions that hold regardless of the gradient magnitude.

Solution to Issue I. We prove the following lemma to resolve Issue I.

Lemma 5.1. (Informal) Consider Algorithm 1. For every l ∈ [d] and any β1 ∈ [0, 1), we have the following result under Assumption 2.1:

\[ \delta(\beta_1) := \mathbb{E}\sum_{i=0}^{n-1}\big(m_{l,k,i}-\partial_l f_{\tau_{k,i}}(x_{k,0})\big) \;=\; O\Big(\tfrac{1}{\sqrt{k}}\Big), \]

where ∂_l f(x_{k,0}) is the l-th component of ∇f(x_{k,0}) and m_{l,k,i} = (1 − β1) ∂_l f_{τ_{k,i}}(x_{k,i}) + β1 m_{l,k,i−1}.

We present the proof idea in Appendix A. Simply put, we construct a simple toy example called the "color-ball" model (of the 1st kind). This toy model reveals a special property of δ(β1): for Algorithm 1, error terms from successive epochs can be canceled, which keeps the momentum roughly in the descent direction. This important property is not revealed in any existing work.

Remark 4: When assuming bounded gradients ‖∇f(x)‖ ≤ G, a naive upper bound would be δ(β1) = O(G). However, such a constant upper bound does not imply that δ(β1) is close to 0, and it does not help prove convergence. This might be part of the reason why large-β1 Adam is hard to analyze even under bounded gradients (see related works in Section 2.2). We emphasize that Lemma 5.1 holds regardless of the gradient norm, so it can be used in both bounded- and unbounded-gradient analyses.

Solution to Issue II. We try to show (4) using Lemma 5.1. However, a direct application does not work, since ∂_l f(x_{k,0})/√v_{l,k,0} is random. Despite this randomness, we find that when β2 is large, the change of ∂_l f(x_{k,0})/√v_{l,k,0} shrinks over iterations. As such, although ∂_l f(x_{k,0})/√v_{l,k,0} introduces extra perturbation, the quantity in (4) shares a similar asymptotic behavior with δ(β1). We prove the following Lemma 5.2.

Lemma 5.2. (Informal) Under Assumptions 2.1 and 2.2, consider Algorithm 1 with large β2 and β1 < √β2. For those l whose gradient component is larger than a certain threshold, we have

\[ \Big|\frac{\partial_l f(x_{k,0})}{\sqrt{v_{l,k,0}}}-\frac{\partial_l f(x_{k-1,0})}{\sqrt{v_{l,k-1,0}}}\Big| = O\Big(\tfrac{1}{\sqrt{k}}\Big); \tag{5} \]

\[ \mathbb{E}\Big(\frac{\partial_l f(x_{k,0})}{\sqrt{v_{l,k,0}}}\sum_{i=0}^{n-1}\big(m_{l,k,i}-\partial_l f_{\tau_{k,i}}(x_{k,0})\big)\Big) = O\Big(\tfrac{1}{\sqrt{k}}\Big). \tag{6} \]

In Appendix A, we introduce how to derive (6) from (5).
To do so, we introduce a new type of "color-ball" model (we call it the color-ball of the 2nd kind), which accounts for the random perturbation of ∂_l f(x_{k,0})/√v_{l,k,0}. Understanding the color-ball model of the 2nd kind is crucial for proving Lemma 5.2. We conclude the proof of (4) with some additional analysis of "those l with small gradient component"; this case is a bit easier since it reduces to the bounded-gradient case. For readers who want to learn more about the ideas for tackling Issues I and II, please refer to Appendix A, where we formally introduce the 1st and 2nd kinds of color-ball models. Since the whole proof is quite long, we provide a proof sketch in Appendix G.1; the full proof is presented in Appendix G.

6 Experiments

To support our theory, we provide more simulations and real-data experiments. All the experimental settings and hyperparameters are presented in Appendix B.1. We aim to show: (I) when β2 is large, a large range of β1 gives good performance, including all β1 < √β2; (II) when β2 is small, a large range of β1 performs relatively badly.

Convergence to a bounded region when D0 > 0. In Figure 4, we run large-β2 Adam on function (9) (defined later in Appendix B). This function satisfies Assumption 2.2 with D0 > 0. We find that even with the diminishing stepsize ηk = 1/√k, Adam may not converge to an exact critical point; instead, it converges to a bounded region. This is because, even though ηk is decreasing, the effective stepsize ηk/√vk,i might not decay. Further, the size of the region shrinks when β2 increases, because the movement of √vk,i shrinks as β2 increases. These phenomena match Remark 3 and claim (I).

Convergence to critical points when D0 = 0. Since function (2) satisfies Assumption 2.2 with D0 = 0, we run more experiments on (2) with initialization x = −5 and n = 5, 10, 15, 20. We show the result for n = 20 in Figure 4 (a), (b); the rest are shown in Appendix B. We find that when β2 is large, Adam converges to critical points for β1 < √β2. These phenomena match claim (I).

Gradient norms of iterates can be unbounded when β2 is small. On function (2), we further run Adam with small β2 from the initialization x = −5. In this case, the gradient norms of the iterates increase dramatically, which underscores the importance of discarding bounded-gradient assumptions. These phenomena match claim (II).

MNIST and CIFAR-10. As shown in Figure 1 (b) & (c) in Section 1, the training results match both claims (I) and (II). In addition, there is a convex-shaped boundary in the transition from low loss to higher loss; this boundary roughly matches the condition in Theorem 3.1.

NLP. We use Adam to train Transformer-XL (Dai et al., 2019) on the WikiText-103 dataset (Merity et al., 2016). This architecture and dataset are widely used in NLP tasks (e.g., (Howard & Ruder, 2018; See et al., 2017)). As shown in Figure 4 (d), the training results match both claims (I) and (II).
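For readers who want to reproduce the qualitative (β1, β2) sweep behind Figure 1, the following PyTorch sketch scans a grid of betas on a small synthetic classification task and records the final training loss. It is only a schematic of the protocol: the architecture, data, learning rate, and step count are placeholder choices rather than the paper's MNIST / CIFAR-10 or Transformer-XL settings, and torch.optim.Adam applies bias correction instead of the special initialization in Algorithm 1.

```python
import torch
import torch.nn as nn

def final_loss(beta1, beta2, steps=500, seed=0):
    """Train a small MLP on synthetic data with the given Adam betas; return final loss."""
    torch.manual_seed(seed)
    X, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(beta1, beta2))
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

for b1 in (0.0, 0.5, 0.9, 0.99):
    for b2 in (0.1, 0.9, 0.999):
        print(f"beta1={b1}, beta2={b2}, final loss={final_loss(b1, b2):.4f}")
```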
7 Conclusions

In this work, we explore the (non-)convergence of Adam. When β2 is large, we prove that Adam can converge with any β1 < √β2. When β2 is small, we further show that Adam might diverge to infinity for a wide range of β1. One interesting question is to verify the advantage of Adam over SGD; in this work we focus on the fundamental issue of convergence, and proving faster convergence of Adam is left for future work.

Acknowledgments and Disclosure of Funding

Yushun Zhang would like to thank Bohan Wang, Reviewer xyuf, Reviewer UR9H, Reviewer tmNN and Reviewer V9yg for the careful proofreading and helpful comments. Yushun Zhang would like to thank Bohan Wang for the valuable discussions around Lemma G.3, and Reviewer UR9H for the valuable discussions around Lemma G.13. This work is supported by the Internal Project Fund from Shenzhen Research Institute of Big Data under Grant J00220220001, by NSFC-A10120170016 and NSFC-617310018, and by the Guangdong Provincial Key Laboratory of Big Data Computing.
1. What are the key contributions and novel aspects introduced by the paper in the context of Adam algorithm and finite-sum optimization? 2. What are the strengths and weaknesses of the paper regarding its theoretical analysis and experimental validation? 3. How does the reviewer assess the significance and logical flow of the paper's findings, particularly in comparison with prior works like Reddi et al. 2018? 4. What are the questions raised by the reviewer regarding the paper's message, the importance of finite-sum optimization, and the relationship between assumptions and convergence guarantees? 5. How does the reviewer suggest improving the paper, such as providing a hardness result in an online learning setting or showing the impact of knowing the number of samples on the results theoretically and empirically?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This manuscript studied the Adam algorithm in the finite-sum setting: the number of samples is fixed in advance and Adam can pick ( β 1 , β 2 ) according to it. The authors showed that under the case β 1 < β 2 < 1 and large β 2 , Adam can converge to the stationary point under strong growth conditions and without bounded gradient assumption. When β 2 is small, the authors also identified a region of ( β 1 , β 2 ) such that Adam diverges. Some simple experiments on standard benchmark datasets are provided to justify the theoretical findings. Strengths And Weaknesses Strengths: The paper studied an important problem: the convergence of Adam in the finite-sum setting and the number of summands in the objective function is known in advance. The paper is generally well-written. The proof idea is clearly presented. Weaknesses: While the results are interesting, I doubt the significance and the logical flow of this result. (1). The paper does not convey important messages about the convergence of Adam compared with Reddi et al. 2018. In particular, Reddi et al. 2018 studied Adam in the online convex optimization setting: they provided regret guarantees and non-convergence of average regret. However, this paper can only guarantee Adam's convergence with the strong growth condition and only to a stationary point. I don't see why the framework of finite-sum optimization and the knowledge of n (i.e., the number of samples) would be so important to differentiate this paper from Reddi et al. 2018. For example, it might be possible that in the particular case of β 1 < β 2 < 1 for Adam, the average regret does not converge but Adam indeed converges to a stationary point in the online learning setting such as Reddi et al. 2018 (where n is not available). To show the fundamental advantage of finite-sum optimization and the knowledge of n , the authors are expected to show something like this: for an online nonconvex optimization problem, Adam with β 1 < β 2 < 1 with large β and any other parameter choice (i.e., learning rate) cannot even converge to a stationary point in polynomial time, but the finite-sum version of this problem can indeed converge in polynomial time. (2). The convergence to stationary point only when D 0 = 0 in Theorem 3.1. What is the relationship between D 0 = 0 in Assumption 2.2 and the bounded gradient? We know that Adam can converge if we have a bounded gradient (e.g., Guo et al. 2021). Experiments are rather weak. The authors did not perform an ablation study in terms of n (i.e., the number of training samples). Since this is the main message of the paper, the authors are expected to show that without the knowledge of n , Adam would diverge due to inappropriate hyperparameter choice (e.g., learning rate, β 1 as in Theorem 3.1). Questions Why this paper's message is important compared with Reddi et al. 2018? Why the knowledge of n in finite-sum optimization is so important? What does the result look like in the online nonconvex optimization and finite-sum convex optimization setting? The regret minimization might fail in certain regimes of β 1 , β 2 , but it might also converge to a stationary point without the knowledge of n or even in an online setting. I don't know why Theorem 3.1 is so important since it is not comparable with Theorem 4 of Reddi et al. 2018. To establish the importance of the finite-sum setting, the authors need to provide a hardness result in an online learning setting in terms of non-convergence to a stationary point in polynomial time. 
What is the relationship between D0 = 0 in Assumption 2.2 and the bounded gradient? We know that Adam can converge if we have a bounded gradient (e.g., Guo et al. 2021). How does the knowledge of n affect the results theoretically and empirically? If these concerns can be addressed, I am happy to increase my score. ======POST REBUTTAL====== Thank you for the authors' response. I have read the other reviews as well. My main concern still remains with the finite-sum structure, but I decided to change my evaluation a bit after reading the rebuttal. Limitations N/A
NIPS
Title Adam Can Converge Without Any Modification On Update Rules Abstract Ever since Reddi et al. (2018) pointed out the divergence issue of Adam, many new variants have been designed to obtain convergence. However, vanilla Adam remains exceptionally popular and it works well in practice. Why is there a gap between theory and practice? We point out that there is a mismatch between the settings of theory and practice: Reddi et al. (2018) pick the problem after picking the hyperparameters of Adam, i.e., (β1, β2), while practical applications often fix the problem first and then tune (β1, β2). Due to this observation, we conjecture that the empirical convergence can be theoretically justified, but only if we change the order of picking the problem and the hyperparameters. In this work, we confirm this conjecture. We prove that, when the 2nd-order momentum parameter β2 is large and the 1st-order momentum parameter satisfies β1 < √β2 < 1, Adam converges to a neighborhood of critical points. The size of the neighborhood is proportional to the variance of the stochastic gradients. Under an extra condition (the strong growth condition), Adam converges to critical points. It is worth mentioning that our results cover a wide range of hyperparameters: as β2 increases, our convergence result can cover any β1 ∈ [0, 1), including β1 = 0.9, which is the default setting in deep learning libraries. To our knowledge, this is the first result showing that Adam can converge without any modification on its update rules. Further, our analysis does not require assumptions of bounded gradients or bounded 2nd-order momentum. When β2 is small, we further point out a large region of (β1, β2) combinations where Adam can diverge to infinity. Our divergence result considers the same setting (fixing the optimization problem ahead of time) as our convergence result, indicating that there is a phase transition from divergence to convergence when increasing β2. These positive and negative results provide suggestions on how to tune Adam hyperparameters: for instance, when Adam does not work well, we suggest tuning up β2 and trying β1 < √β2.

1 Introduction

Modern machine learning tasks often aim to solve the following finite-sum problem:

\[ \min_{x\in\mathbb{R}^d} f(x) \;=\; \sum_{i=0}^{n-1} f_i(x), \tag{1} \]

where n is the number of samples or mini-batches and x denotes the trainable parameters.
In deep learning, Adam (Kingma & Ba, 2014) is one of the most popular algorithms for solving (1). It has been applied to various machine learning domains such as natural language processing (NLP) (Vaswani et al., 2017; Brown et al., 2020; Devlin et al., 2018), generative adversarial networks (GANs) (Radford et al., 2015; Isola et al., 2017; Zhu et al., 2017) and computer vision (CV) (Dosovitskiy et al., 2021). Despite its prevalence, Reddi et al. (2018) point out that Adam can diverge with a wide range of hyperparameters. A main result in (Reddi et al., 2018) states that (see Footnote 2): for any β1, β2 such that 0 ≤ β1 < √β2 < 1, there exists a problem such that Adam diverges. Here, β1 and β2 are the hyperparameters that control Adam's 1st-order and 2nd-order momentum; a full description of Adam is given in Algorithm 1 (presented later in Section 2.1).

(* Correspondence author.) Footnote 2: We formally re-state their results in Appendix D.2.

Ever since (Reddi et al., 2018) pointed out the divergence issue, many new variants have been designed. For instance, AMSGrad (Reddi et al., 2018) enforced the adaptor vt (defined later in Algorithm 1) to be non-decreasing; AdaBound (Luo et al., 2019) imposed the constraint vt ∈ [Cl, Cu] to ensure boundedness of the effective stepsize. We introduce more variants in Appendix D.1. On the other hand, counter-intuitively, vanilla Adam remains exceptionally popular (see evidence at (Scholar)). Without any modification on its update rules, Adam works well in practice. Even more mysteriously, we find that the commonly reported hyperparameters actually satisfy the divergence condition stated earlier. For instance, Kingma & Ba (2014) claimed that (β1, β2) = (0.9, 0.999) is a "good choice for the tested machine learning problems", and it is indeed the default setting in deep learning libraries. In the super-large models GPT-3 and Megatron (Brown et al., 2020; Smith et al., 2022), (β1, β2) is chosen to be (0.9, 0.95). GAN researchers (e.g., Radford et al. (2015); Isola et al. (2017)) use (β1, β2) = (0.5, 0.999). All these hyperparameters live in the divergence region β1 < √β2. Surprisingly, instead of exhibiting the divergence issue, these hyperparameters achieve good performance and actually show signs of convergence. Why does Adam work well despite its theoretical divergence issue? Is there any mismatch between deep learning problems and the divergent example? We take a closer look at the divergence example and find that the mismatch does exist. In particular, we notice an important (but often ignored) characteristic of the divergence example: (Reddi et al., 2018) picks (β1, β2) before picking the sample size n. Put another way, to construct the divergence example, they change n for different (β1, β2). For instance, for (β1, β2) = (0, 0.99), they use one n to construct the divergent example; for (β1, β2) = (0, 0.9999), they use another n to construct another divergent example. On the other hand, in the practical applications of Adam listed above, practitioners tune the hyperparameters (β1, β2) after the sample size n is fixed. So there is a gap between the settings of theory and practice: the order of picking n and (β1, β2) is different. Considering the good performance of Adam under fixed n, we conjecture that Adam can converge in this setting. Unfortunately, the behavior of vanilla Adam is far less studied than its variants (perhaps due to the criticism of divergence).
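As a quick arithmetic check of the claim above that these reported settings fall inside the divergence region (the numbers below are a straightforward calculation and are not quoted from the paper):

\[ \sqrt{0.999}\approx 0.9995 > 0.9,\qquad \sqrt{0.95}\approx 0.9747 > 0.9,\qquad \sqrt{0.999}\approx 0.9995 > 0.5, \]

so (0.9, 0.999), (0.9, 0.95), and (0.5, 0.999) all satisfy 0 ≤ β1 < √β2 < 1, which is the condition under which Reddi et al. (2018) construct divergent problems.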
To verify this conjecture, we run experiments for different choices of (β1, β2) on a few tasks. First, we run Adam on a convex function (2) with fixed n (see the definition in Section 3.2). Second, we run Adam on classification problems on MNIST and CIFAR-10 with fixed batch size. We observe some interesting phenomena in Figure 1 (a), (b) and (c). First, when β2 is large, the optimization error is small for almost all values of β1. Second, when β1, β2 are both small, there is a red region with relatively large error. On MNIST and CIFAR-10, the error in the red region is about 1.4 times that in the blue region. The situation is much worse on function (2) (defined later in Section 3.2): the error in the red region is 70 times higher. While Adam's performance seems unstable in the red region, we find that Adam always performs well in the top blue region in Figure 1. This seems to suggest that Adam can converge without any algorithmic modification, as long as β1 and β2 are chosen properly. We ask the following question: Can Adam provably converge without any modification on its update rules? In this work, we theoretically explore this question. Our contributions are visualized in Figure 1 (d). We prove the following results when n is fixed (or, more rigorously, when the function class is fixed):

• We prove that when β2 is large enough and β1 < √β2, Adam converges to a neighborhood of critical points. The size of the neighborhood is proportional to the variance of the stochastic gradients. With an extra condition (the so-called strong growth condition), we prove that Adam can converge to critical points. As β2 increases, these results can cover any momentum parameter β1 ∈ [0, 1), including the default setting β1 = 0.9. In particular, our analysis does not require a bounded gradient assumption.

• We study the divergence issue of small-β2 Adam. We prove that for any fixed n (or, more rigorously, for any fixed function class), there exists a function such that Adam diverges to infinity when (β1, β2) is picked in the red region in Figure 1 (d). The size of the red region increases with n. The shape of the region follows the solution to our analytic conditions.

• We emphasize a few characteristics of our results. (1) Phase transition. The divergence result considers the same setting as our convergence result, indicating that there is a phase transition from divergence to convergence when changing β2. (2) Problem-dependent bounds. Our convergence and divergence regions of (β1, β2) are problem-dependent, which is drastically different from (Reddi et al., 2018), which established a problem-independent worst-case choice of (β1, β2). (3) Non-asymptotic characterization. The "divergence region" of (β1, β2) expands as n increases and converges to the whole region [0, 1)^2 as n goes to infinity, which recovers (and is in fact stronger than) the problem-independent divergence result of (Reddi et al., 2018) that requires β1 < √β2. In this sense, we can view the divergence result of (Reddi et al., 2018) as an asymptotic characterization of the divergence region (as n → ∞) and our divergence result as a non-asymptotic characterization (for any fixed n). We provide more discussion in Section 4.

• Our positive and negative results can provide suggestions for tuning β1 and β2: for instance, when Adam does not work well, we suggest tuning up β2 and trying β1 < √β2. We provide more tuning suggestions in Appendix C.

We believe our results can foster new understanding of Adam. While Reddi et al.
(2018) reveal that "Adam can diverge", our results show the other side of the coin: when n is fixed (or when the function class is fixed), Adam can still converge without any modification on its update rules. Our results suggest that Adam is still a theoretically justified algorithm and that practitioners can use it confidently. We further emphasize that our convergence results cover any β1 ∈ [0, 1), which allows the algorithm to carry arbitrarily heavy momentum signals. It turns out that large-momentum Adam is not easy to analyze. Even with stronger assumptions like bounded gradients (‖∇f(x)‖ < C, ∀x), its convergence is not well understood (see related works in Section 2.2). To the best of our knowledge, this is the first result proving that vanilla Adam with any β1 can converge without any assumption of bounded gradients or bounded 2nd-order momentum. The proof contains a new method to handle unbounded momentum in a stochastic non-linear dynamical system. We highlight our technical novelties in Section 5.

2 Preliminaries

2.1 Review of Adam

We consider the finite-sum problem (1). We use x to denote the optimization variable. We denote ∇fj as the gradient of fj and let ∘ be the component-wise product; the division and square-root operators are component-wise as well. We present randomly shuffled Adam in Algorithm 1.

Algorithm 1 Adam
Initialize x_{1,0} = x0, m_{1,−1} = ∇f(x0) and v_{1,−1} = max_i ∇fi(x0) ∘ ∇fi(x0).
for k = 1 → ∞ do
  Sample {τ_{k,0}, τ_{k,1}, ..., τ_{k,n−1}} as a random permutation of {0, 1, 2, ..., n − 1}
  for i = 0 → n − 1 do
    m_{k,i} = β1 m_{k,i−1} + (1 − β1) ∇f_{τ_{k,i}}(x_{k,i})
    v_{k,i} = β2 v_{k,i−1} + (1 − β2) ∇f_{τ_{k,i}}(x_{k,i}) ∘ ∇f_{τ_{k,i}}(x_{k,i})
    x_{k,i+1} = x_{k,i} − ηk / (√v_{k,i} + ε) ∘ m_{k,i}
  end for
  x_{k+1,0} = x_{k,n}; v_{k+1,−1} = v_{k,n−1}; m_{k+1,−1} = m_{k,n−1}
end for

In Algorithm 1, m denotes the 1st-order momentum and v denotes the 2nd-order momentum; they are weighted averages governed by the hyperparameters β1 and β2, respectively. Larger β1, β2 incorporate more historical information. We denote x_{k,i}, m_{k,i}, v_{k,i} ∈ R^d as the values of x, m, v at the k-th outer loop (epoch) and i-th inner loop (batch), respectively. We choose ηk = η1/√(nk) as the stepsize. In practice, ε is adopted for numerical stability and is often chosen to be 10^{-8}; in our theory, we allow ε to be an arbitrary non-negative constant, including 0. The original version of Adam in (Kingma & Ba, 2014) has an additional "bias correction" step. This "bias correction" step can be implemented by changing the stepsize ηk into η̂k = (√(1 − β2^k)/(1 − β1^k)) ηk and using zero initialization. In Algorithm 1, the "bias correction" step is replaced by a special initialization, which corrects the bias as well. Note that η̂k ∈ [√(1 − β2) ηk, ηk/(1 − β1)] is well-bounded near ηk, so ηk and η̂k yield the same convergence rate. In addition, since the effect of the initialization becomes negligible as training progresses, Adam with zero initialization and Adam with our initialization have the same asymptotic behavior. In the main body of our proof, we follow the form of Algorithm 1, which makes the results cleaner. For completeness, we add the proof of convergence of Adam with "bias correction" steps in Appendix G.11.
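For concreteness, here is a minimal NumPy sketch of Algorithm 1 (randomly shuffled Adam with the special initialization replacing bias correction). It is an illustrative implementation, not the authors' code; grads is assumed to be a list of callables with grads[i](x) returning ∇fi(x).

```python
import numpy as np

def shuffled_adam(grads, x0, eta1=0.1, beta1=0.9, beta2=0.999, eps=1e-8,
                  epochs=100, seed=0):
    """Randomly shuffled Adam as described in Algorithm 1."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    x = np.asarray(x0, dtype=float).copy()
    g0 = np.stack([np.asarray(g(x), dtype=float) for g in grads])
    m = g0.sum(axis=0)          # m_{1,-1} = grad f(x_0)
    v = (g0 ** 2).max(axis=0)   # v_{1,-1} = max_i grad f_i(x_0) o grad f_i(x_0), componentwise
    for k in range(1, epochs + 1):
        eta = eta1 / np.sqrt(n * k)            # eta_k = eta_1 / sqrt(n k)
        for i in rng.permutation(n):           # one random permutation per epoch
            g = np.asarray(grads[i](x), dtype=float)
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g * g
            x = x - eta * m / (np.sqrt(v) + eps)
    return x
```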
In our analysis, we make the assumptions below.

Assumption 2.1. We consider x ∈ R^d and assume each fi(x) has L-Lipschitz continuous gradient. We assume f(x) is lower bounded by a finite constant f*.

Assumption 2.2. fi(x) and f(x) satisfy Σ_{i=0}^{n−1} ‖∇fi(x)‖₂² ≤ D1 ‖∇f(x)‖₂² + D0, ∀x ∈ R^d.

Assumption 2.2 is quite general. When D1 = 1/n, it becomes the "constant variance" condition with constant D0/n. The "constant variance" condition is commonly used in both SGD and Adam analysis (e.g., (Ghadimi et al., 2016; Zaheer et al., 2018; Huang et al., 2021)). Assumption 2.2 allows more flexible choices of D1 ≠ n and is thus weaker than "constant variance". When D0 > 0, the problem instance is sometimes called "non-realizable" (Shi et al., 2020). In this case, adaptive gradient methods are not guaranteed to reach exact critical points; instead, they only converge to a bounded region (near critical points) (Zaheer et al., 2018; Shi et al., 2020). This phenomenon indeed occurs for Adam in experiments, even with diminishing stepsize (see Figure 4 (a)). The behavior of SGD is similar: constant-stepsize SGD converges to a bounded region whose size is proportional to the noise level D0 (Yan et al., 2018; Yu et al., 2019; Liu et al., 2020b). When D0 = 0, Assumption 2.2 is often called the "strong growth condition" (SGC) (Vaswani et al., 2019). When ‖∇f(x)‖ = 0, under SGC we have ‖∇fj(x)‖ = 0 for all j. SGC has become increasingly popular recently, e.g., (Schmidt & Roux, 2013; Vaswani et al., 2019). This condition is known to be reasonable in the overparameterized regime, where neural networks can interpolate all data points (Vaswani et al., 2019). We will show that Adam can converge to critical points if SGC holds. When n, f*, L, D0, D1 are fixed a priori, we use F^{n,f*}_{L,D0,D1}(R^d) to denote the function class containing the f(x) satisfying Assumptions 2.1 and 2.2 with constants n, f*, etc. Since n is fixed once the function class F^{n,f*}_{L,D0,D1}(R^d) is fixed, we introduce this notation to clearly present the divergence result in Proposition 3.3; without this pre-defined function class, the claim of divergence might be confusing.

2.2 Related Works

Ever since Reddi et al. (2018) pointed out the divergence issue, there have been many attempts to design new variants of Adam. Since we focus on understanding Adam without modification on its update rules, we introduce more variants later in Appendix D.1. Compared with proposing new variants, the convergence of vanilla Adam is far less studied (perhaps due to the criticism of divergence). There are only a few works analyzing vanilla Adam, and they require extra assumptions. Zhou et al. (2018b) analyze the counter-example in (Reddi et al., 2018) and find that certain hyperparameters can work; however, their analysis is restricted to the counter-example. Zaheer et al. (2018) study the relation between mini-batch sizes and the (non)convergence of Adam; however, this work requires β1 = 0, so Adam is reduced to RMSProp (Hinton et al., 2012). De et al. (2018) analyze RMSProp and non-zero-β1 Adam, but they assume that the signs of all stochastic gradients stay the same; it seems unclear how to check this condition a priori. Additionally, they require β1 to be inversely related to the upper bound of the gradient, which forces β1 to be small (as a side note, this result only applies to full-batch Adam). Défossez et al. (2020) analyze Adam with β1 < β2 and provide some insights on the momentum mechanisms. However, their bound is inversely proportional to ε (the hyperparameter for numerical stability), and the bound goes to infinity as ε goes to 0. This is different from practical applications, since a small ε such as 10^{-8} often works well. Further, using a large ε goes against the nature of adaptive gradient methods, because √v no longer dominates the choice of stepsize. In this case, Adam is essentially transformed back into SGD.
Two recent works, (Huang et al., 2021) and (Guo et al., 2021), propose novel and simple frameworks to analyze the Adam family with large β1. Yet, they require the effective stepsize of Adam to be bounded in a fixed interval, i.e., 1/(√vt + ε) ∈ [Cl, Cu] (see Footnote 3). This boundedness condition changes Adam into AdaBound (Luo et al., 2019), and thus they cannot explain the observations on the original Adam in Section 1. To summarize, all these works require at least one strong assumption (e.g., a large ε). Additionally, they all (including those for new variants) require bounded gradient assumptions. A recent work (Shi et al., 2020) makes the first attempt to analyze RMSProp without the bounded gradient assumption; they show that RMSProp can converge to a neighborhood of critical points (see Footnote 4). We believe it is important to study Adam rather than RMSProp. Numerically, Adam often outperforms RMSProp on complicated tasks (e.g., on Atari games, the mean reward is improved from 88% to 110% (Agarwal et al., 2020)). Theoretically, the literature on RMSProp cannot reveal the interaction between β1 and β2, or how these hyperparameters jointly affect (or jeopardize) the convergence of Adam. However, it is non-trivial to analyze the joint effect of β1 and β2. We point out at least three challenges. First, it is unclear how to control the massive momentum mt of Adam. Second, mt is multiplied by 1/√vt, causing non-linear perturbation. Third, mt and 1/√vt are statistically dependent and cannot be decoupled. We propose new methods to resolve these issues and highlight our technical novelties in Section 5.

2.3 The Importance and Difficulties of Removing Bounded Gradient Assumptions

Here, we emphasize the importance of removing the bounded gradient assumption. First, unlike the assumptions in Section 2.1, bounded gradients are not commonly assumed in SGD analysis, so it is of theoretical interest to remove this condition for Adam. Second, the bounded gradient condition rules out the possibility of gradient divergence a priori. However, there is numerical evidence showing that Adam's gradients can diverge (see Section 6 and Appendix B). Removing the boundedness assumption helps us point out the divergence-convergence phase transition in the (β1, β2) diagram. However, it is often difficult to analyze convergence without a bounded gradient assumption. First, it is non-trivial to control stochastic momentum; even for SGD, this task is challenging. For instance, an early paper by Bertsekas & Tsitsiklis (2000) analyzed SGD-type methods without any boundedness condition, but it is not until recently that Yu et al. (2019); Liu et al. (2020b); Jin et al. (2022) proved that SGDM (SGD with momentum) converges without a bounded gradient assumption. Such attempts at removing boundedness assumptions are often appreciated for general optimization problems, where being "bounded-assumption-free" is considered a major contribution. Second, for Adam, the role of the momentum mt is even more intricate, since it is multiplied by 1/√vt. Combined with vt, the impact of previous signals not only affects the update direction but also changes the stepsize for each component. Further, both the momentum mt and the stepsize 1/√vt are random variables, and they are highly correlated; such statistical dependency causes trouble for the analysis. In summary, the role of momentum in Adam can be quite different from that in SGDM or GDM. Even with boundedness conditions, the convergence of large-β1 Adam is still not well understood (see related works in Section 2.2).
1. What is the focus of the paper regarding Adam's convergence? 2. What are the strengths of the proposed approach, particularly in terms of removing the bounded gradient assumption? 3. Do you have any questions or concerns about the paper's assumptions, proofs, or conclusions? 4. How does the reviewer assess the novelty and contribution of the paper regarding analyzing methods with momentum? 5. Are there any grammatical or mathematical typos in the paper that need correction?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proves that Adam with hyper-parameters β 1 ≤ β 2 < 1 and large β 2 converges to a neighborhood of stationary points under mild assumptions on the function class and if Strong Growth Condition is satisfied, it converges to stationary point. Specifically, they argue that analyzing Adam with large β 1 which results in large momentum is harder and previous works could not manage to handle non-zero β 1 . Strengths And Weaknesses This paper is superbly written and the proofs, although being long, are managed to be as clear as possible. The writing style is excellent and helps reader understand the main ideas, specially the long proofs. I checked all the proofs, except for pages 56-58 and 60-64. Except for some typos, which do not affect the results, I could not find any flaw. I really like how you manage to use color-ball method and finite vs infinite sum argument in your work. Also as pointed out in several places, you explain the short-comings of previous work in analyzing non-zero β 1 .The removal of bounded gradient assumption and sticking to Assumption 2.1 which encompasses SGC and constant variance assumptions also looks entertaining. Example of giving convex functions in prop. 3.3 when Adam diverges in a relatively large region is also instructive: small β 2 is dangerous. examples on practical networks also corroborate the analytic results. This piece of work seems to have important contribution for analyzing methods with momentum like Adam and as pointed in conclusion, it paves the way for much more intricate convergence analysis of Adam and its effectiveness compared to SGD. I highly recommend this paper to be accepted. Questions Question: in lines 123-124, can you tell me (intuitively) why SGC is reasonable in overparametrized regime?can you point out to more literature (if any) regarding SGC? You say in line 98 that in your theoretical analysis you let ϵ be an arbitrary non-negative constant including 0, but in proofs I couldn't catch that. It seems to me you just set it to 0. Please explain. in line 625, is it like you can β 1 = 0 and large β 2 can converge repeating the proof of Reddi? or is it a fundamentally different approach? in lines 196-199, I can't understand why smaller batchsize needs a larger β 2 . The bound γ 1 ( n ) increases with n, so it seems to me that smaller n has smaller γ 1 ( n ) hence smaller β 2 is okay. Grammatical Typos: line 93, randomly. line 125, converge "to". line 161, require. line 205, "and". line 262, "red". line 328, "reduces". line 509, "some more". line 597 and 622, "require". line 658, "as". line 677, "an". line 690, converge"s". line 732, this. line 732, "a" instead of "an". Math typos: line 225 and 699, I think for i > 0 , f i ( x ) = 1 / 2 ( x + 2 ) 2 " + " 3 / 2 . line 291, equation 3, why there is no sign of v k , 0 in r.h.s (which is the case in equation 4) ? line 311, definition of m l , k , i should have f τ rather than f . bottom of page 22, β 2 s in nominators may change to β 1 .This also occurs in line 734, first line. line 725, first line, j = 0 . line 739, I feel a missing 1 k in r.h.s. Also in line 740, there might be a missing n in r.h.s. because of the summation. just above line 789, definition of v should have β 2 rather than β 1 . line 793, eq 17, f τ k , i + 1 ( x k , i ) should change to f τ k , i ( x k , i + 1 ) , which is corrected in line 795. line 815, x k − x k + 1 instead of x k + 1 − x k . line 837, 2nd line, based on eq (18) there might be a missing 1 1 − β 2 , which just affects F 1 . 
In the lines following line 1052, the subscript ℓ is missing; you can note that at the beginning of Appendix G.3 to prevent confusion. In the equation at the bottom of page 40, based on Lemma F.3 you may have an extra factor of n, as Lemma F.3 deals with f_τ rather than f. Bottom of page 46, −1/k. In equation 45, I feel like f_0 should cancel out in all epochs, just like epoch 0. In lines 1159 and 1166, Lemma G.12 gives β2^n in the denominator rather than the numerator; please correct it. Line 1163, second equation, it is (F_{l,k})^{k−1}. Limitations Regarding negative societal impact, as the authors pinpoint, training neural nets for illegal use is the main concern of this kind of work.
NIPS
Title Finding and Listing Front-door Adjustment Sets Abstract Identifying the effects of new interventions from data is a significant challenge found across a wide range of the empirical sciences. A well-known strategy for identifying such effects is Pearl’s front-door (FD) criterion [26]. The definition of the FD criterion is declarative, only allowing one to decide whether a specific set satisfies the criterion. In this paper, we present algorithms for finding and enumerating possible sets satisfying the FD criterion in a given causal diagram. These results are useful in facilitating the practical applications of the FD criterion for causal effects estimation and helping scientists to select estimands with desired properties, e.g., based on cost, feasibility of measurement, or statistical power. 1 Introduction Learning cause and effect relationships is a fundamental challenge across data-driven fields. For example, health scientists developing a treatment for curing lung cancer need to understand how a new drug affects the patient’s body and the tumor’s progression. The distillation of causal relations is indispensable to understanding the dynamics of the underlying system and how to perform decision-making in a principled and systematic fashion [27, 37, 2, 30, 1, 23, 24]. One of the most common methods for learning causal relations is through Randomized Controlled Trials (RCTs, for short) [8]. RCTs are considered the “gold standard” in many fields of empirical research and are used throughout the health and social sciences as well as machine learning and AI. In practice, however, RCTs are often hard to perform due to ethical, financial, and technical issues. For instance, it may be unethical to submit an individual to a certain condition if such a condition may have some potentially negative effects (e.g., smoking). Whenever RCTs cannot be conducted, one needs to resort to analytical methods to infer causal relations from observational data, which appears in the literature as the problem of causal effect identification [26, 27]. The causal identification problem asks whether the effect of holding a variable X at a constant value x on a variable Y , written as P (Y |do(X = x)), or P (Y |do(x)), can be computed from a combination of observational data and causal assumptions. One of the most common ways of eliciting these assumptions is in the form of a causal diagram represented by a directed acyclic graph (DAG), where its nodes and edges describe the underlying data generating process. For instance, in Fig. 1a, three nodes X,Z, Y represent variables, a directed edge X → Z indicates that X causes Z, and a dashed bidirected edge X ↔ Y represents that X and Y are confounded by unmeasured (latent) factors. Different methods can solve the identification problem, including Pearl’s celebrated do-calculus [26] as well as different algorithmic solutions [40, 34, 12]. In practice, researchers often rely on identification strategies that generate well-known identification formulas. Arguably one of the most popular strategies is identification by covariate adjustment. Whenever a set Z satisfies the back-door (BD) criterion [26] relative to the pair X and Y , where X and Y represent the treatment and outcome variables, respectively, the causal effect P (Y |do(x)) can be evaluated through the BD adjustment formula ∑ z P (y|x, z)P (z).
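To make the adjustment concrete, the following minimal Python sketch evaluates the back-door adjustment formula ∑z P(y|x, z)P(z) for a binary treatment, outcome, and covariate. The probability tables are made-up numbers for illustration only and do not come from the paper.

# Hypothetical conditional probability tables for binary Z, X, Y.
# These numbers are invented for illustration purposes.
p_z = {0: 0.6, 1: 0.4}                              # P(Z = z)
p_y1_given_xz = {                                   # P(Y = 1 | X = x, Z = z)
    (0, 0): 0.2, (0, 1): 0.5,
    (1, 0): 0.7, (1, 1): 0.9,
}

def backdoor_adjustment(x, y=1):
    """Evaluate P(Y = y | do(X = x)) = sum_z P(y | x, z) P(z)."""
    p_y = p_y1_given_xz if y == 1 else {k: 1.0 - v for k, v in p_y1_given_xz.items()}
    return sum(p_y[(x, z)] * p_z[z] for z in p_z)

if __name__ == "__main__":
    for x in (0, 1):
        print(f"P(Y=1 | do(X={x})) = {backdoor_adjustment(x):.3f}")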
Despite the popularity of the covariate adjustment technique for estimating causal effects, there are still settings in which no BD admissible set exists. For example, consider the causal diagram G in Fig. 1a. There clearly exists no set to block the BD path from X to Y , through the bidirected arrow, X ↔ Y . One may surmise that this effect is not identifiable and the only way of evaluating the interventional distribution is through experimentation. Still, this is not the case. The effect P (Y |do(x)) is identifiable from G and the observed distribution P (x, y, z) over {X,Y, Z} by another classic identification strategy known as the front-door (FD) criterion [26]. In particular, the following FD adjustment formula provides a way of evaluating the interventional distribution: P (Y |do(x)) = ∑ z P (z|x) ∑ x′ P (y|x′, z)P (x′). (1) We refer to Pearl and Mackenzie [28, Sec. 3.4] for an interesting account of the history of the FD criterion, which was the first graphical generalization of the BD case. The FD criterion has been drawing more attention in recent years. For applications of the FD criterion, see, e.g., Hünermund and Bareinboim [13] and Glynn and Kashin [10]. Statistically efficient and doubly robust estimators have recently been developed for estimating the FD estimand in Eq. (1) from finite samples [9], which are still elusive for arbitrary estimands identifiable in a diagram despite recent progress [18, 19, 5, 20, 43]. Both the BD and FD criteria are only descriptive, i.e., they specify whether a specific set Z satisfies the criteria or not, but do not provide a way to find an admissible set Z. In addition, in many situations, it is possible that multiple adjustment sets exist. Consider for example the causal diagram in Fig. 1b, and the task of identifying the effect of X on Y . The distribution P (Y |do(x)) can indeed be identified by the FD criterion with a set Z = {A,B,C} given by the expression in Eq. (1) (with Z replaced with {A,B,C}). Still, what if the variable B is costly to measure or encodes some personal information about patients which it is undesirable to share due to ethical concerns? In this case, the set Z = {A,C} also satisfies the FD criterion and may be used. Even when both B and C are unmeasured, the set Z = {A} is also FD admissible. This simple example shows that a target effect can be estimated using different adjustment sets leading to different probability expressions over different sets of variables, which has important practical implications. Each variable implies different practical challenges in terms of measurement, such as cost, availability, and privacy. Each estimand has different statistical properties, such as sample complexity and variance, which may play a key role in the study design [31, 11, 32, 36]. Algorithms for finding and listing all possible adjustment sets are hence very useful in practice, which will allow scientists to select an adjustment set that exhibits desirable properties. Indeed, algorithms have been developed in recent years for finding one or listing all BD admissible sets [38, 39, 41, 29, 42]. However, no such algorithm is currently available for finding/listing FD admissible sets. The goal of this paper is to close this gap to facilitate the practical applications of the FD criterion for causal effects estimation and help scientists to select estimands with certain desired properties.1 Specifically, the contributions of this paper are as follows: 1.
We develop an algorithm that finds an admissible front-door adjustment set Z in a given causal diagram in polynomial time (if one exists). We solve a variant of the problem that imposes constraints I ⊆ Z ⊆ R for given sets I and R, which allows a scientist to constrain the search to include specific subsets of variables or exclude variables from search perhaps due to cost, availability, or other technical considerations. 2. We develop a sound and complete algorithm that enumerates all front-door adjustment sets with polynomial delay - the algorithm takes polynomial amount of time to return each new admissible set, if one exists, or return failure whenever it exhausted all admissible sets. 1Code is available at https://github.com/CausalAILab/FrontdoorAdjustmentSets. 2 Preliminaries Notation. We write a variable in capital letters (X) and its value as small letters (x). Bold letters, X or x, represent a set of variables or values. We use kinship terminology to denote various relationships in a graph G and denote the parents, ancestors, and descendants of X (including X itself) as Pa(X),An(X), and De(X), respectively. Given a graph G over a set of variables V, a subgraph GX consists of a subset of variables X ⊆ V and their incident edges in G. A graph G can be transformed: GX is the graph resulting from removing all incoming edges to X, and GX is the graph with all outgoing edges from X removed. A DAG G may be moralized into an undirected graph where all directed edges of G are converted into undirected edges, and for every pair of nonadjacent nodes in G that share a common child, an undirected edge that connects such pair is added [22]. A path π from a node X to a node Y in G is a sequence of edges where X and Y are the endpoints of π. A node W on π is said to be a collider if W has converging arrows into W in π, e.g.,→ W ← or↔ W ←. π is said to be blocked by a set Z if there exists a node W on π satisfying one of the following two conditions: 1) W is a collider, and neither W nor any of its descendants are in Z, or 2) W is not a collider, and W is in Z [25]. Given three disjoint sets X,Y, and Z in G, Z is said to d-separate X from Y in G if and only if Z blocks every path from a node in X to a node in Y according to the d-separation criterion [25], and we say that Z is a separator of X and Y in G. Structural Causal Models (SCMs). We use Structural Causal Models (SCMs, for short) [27] as our basic semantical framework. An SCM is a 4-tuple 〈U,V,F, P (u)〉, where 1) U is a set of exogenous (latent) variables, 2) V is a set of endogenous (observed) variables, 3) F is a set of functions {fV }V ∈V that determine the value of endogenous variables, e.g., v ← fV (paV ,uV ) is a function with PAV ⊆ V \ {V } and UV ⊆ U, and 4) P (u) is a joint distribution over the exogenous variables U. Each SCM induces a causal diagram G [3, Def. 13] where every variable v ∈ V is a vertex and directed edges in G correspond to functional relationships as specified in F and dashed bidirected edges represent common exogenous variables between two vertices. Within the structural semantics, performing an intervention and setting X = x is represented through the do-operator, do(X = x), which encodes the operation of replacing the original functions of X (i.e., fX(paX ,uX)) by the constant x and induces a submodelMx and an interventional distribution P (v|do(x)). Classic Causal Effects Identification Criteria. 
Given a causal diagram G over V, an effect P (y|do(x)) is said to be identifiable in G if P (y|do(x)) is uniquely computable from the observed distribution P (v) in any SCM that induces G [27, p. 77]. A path between X and Y with an arrow into X is known as a back-door path from X to Y . The celebrated back-door (BD) criterion [26] provides a sufficient condition for effect identification from observational data, which states that if a set Z of non-descendants of X blocks all BD paths from X to Y, then the causal effect P (y|do(x)) is identified by the BD adjustment formula: P (y|do(x)) = ∑ z P (y|x, z)P (z) (2) Another classic identification condition that is key to the discussion in this paper is known as the front-door criterion, which is defined as follows:
Definition 1. (Front-door (FD) Criterion [26]) A set of variables Z is said to satisfy the front-door criterion relative to the pair (X,Y) if
1. Z intercepts all directed paths from X to Y,
2. There is no unblocked back-door path from X to Z, and
3. All back-door paths from Z to Y are blocked by X, i.e., X is a separator of Z and Y in GZ.
If Z satisfies the FD criterion relative to the pair (X,Y), then P (y|do(x)) is identified by the following FD adjustment formula [26]: P (y|do(x)) = ∑ z P (z|x) ∑ x′ P (y|x′, z)P (x′). (3)
3 Finding A Front-door Adjustment Set
Algorithm 1 FINDFDSET (G,X,Y, I,R)
1: Input: G a causal diagram; X,Y disjoint sets of variables; I,R sets of variables.
2: Output: Z a set of variables satisfying the front-door criterion relative to (X,Y) with the constraint I ⊆ Z ⊆ R.
3: Step 1:
4: R′ ← GETCAND2NDFDC(G,X, I,R)
5: if R′ = ⊥ then: return ⊥
6: Step 2:
7: R′′ ← GETCAND3RDFDC(G,X,Y, I,R′)
8: if R′′ = ⊥ then: return ⊥
9: Step 3:
10: G′ ← GETCAUSALPATHGRAPH(G,X,Y)
11: if TESTSEP(G′,X,Y,R′′) = True then:
12: return Z = R′′
13: else: return ⊥
In this section, we address the following question: given a causal diagram G, is there a set Z that satisfies the FD criterion relative to the pair (X,Y) and, therefore, allows us to identify P (y|do(x)) by the FD adjustment? We solve a more general variant of this question that imposes a constraint I ⊆ Z ⊆ R for given sets I and R. Here, I are variables that must be included in Z (I could be empty) and R are variables that could be included in Z (R could be V\(X∪Y)). Note that the constraint that variables in W cannot be included can be enforced by excluding W from R. Solving this version of the problem will allow scientists to put constraints on candidate adjustment sets based on practical considerations. In addition, this version will form a building block for an algorithm that enumerates all FD admissible sets in a given G - the algorithm LISTFDSETS (shown in Alg. 2 in Section 4) for listing all FD admissible sets will utilize this result during the recursive call. We have developed a procedure called FINDFDSET, shown in Alg. 1, that outputs an FD adjustment set Z relative to (X,Y) satisfying I ⊆ Z ⊆ R, or outputs ⊥ if none exists, given a causal diagram G, disjoint sets of variables X and Y, and two sets of variables I and R. Example 1. Consider the causal graph G′, shown in Fig. 1b, with X = {X}, Y = {Y }, I = ∅ and R = {A,B,C,D}. Then, FINDFDSET outputs {A,B,C}. With I = {C} and R = {A,C}, FINDFDSET outputs {A,C}. With I = {D} and R = {A,B,C,D}, FINDFDSET outputs ⊥ as no FD adjustment set that contains D is available. FINDFDSET runs in three major steps. Steps 1 and 3 call a d-separation oracle, TESTSEP (see line 11 of Alg. 1); a minimal illustrative sketch of such a separation test is given below, before the steps themselves are described.
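The sketch below assumes NetworkX is available (including nx.moral_graph) and that bidirected edges have already been replaced by explicit latent parent nodes (as GETDEP itself does). It uses the standard characterization that Z d-separates X and Y if and only if removing Z disconnects X from Y in the moral graph of the ancestral subgraph. The name test_sep mirrors the paper's TESTSEP, but the code is our own illustration rather than the authors' implementation.

import networkx as nx

def test_sep(G: nx.DiGraph, X, Y, Z) -> bool:
    """Return True iff Z d-separates X from Y in the DAG G.

    Uses the moralization criterion: take the subgraph induced by the
    ancestral closure of X, Y and Z, moralize it, delete Z, and check
    whether any node of X is still connected to any node of Y.
    """
    X, Y, Z = set(X), set(Y), set(Z)
    relevant = X | Y | Z
    for v in list(relevant):
        relevant = relevant | nx.ancestors(G, v)       # ancestral closure
    moral = nx.Graph(nx.moral_graph(G.subgraph(relevant)))
    moral.remove_nodes_from(Z)                         # condition on Z
    return not any(
        x in moral and y in moral and nx.has_path(moral, x, y)
        for x in X for y in Y
    )

if __name__ == "__main__":
    # Toy chain A -> B -> C: conditioning on B separates A from C.
    G = nx.DiGraph([("A", "B"), ("B", "C")])
    print(test_sep(G, {"A"}, {"C"}, {"B"}))   # True
    print(test_sep(G, {"A"}, {"C"}, set()))   # False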
Each step identifies candidate variables that incrementally satisfy each of the conditions of the FD criterion relative to (X,Y). First, FINDFDSET constructs a set of candidate variables R′, with I ⊆ R′ ⊆ R, such that every subset Z with I ⊆ Z ⊆ R′ satisfies the second condition of the FD criterion (i.e., there is no BD path from X to Z). Next, FINDFDSET generates a set of candidate variables R′′, with I ⊆ R′′ ⊆ R′, such that for every variable v ∈ R′′, there exists a set Z with I ⊆ Z ⊆ R′ and v ∈ Z that further satisfies the third condition of the FD criterion, that is, all BD paths from Z to Y are blocked by X. Finally, FINDFDSET outputs a set Z that further satisfies the first condition of the FD criterion - Z intercepts all causal paths from X to Y. Step 1 of FINDFDSET In Step 1, FINDFDSET calls the function GETCAND2NDFDC (presented in Fig. 2) to construct a set R′ that consists of all the variables v ∈ R such that there is no BD path from X to v (R′ is set to empty if there is a BD path from X to I). Then, there is no BD path from X to any set I ⊆ Z ⊆ R′ since, by definition, there is no BD path from X to Z if and only if there is no BD path from X to any v ∈ Z. GETCAND2NDFDC iterates through each variable v ∈ R and checks if there exists an open BD path from X to v by calling the function TESTSEP(GX,X, v, ∅) [41]. TESTSEP(G,A,B,C) returns True if C is a separator of A and B in G, or False otherwise. Therefore, TESTSEP(GX,X, v, ∅) returns True if ∅ is a separator of X and v in GX (i.e., there is no BD path from X to v), or False otherwise. If TESTSEP returns False, then v is removed from R′ because every set Z containing v violates the second condition of the FD criterion relative to (X,Y). Example 2. Continuing Example 1. With I = ∅ and R = {A,B,C,D}, GETCAND2NDFDC outputs a set R′ = {A,B,C}. D is excluded from R′ since there exists a BD path from {X} to {D}, and any set containing D violates the second condition of the FD criterion relative to ({X}, {Y }). Lemma 1 (Correctness of GETCAND2NDFDC). GETCAND2NDFDC(G,X, I,R) generates a set of variables R′ with I ⊆ R′ ⊆ R such that R′ consists of all and only variables v that satisfies the second condition of the FD criterion relative to (X,Y). Further, every subset Z ⊆ R′ satisfies the second condition of the FD criterion relative to (X,Y), and every set Z with I ⊆ Z ⊆ R that satisfies the second condition of the FD criterion relative to (X,Y) must be a subset of R′. Step 2 of FINDFDSET In Step 2, FINDFDSET calls the function GETCAND3RDFDC presented in Fig. 3 to generate a set R′′ consisting of all the variables v ∈ R′ such that there exists a set Z containing v with I ⊆ Z ⊆ R′ that further satisfies the third condition of the FD criterion relative to (X,Y) (i.e., all BD paths from Z to Y are blocked by X). In other words, R′′ is the union of all Z with I ⊆ Z ⊆ R′ that satisfies the third condition of the FD criterion. GETCAND3RDFDC iterates through each variable v ∈ R′ and calls the function GETDEP(G,X,Y, {v},R′) in line 5. Presented in Fig. 4, GETDEP returns a subset Z′ ⊆ R′ \ {v} such that all BD paths from Z = {v} ∪ Z′ to Y are blocked by X (if there exists such Z′). If GETDEP returns ⊥, then there exists no Z containing v that satisfies the third condition of the FD criterion relative to (X,Y), so v is removed from R′′. Example 3. Continuing Example 2. 
Given I = ∅ and R′ = {A,B,C}, GETCAND3RDFDC outputs R′′ = {A,B,C} because for each variable v ∈ R′′, GETDEP finds a set Z′ such that {v} ∪ Z′ satisfies the third condition of the FD criterion relative to ({X}, {Y }). For v = A, Z′ = ∅, for v = B, Z′ = {A}, and for v = C, Z′ = {A}. Next, we explain how the function GETDEP(G,X,Y,T,R′) works; its pseudo-code (Figure 4) is reproduced here.
1: function GETDEP(G,X,Y,T,R′)
2: Output: Z′ ⊆ R′ \ T, a set of variables such that T ∪ Z′ satisfies the third condition of the FD criterion relative to (X,Y).
3: G′ ← GAn(T∪X∪Y)
4: G′ ← G′ with all bidirected edges A ↔ B replaced by a latent node LAB and two edges LAB → A and LAB → B
5: G′′ ← G′T
6: M ← MORALIZE(G′′) then remove X
7: Z′ ← ∅, Q ← T and mark all v ∈ T as visited
8: while Q ≠ ∅ do
9: u ← Q.POP()
10: if u ∈ Y then: return ⊥
11: NR ← GETNEIGHBORS(u,M) ∩ R′ that are not visited
12: G′′ ← G′T∪Z′∪NR
13: M ← MORALIZE(G′′) then remove X
14: N′ ← GETNEIGHBORS(u,M) that are not visited
15: NR′ ← {w ∈ NR | there exists an incoming arrow into w in G}
16: N ← N′ ∪ NR′, Z′ ← Z′ ∪ NR
17: Q.INSERT(N) and mark all w ∈ N as visited
18: end while
19: return Z′
20: end function
Figure 4: A function that facilitates the construction of a set that satisfies the third condition of the FD criterion.
First, GETDEP constructs an undirected graph M in a way that the paths from T to Y in M represent all BD paths from T to Y that cannot be blocked by X in G. The auxiliary function MORALIZE(G) moralizes a given graph G into an undirected graph. The moralization is performed on the subgraph over An(T ∪X ∪Y) instead of G based on the following property: T and Y are d-separated by X in G if and only if X is a T-Y node cut (i.e., removing X disconnects T and Y) in G′ = MORALIZE(GAn(T∪X∪Y)) [21]. GETDEP performs Breadth-First Search (BFS) from T to Y on M and incrementally constructs a subset Z′ ⊆ R′ \ T such that, after BFS terminates, there will be no BD path from Z = T ∪ Z′ to Y that cannot be blocked by X in G. While constructing Z′, GETDEP calls the function GETNEIGHBORS(u,M) (presented in Fig. 8, Appendix) to obtain all observed neighbors of u in M. The BFS starts from each variable v ∈ T. Whenever a non-visited node u is encountered, the set NR, observed neighbors of u that belong to R′, is computed. NR can be added to Z′ because removing all outgoing edges of NR may contribute to disconnecting some BD paths Π from T to Y that cannot be blocked by X in G. In other words, in GT∪Z′∪NR, Π could be disconnected from T to Y where Π are not disconnected in GT∪Z′ . After adding NR to Z′, M must be reconstructed in a way that reflects the setting where all outgoing edges of NR are removed. BFS will be performed on such modified M. GETDEP checks if there exists any set of nodes N to be visited further. N consists of two sets: 1) N′, all observed neighbors of u that are still reachable from u, even after removing all outgoing edges of NR, and 2) NR′ ⊆ NR where for every node w ∈ NR, there exists an incoming arrow into w in G. All nodes in NR′ must be checked because there might exist some BD path π from w to y ∈ Y that cannot be blocked by X in G. If π cannot be disconnected from w to y, then the set Z will violate the third condition of the FD criterion relative to (X,Y). The BFS continues until either a node y ∈ Y is visited, or no more nodes can be visited. If GETDEP returns a set Z′, then we have that all BD paths from T to Y that cannot be blocked by X in G have been disconnected in GZ while ensuring that there exists no BD path from Z to Y that cannot be blocked by X in G.
Therefore, Z satisfies the third condition of the FD criterion relative to (X,Y). Otherwise, if GETDEP returns ⊥ (i.e., y is visited), then there does not exist any Z containing T that satisfies the third condition of the FD criterion relative to (X,Y). This is because there exists a BD path π from t ∈ T to y that cannot be blocked by X in G; removing outgoing edges of all w ∈ R′ that intersect π cannot disconnect π from t to y. Example 4. Expanding on Example 3 to show the use of function GETDEP. Consider the case when v = B. Then, Q = T = {B} and u = B is popped from Q at line 9. We have NR = {A},N′ = ∅,NR′ = {A},N = {A}, and Z′ = {A}. Since N is inserted to Q at line 17, u = A is popped from Q in the next iteration of while loop. Then, NR = ∅,N′ = ∅,NR′ = ∅, and N = ∅. Since Q is empty, the while loop terminates and GETDEP returns Z′ = {A}. Example 5. Illustrating the use of function GETDEP. Let I = ∅, R′ = {B,C}, and v = B. Q = T = {B} and u = B is popped from Q at line 9. NR = ∅,N′ = {A},NR′ = ∅,N = {A}, and Z′ = ∅. Since N is inserted to Q at line 17, u = A is popped from Q in the second iteration of while loop. NR = NR′ = {C}, N′ = N = {C,D, Y }, Z′ = {C}, and Q = {C,D, Y }. On the third iteration, u = C is popped from Q. NR = NR′ = N′ = N = ∅ and Q = {D,Y }. On the fourth iteration, u = D is popped from Q. NR = NR′ = N′ = N = ∅ and Q = {Y }. Next, u = Y is popped from Q. Since u ∈ {Y }, GETDEP returns ⊥ at line 10. There exists no set Z′ ⊆ (R′ \T) = {C} such that T ∪ Z′ satisfies the third condition of the FD criterion relative to ({X}, {Y }). Lemma 2 (Correctness of GETCAND3RDFDC). GETCAND3RDFDC(G,X,Y, I,R′) in Step 2 of Alg. 1 generates a set of variables R′′ where I ⊆ R′′ ⊆ R′. R′′ consists of all and only variables v such that there exists a subset Z with I ⊆ Z ⊆ R′ and v ∈ Z that satisfies the third condition of the FD criterion relative to (X,Y). Further, every set Z with I ⊆ Z ⊆ R that satisfies both the second and the third conditions of the FD criterion must be a subset of R′′. Remark: Even though every set Z with I ⊆ Z ⊆ R′ that satisfies the third condition of the FD criterion must be a subset of R′′, not every subset Z ⊆ R′′ satisfies the third condition of the FD criterion, as illustrated by the following example. Example 6. In Example 3, GETCAND3RDFDC outputs R′′ = {A,B,C}. However, for Z = {B}, the BD path {B ← A→ D → Y } is not blocked by {X}; for Z = {C}, the BD path {C ← A→ D → Y } is not blocked by {X}. On the other hand, we show that Z = R′′ itself satisfies the third condition of the FD criterion, as shown in the following. Lemma 3. R′′ generated by GETCAND3RDFDC (in Step 2 of Alg. 1) satisfies the third condition of the FD criterion, that is, all BD paths from R′′ to Y are blocked by X. Step 3 of FINDFDSET Finally, in Step 3, FINDFDSET looks for a set Z ⊆ R′′ that satisfies the first condition of the FD criterion relative to (X,Y), that is, Z intercepts all causal paths from X to Y. To facilitate checking whether a set Z intercepts all causal paths from X to Y, we introduce the concept of causal path graph defined as follows. Definition 2. (Causal Path Graph) Let G be a causal graph and X,Y disjoint sets of variables. A causal path graph G′ relative to (G,X,Y) is a graph over X ∪ Y ∪ PCP (X,Y), where PCP (X,Y) = (De(X)GX \X) ∩An(Y)GX 2, constructed as follows: 1. Construct a subgraph G′′ = GX∪Y∪PCP (X,Y). 2. Construct a graph G′ = G′′ XY , then remove all bidirected edges from G′. 
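Condition 1 of Definition 1 can also be checked directly, without building the causal path graph: Z intercepts every directed path from X to Y exactly when, after deleting the nodes in Z, no directed path from X to Y remains. The following minimal sketch of that equivalent check is our own illustration (it assumes NetworkX and that only the diagram's directed edges are included; bidirected edges are handled separately).

import networkx as nx

def intercepts_all_causal_paths(G: nx.DiGraph, X, Y, Z) -> bool:
    """Check FD condition 1: every directed path from X to Y passes through Z.

    Equivalently, after removing the nodes in Z there is no directed path
    from any x in X to any y in Y.
    """
    H = G.copy()
    H.remove_nodes_from(Z)
    return not any(
        x in H and y in H and nx.has_path(H, x, y)
        for x in X for y in Y
    )

if __name__ == "__main__":
    # Toy graph X -> Z -> Y plus a direct edge X -> Y:
    # {Z} alone does not intercept every causal path.
    G = nx.DiGraph([("X", "Z"), ("Z", "Y"), ("X", "Y")])
    print(intercepts_all_causal_paths(G, {"X"}, {"Y"}, {"Z"}))  # False
    G.remove_edge("X", "Y")
    print(intercepts_all_causal_paths(G, {"X"}, {"Y"}, {"Z"}))  # True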
Figure 5: Two causal path graphs generated from (a) the causal graph in Fig. 1a, and (b) the causal graph in Fig. 1b. Both preserve all and only causal paths from {X} to {Y } in the original graphs.
A function GETCAUSALPATHGRAPH(G,X,Y) for constructing a causal path graph is presented in Fig. 9 in the Appendix. Example 7. Consider the causal graph G′ shown in Fig. 1b with X = {X}, and Y = {Y }. The causal path graph G′′ relative to (G′, {X}, {Y }) is shown in Fig. 5b. All causal paths from {X} to {Y } in G′ are present in G′′. After constructing a causal path graph G′ relative to (G,X,Y), we use the function TESTSEP(G′,X,Y,Z) to check if Z is a separator of X and Y in G′. Based on the following lemma, Z satisfies the first condition of the FD criterion relative to (X,Y) if and only if TESTSEP returns True. Lemma 4. Let G be a causal graph and X,Y,Z disjoint sets of variables. Let G′ be the causal path graph relative to (G,X,Y). Then, Z satisfies the first condition of the FD criterion relative to (X,Y) if and only if Z is a separator of X and Y in G′. Given the set R′′ that contains every set Z with I ⊆ Z ⊆ R that satisfies both the second and the third conditions of the FD criterion (Lemma 2), it may appear that we need to search for a set Z ⊆ R′′ that satisfies the first condition of the FD criterion. We show instead that all we need is to check whether the set R′′ itself, which has been shown to satisfy the second and third conditions by Lemma 3, satisfies the first condition. This result is summarized in the following lemma. 2A notation introduced by van der Zander et al. [41] to denote the set of variables on proper causal paths from X to Y. Lemma 5. There exists a set Z0 satisfying the FD criterion relative to (X,Y) with I ⊆ Z0 ⊆ R if and only if R′′ generated by GETCAND3RDFDC (in Step 2 of Alg. 1) satisfies the FD criterion relative to (X,Y). Example 8. Continuing Example 3. In Step 3, FINDFDSET outputs Z = R′′ = {A,B,C} since Z is a separator of {X} and {Y } in the causal path graph G′′ in Fig. 5b. The results in this section are summarized as follows. Theorem 1 (Correctness of FINDFDSET). Let G be a causal graph, X,Y disjoint sets of variables, and I,R sets of variables such that I ⊆ R. Then, FINDFDSET(G,X,Y, I,R) outputs a set Z with I ⊆ Z ⊆ R that satisfies the FD criterion relative to (X,Y), or outputs ⊥ if none exists, in O(n^3(n+m)) time, where n and m represent the number of nodes and edges in G.
4 Enumerating Front-door Adjustment Sets
Algorithm 2 LISTFDSETS (G,X,Y, I,R)
1: Input: G a causal diagram; X,Y disjoint sets of variables; I,R sets of variables.
2: Output: Listing front-door adjustment sets Z relative to (X,Y) where I ⊆ Z ⊆ R.
3: if FINDFDSET(G,X,Y, I,R) ≠ ⊥ then:
4: if I = R then: Output I
5: else:
6: v ← any variable from R \ I
7: LISTFDSETS(G,X,Y, I ∪ {v},R)
8: LISTFDSETS(G,X,Y, I,R \ {v})
Our goal in this section is to develop an algorithm that lists all FD adjustment sets in a causal diagram. In general, there may exist an exponential number of such sets, which means that any listing algorithm will take exponential time to list them all. We will instead look for an algorithm that has an interesting property known as polynomial delay [38]. In words, poly-delay algorithms output the first answer (or indicate none is available) in polynomial time, and take polynomial time to output each consecutive answer as well.
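The include/exclude recursion behind Alg. 2 can be sketched independently of the graphical machinery. In the sketch below, the role of FINDFDSET is abstracted as a feasibility oracle feasible(I, R) that reports whether some admissible set Z with I ⊆ Z ⊆ R exists; the brute-force oracle in the demo is a stand-in for illustration only (the point of the paper is that FINDFDSET answers this query in polynomial time, which is what yields polynomial delay).

from itertools import combinations

def list_sets(feasible, I, R):
    """Enumerate all admissible sets Z with I <= Z <= R, following the
    include/exclude recursion of Alg. 2.

    `feasible(I, R)` must return True iff some admissible set Z with
    I <= Z <= R exists (the role played by FINDFDSET in the paper).
    """
    I, R = frozenset(I), frozenset(R)
    if not feasible(I, R):
        return
    if I == R:
        yield set(I)
        return
    v = next(iter(R - I))
    yield from list_sets(feasible, I | {v}, R)   # branch: v must be included
    yield from list_sets(feasible, I, R - {v})   # branch: v is excluded

if __name__ == "__main__":
    # Toy admissibility notion (NOT the FD criterion): a set is admissible
    # iff it contains "A".  The brute-force oracle below checks all subsets.
    def admissible(Z):
        return "A" in Z

    def brute_force_oracle(I, R):
        free = list(R - I)
        return any(admissible(set(I) | set(c))
                   for k in range(len(free) + 1)
                   for c in combinations(free, k))

    for Z in list_sets(brute_force_oracle, set(), {"A", "B"}):
        print(sorted(Z))   # prints ['A', 'B'] and ['A'] (in some order)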
Consider the following example. Example 9. Consider the three causal graphs in Fig. 6. In G, shown in Fig. 6a, there exist 9 valid FD adjustment sets relative to ({X}, {Y }). In G′, presented in Fig. 6b, two variables A3 and B3 are added to G, forming an additional causal path from X to Y . 27 FD adjustment sets relative to ({X}, {Y }) are available in G′. If another causal path X → A4 → B4 → Y is added to G′, then there are 81 FD adjustment sets relative to ({X}, {Y }). As shown in Fig. 6c, in a graph G′′ with a similar pattern, with causal paths X → Ai → Bi → Y, i = 1, . . . , n, there are at least 3^n FD adjustment sets. We have developed an algorithm named LISTFDSETS, shown in Alg. 2, that lists all FD adjustment sets Z relative to (X,Y) satisfying I ⊆ Z ⊆ R with polynomial delay, given a causal diagram G, disjoint sets of variables X and Y, and two sets of variables I and R. Example 10. Consider the causal graph G′ shown in Fig. 1b with X = {X}, Y = {Y }, I = ∅ and R = {A,B,C,D}. LISTFDSETS outputs {A,B,C}, {A,B}, {A,C}, {A} one by one, and finally stops as no more adjustment sets exist. The algorithm LISTFDSETS takes the same search strategy as the listing algorithm LISTSEP [41] that enumerates all BD adjustment sets with polynomial delay. LISTFDSETS implicitly constructs a binary search tree where each tree node N (I′,R′) represents the collection of all FD adjustment sets Z relative to (X,Y) with I′ ⊆ Z ⊆ R′. The search starts from the root tree node N (I,R), indicating that LISTFDSETS will list all FD adjustment sets Z relative to (X,Y) with I ⊆ Z ⊆ R. Upon visiting a node N (I′,R′), LISTFDSETS first calls the function FINDFDSET (line 3) to decide whether it is necessary to search further from N . If FINDFDSET outputs ⊥, then there does not exist any FD adjustment set Z0 with I′ ⊆ Z0 ⊆ R′ and there is no need to search further. Otherwise, N spawns two children, N1 and N2, and LISTFDSETS continues the search over each child separately. N1 in line 7 represents the collection of all FD adjustment sets Z1 relative to (X,Y) where I′ ∪ {v} ⊆ Z1 ⊆ R′. On the other hand, N2 in line 8 represents the collection of all FD adjustment sets Z2 where I′ ⊆ Z2 ⊆ R′ \ {v}. N1 and N2 are disjoint and thus the search never overlaps, which is crucial to guaranteeing that LISTFDSETS runs with polynomial delay. Finally, a leaf tree node L is reached when I′ = R′, and LISTFDSETS outputs a valid FD adjustment set I′. Example 11. Continuing from Example 10. Fig. 7 shows a search tree generated by running LISTFDSETS(G′, {X}, {Y }, ∅, {A,B,C,D}). Initially, the search starts from the root tree node N (∅, {A,B,C,D}). Since FINDFDSET returns a set {A,B,C}, N branches out into two children N ′({A}, {A,B,C,D}) and N ′′(∅, {B,C,D}). The search continues from the left child N ′ until reaching the leaf tree node L1({A,B,C,D}, {A,B,C,D}) where FINDFDSET returns ⊥. LISTFDSETS backtracks to the parent tree node N1({A,B,C}, {A,B,C,D}) and then checks the next leaf L2({A,B,C}, {A,B,C}) where FINDFDSET returns a set {A,B,C}, a valid FD admissible set relative to ({X}, {Y }). LISTFDSETS outputs {A,B,C}. Next, LISTFDSETS backtracks to the tree node N2({A,B}, {A,B,C,D}) and reaches the leaf L3({A,B}, {A,B}) where FINDFDSET outputs {A,B}, and thus LISTFDSETS outputs {A,B}. LISTFDSETS continues and outputs two sets {A,C} and {A} in order. Finally, LISTFDSETS backtracks to the root N and checks the right child N ′′ where FINDFDSET returns ⊥. LISTFDSETS does not search further from N ′′ and stops as no more tree nodes are left to be visited.
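For sanity-checking any set produced by such an enumeration, the three conditions of Definition 1 can also be verified directly for a single candidate Z. The sketch below is our own illustration, not the authors' code; it assumes a NetworkX version that exposes nx.d_separated (newer releases rename it nx.is_d_separator), represents each bidirected edge by an explicit latent parent (as in GETDEP), and is demonstrated on the classic front-door graph of Fig. 1a (X → Z → Y with X and Y confounded).

import networkx as nx

def expand_bidirected(directed_edges, bidirected_edges):
    """Build a DAG in which every bidirected edge A <-> B is replaced by a
    fresh latent parent L_AB -> A, L_AB -> B (as done inside GETDEP)."""
    G = nx.DiGraph(directed_edges)
    for a, b in bidirected_edges:
        latent = f"L_{a}{b}"
        G.add_edges_from([(latent, a), (latent, b)])
    return G

def satisfies_fd(G: nx.DiGraph, X, Y, Z, latents=()) -> bool:
    """Brute-force check of the three conditions of Definition 1 for one Z."""
    X, Y, Z = set(X), set(Y), set(Z)
    # Condition 1: Z intercepts every directed path from X to Y.
    H = G.copy()
    H.remove_nodes_from(Z | set(latents))
    cond1 = not any(x in H and y in H and nx.has_path(H, x, y)
                    for x in X for y in Y)
    # Condition 2: no unblocked back-door path from X to Z, i.e. X and Z are
    # d-separated by the empty set once all edges leaving X are removed.
    G_x = G.copy()
    G_x.remove_edges_from(list(G.out_edges(X)))
    cond2 = nx.d_separated(G_x, X, Z, set())
    # Condition 3: all back-door paths from Z to Y are blocked by X, i.e.
    # X separates Z and Y once all edges leaving Z are removed.
    G_z = G.copy()
    G_z.remove_edges_from(list(G.out_edges(Z)))
    cond3 = nx.d_separated(G_z, Z, Y, X)
    return cond1 and cond2 and cond3

if __name__ == "__main__":
    # Classic front-door graph (as in Fig. 1a): X -> Z -> Y with X <-> Y.
    G = expand_bidirected([("X", "Z"), ("Z", "Y")], [("X", "Y")])
    print(satisfies_fd(G, {"X"}, {"Y"}, {"Z"}, latents={"L_XY"}))   # True
    # Adding a direct edge X -> Y breaks condition 1 for Z = {Z}.
    G2 = expand_bidirected([("X", "Z"), ("Z", "Y"), ("X", "Y")], [("X", "Y")])
    print(satisfies_fd(G2, {"X"}, {"Y"}, {"Z"}, latents={"L_XY"}))  # False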
Our results are summarized in the following theorem, which provides the correctness, completeness, and poly-delay complexity of the proposed algorithm. Note that the completeness of the algorithm means that it lists “all” valid sets satisfying the FD criterion. On the other hand, Pearl’s FD criterion is not complete in the sense that there might exist a causal effect that can be computed by the FD adjustment formula (Eq. (3)) but the set Z does not satisfy the FD criterion. Theorem 2 (Correctness of LISTFDSETS). Let G be a causal graph, X,Y disjoint sets of variables, and I,R sets of variables. LISTFDSETS(G,X,Y, I,R) enumerates all and only sets Z with I ⊆ Z ⊆ R that satisfy the FD criterion relative to (X,Y) in O(n^4(n+m)) delay, where n and m represent the number of nodes and edges in G. 5 Discussion and Conclusions This work has some limitations and can be extended in several directions. First, Pearl’s FD criterion is not complete with respect to the FD adjustment formula (Eq. (3)). While the BD criterion has been generalized to a complete criterion for BD adjustment [35], it is an interesting open problem to come up with a complete criterion for sets satisfying the FD adjustment. Second, this work assumes that the causal diagram is given (or inferred based on scientists’ domain knowledge and/or data). Although this assumption is quite common throughout the causal inference literature, more recent work has moved to finding BD admissible sets given incomplete or partially specified causal diagrams, e.g., maximal ancestral graphs (MAGs) [41], partial ancestral graphs (PAGs) [29], and completed partially directed acyclic graphs (CPDAGs) [29]. There are algorithms capable of performing causal effect identification in a data-driven fashion from an equivalence class [14, 15, 16, 17]. It is an interesting and certainly challenging direction for future work to develop algorithms for finding FD admissible sets in these types of graphs. Some recent work has proposed data-driven methods for finding and listing BD admissible sets, using an anchor variable, when the underlying causal diagram is unknown [7, 6, 33]. A criterion for testing FD-admissibility of a given set using data and an anchor variable is also available [4]. Other interesting future research topics include developing algorithms for finding minimal, minimum, and minimum cost FD adjustment sets, which are available for the BD adjustment sets [42], as well as algorithms for finding conditional FD adjustment sets [13, 9]. Having said all of that, we believe that the results developed in this paper are a necessary step towards solving these more challenging problems. After all, we started from the observation that identification is not restricted to BD adjustment, and Pearl’s FD criterion provides a classic strategy for estimating causal effects from observational data and qualitative knowledge encoded in the form of a causal diagram. The criterion has been drawing more attention in recent years, and statistically efficient and doubly robust estimators have been developed for estimating the FD estimand from finite samples. In this paper, we develop algorithms that, given a causal diagram G, find an admissible FD set (Alg. 1 FINDFDSET, Thm. 1) and enumerate all admissible FD sets with polynomial delay (Alg. 2 LISTFDSETS, Thm. 2).
We hope that the methods and algorithms proposed in this work will help scientists to use the FD strategy for causal effect estimation in practical applications, and that they will be useful in study design for selecting covariates based on desired properties, including cost, feasibility, and statistical power.
1. What are the key contributions and strengths of the paper regarding the implementation of front door criteria in causal graphical models? 2. What are the weaknesses of the paper, particularly in terms of demonstration and evaluation? 3. How does the reviewer assess the complexity of the algorithms presented in the paper? 4. Are there any questions or concerns regarding the consideration of "evidence variables" in the algorithm? 5. Would implementing the pseudo-codes and providing an open-source version of the program be beneficial for potential users?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper shows the detailed steps and pseudo-code for finding front-door adjustment sets in causal graphical models. Starting from the definition of the front-door criterion, this paper goes through the implementation of each condition and offers a complexity analysis. Strengths And Weaknesses The strength of this paper is showing the detailed steps for implementing the front-door criterion in causal graphical models, and it finally extends this by listing all front-door sets satisfying Pearl's front-door criterion. The weakness is that no demonstration or evaluation is available for the given algorithms. Questions I think the complex statements are better shown in the main text. This algorithm works by removing nodes from the full set of candidates, which could be all nodes except for X and Y. Does this algorithm take into account "evidence variables"? If we find a full list of FD sets from a causal graphical model, and if we introduce evidence variables, should we restart from the beginning? Have you implemented these pseudo-codes? I think it is better to double-check them by implementation, on top of the proofs and hand-worked examples. It is a polynomial-time algorithm in terms of big-O analysis, but the overall running time may be longer than one might imagine when there are many nodes and edges in the graph. This algorithm can serve as a baseline for possible future work that improves running times and may be useful for implementing do-calculus. The pseudo-codes are already very close to scripting languages such as Python, and, if possible, showing results from running the program or providing it as open source could be even more helpful to scientists who may use this FD strategy for experiment design. Limitations I think this work is not relevant to this section.
NIPS
Title Finding and Listing Front-door Adjustment Sets Abstract Identifying the effects of new interventions from data is a significant challenge found across a wide range of the empirical sciences. A well-known strategy for identifying such effects is Pearl’s front-door (FD) criterion [26]. The definition of the FD criterion is declarative, only allowing one to decide whether a specific set satisfies the criterion. In this paper, we present algorithms for finding and enumerating possible sets satisfying the FD criterion in a given causal diagram. These results are useful in facilitating the practical applications of the FD criterion for causal effects estimation and helping scientists to select estimands with desired properties, e.g., based on cost, feasibility of measurement, or statistical power. 1 Introduction Learning cause and effect relationships is a fundamental challenge across data-driven fields. For example, health scientists developing a treatment for curing lung cancer need to understand how a new drug affects the patient’s body and the tumor’s progression. The distillation of causal relations is indispensable to understanding the dynamics of the underlying system and how to perform decisionmaking in a principled and systematic fashion [27, 37, 2, 30, 1, 23, 24]. One of the most common methods for learning causal relations is through Randomized Controlled Trials (RCTs, for short) [8]. RCTs are considered as the “gold standard” in many fields of empirical research and are used throughout the health and social sciences as well as machine learning and AI. In practice, however, RCTs are often hard to perform due to ethical, financial, and technical issues. For instance, it may be unethical to submit an individual to a certain condition if such condition may have some potentially negative effects (e.g., smoking). Whenever RCTs cannot be conducted, one needs to resort to analytical methods to infer causal relations from observational data, which appears in the literature as the problem of causal effect identification [26, 27]. The causal identification problem asks whether the effect of holding a variableX at a constant value x on a variable Y , written as P (Y |do(X = x)), or P (Y |do(x)), can be computed from a combination of observational data and causal assumptions. One of the most common ways of eliciting these assumptions is in the form of a causal diagram represented by a directed acyclic graph (DAG), where its nodes and edges describe the underlying data generating process. For instance, in Fig. 1a, three nodes X,Z, Y represent variables, a directed edge X → Z indicates that X causes Z, and a dashedbidirected edge X ↔ Y represents that X and Y are confounded by unmeasured (latent) factors. Different methods can solve the identification problem, including Pearl’s celebrated do-calculus [26] as well as different algorithmic solutions [40, 34, 12]. In practice, researchers often rely on identification strategies that generate well-known identification formulas. One of the arguably most popular strategies is identification by covariate adjustment. Whenever a set Z satisfies the back-door (BD) criterion [26] relative to the pair X and Y , where X and Y represent the treatment and outcome variables, respectively, the causal effect P (Y |do(x)) can be evaluated through the BD adjustment formula ∑ z P (y|x, z)P (z). 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
Despite the popularity of the covariate adjustment technique for estimating causal effects, there are still settings in which no BD admissible set exists. For example, consider the causal diagram G in Fig. 1a. There clearly exists no set to block the BD path from X to Y , through the bidirected arrow, X ↔ Y . One may surmise that this effect is not identifiable and the only one of evaluating the interventional distribution is through experimentation. Still, this is not the case. The effect P (Y |do(x)) is identifiable from G and the observed distribution P (x, y, z) over {X,Y, Z} by another classic identification strategy known as the front-door (FD) criterion [26]. In particular, through the following FD adjustment formula provides the way of evaluating the interventional distribution: P (Y |do(x)) = ∑ z P (z|x) ∑ x′ P (y|x′, z)P (x′). (1) We refer to Pearl and Mackenzie [28, Sec. 3.4] for an interesting account of the history of the FD criterion, which was the first graphical generalization of the BD case. The FD criterion is drawing more attention in recent years. For applications of the FD criterion, see, e.g., Hünermund and Bareinboim [13] and Glynn and Kashin [10]. Statistically efficient and doubly robust estimators have recently been developed for estimating the FD estimand in Eq. (1) from finite samples [9], which are still elusive for arbitrary estimands identifiable in a diagram despite recent progress [18, 19, 5, 20, 43]. Both the BD and FD criteria are only descriptive, i.e., they specify whether a specific set Z satisfies the criteria or not, but do not provide a way to find an admissible set Z. In addition, in many situations, it is possible that multiple adjustment sets exist. Consider for example the causal diagram in Fig. 1b, and the task of identifying the effect of X on Y . The distribution P (Y |do(x)) can indeed be identified by the FD criterion with a set Z = {A,B,C} given by the expression in Eq. (1) (with Z replaced with {A,B,C}). Still, what if the variable B is costly to measure or encodes some personal information about patients which is undesirable to be shared due to ethical concerns? In this case, the set Z = {A,C} also satisfies the FD criterion and may be used. Even when both B and C are unmeasured, the set Z = {A} is also FD admissible. This simple example shows that a target effect can be estimated using different adjustment sets leading to different probability expressions over different set of variables, which has important practical implications. Each variable implies different practical challenges in terms of measurement, such as cost, availability, privacy. Each estimand has different statistical properties in terms of sample complexity, variance, which may play a key role in the study design [31, 11, 32, 36]. Algorithms for finding and listing all possible adjustment sets are hence very useful in practice, which will allow scientists to select an adjustment set that exhibits desirable properties. Indeed, algorithms have been developed in recent years for finding one or listing all BD admissible sets [38, 39, 41, 29, 42]. However, no such algorithm is currently available for finding/listing FD admissible sets. The goal of this paper is to close this gap to facilitate the practical applications of the FD criterion for causal effects estimation and help scientists to select estimand with certain desired properties 1. Specifically, the contributions of this paper are as follows: 1. 
We develop an algorithm that finds an admissible front-door adjustment set Z in a given causal diagram in polynomial time (if one exists). We solve a variant of the problem that imposes constraints I ⊆ Z ⊆ R for given sets I and R, which allows a scientist to constrain the search to include specific subsets of variables or exclude variables from search perhaps due to cost, availability, or other technical considerations. 2. We develop a sound and complete algorithm that enumerates all front-door adjustment sets with polynomial delay - the algorithm takes polynomial amount of time to return each new admissible set, if one exists, or return failure whenever it exhausted all admissible sets. 1Code is available at https://github.com/CausalAILab/FrontdoorAdjustmentSets. 2 Preliminaries Notation. We write a variable in capital letters (X) and its value as small letters (x). Bold letters, X or x, represent a set of variables or values. We use kinship terminology to denote various relationships in a graph G and denote the parents, ancestors, and descendants of X (including X itself) as Pa(X),An(X), and De(X), respectively. Given a graph G over a set of variables V, a subgraph GX consists of a subset of variables X ⊆ V and their incident edges in G. A graph G can be transformed: GX is the graph resulting from removing all incoming edges to X, and GX is the graph with all outgoing edges from X removed. A DAG G may be moralized into an undirected graph where all directed edges of G are converted into undirected edges, and for every pair of nonadjacent nodes in G that share a common child, an undirected edge that connects such pair is added [22]. A path π from a node X to a node Y in G is a sequence of edges where X and Y are the endpoints of π. A node W on π is said to be a collider if W has converging arrows into W in π, e.g.,→ W ← or↔ W ←. π is said to be blocked by a set Z if there exists a node W on π satisfying one of the following two conditions: 1) W is a collider, and neither W nor any of its descendants are in Z, or 2) W is not a collider, and W is in Z [25]. Given three disjoint sets X,Y, and Z in G, Z is said to d-separate X from Y in G if and only if Z blocks every path from a node in X to a node in Y according to the d-separation criterion [25], and we say that Z is a separator of X and Y in G. Structural Causal Models (SCMs). We use Structural Causal Models (SCMs, for short) [27] as our basic semantical framework. An SCM is a 4-tuple 〈U,V,F, P (u)〉, where 1) U is a set of exogenous (latent) variables, 2) V is a set of endogenous (observed) variables, 3) F is a set of functions {fV }V ∈V that determine the value of endogenous variables, e.g., v ← fV (paV ,uV ) is a function with PAV ⊆ V \ {V } and UV ⊆ U, and 4) P (u) is a joint distribution over the exogenous variables U. Each SCM induces a causal diagram G [3, Def. 13] where every variable v ∈ V is a vertex and directed edges in G correspond to functional relationships as specified in F and dashed bidirected edges represent common exogenous variables between two vertices. Within the structural semantics, performing an intervention and setting X = x is represented through the do-operator, do(X = x), which encodes the operation of replacing the original functions of X (i.e., fX(paX ,uX)) by the constant x and induces a submodelMx and an interventional distribution P (v|do(x)). Classic Causal Effects Identification Criteria. 
Given a causal diagram G over V, an effect P (y|do(x)) is said to be identifiable in G if P (y|do(x)) is uniquely computable from the observed distribution P (v) in any SCM that induces G [27, p. 77]. A path between X and Y with an arrow into X is known as a back-door path from X to Y . The celebrated back-door (BD) criterion [26] provides a sufficient condition for effect identification from observational data, which states that if a set Z of non-descendants of X blocks all BD paths from X to Y, then the causal effect P (y|do(x)) is identified by the BD adjustment formula: P (y|do(x)) = ∑ z P (y|x, z)P (z) (2) Another classic identification condition that is key to the discussion in this paper is known as the front-door criterion, which is defined as follows: Definition 1. (Front-door (FD) Criterion [26]) A set of variables Z is said to satisfy the front-door criterion relative to the pair (X,Y) if 1. Z intercepts all directed paths from X to Y, 2. There is no unblocked back-door path from X to Z, and 3. All back-door paths from Z to Y are blocked by X, i.e., X is a separator of Z and Y in GZ. If Z satisfies the FD criterion relative to the pair (X,Y), then P (y|do(x)) is identified by the following FD adjustment formula [26]: P (y|do(x)) = ∑ z P (z|x) ∑ x′ P (y|x′, z)P (x′). (3) 3 Finding A Front-door Adjustment Set Algorithm 1 FINDFDSET (G,X,Y, I,R) 1: Input: G a causal diagram; X,Y disjoint sets of variables; I,R sets of variables. 2: Output: Z a set of variables satisfying the front- door criterion relative to (X,Y) with the constraint I ⊆ Z ⊆ R. 3: Step 1: 4: R′ ← GETCAND2NDFDC(G,X, I,R) 5: if R′ =⊥ then: return ⊥ 6: Step 2: 7: R′′ ← GETCAND3RDFDC(G,X,Y, I,R′) 8: if R′′ =⊥ then: return ⊥ 9: Step 3: 10: G′ ← GETCAUSALPATHGRAPH(G,X,Y) 11: if TESTSEP(G′,X,Y,R′′) = True then: 12: return Z = R′′ 13: else: return ⊥ In this section, we address the following question: given a causal diagram G, is there a set Z that satisfies the FD criterion relative to the pair (X,Y) and, therefore, allows us to identify P (y|do(x)) by the FD adjustment? We solve a more general variant of this question that imposes a constraint I ⊆ Z ⊆ R for given sets I and R. Here, I are variables that must be included in Z (I could be empty) and R are variables that could be included in Z (R could be V\(X∪Y)). Note the constraint that variables in W cannot be included can be enforced by excluding W from R. Solving this version of the problem will allow scientists to put constraints on candidate adjustment sets based on practical considerations. In addition, this version will form a building block for an algorithm that enumerates all FD admissible sets in a given G - the algorithm LISTFDSETS (shown in Alg. 2 in Section 4) for listing all FD admissible sets will utilize this result during the recursive call. We have developed a procedure called FINDFDSET shown in Alg. 1 that outputs a FD adjustment set Z relative to (X,Y) satisfying I ⊆ Z ⊆ R, or outputs ⊥ if none exists, given a causal diagram G, disjoint sets of variables X and Y, and two sets of variables I and R. Example 1. Consider the causal graph G′, shown in Fig. 1b, with X = {X}, Y = {Y }, I = ∅ and R = {A,B,C,D}. Then, FINDFDSET outputs {A,B,C}. With I = {C} and R = {A,C}, FINDFDSET outputs {A,C}. With I = {D} and R = {A,B,C,D}, FINDFDSET outputs ⊥ as no FD adjustment set that contains D is available. FINDFDSET runs in three major steps. 
Each step identifies candidate variables that incrementally satisfy each of the conditions of the FD criterion relative to (X,Y). First, FINDFDSET constructs a set of candidate variables R′, with I ⊆ R′ ⊆ R, such that every subset Z with I ⊆ Z ⊆ R′ satisfies the second condition of the FD criterion (i.e., there is no BD path from X to Z). Next, FINDFDSET generates a set of candidate variables R′′, with I ⊆ R′′ ⊆ R′, such that for every variable v ∈ R′′, there exists a set Z with I ⊆ Z ⊆ R′ and v ∈ Z that further satisfies the third condition of the FD criterion, that is, all BD paths from Z to Y are blocked by X. Finally, FINDFDSET outputs a set Z that further satisfies the first condition of the FD criterion - Z intercepts all causal paths from X to Y. Step 1 of FINDFDSET In Step 1, FINDFDSET calls the function GETCAND2NDFDC (presented in Fig. 2) to construct a set R′ that consists of all the variables v ∈ R such that there is no BD path from X to v (R′ is set to empty if there is a BD path from X to I). Then, there is no BD path from X to any set I ⊆ Z ⊆ R′ since, by definition, there is no BD path from X to Z if and only if there is no BD path from X to any v ∈ Z. GETCAND2NDFDC iterates through each variable v ∈ R and checks if there exists an open BD path from X to v by calling the function TESTSEP(GX,X, v, ∅) [41]. TESTSEP(G,A,B,C) returns True if C is a separator of A and B in G, or False otherwise. Therefore, TESTSEP(GX,X, v, ∅) returns True if ∅ is a separator of X and v in GX (i.e., there is no BD path from X to v), or False otherwise. If TESTSEP returns False, then v is removed from R′ because every set Z containing v violates the second condition of the FD criterion relative to (X,Y). Example 2. Continuing Example 1. With I = ∅ and R = {A,B,C,D}, GETCAND2NDFDC outputs a set R′ = {A,B,C}. D is excluded from R′ since there exists a BD path from {X} to {D}, and any set containing D violates the second condition of the FD criterion relative to ({X}, {Y }). Lemma 1 (Correctness of GETCAND2NDFDC). GETCAND2NDFDC(G,X, I,R) generates a set of variables R′ with I ⊆ R′ ⊆ R such that R′ consists of all and only variables v that satisfies the second condition of the FD criterion relative to (X,Y). Further, every subset Z ⊆ R′ satisfies the second condition of the FD criterion relative to (X,Y), and every set Z with I ⊆ Z ⊆ R that satisfies the second condition of the FD criterion relative to (X,Y) must be a subset of R′. Step 2 of FINDFDSET In Step 2, FINDFDSET calls the function GETCAND3RDFDC presented in Fig. 3 to generate a set R′′ consisting of all the variables v ∈ R′ such that there exists a set Z containing v with I ⊆ Z ⊆ R′ that further satisfies the third condition of the FD criterion relative to (X,Y) (i.e., all BD paths from Z to Y are blocked by X). In other words, R′′ is the union of all Z with I ⊆ Z ⊆ R′ that satisfies the third condition of the FD criterion. GETCAND3RDFDC iterates through each variable v ∈ R′ and calls the function GETDEP(G,X,Y, {v},R′) in line 5. Presented in Fig. 4, GETDEP returns a subset Z′ ⊆ R′ \ {v} such that all BD paths from Z = {v} ∪ Z′ to Y are blocked by X (if there exists such Z′). If GETDEP returns ⊥, then there exists no Z containing v that satisfies the third condition of the FD criterion relative to (X,Y), so v is removed from R′′. Example 3. Continuing Example 2. 
Given I = ∅ and R′ = {A,B,C}, GETCAND3RDFDC outputs R′′ = {A,B,C} because for each variable v ∈ R′′, GETDEP finds a set Z′ such that {v} ∪ Z′ satisfies the third condition of the FD criterion relative to ({X}, {Y }). For v = A, Z′ = ∅, for v = B, Z′ = {A}, and for v = C, Z′ = {A}. Next, we explain how the function GETDEP(G,X,Y,T,R′) works. First, GETDEP constructs an undirected graphM in a way that the paths from T to Y inM represent all BD paths from T to Y that cannot be blocked by X in G. The auxiliary function MORALIZE(G) moralizes a given graph G into an undirected graph. The moralization is performed on the subgraph over An(T ∪X ∪Y) instead of G based on the following property: T and Y are d-separated by X in G if and only if X is a T-Y node cut (i.e., removing X disconnects T and Y) in G′ = MORALIZE(GAn(T∪X∪Y)) [21]. GETDEP performs Breadth-First Search (BFS) from T to Y onM and incrementally constructs a subset Z′ ⊆ R′ \ T such that, after BFS terminates, there will be no BD path from Z = T ∪ Z′ to Y that cannot be blocked by X in G. While constructing Z′, GETDEP calls the function GETNEIGHBORS(u,M) (presented in Fig. 8, Appendix) to obtain all observed neighbors of u inM. The BFS starts from each variable v ∈ T. Whenever a non-visited node u is encountered, the set NR, observed neighbors of u that belong to R′, is computed. NR can be added to Z′ because removing all outgoing edges of NR may contribute to disconnecting some BD paths Π from T to Y that cannot be blocked by X in G. In other words, in GT∪Z′∪NR, Π could be disconnected from T to 1: function GETDEP(G,X,Y,T,R′) 2: Output: Z′ ⊆ R′ \T, a set of variables such that T ∪ Z′ satisfies the third condition of the FD criterion relative to (X,Y). 3: G′ ← GAn(T∪X∪Y) 4: G′ ← G′ with all bidirected edges A ↔ B replaced by a latent node LAB and two edges LAB → A and LAB → B 5: G′′ ← G′T 6: M← MORALIZE(G′′) then remove X 7: Z′ ← ∅,Q← T and mark all v ∈ T as visited 8: while Q 6= ∅ do 9: u← Q.POP() 10: if u ∈ Y then: return ⊥ 11: NR← GETNEIGHBORS(u,M) ∩R′ that are not visited 12: G′′ ← G′T∪Z′∪NR 13: M← MORALIZE(G′′) then remove X 14: N′ ← GETNEIGHBORS(u,M) that are not visited 15: NR′ ← {w ∈ NR| there exists an incoming arrow into w in G} 16: N← N′ ∪NR′,Z′ ← Z′ ∪NR 17: Q.INSERT(N) and mark all w ∈ N as visited 18: end while 19: return Z′ 20: end function Figure 4: A function that facilitates the construction of a set that satisfies the third condition of the FD criterion. Y where Π are not disconnected in GT∪Z′ . After adding NR to Z′,M must be reconstructed in a way that reflects the setting where all outgoing edges of NR are removed. BFS will be performed on such modifiedM. GETDEP checks if there exists any set of nodes N to be visited further. N consists of two sets: 1) N′, all observed neighbors of u that are still reachable from u, even after removing all outgoing edges of NR, and 2) NR′ ⊆ NR where for every node w ∈ NR, there exists an incoming arrow into w in G. All nodes in NR′ must be checked because there might exist some BD path π from w to y ∈ Y that cannot be blocked by X in G. If π cannot be disconnected from w to y, then the set Z will violate the third condition of the FD criterion relative to (X,Y). The BFS continues until either a node y ∈ Y is visited, or no more nodes can be visited. If GETDEP returns a set Z′, then we have that all BD paths from T to Y that cannot be blocked by X in G have been disconnected in GZ while ensuring that there exists no BD path from Z to Y that cannot be blocked by X in G. 
Therefore, Z satisfies the third condition of the FD criterion relative to (X,Y). Otherwise, if GETDEP returns ⊥ (i.e., some y ∈ Y is visited), then there does not exist any Z containing T that satisfies the third condition of the FD criterion relative to (X,Y). This is because there exists a BD path π from t ∈ T to y that cannot be blocked by X in G; removing the outgoing edges of all w ∈ R′ that intersect π cannot disconnect π from t to y. Example 4. Expanding on Example 3 to show the use of the function GETDEP. Consider the case when v = B. Then, Q = T = {B} and u = B is popped from Q at line 9. We have NR = {A}, N′ = ∅, NR′ = {A}, N = {A}, and Z′ = {A}. Since N is inserted into Q at line 17, u = A is popped from Q in the next iteration of the while loop. Then, NR = ∅, N′ = ∅, NR′ = ∅, and N = ∅. Since Q is empty, the while loop terminates and GETDEP returns Z′ = {A}. Example 5. Illustrating the use of the function GETDEP. Let I = ∅, R′ = {B,C}, and v = B. Q = T = {B} and u = B is popped from Q at line 9. NR = ∅, N′ = {A}, NR′ = ∅, N = {A}, and Z′ = ∅. Since N is inserted into Q at line 17, u = A is popped from Q in the second iteration of the while loop. NR = NR′ = {C}, N′ = N = {C,D,Y}, Z′ = {C}, and Q = {C,D,Y}. On the third iteration, u = C is popped from Q. NR = NR′ = N′ = N = ∅ and Q = {D,Y}. On the fourth iteration, u = D is popped from Q. NR = NR′ = N′ = N = ∅ and Q = {Y}. Next, u = Y is popped from Q. Since u ∈ {Y}, GETDEP returns ⊥ at line 10. There exists no set Z′ ⊆ (R′ \ T) = {C} such that T ∪ Z′ satisfies the third condition of the FD criterion relative to ({X}, {Y}). Lemma 2 (Correctness of GETCAND3RDFDC). GETCAND3RDFDC(G,X,Y, I,R′) in Step 2 of Alg. 1 generates a set of variables R′′ where I ⊆ R′′ ⊆ R′. R′′ consists of all and only the variables v such that there exists a subset Z with I ⊆ Z ⊆ R′ and v ∈ Z that satisfies the third condition of the FD criterion relative to (X,Y). Further, every set Z with I ⊆ Z ⊆ R that satisfies both the second and the third conditions of the FD criterion must be a subset of R′′. Remark: Even though every set Z with I ⊆ Z ⊆ R′ that satisfies the third condition of the FD criterion must be a subset of R′′, not every subset Z ⊆ R′′ satisfies the third condition of the FD criterion, as illustrated by the following example. Example 6. In Example 3, GETCAND3RDFDC outputs R′′ = {A,B,C}. However, for Z = {B}, the BD path B ← A → D → Y is not blocked by {X}; for Z = {C}, the BD path C ← A → D → Y is not blocked by {X}. On the other hand, Z = R′′ itself does satisfy the third condition of the FD criterion, as stated in the following lemma. Lemma 3. R′′ generated by GETCAND3RDFDC (in Step 2 of Alg. 1) satisfies the third condition of the FD criterion, that is, all BD paths from R′′ to Y are blocked by X.
Step 3 of FINDFDSET
Finally, in Step 3, FINDFDSET looks for a set Z ⊆ R′′ that satisfies the first condition of the FD criterion relative to (X,Y), that is, Z intercepts all causal paths from X to Y. To facilitate checking whether a set Z intercepts all causal paths from X to Y, we introduce the concept of a causal path graph, defined as follows. Definition 2. (Causal Path Graph) Let G be a causal graph and X,Y disjoint sets of variables. A causal path graph G′ relative to (G,X,Y) is a graph over X ∪ Y ∪ PCP(X,Y), where PCP(X,Y) = (De(X)GX \ X) ∩ An(Y)GX (a notation introduced by van der Zander et al. [41] to denote the set of variables on proper causal paths from X to Y), constructed as follows:
1. Construct a subgraph G′′ = GX∪Y∪PCP(X,Y).
2. Construct a graph G′ = G′′XY, then remove all bidirected edges from G′.
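For a fixed candidate Z, the three FD conditions can also be verified directly, which is handy for sanity-checking the algorithms on small graphs. The sketch below is ours, not the paper's procedures: condition 1 is checked by deleting Z and asking whether a directed path from X to Y survives, condition 3 by the moralization property quoted above, and condition 2 reuses the Step 1 helper no_back_door_path sketched earlier. As before, bidirected edges are assumed to be encoded as explicit latent parent nodes in a networkx DiGraph.

import networkx as nx

def intercepts_all_causal_paths(G, X, Y, Z):
    # FD condition 1: Z intercepts every directed path from X to Y. Latent stand-ins
    # for bidirected edges have no incoming arrows, so they never lie on a directed
    # X-to-Y path and need no special treatment here.
    H = G.copy()
    H.remove_nodes_from(Z)
    return not any(nx.has_path(H, x, y) for x in X for y in Y)

def bd_paths_blocked(G, X, Y, Z):
    # FD condition 3: all back-door paths from Z to Y are blocked by X, i.e., X
    # d-separates Z and Y once the outgoing edges of Z are removed. By the moralization
    # property cited above, this holds iff X is a Z-Y node cut in the moral graph of
    # the subgraph over An(Z ∪ X ∪ Y).
    H = G.copy()
    H.remove_edges_from(list(H.out_edges(Z)))
    anc = set(X) | set(Y) | set(Z)
    for n in list(anc):
        anc |= nx.ancestors(H, n)
    M = nx.moral_graph(H.subgraph(anc).copy())
    M.remove_nodes_from(X)
    return not any(nx.has_path(M, z, y) for z in Z for y in Y)

def is_fd_set(G, X, Y, Z, no_back_door_path):
    # Brute-force validity test for one candidate Z; no_back_door_path is the Step 1
    # helper (FD condition 2).
    return (intercepts_all_causal_paths(G, X, Y, Z)
            and all(no_back_door_path(G, X, v) for v in Z)
            and bd_paths_blocked(G, X, Y, Z))

On the running example of Fig. 1b, such a check rejects Z = {B} and Z = {C} for the reason given in Example 6 and accepts the sets later listed in Example 10.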
Figure 5: Two causal path graphs generated from (a) the causal graph in Fig. 1a, and (b) the causal graph in Fig. 1b. Both preserve all and only the causal paths from {X} to {Y} in the original graphs.
A function GETCAUSALPATHGRAPH(G,X,Y) for constructing a causal path graph is presented in Fig. 9 in the Appendix. Example 7. Consider the causal graph G′ shown in Fig. 1b with X = {X} and Y = {Y}. The causal path graph G′′ relative to (G′, {X}, {Y}) is shown in Fig. 5b. All causal paths from {X} to {Y} in G′ are present in G′′. After constructing a causal path graph G′ relative to (G,X,Y), we use the function TESTSEP(G′,X,Y,Z) to check if Z is a separator of X and Y in G′. Based on the following lemma, Z satisfies the first condition of the FD criterion relative to (X,Y) if and only if TESTSEP returns True. Lemma 4. Let G be a causal graph and X,Y,Z disjoint sets of variables. Let G′ be the causal path graph relative to (G,X,Y). Then, Z satisfies the first condition of the FD criterion relative to (X,Y) if and only if Z is a separator of X and Y in G′. Given the set R′′ that contains every set Z with I ⊆ Z ⊆ R that satisfies both the second and the third conditions of the FD criterion (Lemma 2), it may appear that we need to search for a set Z ⊆ R′′ that satisfies the first condition of the FD criterion. We show instead that all we need is to check whether the set R′′ itself, which has been shown to satisfy the second and third conditions by Lemma 3, also satisfies the first condition. This result is summarized in the following lemma. Lemma 5. There exists a set Z0 satisfying the FD criterion relative to (X,Y) with I ⊆ Z0 ⊆ R if and only if R′′ generated by GETCAND3RDFDC (in Step 2 of Alg. 1) satisfies the FD criterion relative to (X,Y). Example 8. Continuing Example 3. In Step 3, FINDFDSET outputs Z = R′′ = {A,B,C} since Z is a separator of {X} and {Y} in the causal path graph G′′ in Fig. 5b. The results in this section are summarized as follows. Theorem 1 (Correctness of FINDFDSET). Let G be a causal graph, X,Y disjoint sets of variables, and I,R sets of variables such that I ⊆ R. Then, FINDFDSET(G,X,Y, I,R) outputs a set Z with I ⊆ Z ⊆ R that satisfies the FD criterion relative to (X,Y), or outputs ⊥ if none exists, in O(n^3(n+m)) time, where n and m represent the number of nodes and edges in G.
4 Enumerating Front-door Adjustment Sets
Algorithm 2 LISTFDSETS (G,X,Y, I,R)
1: Input: G a causal diagram; X,Y disjoint sets of variables; I,R sets of variables.
2: Output: Listing of front-door adjustment sets Z relative to (X,Y) where I ⊆ Z ⊆ R.
3: if FINDFDSET(G,X,Y, I,R) ≠ ⊥ then:
4: if I = R then: Output I
5: else:
6: v ← any variable from R \ I
7: LISTFDSETS(G,X,Y, I ∪ {v},R)
8: LISTFDSETS(G,X,Y, I,R \ {v})
Our goal in this section is to develop an algorithm that lists all FD adjustment sets in a causal diagram. In general, there may exist an exponential number of such sets, which means that any listing algorithm will take exponential time to list them all. We will instead look for an algorithm that has an interesting property known as polynomial delay [38]. In words, poly-delay algorithms output the first answer (or indicate none is available) in polynomial time, and take polynomial time to output each consecutive answer as well. Consider the following example. Example 9. Consider the three causal graphs in Fig. 6. In G, shown in Fig.
6a, there exist 9 valid FD adjustment sets relative to ({X}, {Y}). In G′, presented in Fig. 6b, two variables A3 and B3 have been added to G, forming an additional causal path from X to Y; this yields 27 FD adjustment sets relative to ({X}, {Y}) in G′. If another causal path X → A4 → B4 → Y is added to G′, then there are 81 FD adjustment sets relative to ({X}, {Y}). As shown in Fig. 6c, in a graph G′′ with the same pattern of causal paths X → Ai → Bi → Y, i = 1, . . . , n, there are at least 3^n FD adjustment sets. We have developed an algorithm named LISTFDSETS, shown in Alg. 2, that lists all FD adjustment sets Z relative to (X,Y) satisfying I ⊆ Z ⊆ R with polynomial delay, given a causal diagram G, disjoint sets of variables X and Y, and two sets of variables I and R. Example 10. Consider the causal graph G′ shown in Fig. 1b with X = {X}, Y = {Y}, I = ∅ and R = {A,B,C,D}. LISTFDSETS outputs {A,B,C}, {A,B}, {A,C}, {A} one by one, and finally stops as no more adjustment sets exist. The algorithm LISTFDSETS follows the same search strategy as the listing algorithm LISTSEP [41] that enumerates all BD adjustment sets with polynomial delay. LISTFDSETS implicitly constructs a binary search tree where each tree node N(I′,R′) represents the collection of all FD adjustment sets Z relative to (X,Y) with I′ ⊆ Z ⊆ R′. The search starts from the root tree node N(I,R), indicating that LISTFDSETS will list all FD adjustment sets Z relative to (X,Y) with I ⊆ Z ⊆ R. Upon visiting a node N(I′,R′), LISTFDSETS first calls the function FINDFDSET (line 3) to decide whether it is necessary to search further from N. If FINDFDSET outputs ⊥, then there does not exist any FD adjustment set Z0 with I′ ⊆ Z0 ⊆ R′ and there is no need to search further. Otherwise, N spawns two children, N1 and N2, and LISTFDSETS continues the search over each child separately. N1 in line 7 represents the collection of all FD adjustment sets Z1 relative to (X,Y) where I′ ∪ {v} ⊆ Z1 ⊆ R′. On the other hand, N2 in line 8 represents the collection of all FD adjustment sets Z2 where I′ ⊆ Z2 ⊆ R′ \ {v}. N1 and N2 are disjoint and thus the search never overlaps, which is crucial to guaranteeing that LISTFDSETS runs with polynomial delay. Finally, a leaf tree node L is reached when I′ = R′, and LISTFDSETS outputs a valid FD adjustment set I′. Example 11. Continuing from Example 10. Fig. 7 shows a search tree generated by running LISTFDSETS(G′, {X}, {Y}, ∅, {A,B,C,D}). Initially, the search starts from the root tree node N(∅, {A,B,C,D}). Since FINDFDSET returns a set {A,B,C}, N branches out into two children N′({A}, {A,B,C,D}) and N′′(∅, {B,C,D}). The search continues from the left child N′ until reaching the leaf tree node L1({A,B,C,D}, {A,B,C,D}) where FINDFDSET returns ⊥. LISTFDSETS backtracks to the parent tree node N1({A,B,C}, {A,B,C,D}) and then checks the next leaf L2({A,B,C}, {A,B,C}) where FINDFDSET returns the set {A,B,C}, a valid FD admissible set relative to ({X}, {Y}). LISTFDSETS outputs {A,B,C}. Next, LISTFDSETS backtracks to the tree node N2({A,B}, {A,B,C,D}) and reaches the leaf L3({A,B}, {A,B}) where FINDFDSET outputs {A,B}, and thus LISTFDSETS outputs {A,B}. LISTFDSETS continues and outputs the two sets {A,C} and {A} in order. Finally, LISTFDSETS backtracks to the root N and checks the right child N′′ where FINDFDSET returns ⊥. LISTFDSETS does not search further from N′′ and stops as no more tree nodes are left to be visited.
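The recursion of Alg. 2 is straightforward to reproduce once a FINDFDSET oracle is available. Below is a minimal Python sketch of ours, with find_fd_set standing in for Alg. 1; because every recursive call is preceded by a feasibility check and the two subtrees are disjoint, each admissible set is produced exactly once and no fruitless subtree is explored, which is what yields the polynomial delay.

def list_fd_sets(G, X, Y, I, R, find_fd_set):
    # find_fd_set(G, X, Y, I, R) is assumed to return some FD-admissible Z with
    # I ⊆ Z ⊆ R, or None if no such set exists (the paper's ⊥). I and R are sets.
    if find_fd_set(G, X, Y, I, R) is None:      # prune: this subtree holds no solution
        return
    if I == R:                                   # leaf: I itself is admissible
        yield set(I)
        return
    v = next(iter(R - I))                        # branch on an undecided variable
    yield from list_fd_sets(G, X, Y, I | {v}, R, find_fd_set)   # sets containing v
    yield from list_fd_sets(G, X, Y, I, R - {v}, find_fd_set)   # sets excluding v

Called on the graph of Fig. 1b with I = ∅ and R = {A, B, C, D}, this generator yields the four sets of Example 10, in an order that depends on which undecided variable is branched on first.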
Our results are summarized in the following theorem, which provides the correctness, completeness, and poly-delay complexity of the proposed algorithm. Note that the completeness of the algorithm means that it lists “all” valid sets satisfying the FD criterion. On the other hand, Pearl’s FD criterion is not complete in the sense that there might exist a causal effect that can be computed by the FD adjustment formula (Eq. (3)) with a set Z that does not satisfy the FD criterion. Theorem 2 (Correctness of LISTFDSETS). Let G be a causal graph, X,Y disjoint sets of variables, and I,R sets of variables. LISTFDSETS(G,X,Y, I,R) enumerates all and only the sets Z with I ⊆ Z ⊆ R that satisfy the FD criterion relative to (X,Y) in O(n^4(n+m)) delay, where n and m represent the number of nodes and edges in G.
5 Discussion and Conclusions
This work has some limitations and can be extended in several directions. First, Pearl’s FD criterion is not complete with respect to the FD adjustment formula (Eq. (3)). While the BD criterion has been generalized to a complete criterion for BD adjustment [35], it is an interesting open problem to come up with a complete criterion for sets satisfying the FD adjustment. Second, this work assumes that the causal diagram is given (or inferred based on scientists’ domain knowledge and/or data). Although this assumption is quite common throughout the causal inference literature, more recent work has moved to finding BD admissible sets given incomplete or partially specified causal diagrams, e.g., maximal ancestral graphs (MAGs) [41], partial ancestral graphs (PAGs) [29], and completed partially directed acyclic graphs (CPDAGs) [29]. There are algorithms capable of performing causal effect identification in a data-driven fashion from an equivalence class [14, 15, 16, 17]. It is an interesting and certainly challenging direction for future work to develop algorithms for finding FD admissible sets in these types of graphs. Some recent work has proposed data-driven methods for finding and listing BD admissible sets, using an anchor variable, when the underlying causal diagram is unknown [7, 6, 33]. A criterion for testing FD-admissibility of a given set using data and an anchor variable is also available [4]. Other interesting future research topics include developing algorithms for finding minimal, minimum, and minimum-cost FD adjustment sets, which are available for the BD adjustment sets [42], as well as algorithms for finding conditional FD adjustment sets [13, 9]. Having said all of that, we believe that the results developed in this paper are a necessary step towards solving these more challenging problems. After all, we started from the observation that identification is not restricted to BD adjustment, and Pearl’s FD criterion provides a classic strategy for estimating causal effects from observational data and qualitative knowledge encoded in the form of a causal diagram. The criterion has been drawing more attention in recent years, and statistically efficient and doubly robust estimators have been developed for estimating the FD estimand from finite samples. In this paper, we develop algorithms that, given a causal diagram G, find an admissible FD set (Alg. 1 FINDFDSET, Thm. 1) and enumerate all admissible FD sets with polynomial delay (Alg. 2 LISTFDSETS, Thm. 2).
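Once an admissible set Z has been selected, the FD adjustment formula of Eq. (3) can be evaluated directly from observed frequencies. The snippet below is a minimal plug-in sketch of ours for discrete data held in a pandas DataFrame, intended only to make the estimand concrete; it is not one of the statistically efficient or doubly robust estimators referenced above, and all column names are hypothetical.

import pandas as pd

def front_door_estimate(df, x_col, y_col, z_cols, x_val, y_val):
    # Plug-in evaluation of P(Y = y | do(X = x)) = Σ_z P(z|x) Σ_x' P(y|x', z) P(x').
    df_x = df[df[x_col] == x_val]
    if len(df_x) == 0:
        raise ValueError("no observations with X = x_val")
    p_x = df[x_col].value_counts(normalize=True)             # P(x')
    p_z_given_x = df_x.groupby(z_cols).size() / len(df_x)    # P(z | X = x)
    total = 0.0
    for z_val, p_z in p_z_given_x.items():
        z_vals = z_val if isinstance(z_val, tuple) else (z_val,)
        inner = 0.0
        for x_prime, p_xp in p_x.items():
            sel = df[x_col] == x_prime
            for col, val in zip(z_cols, z_vals):
                sel &= df[col] == val
            if sel.sum() > 0:
                inner += (df.loc[sel, y_col] == y_val).mean() * p_xp   # P(y | x', z) P(x')
        total += p_z * inner
    return total

For the graph of Fig. 1a this would be called with z_cols = ['Z']; choosing among several admissible sets returned by LISTFDSETS simply changes z_cols, which is precisely where considerations of cost, availability, and statistical power enter.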
We hope that the methods and algorithms proposed in this work will help scientists use the FD strategy for causal effect estimation in practical applications, and that they will be useful in study design for selecting covariates based on desired properties, including cost, feasibility, and statistical power.
1. What is the focus and contribution of the paper regarding listing adjustment sets? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and theoretical clarity? 3. Do you have any concerns or questions about the applicability of the algorithm, especially when dealing with certain variables or ethical considerations? 4. How does the reviewer assess the significance and potential impact of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors propose a feasible algorithm for listing adjustment sets (one at a time, in reasonable time) satisfying the front-door criterion. Strengths And Weaknesses This is a very strong paper, very clearly written and easy to follow. The theory and algorithms are clear and make sense. It should be easy to implement the algorithm described. The idea is novel, since no such algorithm currently exists in the literature, and it is significant, since it could easily be incorporated into existing software and used. The only criticism is that it does not include an experimental section in which the algorithm is tested for accuracy and speed on particular models. Minor point. Line 252, "no more node" should be "no more nodes". Questions I did have some questions after having read the paper, which I didn't know how to answer. For medical or ethical reasons, it may be necessary to avoid including certain variables in an adjustment set. Is this straightforward to do with this algorithm (i.e., simply not including the prohibited variables in any set in the course of the algorithm), or could it possibly be the case that simply excluding certain variables will prevent allowable front-door adjustment sets from being discovered? For ethical reasons (fairness, for example), it may be necessary to disallow certain paths of influence in a model (e.g., race affecting the outcome variable). Can this be done inside the algorithms, or does one have to simply list example front-door adjustment sets and ignore the offending examples? Limitations The last sentence addresses societal impacts: "We hope that the methods and algorithms proposed in this work will help scientists to use the FD strategy for causal effects estimation in the practical applications and are useful for scientists in study design to select covariates based on desired properties, including cost, feasibility, and statistical power." This can of course be made more than a hope; perhaps answering questions such as the above (or others) might help make that a reality.
NIPS
1. What are the key contributions and strengths of the paper regarding algorithms for finding admissible front-door adjustment sets? 2. How does the proposed approach differ from prior works on back-door adjustment sets? 3. What are the limitations and weaknesses of the paper, particularly concerning the assumption of knowing the underlying causal diagram? 4. Can the proposed algorithms be applied to real-world scenarios where the causal diagram is unknown? If not, what are some potential solutions or future research directions? 5. How might the paper's findings and approaches be relevant to the NeurIPS community, especially in terms of selecting estimands with desired properties?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Given the underlying causal diagram, this paper provides two algorithms: To find an admissible front-door adjustment set in polynomial time (if one exists). To enumerate all admissible front-door adjustment sets with polynomial delay. These algorithms allow constraining the search to include and/or exclude specific subsets of variables which could help selecting estimands with desired properties. Strengths And Weaknesses Strength: In my limited knowledge, this paper is one of the first papers to provide algorithms to find/list all admissible front-door adjustment sets (given the underlying causal diagram). Finding valid front-door sets is an important topic and of relevance to the NeurIPS community. Through this work, the authors take a first step to close the gap between what is known for back-door adjustment sets and what is known for front-door adjustment sets. By constraining the search space to include/exclude specific subsets of variables, the authors provide flexibility in selecting estimands with different cost, availability, privacy, and statistical power. Originality: The algorithm/approach proposed in the paper relies on a mix of novel concepts (e.g., GETDEP, GETNEIGHBORS) as well as ideas known in the literature (e.g., TESTSEP, LISTSEP). However, the key novelty lies in unifying these concepts to propose FINDFDSET and LISTFDSETS. Weakness: This paper assumes the knowledge of the underlying causal diagram in order to find admissible front-door adjustment sets. In most real-world applications, the causal diagrams are unknown and this requirement is quite restrictive. Clarity: In general, the paper is quite well written. Further, the algorithms and the functions are generally well explained in the text. While the authors do a decent job, the function GETDEP is a bit complicated to understand. I strongly recommend the authors to add running examples covering various scenarios that may arise in this function to make it easier for the readers. Questions Suggestions: As mentioned by authors in section 1 (line 77) and in section 5 (lines 393-395), there has been a lot of work on finding/listing back-door admissible sets when the underlying DAG/MAG/PAG/CPDAG is known. More importantly, there has also been some work on data-driven approaches for finding/listing back-door admissible sets using only an anchor variable when the underlying causal diagram is not known. For example: (1) Entner et al 2013 -- Data-driven covariate selection for nonparametric estimation of causal effects [https://proceedings.mlr.press/v31/entner13a.html], (2) Cheng et al 2020 -- Towards unique and unbiased causal effect estimation from data with hidden variable [https://arxiv.org/pdf/2002.10091.pdf], (3) Shah et al 2021 -- Finding Valid Adjustments under Non-ignorability with Minimal DAG Knowledge [https://arxiv.org/pdf/2106.11560.pdf]. In order to provide a comprehensive background regarding what is know for finding/listing back-door admissible sets, the authors should also talk about these works. As mentioned in the weaknesses section above, this paper assumes the knowledge of the underlying causal diagram in order to find admissible front-door adjustment sets. 
In contrast, the recent work of Bhattacharya and Nabi-- On Testability of the Front-Door Model via Verma Constraints [https://arxiv.org/pdf/2203.00161v1.pdf] -- explores finding admissible front-door adjustment sets in a data-driven manner using only an anchor variable when the underlying causal structure is not known similar to the works mentioned above. The authors should differentiate their work from this work. Typo: Did the authors intend to say 3 n instead of 2 n on line 339? Limitations In my opinion, the most important limitation of this work is the requirement of the underlying causal diagram which is acknowledged by the authors in Section 5. This limits the application of this work in most real-world scenarios where the causal diagram is not known. Therefore, in addition to finding/listing front-door admissible sets from incomplete or partially specified causal diagram as mentioned by the authors (in lines 395-396), it would be useful (for future research) to develop data-driven methods when the underlying causal structure is not known (except an anchor variable) by taking inspiration from the works of Entner et al 2013, Shah et al 2022, Cheng et al 2022, and Bhattacharya and Nabi 2022.
NIPS
Title Finding and Listing Front-door Adjustment Sets Abstract Identifying the effects of new interventions from data is a significant challenge found across a wide range of the empirical sciences. A well-known strategy for identifying such effects is Pearl’s front-door (FD) criterion [26]. The definition of the FD criterion is declarative, only allowing one to decide whether a specific set satisfies the criterion. In this paper, we present algorithms for finding and enumerating possible sets satisfying the FD criterion in a given causal diagram. These results are useful in facilitating the practical applications of the FD criterion for causal effects estimation and helping scientists to select estimands with desired properties, e.g., based on cost, feasibility of measurement, or statistical power. 1 Introduction Learning cause and effect relationships is a fundamental challenge across data-driven fields. For example, health scientists developing a treatment for curing lung cancer need to understand how a new drug affects the patient’s body and the tumor’s progression. The distillation of causal relations is indispensable to understanding the dynamics of the underlying system and how to perform decisionmaking in a principled and systematic fashion [27, 37, 2, 30, 1, 23, 24]. One of the most common methods for learning causal relations is through Randomized Controlled Trials (RCTs, for short) [8]. RCTs are considered as the “gold standard” in many fields of empirical research and are used throughout the health and social sciences as well as machine learning and AI. In practice, however, RCTs are often hard to perform due to ethical, financial, and technical issues. For instance, it may be unethical to submit an individual to a certain condition if such condition may have some potentially negative effects (e.g., smoking). Whenever RCTs cannot be conducted, one needs to resort to analytical methods to infer causal relations from observational data, which appears in the literature as the problem of causal effect identification [26, 27]. The causal identification problem asks whether the effect of holding a variableX at a constant value x on a variable Y , written as P (Y |do(X = x)), or P (Y |do(x)), can be computed from a combination of observational data and causal assumptions. One of the most common ways of eliciting these assumptions is in the form of a causal diagram represented by a directed acyclic graph (DAG), where its nodes and edges describe the underlying data generating process. For instance, in Fig. 1a, three nodes X,Z, Y represent variables, a directed edge X → Z indicates that X causes Z, and a dashedbidirected edge X ↔ Y represents that X and Y are confounded by unmeasured (latent) factors. Different methods can solve the identification problem, including Pearl’s celebrated do-calculus [26] as well as different algorithmic solutions [40, 34, 12]. In practice, researchers often rely on identification strategies that generate well-known identification formulas. One of the arguably most popular strategies is identification by covariate adjustment. Whenever a set Z satisfies the back-door (BD) criterion [26] relative to the pair X and Y , where X and Y represent the treatment and outcome variables, respectively, the causal effect P (Y |do(x)) can be evaluated through the BD adjustment formula ∑ z P (y|x, z)P (z). 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
Despite the popularity of the covariate adjustment technique for estimating causal effects, there are still settings in which no BD admissible set exists. For example, consider the causal diagram G in Fig. 1a. There clearly exists no set to block the BD path from X to Y, through the bidirected arrow, X ↔ Y. One may surmise that this effect is not identifiable and the only way of evaluating the interventional distribution is through experimentation. Still, this is not the case. The effect P(Y|do(x)) is identifiable from G and the observed distribution P(x, y, z) over {X, Y, Z} by another classic identification strategy known as the front-door (FD) criterion [26]. In particular, the following FD adjustment formula provides a way of evaluating the interventional distribution: $P(Y|do(x)) = \sum_{z} P(z|x) \sum_{x'} P(y|x', z)P(x')$. (1) We refer to Pearl and Mackenzie [28, Sec. 3.4] for an interesting account of the history of the FD criterion, which was the first graphical generalization of the BD case. The FD criterion is drawing more attention in recent years. For applications of the FD criterion, see, e.g., Hünermund and Bareinboim [13] and Glynn and Kashin [10]. Statistically efficient and doubly robust estimators have recently been developed for estimating the FD estimand in Eq. (1) from finite samples [9], which are still elusive for arbitrary estimands identifiable in a diagram despite recent progress [18, 19, 5, 20, 43]. Both the BD and FD criteria are only descriptive, i.e., they specify whether a specific set Z satisfies the criteria or not, but do not provide a way to find an admissible set Z. In addition, in many situations, it is possible that multiple adjustment sets exist. Consider for example the causal diagram in Fig. 1b, and the task of identifying the effect of X on Y. The distribution P(Y|do(x)) can indeed be identified by the FD criterion with a set Z = {A,B,C} given by the expression in Eq. (1) (with Z replaced with {A,B,C}). Still, what if the variable B is costly to measure or encodes some personal information about patients which is undesirable to be shared due to ethical concerns? In this case, the set Z = {A,C} also satisfies the FD criterion and may be used. Even when both B and C are unmeasured, the set Z = {A} is also FD admissible. This simple example shows that a target effect can be estimated using different adjustment sets, leading to different probability expressions over different sets of variables, which has important practical implications. Each variable implies different practical challenges in terms of measurement, such as cost, availability, and privacy. Each estimand has different statistical properties in terms of sample complexity and variance, which may play a key role in the study design [31, 11, 32, 36]. Algorithms for finding and listing all possible adjustment sets are hence very useful in practice, which will allow scientists to select an adjustment set that exhibits desirable properties. Indeed, algorithms have been developed in recent years for finding one or listing all BD admissible sets [38, 39, 41, 29, 42]. However, no such algorithm is currently available for finding/listing FD admissible sets. The goal of this paper is to close this gap to facilitate the practical applications of the FD criterion for causal effects estimation and help scientists to select estimands with certain desired properties.[1] Specifically, the contributions of this paper are as follows: 1.
We develop an algorithm that finds an admissible front-door adjustment set Z in a given causal diagram in polynomial time (if one exists). We solve a variant of the problem that imposes constraints I ⊆ Z ⊆ R for given sets I and R, which allows a scientist to constrain the search to include specific subsets of variables or exclude variables from the search, perhaps due to cost, availability, or other technical considerations. 2. We develop a sound and complete algorithm that enumerates all front-door adjustment sets with polynomial delay - the algorithm takes a polynomial amount of time to return each new admissible set, if one exists, or to return failure whenever it has exhausted all admissible sets. [1] Code is available at https://github.com/CausalAILab/FrontdoorAdjustmentSets. 2 Preliminaries Notation. We write a variable in capital letters (X) and its value as small letters (x). Bold letters, X or x, represent a set of variables or values. We use kinship terminology to denote various relationships in a graph G and denote the parents, ancestors, and descendants of X (including X itself) as Pa(X), An(X), and De(X), respectively. Given a graph G over a set of variables V, a subgraph G_X consists of a subset of variables X ⊆ V and their incident edges in G. A graph G can be transformed: $G_{\overline{X}}$ is the graph resulting from removing all incoming edges to X, and $G_{\underline{X}}$ is the graph with all outgoing edges from X removed. A DAG G may be moralized into an undirected graph where all directed edges of G are converted into undirected edges, and for every pair of nonadjacent nodes in G that share a common child, an undirected edge that connects such a pair is added [22]. A path π from a node X to a node Y in G is a sequence of edges where X and Y are the endpoints of π. A node W on π is said to be a collider if W has converging arrows into W in π, e.g., → W ← or ↔ W ←. π is said to be blocked by a set Z if there exists a node W on π satisfying one of the following two conditions: 1) W is a collider, and neither W nor any of its descendants are in Z, or 2) W is not a collider, and W is in Z [25]. Given three disjoint sets X, Y, and Z in G, Z is said to d-separate X from Y in G if and only if Z blocks every path from a node in X to a node in Y according to the d-separation criterion [25], and we say that Z is a separator of X and Y in G. Structural Causal Models (SCMs). We use Structural Causal Models (SCMs, for short) [27] as our basic semantical framework. An SCM is a 4-tuple 〈U, V, F, P(u)〉, where 1) U is a set of exogenous (latent) variables, 2) V is a set of endogenous (observed) variables, 3) F is a set of functions {f_V}_{V∈V} that determine the value of endogenous variables, e.g., v ← f_V(pa_V, u_V) is a function with PA_V ⊆ V \ {V} and U_V ⊆ U, and 4) P(u) is a joint distribution over the exogenous variables U. Each SCM induces a causal diagram G [3, Def. 13] where every variable v ∈ V is a vertex, directed edges in G correspond to functional relationships as specified in F, and dashed bidirected edges represent common exogenous variables between two vertices. Within the structural semantics, performing an intervention and setting X = x is represented through the do-operator, do(X = x), which encodes the operation of replacing the original functions of X (i.e., f_X(pa_X, u_X)) by the constant x and induces a submodel M_x and an interventional distribution P(v|do(x)).
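To make the do-operator concrete, here is a minimal simulated SCM for the diagram of Fig. 1a (X → Z → Y with X ↔ Y). The structural equations, coefficients, and the use of numpy are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_scm(n, do_x=None):
    """Toy SCM for Fig. 1a: u_xy plays the role of the unobserved confounder behind
    the bidirected edge X <-> Y. Equations and coefficients are made up."""
    u_xy = rng.normal(size=n)
    # f_X(u_XY, u_X); do(X=x) replaces this function by the constant x.
    x = (u_xy + rng.normal(size=n) > 0).astype(float) if do_x is None else np.full(n, float(do_x))
    z = 0.8 * x + 0.3 * rng.normal(size=n)                 # f_Z(x, u_Z)
    y = 1.2 * z - 1.0 * u_xy + 0.3 * rng.normal(size=n)    # f_Y(z, u_XY, u_Y)
    return x, z, y

_, _, y_do = sample_scm(200_000, do_x=1)      # interventional world: do(X = 1)
x_obs, _, y_obs = sample_scm(200_000)         # observational world
print(y_do.mean(), y_obs[x_obs == 1].mean())  # the two means differ because of u_xy
```

The gap between the two printed means is exactly the confounding bias that the identification strategies discussed next aim to remove using observational data alone.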
Classic Causal Effects Identification Criteria. Given a causal diagram G over V, an effect P(y|do(x)) is said to be identifiable in G if P(y|do(x)) is uniquely computable from the observed distribution P(v) in any SCM that induces G [27, p. 77]. A path between X and Y with an arrow into X is known as a back-door path from X to Y. The celebrated back-door (BD) criterion [26] provides a sufficient condition for effect identification from observational data, which states that if a set Z of non-descendants of X blocks all BD paths from X to Y, then the causal effect P(y|do(x)) is identified by the BD adjustment formula: $P(y|do(x)) = \sum_{z} P(y|x, z)P(z)$ (2). Another classic identification condition that is key to the discussion in this paper is known as the front-door criterion, which is defined as follows: Definition 1. (Front-door (FD) Criterion [26]) A set of variables Z is said to satisfy the front-door criterion relative to the pair (X,Y) if 1. Z intercepts all directed paths from X to Y, 2. There is no unblocked back-door path from X to Z, and 3. All back-door paths from Z to Y are blocked by X, i.e., X is a separator of Z and Y in $G_{\underline{Z}}$. If Z satisfies the FD criterion relative to the pair (X,Y), then P(y|do(x)) is identified by the following FD adjustment formula [26]: $P(y|do(x)) = \sum_{z} P(z|x) \sum_{x'} P(y|x', z)P(x')$ (3). 3 Finding A Front-door Adjustment Set
Algorithm 1 FINDFDSET(G, X, Y, I, R)
1: Input: G a causal diagram; X, Y disjoint sets of variables; I, R sets of variables.
2: Output: Z, a set of variables satisfying the front-door criterion relative to (X,Y) with the constraint I ⊆ Z ⊆ R.
3: Step 1:
4: R′ ← GETCAND2NDFDC(G, X, I, R)
5: if R′ = ⊥ then: return ⊥
6: Step 2:
7: R′′ ← GETCAND3RDFDC(G, X, Y, I, R′)
8: if R′′ = ⊥ then: return ⊥
9: Step 3:
10: G′ ← GETCAUSALPATHGRAPH(G, X, Y)
11: if TESTSEP(G′, X, Y, R′′) = True then:
12: return Z = R′′
13: else: return ⊥
In this section, we address the following question: given a causal diagram G, is there a set Z that satisfies the FD criterion relative to the pair (X,Y) and, therefore, allows us to identify P(y|do(x)) by the FD adjustment? We solve a more general variant of this question that imposes a constraint I ⊆ Z ⊆ R for given sets I and R. Here, I are variables that must be included in Z (I could be empty) and R are variables that could be included in Z (R could be V \ (X ∪ Y)). Note that the constraint that variables in W cannot be included can be enforced by excluding W from R. Solving this version of the problem will allow scientists to put constraints on candidate adjustment sets based on practical considerations. In addition, this version will form a building block for an algorithm that enumerates all FD admissible sets in a given G - the algorithm LISTFDSETS (shown in Alg. 2 in Section 4) for listing all FD admissible sets will utilize this result during the recursive call. We have developed a procedure called FINDFDSET, shown in Alg. 1, that outputs an FD adjustment set Z relative to (X,Y) satisfying I ⊆ Z ⊆ R, or outputs ⊥ if none exists, given a causal diagram G, disjoint sets of variables X and Y, and two sets of variables I and R. Example 1. Consider the causal graph G′, shown in Fig. 1b, with X = {X}, Y = {Y}, I = ∅ and R = {A,B,C,D}. Then, FINDFDSET outputs {A,B,C}. With I = {C} and R = {A,C}, FINDFDSET outputs {A,C}. With I = {D} and R = {A,B,C,D}, FINDFDSET outputs ⊥ as no FD adjustment set that contains D is available. FINDFDSET runs in three major steps.
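Before walking through the three steps in detail, the following self-contained sketch shows how the three conditions of Definition 1 could be checked directly for one candidate set on a small diagram, using d-separation on a latent projection of the bidirected edges. The graph encoding, helper names, and the networkx d-separation routine are illustrative assumptions, not the paper's implementation (the routine is nx.is_d_separator in recent networkx releases; older releases call it nx.d_separated).

```python
import networkx as nx

def latent_projection(directed_edges, bidirected_edges):
    """Replace each bidirected edge U <-> V with a latent parent L_UV -> U, L_UV -> V,
    so ordinary d-separation on the resulting DAG can be used."""
    D = nx.DiGraph(directed_edges)
    for u, v in bidirected_edges:
        D.add_edge(f"L_{u}{v}", u)
        D.add_edge(f"L_{u}{v}", v)
    return D

def satisfies_fd_criterion(directed_edges, bidirected_edges, X, Y, Z):
    """Check the three conditions of Definition 1 for one candidate set Z (a sketch)."""
    X, Y, Z = set(X), set(Y), set(Z)
    D = latent_projection(directed_edges, bidirected_edges)
    D.add_nodes_from(X | Y | Z)

    # Condition 1: Z intercepts every directed path from X to Y.
    causal = nx.DiGraph(directed_edges)
    causal.add_nodes_from(X | Y)
    causal.remove_nodes_from(Z)
    cond1 = not any(nx.has_path(causal, x, y) for x in X for y in Y)

    # Condition 2: no unblocked back-door path from X to Z, i.e. X and Z are
    # d-separated by the empty set once the directed edges leaving X are removed.
    D_x = D.copy()
    D_x.remove_edges_from([(u, v) for (u, v) in directed_edges if u in X])
    cond2 = nx.is_d_separator(D_x, X, Z, set())   # nx.d_separated in networkx < 3.3

    # Condition 3: every back-door path from Z to Y is blocked by X, i.e. X
    # d-separates Z and Y once the directed edges leaving Z are removed.
    D_z = D.copy()
    D_z.remove_edges_from([(u, v) for (u, v) in directed_edges if u in Z])
    cond3 = nx.is_d_separator(D_z, Z, Y, X)

    return cond1 and cond2 and cond3

# The diagram of Fig. 1a: X -> Z -> Y with X <-> Y; Z = {Z} is FD admissible.
print(satisfies_fd_criterion([("X", "Z"), ("Z", "Y")], [("X", "Y")],
                             X={"X"}, Y={"Y"}, Z={"Z"}))   # True
```

FINDFDSET avoids this kind of per-candidate checking over exponentially many subsets; the three steps described next construct an admissible set directly.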
Each step identifies candidate variables that incrementally satisfy each of the conditions of the FD criterion relative to (X,Y). First, FINDFDSET constructs a set of candidate variables R′, with I ⊆ R′ ⊆ R, such that every subset Z with I ⊆ Z ⊆ R′ satisfies the second condition of the FD criterion (i.e., there is no BD path from X to Z). Next, FINDFDSET generates a set of candidate variables R′′, with I ⊆ R′′ ⊆ R′, such that for every variable v ∈ R′′, there exists a set Z with I ⊆ Z ⊆ R′ and v ∈ Z that further satisfies the third condition of the FD criterion, that is, all BD paths from Z to Y are blocked by X. Finally, FINDFDSET outputs a set Z that further satisfies the first condition of the FD criterion - Z intercepts all causal paths from X to Y. Step 1 of FINDFDSET. In Step 1, FINDFDSET calls the function GETCAND2NDFDC (presented in Fig. 2) to construct a set R′ that consists of all the variables v ∈ R such that there is no BD path from X to v (R′ is set to empty if there is a BD path from X to I). Then, there is no BD path from X to any set I ⊆ Z ⊆ R′ since, by definition, there is no BD path from X to Z if and only if there is no BD path from X to any v ∈ Z. GETCAND2NDFDC iterates through each variable v ∈ R and checks if there exists an open BD path from X to v by calling the function TESTSEP($G_{\underline{X}}$, X, v, ∅) [41]. TESTSEP(G, A, B, C) returns True if C is a separator of A and B in G, or False otherwise. Therefore, TESTSEP($G_{\underline{X}}$, X, v, ∅) returns True if ∅ is a separator of X and v in $G_{\underline{X}}$ (i.e., there is no BD path from X to v), or False otherwise. If TESTSEP returns False, then v is removed from R′ because every set Z containing v violates the second condition of the FD criterion relative to (X,Y). Example 2. Continuing Example 1. With I = ∅ and R = {A,B,C,D}, GETCAND2NDFDC outputs a set R′ = {A,B,C}. D is excluded from R′ since there exists a BD path from {X} to {D}, and any set containing D violates the second condition of the FD criterion relative to ({X}, {Y}). Lemma 1 (Correctness of GETCAND2NDFDC). GETCAND2NDFDC(G, X, I, R) generates a set of variables R′ with I ⊆ R′ ⊆ R such that R′ consists of all and only variables v that satisfy the second condition of the FD criterion relative to (X,Y). Further, every subset Z ⊆ R′ satisfies the second condition of the FD criterion relative to (X,Y), and every set Z with I ⊆ Z ⊆ R that satisfies the second condition of the FD criterion relative to (X,Y) must be a subset of R′. Step 2 of FINDFDSET. In Step 2, FINDFDSET calls the function GETCAND3RDFDC presented in Fig. 3 to generate a set R′′ consisting of all the variables v ∈ R′ such that there exists a set Z containing v with I ⊆ Z ⊆ R′ that further satisfies the third condition of the FD criterion relative to (X,Y) (i.e., all BD paths from Z to Y are blocked by X). In other words, R′′ is the union of all Z with I ⊆ Z ⊆ R′ that satisfy the third condition of the FD criterion. GETCAND3RDFDC iterates through each variable v ∈ R′ and calls the function GETDEP(G, X, Y, {v}, R′) in line 5. Presented in Fig. 4, GETDEP returns a subset Z′ ⊆ R′ \ {v} such that all BD paths from Z = {v} ∪ Z′ to Y are blocked by X (if there exists such a Z′). If GETDEP returns ⊥, then there exists no Z containing v that satisfies the third condition of the FD criterion relative to (X,Y), so v is removed from R′′. Example 3. Continuing Example 2.
Given I = ∅ and R′ = {A,B,C}, GETCAND3RDFDC outputs R′′ = {A,B,C} because for each variable v ∈ R′′, GETDEP finds a set Z′ such that {v} ∪ Z′ satisfies the third condition of the FD criterion relative to ({X}, {Y}). For v = A, Z′ = ∅; for v = B, Z′ = {A}; and for v = C, Z′ = {A}. Next, we explain how the function GETDEP(G, X, Y, T, R′) works. First, GETDEP constructs an undirected graph M in a way that the paths from T to Y in M represent all BD paths from T to Y that cannot be blocked by X in G. The auxiliary function MORALIZE(G) moralizes a given graph G into an undirected graph. The moralization is performed on the subgraph over An(T ∪ X ∪ Y) instead of G based on the following property: T and Y are d-separated by X in G if and only if X is a T-Y node cut (i.e., removing X disconnects T and Y) in G′ = MORALIZE($G_{An(T \cup X \cup Y)}$) [21].
1: function GETDEP(G, X, Y, T, R′)
2: Output: Z′ ⊆ R′ \ T, a set of variables such that T ∪ Z′ satisfies the third condition of the FD criterion relative to (X,Y).
3: G′ ← $G_{An(T \cup X \cup Y)}$
4: G′ ← G′ with all bidirected edges A ↔ B replaced by a latent node L_AB and two edges L_AB → A and L_AB → B
5: G′′ ← $G'_{\underline{T}}$
6: M ← MORALIZE(G′′), then remove X
7: Z′ ← ∅, Q ← T, and mark all v ∈ T as visited
8: while Q ≠ ∅ do
9: u ← Q.POP()
10: if u ∈ Y then: return ⊥
11: NR ← GETNEIGHBORS(u, M) ∩ R′ that are not visited
12: G′′ ← $G'_{\underline{T \cup Z' \cup NR}}$
13: M ← MORALIZE(G′′), then remove X
14: N′ ← GETNEIGHBORS(u, M) that are not visited
15: NR′ ← {w ∈ NR | there exists an incoming arrow into w in G}
16: N ← N′ ∪ NR′, Z′ ← Z′ ∪ NR
17: Q.INSERT(N) and mark all w ∈ N as visited
18: end while
19: return Z′
20: end function
Figure 4: A function that facilitates the construction of a set that satisfies the third condition of the FD criterion.
GETDEP performs Breadth-First Search (BFS) from T to Y on M and incrementally constructs a subset Z′ ⊆ R′ \ T such that, after BFS terminates, there will be no BD path from Z = T ∪ Z′ to Y that cannot be blocked by X in G. While constructing Z′, GETDEP calls the function GETNEIGHBORS(u, M) (presented in Fig. 8, Appendix) to obtain all observed neighbors of u in M. The BFS starts from each variable v ∈ T. Whenever a non-visited node u is encountered, the set NR, the observed neighbors of u that belong to R′, is computed. NR can be added to Z′ because removing all outgoing edges of NR may contribute to disconnecting some BD paths Π from T to Y that cannot be blocked by X in G. In other words, Π could be disconnected from T to Y in $G_{\underline{T \cup Z' \cup NR}}$ even where Π are not disconnected in $G_{\underline{T \cup Z'}}$. After adding NR to Z′, M must be reconstructed in a way that reflects the setting where all outgoing edges of NR are removed. BFS will be performed on such a modified M. GETDEP checks if there exists any set of nodes N to be visited further. N consists of two sets: 1) N′, all observed neighbors of u that are still reachable from u, even after removing all outgoing edges of NR, and 2) NR′ ⊆ NR where for every node w ∈ NR′, there exists an incoming arrow into w in G. All nodes in NR′ must be checked because there might exist some BD path π from w to y ∈ Y that cannot be blocked by X in G. If π cannot be disconnected from w to y, then the set Z will violate the third condition of the FD criterion relative to (X,Y). The BFS continues until either a node y ∈ Y is visited, or no more nodes can be visited. If GETDEP returns a set Z′, then we have that all BD paths from T to Y that cannot be blocked by X in G have been disconnected in $G_{\underline{Z}}$, while ensuring that there exists no BD path from Z to Y that cannot be blocked by X in G.
Therefore, Z satisfies the third condition of the FD criterion relative to (X,Y). Otherwise, if GETDEP returns ⊥ (i.e., y is visited), then there does not exist any Z containing T that satisfies the third condition of the FD criterion relative to (X,Y). This is because there exists a BD path π from t ∈ T to y that cannot be blocked by X in G; removing the outgoing edges of all w ∈ R′ that intersect π cannot disconnect π from t to y. Example 4. Expanding on Example 3 to show the use of the function GETDEP. Consider the case when v = B. Then, Q = T = {B} and u = B is popped from Q at line 9. We have NR = {A}, N′ = ∅, NR′ = {A}, N = {A}, and Z′ = {A}. Since N is inserted into Q at line 17, u = A is popped from Q in the next iteration of the while loop. Then, NR = ∅, N′ = ∅, NR′ = ∅, and N = ∅. Since Q is empty, the while loop terminates and GETDEP returns Z′ = {A}. Example 5. Illustrating the use of the function GETDEP. Let I = ∅, R′ = {B,C}, and v = B. Q = T = {B} and u = B is popped from Q at line 9. NR = ∅, N′ = {A}, NR′ = ∅, N = {A}, and Z′ = ∅. Since N is inserted into Q at line 17, u = A is popped from Q in the second iteration of the while loop. NR = NR′ = {C}, N′ = N = {C, D, Y}, Z′ = {C}, and Q = {C, D, Y}. On the third iteration, u = C is popped from Q. NR = NR′ = N′ = N = ∅ and Q = {D, Y}. On the fourth iteration, u = D is popped from Q. NR = NR′ = N′ = N = ∅ and Q = {Y}. Next, u = Y is popped from Q. Since u ∈ {Y}, GETDEP returns ⊥ at line 10. There exists no set Z′ ⊆ (R′ \ T) = {C} such that T ∪ Z′ satisfies the third condition of the FD criterion relative to ({X}, {Y}). Lemma 2 (Correctness of GETCAND3RDFDC). GETCAND3RDFDC(G, X, Y, I, R′) in Step 2 of Alg. 1 generates a set of variables R′′ where I ⊆ R′′ ⊆ R′. R′′ consists of all and only variables v such that there exists a subset Z with I ⊆ Z ⊆ R′ and v ∈ Z that satisfies the third condition of the FD criterion relative to (X,Y). Further, every set Z with I ⊆ Z ⊆ R that satisfies both the second and the third conditions of the FD criterion must be a subset of R′′. Remark: Even though every set Z with I ⊆ Z ⊆ R′ that satisfies the third condition of the FD criterion must be a subset of R′′, not every subset Z ⊆ R′′ satisfies the third condition of the FD criterion, as illustrated by the following example. Example 6. In Example 3, GETCAND3RDFDC outputs R′′ = {A,B,C}. However, for Z = {B}, the BD path B ← A → D → Y is not blocked by {X}; for Z = {C}, the BD path C ← A → D → Y is not blocked by {X}. On the other hand, Z = R′′ itself does satisfy the third condition of the FD criterion, as shown in the following. Lemma 3. R′′ generated by GETCAND3RDFDC (in Step 2 of Alg. 1) satisfies the third condition of the FD criterion, that is, all BD paths from R′′ to Y are blocked by X. Step 3 of FINDFDSET. Finally, in Step 3, FINDFDSET looks for a set Z ⊆ R′′ that satisfies the first condition of the FD criterion relative to (X,Y), that is, Z intercepts all causal paths from X to Y. To facilitate checking whether a set Z intercepts all causal paths from X to Y, we introduce the concept of a causal path graph, defined as follows. Definition 2. (Causal Path Graph) Let G be a causal graph and X, Y disjoint sets of variables. A causal path graph G′ relative to (G,X,Y) is a graph over X ∪ Y ∪ PCP(X,Y), where $PCP(X,Y) = (De(X)_{G_{\overline{X}}} \setminus X) \cap An(Y)_{G_{\overline{X}}}$ (a notation introduced by van der Zander et al. [41] to denote the set of variables on proper causal paths from X to Y), constructed as follows: 1. Construct a subgraph $G'' = G_{X \cup Y \cup PCP(X,Y)}$. 2. Construct a graph $G' = G''_{\overline{X}\,\underline{Y}}$, then remove all bidirected edges from G′.
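As a rough illustration of Definition 2 on a toy graph, the following sketch constructs a causal path graph with networkx. The graph, node names, and helper function are hypothetical, and the computation of PCP(X,Y) follows a best-effort reading of the definition; bidirected edges are omitted since the final step removes them anyway.

```python
import networkx as nx

def causal_path_graph(directed_edges, X, Y):
    """Sketch of Definition 2: keep only nodes on proper causal paths from X to Y
    and the directed edges among them. Illustration only, not the paper's code."""
    X, Y = set(X), set(Y)
    G = nx.DiGraph(directed_edges)
    G.add_nodes_from(X | Y)

    # PCP(X, Y): nodes reachable from X by a directed path that does not re-enter X,
    # from which Y is reachable by a directed path that avoids X.
    G_no_into_X = G.copy()
    G_no_into_X.remove_edges_from([(u, v) for (u, v) in G.edges if v in X])
    reach = set().union(*(nx.descendants(G_no_into_X, x) for x in X)) - X
    to_y = set().union(*(nx.ancestors(G_no_into_X, y) for y in Y)) | Y
    pcp = reach & to_y

    # Step 1: restrict to X ∪ Y ∪ PCP(X, Y).  Step 2: drop edges into X and out of Y.
    G2 = G.subgraph(X | Y | pcp).copy()
    G2.remove_edges_from([(u, v) for (u, v) in list(G2.edges) if v in X or u in Y])
    return G2

# Hypothetical toy graph: only the chain X -> A -> B -> Y lies on causal paths to Y;
# D (a dead end) and E (a non-causal parent) are dropped.
edges = [("X", "A"), ("A", "B"), ("B", "Y"), ("A", "D"), ("E", "X"), ("E", "Y")]
G_cp = causal_path_graph(edges, X={"X"}, Y={"Y"})
print(sorted(G_cp.nodes), sorted(G_cp.edges))  # ['A', 'B', 'X', 'Y'] and the chain edges
```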
[Figure 5: Two causal path graphs generated from (a) the causal graph in Fig. 1a, and (b) the causal graph in Fig. 1b. Both preserve all and only causal paths from {X} to {Y} in the original graphs.] A function GETCAUSALPATHGRAPH(G, X, Y) for constructing a causal path graph is presented in Fig. 9 in the Appendix. Example 7. Consider the causal graph G′ shown in Fig. 1b with X = {X} and Y = {Y}. The causal path graph G′′ relative to (G′, {X}, {Y}) is shown in Fig. 5b. All causal paths from {X} to {Y} in G′ are present in G′′. After constructing a causal path graph G′ relative to (G,X,Y), we use the function TESTSEP(G′, X, Y, Z) to check if Z is a separator of X and Y in G′. Based on the following lemma, Z satisfies the first condition of the FD criterion relative to (X,Y) if and only if TESTSEP returns True. Lemma 4. Let G be a causal graph and X, Y, Z disjoint sets of variables. Let G′ be the causal path graph relative to (G,X,Y). Then, Z satisfies the first condition of the FD criterion relative to (X,Y) if and only if Z is a separator of X and Y in G′. Given the set R′′ that contains every set Z with I ⊆ Z ⊆ R that satisfies both the second and the third conditions of the FD criterion (Lemma 2), it may appear that we need to search for a set Z ⊆ R′′ that satisfies the first condition of the FD criterion. We show instead that all we need is to check whether the set R′′ itself, which has been shown to satisfy the second and third conditions by Lemma 3, satisfies the first condition. This result is summarized in the following lemma. Lemma 5. There exists a set Z0 satisfying the FD criterion relative to (X,Y) with I ⊆ Z0 ⊆ R if and only if R′′ generated by GETCAND3RDFDC (in Step 2 of Alg. 1) satisfies the FD criterion relative to (X,Y). Example 8. Continuing Example 3. In Step 3, FINDFDSET outputs Z = R′′ = {A,B,C} since Z is a separator of {X} and {Y} in the causal path graph G′′ in Fig. 5b. The results in this section are summarized as follows. Theorem 1 (Correctness of FINDFDSET). Let G be a causal graph, X, Y disjoint sets of variables, and I, R sets of variables such that I ⊆ R. Then, FINDFDSET(G, X, Y, I, R) outputs a set Z with I ⊆ Z ⊆ R that satisfies the FD criterion relative to (X,Y), or outputs ⊥ if none exists, in $O(n^3(n+m))$ time, where n and m represent the number of nodes and edges in G. 4 Enumerating Front-door Adjustment Sets
Algorithm 2 LISTFDSETS(G, X, Y, I, R)
1: Input: G a causal diagram; X, Y disjoint sets of variables; I, R sets of variables.
2: Output: Listing of front-door adjustment sets Z relative to (X,Y) where I ⊆ Z ⊆ R.
3: if FINDFDSET(G, X, Y, I, R) ≠ ⊥ then:
4: if I = R then: Output I
5: else:
6: v ← any variable from R \ I
7: LISTFDSETS(G, X, Y, I ∪ {v}, R)
8: LISTFDSETS(G, X, Y, I, R \ {v})
Our goal in this section is to develop an algorithm that lists all FD adjustment sets in a causal diagram. In general, there may exist an exponential number of such sets, which means that any listing algorithm will take exponential time to list them all. We will instead look for an algorithm that has an interesting property known as polynomial delay [38]. In words, poly-delay algorithms output the first answer (or indicate none is available) in polynomial time, and take polynomial time to output each consecutive answer as well. Consider the following example. Example 9. Consider the three causal graphs in Fig. 6.
In G shown in Fig. 6a, there exist 9 valid FD adjustment sets relative to ({X}, {Y}). In G′, presented in Fig. 6b, two variables A3 and B3 are added to G, forming an additional causal path from X to Y. 27 FD adjustment sets relative to ({X}, {Y}) are available in G′. If another causal path X → A4 → B4 → Y is added to G′, then there are 81 FD adjustment sets relative to ({X}, {Y}). As shown in Fig. 6c, in a graph G′′ with a similar pattern with causal paths X → Ai → Bi → Y, i = 1, ..., n, there are at least $3^n$ FD adjustment sets. We have developed an algorithm named LISTFDSETS, shown in Alg. 2, that lists all FD adjustment sets Z relative to (X,Y) satisfying I ⊆ Z ⊆ R with polynomial delay, given a causal diagram G, disjoint sets of variables X and Y, and two sets of variables I and R. Example 10. Consider the causal graph G′ shown in Fig. 1b with X = {X}, Y = {Y}, I = ∅ and R = {A,B,C,D}. LISTFDSETS outputs {A,B,C}, {A,B}, {A,C}, {A} one by one, and finally stops as no more adjustment sets exist. The algorithm LISTFDSETS takes the same search strategy as the listing algorithm LISTSEP [41] that enumerates all BD adjustment sets with polynomial delay. LISTFDSETS implicitly constructs a binary search tree where each tree node N(I′, R′) represents the collection of all FD adjustment sets Z relative to (X,Y) with I′ ⊆ Z ⊆ R′. The search starts from the root tree node N(I, R), indicating that LISTFDSETS will list all FD adjustment sets Z relative to (X,Y) with I ⊆ Z ⊆ R. Upon visiting a node N(I′, R′), LISTFDSETS first calls the function FINDFDSET (line 3) to decide whether it is necessary to search further from N. If FINDFDSET outputs ⊥, then there does not exist any FD adjustment set Z0 with I′ ⊆ Z0 ⊆ R′ and there is no need to search further. Otherwise, N spawns two children, N1 and N2, and LISTFDSETS continues the search over each child separately. N1 in line 7 represents the collection of all FD adjustment sets Z1 relative to (X,Y) where I′ ∪ {v} ⊆ Z1 ⊆ R′. On the other hand, N2 in line 8 represents the collection of all FD adjustment sets Z2 where I′ ⊆ Z2 ⊆ R′ \ {v}. N1 and N2 are disjoint and thus the search never overlaps, which is crucial to guaranteeing that LISTFDSETS runs with polynomial delay. Finally, a leaf tree node L is reached when I′ = R′, and LISTFDSETS outputs a valid FD adjustment set I′. Example 11. Continuing from Example 10. Fig. 7 shows a search tree generated by running LISTFDSETS(G′, {X}, {Y}, ∅, {A,B,C,D}). Initially, the search starts from the root tree node N(∅, {A,B,C,D}). Since FINDFDSET returns a set {A,B,C}, N branches out into two children N′({A}, {A,B,C,D}) and N′′(∅, {B,C,D}). The search continues from the left child N′ until reaching the leaf tree node L1({A,B,C,D}, {A,B,C,D}) where FINDFDSET returns ⊥. LISTFDSETS backtracks to the parent tree node N1({A,B,C}, {A,B,C,D}) and then checks the next leaf L2({A,B,C}, {A,B,C}) where FINDFDSET returns the set {A,B,C}, a valid FD admissible set relative to ({X}, {Y}). LISTFDSETS outputs {A,B,C}. Next, LISTFDSETS backtracks to the tree node N2({A,B}, {A,B,C,D}) and reaches the leaf L3({A,B}, {A,B}) where FINDFDSET outputs {A,B}, and thus LISTFDSETS outputs {A,B}. LISTFDSETS continues and outputs the two sets {A,C} and {A} in order. Finally, LISTFDSETS backtracks to the root N and checks the right child N′′ where FINDFDSET returns ⊥. LISTFDSETS does not search further from N′′ and stops as no more tree nodes are left to be visited.
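To make the branch-and-prune structure of this search concrete, here is a small self-contained Python skeleton of the same strategy. The feasibility oracle below is a deliberately naive brute-force stand-in with a made-up notion of "valid" set; in the paper, FINDFDSET plays that role in polynomial time, which is exactly what yields polynomial delay. All names are illustrative.

```python
from itertools import chain, combinations

def enumerate_with_oracle(I, R, exists_valid_set, output):
    """Skeleton of the search in Alg. 2: branch on including/excluding one variable,
    pruning any branch in which the oracle reports no admissible set remains."""
    if not exists_valid_set(I, R):
        return
    if I == R:
        output(I)
        return
    v = next(iter(R - I))
    enumerate_with_oracle(I | {v}, R, exists_valid_set, output)   # force v in
    enumerate_with_oracle(I, R - {v}, exists_valid_set, output)   # rule v out

def brute_force_oracle(valid):
    """Illustrative oracle: does any Z with I ⊆ Z ⊆ R satisfy `valid`? (brute force)"""
    def exists_valid_set(I, R):
        free = R - I
        subsets = chain.from_iterable(combinations(free, k) for k in range(len(free) + 1))
        return any(valid(I | set(s)) for s in subsets)
    return exists_valid_set

found = []
enumerate_with_oracle(frozenset(), frozenset({"A", "B", "C"}),
                      brute_force_oracle(lambda Z: "A" in Z),   # stand-in for FD admissibility
                      output=lambda Z: found.append(set(Z)))
print(found)  # every subset of {A, B, C} that contains A, each reported exactly once
```

Because the two branches partition the remaining candidates (v is either forced in or ruled out), no admissible set is ever reported twice, mirroring the disjointness of N1 and N2 above.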
Our results are summarized in the following theorem, which provides the correctness, completeness, and poly-delay complexity of the proposed algorithm. Note that the completeness of the algorithm means that it lists “all” valid sets satisfying the FD criterion. On the other hand, Pearl’s FD criterion is not complete in the sense that there might exist a causal effect that can be computed by the FD adjustment formula (Eq. (3)) even though the set Z does not satisfy the FD criterion. Theorem 2 (Correctness of LISTFDSETS). Let G be a causal graph, X, Y disjoint sets of variables, and I, R sets of variables. LISTFDSETS(G, X, Y, I, R) enumerates all and only sets Z with I ⊆ Z ⊆ R that satisfy the FD criterion relative to (X,Y) in $O(n^4(n+m))$ delay, where n and m represent the number of nodes and edges in G. 5 Discussion and Conclusions This work has some limitations and can be extended in several directions. First, Pearl’s FD criterion is not complete with respect to the FD adjustment formula (Eq. (3)). While the BD criterion has been generalized to a complete criterion for BD adjustment [35], it is an interesting open problem to come up with a complete criterion for sets satisfying the FD adjustment. Second, this work assumes that the causal diagram is given (or inferred based on scientists’ domain knowledge and/or data). Although this assumption is quite common throughout the causal inference literature, more recent work has moved to finding BD admissible sets given incomplete or partially specified causal diagrams, e.g., maximal ancestral graphs (MAGs) [41], partial ancestral graphs (PAGs) [29], and completed partially directed acyclic graphs (CPDAGs) [29]. There are algorithms capable of performing causal effect identification in a data-driven fashion from an equivalence class [14, 15, 16, 17]. It is an interesting and certainly challenging direction for future work to develop algorithms for finding FD admissible sets in these types of graphs. Some recent work has proposed data-driven methods for finding and listing BD admissible sets, using an anchor variable, when the underlying causal diagram is unknown [7, 6, 33]. A criterion for testing FD-admissibility of a given set using data and an anchor variable is also available [4]. Other interesting future research topics include developing algorithms for finding minimal, minimum, and minimum-cost FD adjustment sets, which are available for BD adjustment sets [42], as well as algorithms for finding conditional FD adjustment sets [13, 9]. Having said all of that, we believe that the results developed in this paper are a necessary step towards solving these more challenging problems. After all, we started from the observation that identification is not restricted to BD adjustment, and Pearl’s FD criterion provides a classic strategy for estimating causal effects from observational data and qualitative knowledge encoded in the form of a causal diagram. The criterion has been drawing more attention in recent years, and statistically efficient and doubly robust estimators have been developed for estimating the FD estimand from finite samples. In this paper, we develop algorithms that, given a causal diagram G, find an admissible FD set (Alg. 1 FINDFDSET, Thm. 1) and enumerate all admissible FD sets with polynomial delay (Alg. 2 LISTFDSETS, Thm. 2).
We hope that the methods and algorithms proposed in this work will help scientists use the FD strategy for causal effect estimation in practical applications, and will be useful in study design for selecting covariates based on desired properties, including cost, feasibility, and statistical power.
1. What is the focus and contribution of the paper regarding causal DAGs? 2. What are the strengths of the proposed algorithm, particularly in terms of its soundness and completeness? 3. What are the weaknesses of the paper, especially regarding its assumptions and limitations? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the paper that the reviewer would like to address?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper develops an algorithm, FINDFDSET(), to find an admissible adjustment set that satisfies the front-door criterion in a given causal DAG. The solution is to solve a variant of the problem that imposes the constraint I ⊆ Z ⊆ R for given sets I and R. Based on the algorithm FINDFDSET(), a sound and complete algorithm is developed to enumerate all front-door adjustment sets with polynomial delay. Moreover, the detailed theoretical analysis supports the soundness of the developed LISTFDSETS() algorithm.
Strengths And Weaknesses
*Strengths: This work is well-motivated and organised. A large number of examples make the paper easy to follow and understand. The technical quality of the work is high. The steps of both algorithms are also very clear.
*Weaknesses: Assumptions are not clear. To my understanding, the given causal DAG is a very specific class of DAGs. For example, X = {X_1, X_2}, Y = {Y}, R = {A, B, C}; the graph is formed by: Y ↔ X_1 → A → B → Y; Y ↔ X_2 ← A; B → C ← X_2; C → Y. For all v ∈ R, we have TESTSEP($G_{\underline{X}}$, X, v, ∅) = FALSE. Thus, FINDFDSET() returns ⊥. However, for a single X ∈ X, FINDFDSET() returns A for (X_1, Y) and C for (X_2, Y), respectively. Therefore, the generalization on X may not be perfect. The "completeness" is not specified. Furthermore, there is no evidence for completeness. Some minor points: In Definition 1, "There is no back-door path from X to Z" should be "There is no unblocked back-door path from X to Z". PA_V and pa_V are not defined.
Questions
Q1 is the first weakness. Q2. In Fig. 1b, if the edge A → D is replaced by A ← D, is there still an admissible front-door adjustment set Z? If the answer is no, are there some graphical criteria to handle such a case?
Limitations
The causal DAG and the sets I and R need to be provided. In many real-world applications, this information is not available. Hence, the algorithms developed in this work have limited applicability at present. For multiple treatments and outcomes (X, Y), the developed algorithms may only be able to solve very few cases, and most of them return ⊥.
NIPS
Title Pushing the Limits of Narrow Precision Inferencing at Cloud Scale with Microsoft Floating Point Abstract In this paper, we explore the limits of Microsoft Floating Point (MSFP), a new class of datatypes developed for production cloud-scale inferencing on custom hardware. Through the co-evolution of hardware design and algorithms, MSFP16 incurs 3× lower cost compared to Bfloat16 and MSFP12 has 4× lower cost compared to INT8 while delivering a comparable or better accuracy. MSFP incurs negligible impact to accuracy (<1%), requires no changes to the model topology, and is integrated with a mature cloud production pipeline. MSFP supports various classes of deep learning models including CNNs, RNNs, and Transformers without modification. Finally, we characterize the accuracy and implementation of MSFP and demonstrate its efficacy on a number of production scenarios, including models that power major online scenarios such as web search, question-answering, and image classification. (∗Equal contribution. Email correspondence to birouhan@microsoft.com. †Work done while Anna Vinogradsky was an intern at Microsoft.) 1 Introduction Over the past few years, there has been an exponential growth in the size of deep neural networks (DNNs) to further push achievable accuracy [1, 2, 3]. With the diminishing of Moore’s law, the arithmetic density that can fit on computing hardware plays an important role for large-scale inferencing. One key to increasing arithmetic density is the use of narrow bit-width datatypes. Inferencing DNNs with narrow bit-width at a required service level agreement (i.e., accuracy and latency) requires a careful balance of dynamic range and hardware complexity. For instance, although narrow fixed-point datatypes incur a low hardware overhead, they lack a wide enough dynamic range. As such, the use of fixed-point arithmetic at large scale is typically limited due to noticeable accuracy drops (>1%). Fixed-point datatypes also require a manual calibration process for each new benchmark. To address these inefficiencies, there is a rising interest in custom datatypes specifically designed for DNN workloads. Google deployed a custom datatype called Bfloat16 on its TPUs [4, 5] and NVIDIA recently announced a new datatype called TF32 available on its latest generation A100 GPUs [6]. Both Bfloat16 and TF32 represent a wide dynamic range, which leads to close to zero accuracy drop when the DNN model is executed with these datatypes. However, while these datatypes have a lower hardware footprint compared to IEEE-compliant Float32, their overhead is still considered to be high for low-cost inference at scale. As a result, the industry seems to be converging to INT8 for inferencing, which requires a careful model re-calibration to preserve accuracy. With the current setup, going below 8 bits almost always results in an accuracy drop. This paper introduces Microsoft Floating Point (MSFP), a class of new datatypes for robust and low-cost DNN inference at scale. MSFP is a hardware/algorithm co-designed numerical format that enables an efficient realization of dot products (which are the building blocks of DNN workloads) on custom hardware while maintaining a high dynamic range ($[2^{-126}, 2^{127}]$). Figure 1 corroborates the accuracy-area trade-off of using different datatypes for serving ResNet50-ImageNet.
MSFP outperforms existing datatypes in terms of area and energy cost when the model is held to a fixed accuracy. Variants of MSFP together form a new Pareto frontier for computational performance/mm² compared to a collection of competitive datatypes. In this paper, we quantify the Pareto frontier in terms of arithmetic density, the measure of how many dot products can be fit into 1 mm² of silicon on a 16nm process. MSFP16 can be used as a drop-in replacement for Bfloat16 without any accuracy drop or requiring any re-calibration or hyper-parameter tuning. MSFP16 provides 2× memory saving and 2.8× higher arithmetic density compared to Bfloat16. We further built a fully automated fine-tuning pipeline to enable serving DNNs at an even lower cost while preserving the accuracy. With moderate model fine-tuning, MSFP can provide 4× higher arithmetic density compared to the INT8 industry-standard datatype for inference while delivering comparable accuracy. MSFP is deployed at large-scale production for industry web services and has been successfully validated on over a dozen proprietary and open-sourced benchmarks. Extensive evaluations of a variety of computer vision and natural language processing models demonstrate the robustness and generality of the MSFP format. In summary, we make the following contributions: • We propose Microsoft floating point, a hardware/algorithm co-designed numerical datatype for DNN workloads that can achieve the same accuracy level of existing datatypes at a fraction of the area and power cost on custom silicon. • We build a low-friction production pipeline for serving pre-trained DNN models. MSFP preserves model accuracy at ultra-narrow bit-width with as few as three mantissa bits (i.e., MSFP12) with minimal fine-tuning. All conversions of weights and activations to MSFP format are handled in-situ on custom hardware. • We perform extensive evaluations of MSFP on various CNN, RNN, and Transformer models. Deploying DNN models using the MSFP datatype leads to a new state-of-the-art Pareto frontier between accuracy and computational cost. 2 Microsoft Floating Point MSFP is a hardware/algorithm co-designed numerical format for DNN workloads. Here we build on IEEE-compliant formats to introduce the structure of MSFP and elaborate on its functionality. IEEE floating point formats include one sign bit s, a number of exponent bits e, and a number of significand or mantissa bits m. Float16, for instance, consists of 1 sign bit, 5 exponent bits, and 11 mantissa bits (10 of which are explicitly stored). The resulting value can be decoded as $x = (-1)^s \cdot 2^{e'} \cdot m$, where $e'$ is set to $e - 15$ to adjust for the encoded bias. MSFP has a similar structure with one main difference. Instead of assigning a private exponent to each element of a tensor, MSFP relies on using a shared exponent among some number of values. For instance, for a vector of elements, the floating point representation is $[(-1)^{s_0} 2^{e_0} m_0,\ (-1)^{s_1} 2^{e_1} m_1,\ \ldots,\ (-1)^{s_{n-1}} 2^{e_{n-1}} m_{n-1}]$, while the MSFP representation is $2^{e_{shared}} [(-1)^{s_0} m'_0,\ (-1)^{s_1} m'_1,\ \ldots,\ (-1)^{s_{n-1}} m'_{n-1}]$. The number of elements sharing one exponent is referred to as the bounding-box size. The shared exponent can be any value that is representative of the range of elements in each bounding-box. We use the maximum exponent in our setting to best represent the outliers in each bounding-box. However, other approaches could be used, such as taking an average or percentile value.
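As a rough software emulation of this block format, the following sketch packs one bounding-box into a shared maximum exponent plus sign-magnitude integer mantissas and reconstructs the values; the mantissa alignment it performs is the right-shift described next. This is an illustration only, not the hardware encoding, and the helper names, the use of numpy, and the rounding/saturation details are assumptions.

```python
import numpy as np

def msfp_encode(block, mantissa_bits=7):
    """Emulate encoding one bounding-box: signs, a single shared (max) exponent,
    and integer mantissas aligned to that exponent. Illustrative sketch only."""
    m, e = np.frexp(np.abs(block))            # |x| = m * 2**e with m in [0.5, 1)
    e_shared = int(e.max())
    # Align each mantissa to the shared exponent and keep `mantissa_bits` bits.
    m_int = np.round(m * 2.0 ** (e - e_shared) * 2 ** mantissa_bits).astype(np.int64)
    return np.signbit(block).astype(np.int8), e_shared, m_int

def msfp_decode(signs, e_shared, m_int, mantissa_bits=7):
    return np.where(signs == 1, -1.0, 1.0) * m_int * 2.0 ** (e_shared - mantissa_bits)

x = np.array([0.75, -0.031, 0.002, -1.5])
s, e, m = msfp_encode(x)
print(msfp_decode(s, e, m))   # approximately reconstructs x; small values lose bits
```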
By associating each element in the bounding-box with the max exponent in the box, each mantissa term $m_i$ must be adjusted by shifting it to the right by the difference between $e_{max}$ and $e_i$. The term $m'_i$ is defined as $m_i \gg (e_{shared} - e_i)$, where $\gg$ is the right-shift operator. As $e_{max} - e_i$ increases, the right shift will truncate away more of the least-significant bits in the mantissa. Thus, similar to fixed-point, MSFP is affected by extreme outlier values. However, because each bounding box defines its own dynamic range, an outlier's effect would be limited to the bounding-box in which it occurs. In MSFP, zero is represented by having all mantissa bits being 0 for a given value (the shared exponent can be any value). MSFP mantissas do not have an implicit leading bit and all mantissa bits are explicitly represented. The key insight behind MSFP is to strike a compromise between the dynamic range of floating point and the hardware efficiency of fixed-point. Floating point uses an exponent for every element, while fixed-point (with scaling) uses one exponent for all elements. In contrast, MSFP uses one exponent for each n elements, and is able to approach the benefits of both formats. Computing with the MSFP format. MSFP is selectively applied to performance-critical components of a model that exhaust computation resources and memory bandwidth. Dot product is one of the core operations involved in DNN inference, being the basic operation underlying both convolutional and fully-connected layers. Suppose we have two n-dimensional row-vectors $\vec{x}_0$ and $\vec{x}_1$ in MSFP format with shared exponents $2^{e_0}$ and $2^{e_1}$, respectively. The dot product of these vectors takes the form: $\vec{x}_0 \cdot \vec{x}_1^{T} = 2^{e_0} [(-1)^{s_{0,0}} m'_{0,0},\ \ldots,\ (-1)^{s_{0,n-1}} m'_{0,n-1}] \cdot 2^{e_1} [(-1)^{s_{1,0}} m'_{1,0},\ \ldots,\ (-1)^{s_{1,n-1}} m'_{1,n-1}]^{T} = 2^{e_0+e_1} \sum_{i=0}^{n-1} \left( (-1)^{s_{0,i} \oplus s_{1,i}}\ m'_{0,i}\, m'_{1,i} \right)$, where $\oplus$ is the XOR operation and $T$ stands for transposition. As shown, the dot product in MSFP format consists of a single fixed-point addition of exponents, n fixed-point multiplications of mantissas, and n − 1 fixed-point additions of mantissa products. Here, n, the length of the dot product, coincides with the bounding-box size. However, this does not always need to hold. A large dot product can be built by summing the results of smaller bounding-box sized dot products. The overhead of MSFP compared to pure fixed-point is precisely the hardware needed to handle the shared exponent, and this overhead is amortized over the bounding-box size. Figure 2 shows the high-level overview of a systolic tensor core architecture (left) which computes long dot products using multiple bounding-box length MSFP dot product units (right). Within the MSFP dot product, the multipliers and adders are all fixed-point. Thus, for the same number of mantissa bits, MSFP has a significantly lower circuit footprint compared to IEEE floating point. By truncating the number of bits assigned to each mantissa, the circuit area can be reduced even further. Table 1 summarizes the dot product density and memory savings with the MSFP format for different numbers of mantissa bits. The bounding-box size here is 16. Throughout the paper, we refer to different MSFP configurations as MSFPn (e.g., MSFP12). The number listed is the sum of the bit-width assigned to sign, mantissa, and shared exponent. For the rest of the paper, MSFP will follow a sign-magnitude format and has an 8-bit shared exponent. The default bounding-box size is 16 unless explicitly mentioned otherwise.
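Continuing the emulation sketch above, the MSFP dot product of two bounding-boxes then reduces to an integer multiply-accumulate over mantissas with XORed signs, plus a single shared-exponent addition. Again, this is an illustration of the arithmetic, not the hardware datapath; it assumes the msfp_encode helper from the previous sketch is in scope.

```python
import numpy as np  # msfp_encode from the sketch above is assumed to be defined

def msfp_dot(a, b, mantissa_bits=7):
    """Dot product of two MSFP-encoded bounding-boxes: fixed-point multiply-accumulate
    on integer mantissas, one XOR per sign pair, and one shared-exponent addition."""
    s_a, e_a, m_a = msfp_encode(a, mantissa_bits)
    s_b, e_b, m_b = msfp_encode(b, mantissa_bits)
    acc = int((np.where((s_a ^ s_b) == 1, -1, 1) * m_a * m_b).sum())
    # One exponent add per bounding-box (2^(e_a + e_b)), adjusted for the mantissa scaling.
    return acc * 2.0 ** (e_a + e_b - 2 * mantissa_bits)

rng = np.random.default_rng(0)
a, b = rng.standard_normal(16), rng.standard_normal(16)
print(msfp_dot(a, b), float(a @ b))   # agree up to quantization error
```

The gap between the two printed values is the block quantization error, which grows as the mantissa width shrinks, mirroring the encoding-efficiency trends discussed below.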
We will show in Section 4 that a bounding-box size of 16 provides a reasonable balance between inference cost and accuracy across various DNN benchmarks. MSFP12's MAC circuit size is smaller even than Int4's. MSFP uses a sign-magnitude mantissa format, which costs less area and energy compared to the conventional two's complement integer format (assuming equal mantissa bit-width). 3 MSFP configuration In this section, we first discuss the effects of different configuration settings for the MSFP datatype. We conclude the section with the MSFP quantization pipeline for DNN workloads. Bounding-box size. The granularity of values which share an exponent (bounding-box size) is an important factor for both model accuracy and hardware cost. Sharing an exponent among fewer values improves the encoding efficiency, and sharing exponents amongst a larger number of values helps keep hardware costs down. Figure 3 illustrates the Kullback-Leibler (KL) divergence between MSFP encoding and Float32 encoding for various bounding-box sizes and mantissa bit-widths in a layer sampled from ResNet-50. The KL divergence between two encoding formats P and Q is defined as $KL(P \parallel Q) = \sum_{x \in X} P(x) \log\left(\frac{P(x)}{Q(x)}\right)$. Intuitively, a lower KL divergence translates to a lower discrepancy between data distributions before and after quantization, thus resulting in better accuracy. As shown in Figure 3, with fewer mantissa bits, smaller bounding-box sizes are required to keep the quantization error under control. In practice, we found a bounding-box size of 16-128 to be effective in preserving the accuracy while incurring a moderate hardware cost. Bounding-box shape. The shape of a bounding-box impacts the computing cost and ultimate model accuracy. While clustering similar-magnitude values to create pertinent bounding-boxes will reduce quantization error, tracking exponents for arbitrary bounding-box shapes is expensive. Figure 4 presents several hardware-friendly options to partition matrices into bounding-boxes. Consider a right-hand matrix multiply y = xW, where x is a row-vector and W is a matrix. The simplest approach is to treat the entire matrix W as a single bounding-box. This is a common approach used in prior works [7]. Even though such a coarse-grained approach incurs a low hardware overhead, it can lead to severe accuracy loss due to outliers and needs careful re-calibration per benchmark. In a right-hand matrix multiply, dot products are performed between the row vector x and columns of the matrix W. Another natural boundary is to treat each column of the matrix W as a separate bounding-box. By aligning bounding-boxes to the columns of the matrix, all dot products are still between a pair of bounding-boxes which can be calculated using fixed-point arithmetic. Large matrices typically are broken down into small tiles that fit into the limited hardware resources of a kernel. Thus, one may more effectively split the computation into finer-grained regions that align with hardware tiles (see the right-most images in Figure 4). We chose to work with tile-based partitioning to obtain a balance between accuracy and hardware cost. A similar strategy is applied to convolution layers by sharing the exponent along the channel depth. Encoding efficiency. The encoding efficiency of MSFP depends on two main factors: bit-width and bounding-box size. Figure 5 demonstrates the expected value of the Quantized Noise to Signal Ratio (QNSR) as a function of bit-width for the MSFP format.
In particular, the expected value of QNSR is defined as $E\left[\frac{Q(x) - x}{x}\right]$, where x is a random entry of a random tensor X and Q(x) is the corresponding quantized value. To measure MSFP encoding efficiency, we considered thousands of random tensors sampled from parameters of different neural networks and report the average acquired QNSRs (for the output of a dot-product) in Figure 5. The overall variation across different tensors is shown using the error bars. As demonstrated, adding 1 bit to the mantissa decreases QNSR by almost 3.2dB. Figure 5 further illustrates MSFP encoding efficiency as a function of bounding-box size. In this experiment, we used 6 bits of mantissa, 1 sign bit, and 8 bits of shared exponent for conversion to the MSFP format (i.e., MSFP15). As shown, doubling the block size increases the QNSR by approximately 0.52dB. The same trend is observed with other mantissa bit-widths. Quantization pipeline and deployment. In this paper, we focus on efficient inference of pre-trained deep neural networks. Although quantization-aware training of DNNs (e.g., with injected quantization noise) can lead to better accuracy [8, 9], it would be impractical to force all users of a DNN inferencing platform to train their models with quantization from scratch. Instead, pre-trained floating point weights are directly quantized into MSFP, either offline or online within the hardware accelerator. Activation tensors are converted to MSFP in-situ on the hardware. At ultra-narrow bit-width, we found that a few steps (as low as 1 epoch) of model fine-tuning can help improve the accuracy of the quantized model. To fine-tune the network, we use the conventional training procedure based on stochastic gradient descent. During the forward pass, weights and activations are quantized to MSFP and the loss is calculated with the introduced quantization errors. A straight-through estimator was used in the back-propagation for the gradients of the quantization operators. The learning rate should be small during this fine-tuning phase, typically equal to the final learning rate used for the original Float32 training. MSFP was deployed on project Brainwave [10, 11], a datacenter-scale DNN inferencing service using networked FPGAs. Each Brainwave FPGA is a self-contained neural processing unit that has custom tensor-cores. Although the underlying configuration (i.e., bounding-box size and shape) can be re-configured based on the application requirements, we opt to use a fixed configuration in our experiments for simplicity (see Section 4). We emphasize that all data conversions and the computation of the shared exponent are handled in-situ on hardware without requiring users to do any pre-processing. 4 Experiments To measure accuracy, the impact of MSFP on DNN inference was modeled in both Tensorflow and Pytorch using a custom library. This MSFP library was validated against our hardware for high-fidelity emulation. For image classification, the majority of experiments focused on the ImageNet benchmark. The models used include ResNet-50 [12], ResNet-101, ResNet-152, Inception-v3 [13], Inception-v4 [14], MobileNet_V1_1.0_224 [15, 16], VGG16 [17], VGG19, and EfficientNet-EdgeTPU (-S, -M, and -L) [18, 19]. For transformer-based models, a pre-trained BERT-base [1] was used, and the accuracy of MSFP was evaluated on two downstream tasks: SQuAD (question answering) and MRPC (paraphrase detection). In addition, two proprietary RNN-based models named Production-DS and Production-DR were tested.
Production-DS is a search relevance model that includes four GRUs with state size 500. Production-DR is a machine reading comprehension model that includes eight LSTMs with hidden dimension 100. Conversion to MSFP was applied to computation-intensive layers as discussed in Section 2. Element-wise scalar operations such as activation functions were performed in Float16. A bounding-box size of 16 was used unless otherwise specified. CNN models are typically used as backbone feature extractors in accelerated cloud-based applications. As such, in those benchmarks, MSFP was applied to the main backbone convolution layers only and the last fully-connected layer was kept in Float32. The last layer is usually run on commodity hardware and not the custom accelerator. Table 2 shows the normalized accuracy for a variety of different models. The accuracy values are normalized with respect to the Float32 counterpart. For image classification models, the metric is top-1 accuracy. Production-DS is evaluated using an area-under-curve (AUC) metric, BERT-MRPC is a classification task based on the classic accuracy metric, and Production-DR and BERT-SQuAD use the F1 score [20]. MSFP16 enables instant quantization while preserving the Float32 accuracy across various benchmarks without any fine-tuning or ad-hoc optimizations such as clipping or calibration. Model fine-tuning further enables pushing down the required bit-width for different benchmarks. Note that the quantization fine-tuning step is fairly low overhead (1-10 epochs) without any hyper-parameter tuning. We used the same learning rate as the final stage of the original Float32 training for fine-tuning. As shown in Table 2, CNN-based models typically require higher bit-widths to stay within 1% of the floating point result compared to RNNs and transformers. Note that even for CNN-based models one can drop the bit-width for weights to MSFP12 and still maintain high accuracy with MSFP as long as the activations are computed with MSFP13 or higher. Table 3 shows the results for using different bit-widths for weights and activations for the ResNet-50 benchmark. We use a bounding-box size of 128 in this experiment, which yields even better arithmetic density compared to the default bounding-box size of 16. In general, we have observed that using more bits for activations produces higher accuracy than using more bits for weights. This paper focuses on uniform quantization. Mixed precision inference [21] with MSFP is an interesting extension for future work. Figure 6 compares the accuracy versus relative multiplier density for different numerical formats. As shown, MSFP has superior performance while delivering high accuracy across various benchmarks. We would like to emphasize that no special optimization (such as manual clipping, extensive hyper-parameter tuning, or quantization-aware training from scratch) is applied to boost accuracy. The same recipe is applied to all computer vision and natural language processing benchmarks. 5 Related work DNN inference with low bit-width arithmetic involves mapping a continuous set of values onto a discrete lattice (a.k.a. quantization). Over the past few years, a large body of work has looked into quantized DNN inference with the fixed-point format [22, 23, 24, 25, 26, 7, 27, 28]. Outliers and irregular distributions, however, create a challenge for fixed-point quantization [29, 27, 30].
On the one hand, a uniform fixed-point quantization scheme that allocates lattice points for outliers will have fewer points available for dense portions of the distribution, yielding large errors even though it incurs a low circuit footprint on custom silicon. On the other hand, focusing the quantization lattice on the dense portions of the distribution via a code-book [31] (a.k.a. non-uniform quantization) or a posit [32, 33] datatype can potentially reduce quantization error but requires radical changes to the arithmetic circuitry that are unlikely to be adopted for industry-scale deployment. Finding the right balance between accuracy and hardware complexity is an active area of research. Despite several promising results on a few benchmarks [7, 30], the use of fixed-point arithmetic is considered a high-friction choice for large-scale production, as it usually requires significant developer investment, hyper-parameter tuning, and/or re-calibration to adapt the proposed technique to a new domain. NVIDIA recently introduced a new software tool (called TensorRT) to adaptively transform an input Float32 model into INT8 [34]. TensorRT requires model re-calibration and manual adjustments such as clipping to regain accuracy across various benchmarks. Quantizing below 8 bits with an integer format often results in a significant accuracy drop without significant model re-training. MSFP addresses the aforementioned challenges by dividing the values in a tensor into fine-grained regions (bounding boxes) to limit the effect of outliers and irregularly distributed values. By independently quantizing subsets of a tensor, the probability that any given bounding-box suffers from outlier effects decreases. Compared to uniform and non-uniform quantization, MSFP is a hybrid quantization approach. Within each bounding region, a locally scaled uniform quantization is deployed to capture the distribution of each subset. The scaling factors used for different bounding regions, however, are independent of one another, resembling the structure of non-uniform quantization. This hybrid approach lets us use a low-overhead uniform lattice for each localized bounding-box, at a low hardware cost, while preserving a high level of accuracy by better taking care of local irregularities. The idea of using shared exponents to enable computing with a precision approaching floating point while using a fixed-point processor [35, 36, 37] is not new in computer architecture or statistics. The pertinent parameter state space for DNN workloads, however, is relatively large and needs its own study. In this work, we carefully characterized the state space for different operations involved in DNN workloads. Our study led to identifying a parameterization that works robustly across various types of neural networks. We further provide an accompanying extensible ISA to process DNN graphs and map pertinent operations to custom accelerators without requiring users to do any kind of pre-processing. Our hardware/algorithm co-designed approach lets us significantly drive down the cost of DNN inference. Through extensive evaluations, we demonstrate the potential of MSFP in the narrow bit-width regime. MSFP pushes the limits of narrow precision inferencing and enables operating at a new accuracy-cost Pareto frontier. Our use of a shared exponent in MSFP shares similarity to FlexPoint [7].
FlexPoint, however, uses coarse-grained scaling factors (at the granularity of an entire layer) based on tensor-wide statistics, which can lead to a significant accuracy drop (in the presence of outliers) or a high-friction pass to re-calibrate for new benchmarks [38]. In this paper, we show that fine-grained scaling is the key to enabling high-performance narrow-precision quantization. In addition, unlike FlexPoint, all conversions to MSFP (including the selection of the shared exponent) are automatically handled on the fly, in-situ on our custom hardware, without requiring pre-processing of activations/weights. We emphasize that the primary objective of designing MSFP is to accommodate a higher arithmetic density on a single node, not necessarily memory compression. While compression techniques such as pruning [31, 39, 40, 41], knowledge distillation [42, 43], weight sharing [44], or hashing [45, 46] result in promising memory compression, they may not necessarily yield a better arithmetic density and thus higher throughput and lower inference cost. In addition, this line of work requires changes to the topology of the pertinent model and usually suffers from low development velocity and a high re-training cost to recover the original accuracy. 6 Conclusion We presented Microsoft Floating Point as a robust datatype for low-cost DNN inference. Variants of MSFP together enable operating at a new accuracy-cost Pareto frontier compared to a collection of current standards. MSFP-based inference eliminates the need for extensive model re-calibration or ad-hoc optimization to preserve target accuracy. MSFP can effectively encode a wide range of tensors across different application domains, including vision and language models. MSFP is deployed at datacenter scale and has been used to successfully ship over a dozen models. 7 Broader impact This paper opens a new axis for the growing research in quantized DNN inference. It challenges the current practice in the field regarding the choice of numerical format and sheds light on the importance of a holistic co-design of hardware architecture and algorithms. This paper further highlights the importance of generalization in designing next-generation standard quantization techniques to minimize non-recurring engineering cost and ensure ease of use across various classes of models. Large-scale cost-efficient inference over DNNs enables an ever-increasing number of AI applications in consumer and enterprise products. MSFP enables inferencing with larger and more powerful DNN models in scenarios that require very high rates of inference such as web search, enterprise search, and email search. Other scenarios such as real-time recommendations, AI-powered text auto-completion (e.g., auto-suggestion, smart compose), and conversational interfaces also require high inference rates and benefit from inferencing with the MSFP format. Acknowledgments and Disclosure of Funding We thank the anonymous reviewers and program committee for their feedback.
1. What is the main contribution of the paper regarding numerical representation and its impact on low-power or high-speed inference? 2. What are the strengths of the proposed approach compared to traditional block floating point or Flexpoint methods? 3. What are some potential weaknesses or limitations of the method, particularly in terms of representing zero, implementing ReLU, handling odd sizes, and performing strided or à trous convolution? 4. How does the reviewer assess the novelty and applicability of the work in low-power or high-speed inference, especially in comparison to integer quantization methods? 5. Are there any questions or concerns regarding the choice of how the bounding box structure is applied to linear algebra computations, or the lack of clarity in certain aspects of the arithmetic?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper shows a grouped numerical format that represents contiguous values in tensors along some dimension with a shared exponent, similar to traditional block floating point / Flexpoint (as the authors mention), except with controllable granularity and direction of the sharing. Since a group of floating point values being operated upon shares the same exponent, this avoids issues with significand alignment for summation (for inner product) and allows for a tradeoff between encoding the dynamic range and the significand precision. Given relatively high dynamic range (8 bits of base-2 exponent here), the authors find that in typical applications the significand widths can be decreased significantly, which leads to quadratically smaller multiplication hardware and linearly smaller adders. The overall result is that a BBFP processing element/ALU can be much higher density / lower power (and probably lower latency) than the alternatives. While the choice of how the bounding box structure is applied to linear algebra computations is somewhat up in the air with this paper, the authors show that when applied to a wide range of NN inferencing tasks, performance is quite acceptable, with minimal fine-tuning or other adjustments to the network. Generally, this work seems applicable to low-power or high-speed inference due to this increase in density, and seems superior to integer quantization methods in this regard. Strengths I believe this work provides a novel way to trade off dynamic range with significand precision that is part way between the Flexpoint approach and the prefix-free encoding approach within single scalar values such as the posit design. The Flexpoint approach similarly can be applied to reduce multiplier and adder sizes, but appears to suffer too many problems due to the granularity of the shared exponent. The posit-style approach makes better use of the bits for matching the data distribution of encoded scalar values (a higher-entropy encoding than typical fixed-width fields), but the circuits required are more complicated and only increase adder and multiplier sizes. While memory reduction was not the primary goal of the authors, this approach also has the side benefit of a reduction in needed bandwidth to the circuitry due to a 4-7x reduction in encodings. The greatest benefit is in the ALU sizes, especially for inner product, and the advantage seems pretty clear to me. This approach is another technique that is applicable for developing either high-throughput or lower-power NN inferencing hardware. Weaknesses Some of the points I'm listing here as weaknesses are more a lack of clarity in how this arithmetic works than weaknesses per se. Can zero be represented at all in this format? Typical floating point formats presume an implicit leading integer 1 for the significand (with the remainder of the bits being the fractional part of the significand, and the exception being IEEE 754 subnormal encoding which gives an implicit leading integer 0 for the significand). Presuming IEEE-style usage, zero can only be represented if all values in the bounding box are zero (the shared exponent is zero, and all significand bits are zero). Or is zero approximated by the smallest representable value within the shared exponent? How would, say, ReLU be implemented then? It is unclear how the bounding box shapes are chosen or how they are aligned/tiled.
Is it ever the case that you have operations between mismatched bounding box sizes due to different tiling patterns, or must the direction of inner product always be tiled in the same way? How are odd sizes handled (a trailing bounding box with the remaining values ignored?), which may be produced by certain convolution kernel or pooling sizes? This is not really described at all save for a cursory comment in Figure 3. How would, say, strided or à trous convolution be implemented? Is this done by a separate scatter/gather engine on top of this to restructure data, or are these operations possible at all? I understand how inner product would work, but how does pointwise addition/subtraction work (e.g., as seen in a residual network)? Is significand alignment performed for all values based on exponent difference? What happens in the case of carries (some scalar values want to increment the exponent, while others do not, or a subtraction wants to decrement the exponent)? When forming a bounding box for the results of many inner products (e.g., C = AB, where the scalar entries of C are the results of multiple BBFP inner products), what happens when you attempt to repackage C into new bounding boxes with a shared exponent? How would, e.g., transcendental functions like sigmoid, log or exp be implemented using this format and this hardware design? If these need to be farmed out to separate hardware that did not maintain the bounding box approach, then this would penalize network designs that use such operations frequently. What hardware overhead is needed to maintain the extent of bounding boxes, or to repackage single scalar results (e.g., the scalar entries of C as results of multiple BBFP inner products in C = AB above)?
NIPS
1. What is the focus and contribution of the paper regarding numerical formats for deep learning inference? 2. What are the strengths of the proposed approach, particularly in its evaluation and comparison with other formats? 3. Do you have any concerns or suggestions regarding the definition and measurement of quantization noise? 4. What are some limitations of the paper regarding the consideration of other numerical formats and the comparison with them?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes a variant of the block floating point numerical format (rebranded as Bounding-Box Floating-Point (BBFP)). In contrast with previous works, the window size (bounding-box size) over which the exponent is shared is parameterized. The inference performance of deep neural networks, recurrent neural networks, and transformer models using this numerical format is evaluated and compared with a 32-bit floating-point format. Moreover, the arithmetic density of this numerical format and of other numerical formats such as Bfloat16, INT8, and INT4 is discussed. Strengths 1- The proposed numerical format is evaluated on a large and varied set of benchmarks, which means it can be deployed for different models. 2- Presenting the relationship between the Gaussian quantization noise and the Bounding-Box Floating-Point quantization noise in Table 2 is interesting. Weaknesses 1- Is the QNSR defined by the power of the signal (x^2) and noise (N^2) rather than the signal and noise themselves? I also suggest using QSNR to avoid negative values, which is more readable. 2- This new numerical format is compared with FP32 across various benchmarks. Comparisons with other numerical formats such as AdaptivFloat [1], posit, and uniform and block floating point at the same number of bits are missing. [1] Tambe, Thierry, et al. "AdaptivFloat: A Floating-Point Based Data Type for Resilient Deep Learning Inference." arXiv preprint arXiv:1909.13271 (2019).
NIPS
1. What is the focus and contribution of the paper regarding efficient quantization of pre-trained DNN weights and activations?
2. What are the strengths of the proposed approach, particularly in terms of memory and energy efficiency?
3. What are the weaknesses of the paper, especially regarding fine-tuning and novelty compared to prior works?
4. How does the reviewer assess the clarity, quality, and significance of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper describes a technique for efficient quantization of pre-trained DNN weights and activations. Typical floating point representations comprise a sign bit, an exponent and a mantissa. The key idea of this paper is to share the exponent across a group of elements (a so-called bounding box). This scheme reduces both the number of bits required to represent the numbers in the box as well as the arithmetic required for dot product operations. If the boxes are designed so that elements in the same box have roughly the same scale, then the accuracy of the resulting DNN classification will not be adversely affected. The paper describes the proposed representation (BBFP) in a lot of detail, including how they design the size and shape of the bounding box, as well as how they perform fine-tuning to improve the accuracy when the number of mantissa bits is made extremely small. Finally, the paper presents a lot of experimental results on different neural networks (CNNs, RNNs and Transformers) to quantify the loss in accuracy relative to the gains in terms of memory and arithmetic density. Strengths DNN models are becoming widely used in production systems and many different groups are developing custom hardware to make such systems faster and/or more energy efficient. The custom floating point format proposed in this paper can be used to reduce the area and energy footprint of DNNs and thus is of significant interest to the NeurIPS community. The BBFP scheme is clearly described mathematically and the experimental improvements over FP32 in terms of memory and arithmetic density at the same accuracy level are impressive and significant. Weaknesses Very little space is dedicated to describing the fine-tuning part of the pipeline. In particular, it was not clear to me what using a "straight-through estimator for the BBFP quantization" during the backward pass means. It would also improve the paper to provide some quantitative comparison of this fine-tuning effort vs. whatever is needed to get fixed-point schemes (e.g. INT8) working well. My other main concern is novelty. The authors indicate that the idea of sharing the exponent across blocks of elements is not, by itself, novel. Previous work (Flexpoint) has studied the effect of sharing the exponent across entire tensors. The authors argue that this is too coarse an approach and that the accuracy suffers accordingly. In that sense, the main novelty of BBFP seems to be how the bounding boxes are defined, which seems to be done in a relatively heuristic manner (e.g. Figure 3). In Section 5 the authors state that another difference between their solution and Flexpoint is that "all conversions to BBFP are automatically handled on the fly in-situ on our custom hardware". It might add to the novelty if these on-the-fly conversions were described somewhere in the paper.
NIPS
1. What is the main contribution of the paper regarding neural network inference?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical soundness and experimental results?
3. What are the weaknesses of the paper, especially regarding reproducibility and hardware design?
4. Do you have any questions about the paper's methodology or missing details?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a new floating-point format for neural network inference. The proposed format shares the exponent across each chunk of several numbers. Contributions include: 1. the proposed numerical format and some empirical study on the selection of hyperparameters; 2. a co-designed hardware (?) with less MAC area; 3. accuracy results on a wide variety of models. Strengths This paper addresses the important problem of accelerating the inference of neural networks. The idea of sharing an exponent per chunk is theoretically sound. The experimental result is strong. It shows a significant reduction of memory and arithmetic cost, under minor accuracy loss, on a wide variety of tasks. Weaknesses I think the biggest problem of this paper is reproducibility. I think the most attractive point of this paper is the hardware part, which leads to a significantly lower cost. However, except for the results in Tables 1 & 3, I didn't find any text describing the design of the hardware, how to implement matrix multiplication with the proposed format on the hardware, or why the proposed numerical format leads to so much cost reduction. Thus this paper looks incomplete from my perspective. Furthermore, the algorithm part also misses some details, for example, how is the KL divergence in Fig. 2 defined? What is the bounding box size along each spatial dimension (batch size, channel, height, width)?
NIPS
Title More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation Abstract Current state-of-the-art models for video action recognition are mostly based on expensive 3D ConvNets. This results in a need for large GPU clusters to train and evaluate such architectures. To address this problem, we present a lightweight and memory-friendly architecture for action recognition that performs on par with or better than current architectures by using only a fraction of resources. The proposed architecture is based on a combination of a deep subnet operating on low-resolution frames with a compact subnet operating on high-resolution frames, allowing for high efficiency and accuracy at the same time. We demonstrate that our approach achieves a reduction of 3 ∼ 4 times in FLOPs and ∼ 2 times in memory usage compared to the baseline. This enables training deeper models with more input frames under the same computational budget. To further obviate the need for large-scale 3D convolutions, a temporal aggregation module is proposed to model temporal dependencies in a video at very small additional computational cost. Our models achieve strong performance on several action recognition benchmarks including Kinetics, Something-Something and Moments-in-time. The code and models are available at https://github.com/IBM/bLVNet-TAM. 1 Introduction Current state-of-the-art approaches for video action recognition are based on convolutional neural networks (CNNs). These include the best performing 3D models, such as I3D [1] and ResNet3D [2], and some effective 2D models, such as Temporal Relation Networks (TRN) [3] and Temporal Shift Modules (TSM) [4]. A CNN-based model usually considers a sequence of frames as input, obtained through either uniform or dense sampling from a video [1, 5]. In general, longer input sequences yield better recognition results. However, one problem arising for a model requesting more input frames is that the GPU resources required for training and inference also significantly increase in both memory and time. For example, the top-performing I3D models [1] on the Kinetics [6] dataset were trained with 64 frames on a cluster of 32 GPUs, and the non-local network [7] even uses 128 frames as input. Another problem for action recognition is the lack of effective methods for temporal modeling when moving away from 3D spatio-temporal convolutions. While 2D convolutional models are more resource-friendly than their 3D counterparts, they lack expressiveness over time and thus cannot take much benefit from richer input data. In this paper, we present an efficient and memory-friendly spatio-temporal representation for action recognition, which enables training of deeper models while allowing for more input frames. The first part of our approach is inspired by the Big-Little-Net architecture (bLNet [8]). We propose a new video architecture that has two network branches with different complexities: one branch processing low-resolution frames in a very deep subnet, and another branch processing high-resolution frames in a compact subnet. The two branches complement each other through merging at the end of each network layer. With such a design, our approach can process twice as many frames as the baseline model without compromising efficiency. We refer to this architecture as "Big-Little-Video-Net" (bLVNet).
In light of bLVNet's limited ability to capture temporal dependencies, we further develop an effective method to exploit temporal relations across frames through a so-called "Depthwise Temporal Aggregation Module" (TAM). The method enables the exchange of temporal information between frames by weighted channel-wise aggregation. This aggregation is made learnable with 1×1 depthwise convolution, and implemented as an independent network module. The temporal aggregation module can be easily integrated into the proposed network architecture to progressively learn spatio-temporal patterns in a hierarchical way. Moreover, the module is extremely compact and adds only negligible computational costs and parameters to bLVNet. Our main contributions lie in the following two interconnected aspects: (1) we propose a lightweight video architecture based on a dual-path network to learn video features, and (2) we develop a temporal aggregation module to enable effective temporal modeling without the need for computationally expensive 3D convolutions. We evaluate our approach on the Kinetics-400 [6], Something-Something [9] and Moments-in-time [10] datasets. The evaluation shows that bLVNet-TAM successfully allows us to train action-classification models with deeper backbones (i.e., ResNet-101) as well as more (up to 64) input frames, using a single compute node with 8 Tesla V100 GPUs. Our comprehensive experiments demonstrate that our approach achieves highly competitive results on all datasets while maintaining efficiency. In particular, it establishes new state-of-the-art results on Something-Something and Moments-in-time by outperforming previous approaches in the literature by a large margin. 2 Related Work Activity classification has always been a challenging research topic, with first attempts reaching back almost two decades [11]; deep-learning architectures nowadays achieve tremendous recognition rates on various challenging tasks, such as Kinetics [1], ActivityNet [12], or Thumos [13]. Most successful architectures in the field are usually based on the so-called two-stream model [14], processing a single RGB frame and optical-flow input in two separate CNNs with a late fusion in the upper layers. Over the last few years, many approaches have extended this idea by processing a stack of input frames in both streams, thus extending the temporal window of the architecture from 1 to up to 128 input frames per stream. To further capture the temporal correlation in the input over time, those architectures usually make use of 3D convolutions as, e.g., in I3D [1], S3D [15], and ResNet3D [2], usually leading to a large-scale parameter space to train. Another way to capture temporal relations has been proposed by [5], [3], and [4]. Those architectures mainly build on the idea of processing videos in the form of multiple segments, and then fusing them at the higher layers of the networks. The first approach with this pattern was the so-called Temporal Segment Networks (TSN) proposed by Wang et al. [5]. The idea of TSN has been extended by Temporal Relation Networks (TRN) [3], which apply the idea of relational networks to the modeling of temporal relations between observations in videos. Another approach for capturing temporal contexts has been proposed by Temporal Shift Modules (TSM) [4]. This approach shifts part of the channels along the temporal dimension, thereby allowing for information to be exchanged among neighboring frames. More complex approaches have been tried as well, e.g.
in the context of nonlocal neural networks [7]. Our temporal aggregation module is based on depthwise 1×1 convolutions to capture temporal dependencies across frames effectively. Separate convolutions are considered in approaches such as [15, 16] to reduce costly computation in 3D convolutional models. More recently, SlowFast Network [17] uses a dual-pathway network to process a video at both slow and fast frame rates. The fast pathway is made lightweight, similar to Little Net in our proposed architecture. However, our approach reduces computation based on both a lightweight architecture and low image resolution. Furthermore, the recent work Timeception [18] applies the concept of “Inception" to temporal domain for capturing long-range temporal dependencies in a video. The Timeception layers involve group convolutions at different time scales while our TAM layers only use depthwise convolution. As a result, the Timeception has significantly more parameters than the TAM (10% vs. 0.1% of the total model parameters). 3 Our Approach We aim at developing efficient and effective video representations for video understanding. To address the computational challenge imposed by the desired long input to a model, we propose a new video architecture based on the Big-Little network (bLNet) [8] for learning video features.We first give a brief recap of bLNet in Section 3.1. We then show, in Section 3.2, how to extend bLNet to an efficient video architecture that allows for seeing more frames with less computation and memory. An example of the proposed network architecture can be found in the supplementary material (Section A). To make temporal modeling more effective in our approach, we further develop a temporal aggregation module (TAM) to capture short-term as well as long-term temporal dependencies across frames. Our method is implemented as a separate network module and integrated with the proposed architecture seamlessly to learn a hierarchical temporal representation for action recognition. We detail this method in Section 3.3. 3.1 Recap of Big-Little Network The Big-Little Net, abbreviated as bLNet in [8], is a CNN architecture for learning strong feature representations by combining multi-scale image information. The bLNet processes an image at different resolutions using a dual-path network, but with low computational loads based on a clever design. The key idea is to have a high-complexity subnet (Big-Net) along with a low-cost one (Little-Net) operate on the low-scale and high-scale parts of an image in parallel. By such a design, the two subnets learn features complementary to each other while using less computation. The two branches are merged at the end of each network layer to fuse the low-scale and high-scale information so as to form a stronger image representation. The bLNet approach demonstrates improvement of model efficiency and performance on both object and speech recognition, using popular architectures such as ResNet, ResNeXt and SEResNeXt. More details on bLNet can be found in the original paper. In this work, we mainly adopt bLResNet-50 and bLResNet-101 as backbone for our proposed architecture. 3.2 Big-Little Video Network as Video Representation We describe our architecture in the context of 2D convolutions. However our approach is not specific to 2D convolutions and potentially extendable to any architecture based on 3D convolutions. The approach of Temporal Segment Networks (TSN) [5] provides a generic framework for learning video representations. 
With a shared 2D ConvNet as backbone, TSN performs frame-level predictions and then aggregates the results into a final video-level prediction (Fig. 1a). The framework of TSN is efficient and has been successfully adopted by some recent approaches for action recognition such as TRN [3] and TSM [4]. Given its efficiency, we also choose TSN as the underlying video framework for our work. Let $F = \{f_t \mid t = 1, \dots, n\}$ be a set of sampled input frames from a video. We divide $F$ into two groups, namely odd frames $F_{odd} = \{f_k \in F \mid k \bmod 2 \neq 0\}$ at half of the input image resolution, and even frames $F_{even} = \{f_k \in F \mid k \bmod 2 = 0\}$ at the input image resolution. For convenience, from now on, $F_{odd}$ is referred to as big frames and $F_{even}$ as little frames. Note that the big branch can take either of a pair of frames as input, with the other frame going to the little branch. In TSN, all input frames are ordered as a batch of size $n$, where the $t$th element corresponds to the $t$th frame. We denote the input and output feature maps of the $t$th frame at the $k$th layer of the model by $x_t^k \in \mathbb{R}^{C \times W \times H}$ and $y_t^k \in \mathbb{R}^{C \times W \times H}$, respectively. Whenever possible, we omit $k$ for clarity. The bLNet can be directly plugged into TSN as the backbone network for learning a video-level representation. We refer to this architecture as TSN-bLNet to differentiate it from the vanilla TSN (Fig. 1b). This network fully enjoys the efficiency of bLNet, cutting the computational costs down by 1.6 ∼ 2 times according to [8]. Mathematically, the output $y_t$ can be written as $$y_t = \mathcal{F}\big(\mathrm{net}_B([x_t]_{1/2}) + \mathrm{net}_L(x_t),\ \theta_t\big). \quad (1)$$ Here $[\cdot]_s$ is an operator scaling a tensor up or down by a factor of $s$ in the spatial domain; $\mathrm{net}_B$ and $\mathrm{net}_L$ are the Big-Net and Little-Net in the aforementioned bLNet; and $\theta_t$ are the model parameters. Following [8], $\mathcal{F}$ indicates an additional residual block applied after merging the big and little branches to stabilize and enhance the combined feature representation. The architecture described above only learns features from a single frame, so there are no interactions between frames. Alternatively, we can feed the odd and even frames separately into the big and little branches so that each branch obtains complementary information from different frames. This idea is illustrated in Fig. 1c, and the output $y_t$ in this case can be expressed by $$y_t = \begin{cases} \mathcal{F}\big(\mathrm{net}_B([x_t]_{1/2}) + \mathrm{net}_L(x_{t+1}),\ \theta_t\big), & \text{if } t \bmod 2 \neq 0 \\ y_{t-1}, & \text{otherwise} \end{cases} \quad (2)$$ While the modification proposed above is simple, it leads to a new video architecture, which is called Big-Little-Video-Net, or bLVNet for short. The bLVNet differs from TSN-bLNet in two distinct ways. Firstly, without increasing any computation, it can take twice as many input frames as TSN-bLNet. We shall demonstrate the benefit of leveraging more frames for temporal modeling in Section 4. Furthermore, the bLVNet has 1.5 ∼ 2.0× fewer FLOPs than TSN while seeing twice as many frames as TSN, thanks to the efficiency of the dual-path network. Secondly, the merging of the two branches in bLVNet now happens on two different frames carrying temporal information. We call this type of temporal interaction local fusion, since it only captures temporal relations between two adjacent frames. In spite of that, local fusion gives rise to a significant performance boost for recognition, as shown later in Section 4.3.
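The following is a minimal PyTorch-style sketch of the frame pairing in Eq. (2). The module names (big_net, little_net, fuse_block) are placeholders for the corresponding bLNet components, and the snippet only illustrates how odd/even frames are routed, merged and duplicated; it is a simplified illustration under these assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BLVNetLayerSketch(nn.Module):
    """Illustrative big-little frame pairing for one layer (Eq. 2)."""

    def __init__(self, big_net: nn.Module, little_net: nn.Module, fuse_block: nn.Module):
        super().__init__()
        self.big_net = big_net        # deep subnet, fed half-resolution (odd) frames
        self.little_net = little_net  # compact subnet, fed full-resolution (even) frames
        self.fuse_block = fuse_block  # residual block applied after merging (the F in Eq. 2)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (n, C, H, W), ordered in time; n is assumed to be even.
        big_in = F.interpolate(frames[0::2], scale_factor=0.5,
                               mode="bilinear", align_corners=False)  # [x_t]_{1/2}
        little_in = frames[1::2]                                      # x_{t+1}
        big_out = self.big_net(big_in)
        little_out = self.little_net(little_in)
        # Up-sample the big-branch output so the two feature maps can be merged.
        big_out = F.interpolate(big_out, size=little_out.shape[-2:],
                                mode="bilinear", align_corners=False)
        fused = self.fuse_block(big_out + little_out)
        # Each fused map stands in for a pair of frames (y_t and y_{t+1} share it).
        return fused.repeat_interleave(2, dim=0)
```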
3.3 Temporal Aggregation Module Temporal modeling is a challenging problem for video understanding. Theoretically, adding a recurrent layer such as an LSTM [19] on top of a 2D ConvNet seems like a promising means to capture temporal ordering and long-term dependencies in actions. Nonetheless, such approaches are in practice not competitive with 3D ConvNets [1], which use spatio-temporal filters to learn hierarchical feature representations. One issue with 3D models is that they are heavy in parameters and costly in computation, making them hard to train. Even though some approaches like S3D [15] and R(2+1)D [16] alleviate this issue by separating a 3D convolution filter into a 2D spatial component followed by a 1D temporal component, they are in general still more expensive than 2D ConvNet models. With the efficient bLVNet architecture described above, our goal is to further improve its spatio-temporal representation by effective temporal modeling. The local fusion in bLVNet only exploits temporal relations between neighboring frames. To address this limitation, we develop a method to capture short-term as well as long-term dependencies across frames. Our basic idea is to fuse temporal information at each time instant by weighted channel-wise aggregation. As detailed below, this idea can be efficiently implemented as a network module to progressively learn spatio-temporal patterns in a hierarchical way. Let $y_t$ be the output (i.e., neural activation) of the $t$th frame $f_t$ at a layer of the network (see Eq. 2). To model the temporal dependencies between $f_t$ and its neighbors, we aggregate the activations of all the frames within a temporal range $r$ around $f_t$. A weight is learned for each channel of the activations to indicate its relevance. Specifically, the aggregation result can be written as $$\hat{y}_t = \mathrm{ReLU}\Big(\sum_{j=-\lfloor r/2 \rfloor}^{\lfloor r/2 \rfloor} w_j \otimes y_{t+j}\Big), \quad (3)$$ where $\otimes$ indicates channel-wise multiplication and $w_j \in \mathbb{R}^C$ are the weights. The $\otimes$ is defined as follows: for a vector $v = [v_1\ v_2 \cdots v_C]$ and a tensor $M = [m_1\ m_2 \cdots m_C]$ with $C$ feature channels, $v \otimes M = [v_1 * m_1\ \ v_2 * m_2 \cdots v_C * m_C]$. We implement the temporal aggregation as a network module (Fig. 2). It involves three steps: (1) apply a 1×1 depthwise convolution $r$ times to the $n$ input tensors to form an output matrix of size $r \times n$; (2) shift the $i$th row left (or right) by $|i - \lfloor r/2 \rfloor|$ positions if $i > \lfloor r/2 \rfloor$ (or $i \le \lfloor r/2 \rfloor$) and, if needed, pad leading or trailing zero tensors in the front or at the end; (3) perform temporal aggregation along the columns to generate the output. The aggregation module (TAM), highlighted as a red box in Fig. 1d, is inserted as a separate layer after the local temporal fusion in the bLVNet, resulting in the final bLVNet-TAM architecture. None of the steps in the implementation above involves costly computation, so the module is fairly fast. A node in the network initially only sees $r-1$ neighbors. As the network goes deeper, the amount of input context that the node covers grows quickly, similar to how the receptive field of a neuron is enlarged in a CNN. In this manner, long-range temporal dependencies can potentially be captured. For this reason, the temporal aggregation is also called global temporal fusion here, as opposed to the local temporal fusion discussed above. The work of TSM [4] has also applied temporal shifting to swap feature channels between neighboring frames. In such a case, TSM can be treated as a special case of our method where the weights are empirically set rather than learned from data. In Section 4.3, we demonstrate that the proposed TAM is more effective than TSM for temporal modeling under different video architectures. TAM is also related to S3D [15] and R(2+1)D [16] in that TAM is independent of spatial convolutions. However, TAM is based on depthwise convolution and thus has fewer parameters and less computation than S3D and R(2+1)D. The TAM can also be integrated into 3D convolutions such as C3D [20] and I3D [1] to further enhance the temporal modeling capability that already exists in these models. Due to the difference in how temporal data is presented between 2D-based and 3D-based models, the temporal shifting now needs to operate on feature channels within a tensor instead of on tensors themselves.
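A compact PyTorch sketch of the three steps above is given below. It assumes the temporal dimension is folded into the batch dimension, as in TSN-style models, and the parameter names are illustrative; it is a simplified rendition of Eq. (3), not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TAMSketch(nn.Module):
    """Depthwise temporal aggregation over a window of r frames (Eq. 3), simplified."""

    def __init__(self, channels: int, n_frames: int, r: int = 3):
        super().__init__()
        self.n_frames = n_frames
        self.r = r
        # One learnable weight per channel and per temporal offset (the r depthwise 1x1 convs).
        self.weights = nn.Parameter(torch.zeros(r, channels))
        with torch.no_grad():
            self.weights[r // 2] = 1.0  # initialize to identity (keep the current frame)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * n_frames, C, H, W) with the frames of each clip stored contiguously.
        nt, c, h, w = x.shape
        x = x.view(-1, self.n_frames, c, h, w)           # (batch, T, C, H, W)
        out = torch.zeros_like(x)
        for i in range(self.r):
            offset = i - self.r // 2
            # Channel-wise scaling (the depthwise 1x1 convolution of step 1).
            scaled = x * self.weights[i].view(1, 1, c, 1, 1)
            # Temporal shift with zero padding (step 2), then accumulate (step 3).
            if offset > 0:
                out[:, :-offset] += scaled[:, offset:]
            elif offset < 0:
                out[:, -offset:] += scaled[:, :offset]
            else:
                out += scaled
        return F.relu(out).view(nt, c, h, w)
```

Initializing the weights so that only the current frame is kept makes the module start as an identity mapping, which is one simple way to insert it into a pretrained backbone without disturbing its initial accuracy.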
4 Experiments 4.1 Experimental Setup Datasets. We evaluate our approach on three large-scale datasets for video recognition, including the widely used Something-Something (Version 1 and Version 2) [9], Kinetics-400 [6] and the recent Moments-in-time dataset [10]. They are herein referred to as SS-V1, SS-V2, Kinetics-400 and Moments, respectively. Something-Something is a dataset containing videos of 174 types of predefined human-object interactions with everyday objects. Versions 1 and 2 include 108k and 220k videos, respectively. This dataset focuses on human-object interactions in a rather simple setup with no scene contexts to be exploited for recognition. Instead, temporal relationships are as important as appearance for reasoning about the interactions. Because of this, the dataset serves as a good benchmark for evaluating the efficacy of temporal modeling, such as that proposed in our approach. Kinetics-400 [6] has emerged as a standard benchmark for action recognition after UCF101 [21] and HMDB [22], but on a significantly larger scale. The dataset consists of 240k training videos and 20k validation videos, with each video trimmed to around 10 seconds. It has a total of 400 human action categories. Moments-in-time [10] is a recent collection of one million labeled videos, involving actions from people, animals, objects or natural phenomena. It has 339 classes and each video clip is trimmed to 3 seconds. Data Augmentation. During training, we follow the data augmentation used in TSN [5] to augment the video with different sizes spatially and flip the video horizontally with 50% probability. Furthermore, since our models are finetuned from ImageNet-pretrained weights, we normalize the data with the mean and standard deviation of the ImageNet images. The model input is formed by uniform sampling, which first divides a video into n uniform segments and then selects one random frame from each segment as the input. During inference, we resize the smaller side of an image to 256 and then crop a centered 224×224 region. The center frame of each segment in uniform sampling is picked as the input. On Something-Something and Moments, our results are based on the single-crop and single-clip setting. On Kinetics-400, we use the common practice of multi-crop and multi-clip evaluation.
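A minimal sketch of the uniform sampling described above is shown below (a simple illustration in Python, not the released data loader; frame decoding is assumed to happen elsewhere).

```python
import random

def uniform_sample_indices(num_video_frames: int, n_segments: int, training: bool) -> list:
    """Pick one frame index per uniform segment (random during training, center at inference)."""
    seg_len = num_video_frames / n_segments
    indices = []
    for k in range(n_segments):
        start = int(k * seg_len)
        end = max(int((k + 1) * seg_len), start + 1)
        if training:
            indices.append(random.randrange(start, end))  # random frame within the segment
        else:
            indices.append((start + end - 1) // 2)         # center frame of the segment
    return indices

# Example: 8 segments sampled from a 120-frame clip at inference time.
print(uniform_sample_indices(120, 8, training=False))
```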
We then finetune a new model with longer input (for example, 16×2) on top of the corresponding base model, but with 25 epochs only. In this case, the initial learning rate is set to 0.01 on Something-Something and 0.005 on Kinetics and Moments. The learning rate is decreased by a factor of 10 at the 10th and 20th epochs. This strategy allows us to significantly reduce the training time needed for all the models evaluated in our experiments. All our models were trained on a server with 8 GPU cards and a total of 128 GB of GPU memory. We set the total batch size to 64 whenever possible. For models that require more memory to train, we reduce the batch size to the maximum number allowed.

4.2 Main Results

Something-Something. We first report our results on the validation set of the Something-Something datasets in Table 1 and Table 2. With a moderately deep backbone, bLResNet-50, our approach outperforms all 3D models on SS-V1 while using much fewer input frames (8×2) and being substantially more efficient. TSM [4] was the previously best approach on Something-Something. Under the same backbone (i.e., ResNet-50), our approach is better than TSM on both SS-V1 and SS-V2 while being more efficient (i.e., our 8×2 model has 1.4 times fewer FLOPs than an 8-frame TSM model). When empowered with a stronger backbone, bLResNet-101, our approach achieves even better results at 32×2 frames (53.1% top-1 accuracy on SS-V1, and 65.2% on SS-V2), establishing a new state-of-the-art on Something-Something. Notably, these results, while based on RGB information only, are superior to those obtained from the best two-stream models at no more computational cost. This strongly demonstrates the effectiveness of our approach for temporal modeling. We further evaluated our models on the test set of Something-Something. Our results are consistently better than the best results reported by the other approaches in comparison, including two-stream models.

Kinetics-400. Kinetics-400 is one of the most popular benchmarks for action recognition. Currently the best-performing models on this dataset are all based on 3D convolutions. However, it has been shown in the literature that temporal ordering in this dataset does not seem to be as crucial as RGB information for recognition. For example, as shown in S3D [15], a model trained on data in the normal time order performs well on time-reversed data on Kinetics. In accordance with this, our approach (3 crops and 3 clips) mainly performs on par with or better than the current large-scale architectures, but without outperforming them as clearly as on the Something-Something datasets, where temporal relations are more essential for an overall understanding of the video content.

Moments. We finally evaluate the proposed architecture on the Moments dataset [10], a large-scale action dataset with about three times more training samples than Kinetics-400. Since Moments is relatively new and results reported on it are limited, we only compare our results with those reported in the Moments paper [10]. As can be seen from Table 4, our approach outperforms all the single-stream models as well as the ensemble one. We hope our models provide stronger baseline results for future reference on this challenging dataset. It is also noted that our model trained with 16×2 frames produces only slightly better top-1 accuracy than the model trained with 8×2 frames.
We speculate that this has to do with the fact that the Moments clips are only 3 seconds long, so choosing a finer temporal granularity has only a limited impact on this dataset.

4.3 Ablation Studies

In this section, we conduct ablation studies to provide more insights into our main ideas.

Is temporal aggregation effective? We validate the efficacy of the proposed temporal aggregation module (TAM), which is considered a global fusion method (Section 3.3). Local fusion here refers to the branch merging in the dual-path network (Section 3.2). We compare TAM with the temporal shift module used in TSM [4] in Table 5 under two different video architectures: TSN and the bLVNet proposed in this work. TAM demonstrates clear advantages over TSM, outperforming it by over 2% under both architectures. Interestingly, the bLVNet baseline with local temporal fusion proposed here almost doubles the performance of a TSN baseline, improving the accuracy from 17.4% to 33.6%. On top of that, TAM boosts the performance by another 13% in both cases, suggesting that TAM is complementary to local fusion. This further confirms the significance of temporal reasoning on the Something-Something dataset.

Does seeing more frames help? One of the main contributions of this work is an efficient video architecture that makes it possible to train deeper models with more input frames using moderate GPU resources. Fig. 3a) shows consistent improvement of our approach on SS-V1 as the number of input frames increases. A similar trend in our results can be observed on Kinetics-400 in Table 3. On the other hand, the almost flat line from TSN suggests that a model without effective temporal modeling cannot benefit much from longer input sequences.

Memory Usage. We compare the memory usage between our approach based on bLResNet-50 and TSN based on ResNet-50. As shown in Fig. 3b), our approach is more memory-friendly than TSN, achieving a saving of ∼2 times at the same number of input frames. The larger batch size allowed for training under the same computational budget is critical for our approach to obtain better models and reduce training time.

5 Conclusion

We presented an efficient and memory-friendly video architecture for learning video representations. The proposed architecture allows for twice as many input frames as the baseline while using less computation and memory. This enables training of deeper models with richer input under the same GPU resources. We further developed a temporal aggregation method to capture temporal dependencies effectively across frames. Our models achieve strong performance on several action recognition benchmarks, and establish a new state-of-the-art on the Something-Something dataset.
1. What are the strengths and weaknesses of the paper's experimental analysis? 2. How does the reviewer assess the paper's methodology and technical contribution? 3. What are the key findings and observations from the experimental results? 4. How does the reviewer evaluate the paper's overall impact on the community? 5. Are there any suggestions or recommendations for improving the paper?
Review
Review
+ Three challenging and large-scale video datasets are used: Something-Something, Kinetics, Moments-in-time.
+ Various ablations are provided. The experimental analyses lead to very interesting observations that are useful for the community. The fact that TSN [15] does not improve with more #frames, whereas bLVNet-TAM does, is interesting to see (Fig. 3). The performance breakdown for different components in Table 5 is nice. Overall, there are many take-home messages to learn from this paper's experiments.
- In terms of methodology, the paper combines existing blocks: Big-Little-Net [7], TSN [15], TSM [4]. Therefore, its technical contribution is limited.
* Final rating: I keep my initial score, i.e. 6, after having read the rebuttal and the other reviews. I thank the authors for the clarifications provided. Despite the limited originality, I believe that the paper can be a valuable contribution to the community with its simplicity, positive results, and comprehensive experiments. I encourage the authors to incorporate the clarifications in the revised version.
NIPS
Title
More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation

Abstract
Current state-of-the-art models for video action recognition are mostly based on expensive 3D ConvNets. This results in a need for large GPU clusters to train and evaluate such architectures. To address this problem, we present a lightweight and memory-friendly architecture for action recognition that performs on par with or better than current architectures by using only a fraction of the resources. The proposed architecture is based on a combination of a deep subnet operating on low-resolution frames with a compact subnet operating on high-resolution frames, allowing for high efficiency and accuracy at the same time. We demonstrate that our approach achieves a reduction of 3∼4 times in FLOPs and ∼2 times in memory usage compared to the baseline. This enables training deeper models with more input frames under the same computational budget. To further obviate the need for large-scale 3D convolutions, a temporal aggregation module is proposed to model temporal dependencies in a video at very small additional computational cost. Our models achieve strong performance on several action recognition benchmarks including Kinetics, Something-Something and Moments-in-time. The code and models are available at https://github.com/IBM/bLVNet-TAM.

1 Introduction
Current state-of-the-art approaches for video action recognition are based on convolutional neural networks (CNNs). These include the best-performing 3D models, such as I3D [1] and ResNet3D [2], and some effective 2D models, such as Temporal Relation Networks (TRN) [3] and Temporal Shift Modules (TSM) [4]. A CNN-based model usually considers a sequence of frames as input, obtained through either uniform or dense sampling from a video [1, 5]. In general, longer input sequences yield better recognition results. However, one problem with a model requesting more input frames is that the GPU resources required for training and inference also increase significantly in both memory and time. For example, the top-performing I3D models [1] on the Kinetics [6] dataset were trained with 64 frames on a cluster of 32 GPUs, and the non-local network [7] even uses 128 frames as input. Another problem for action recognition is the lack of effective methods for temporal modeling when moving away from 3D spatio-temporal convolutions. While 2D convolutional models are more resource-friendly than their 3D counterparts, they lack expressiveness over time and thus cannot take much benefit from richer input data. In this paper, we present an efficient and memory-friendly spatio-temporal representation for action recognition, which enables training of deeper models while allowing for more input frames. The first part of our approach is inspired by the Big-Little-Net architecture (bLNet) [8]. We propose a new video architecture that has two network branches with different complexities: one branch processing low-resolution frames in a very deep subnet, and another branch processing high-resolution frames in a compact subnet. The two branches complement each other through merging at the end of each network layer. With such a design, our approach can process twice as many frames as the baseline model without compromising efficiency. We refer to this architecture as "Big-Little-Video-Net" (bLVNet).
In light of bLVNet's limited ability to capture temporal dependencies, we further develop an effective method to exploit temporal relations across frames via a so-called "Depthwise Temporal Aggregation Module" (TAM). The method enables the exchange of temporal information between frames by weighted channel-wise aggregation. This aggregation is made learnable with 1×1 depthwise convolution and is implemented as an independent network module. The temporal aggregation module can be easily integrated into the proposed network architecture to progressively learn spatio-temporal patterns in a hierarchical way. Moreover, the module is extremely compact and adds only negligible computational cost and parameters to bLVNet.

Our main contributions lie in the following two interconnected aspects: (1) we propose a lightweight video architecture based on a dual-path network to learn video features, and (2) we develop a temporal aggregation module to enable effective temporal modeling without the need for computationally expensive 3D convolutions.

We evaluate our approach on the Kinetics-400 [6], Something-Something [9] and Moments-in-time [10] datasets. The evaluation shows that bLVNet-TAM allows us to train action-classification models with deeper backbones (i.e., ResNet-101) as well as more (up to 64) input frames, using a single compute node with 8 Tesla V100 GPUs. Our comprehensive experiments demonstrate that our approach achieves highly competitive results on all datasets while maintaining efficiency. In particular, it establishes a new state-of-the-art result on Something-Something and Moments-in-time by outperforming previous approaches in the literature by a large margin.

2 Related Work

Activity classification has always been a challenging research topic, with first attempts reaching back almost two decades [11]; deep-learning architectures nowadays achieve tremendous recognition rates on various challenging tasks, such as Kinetics [1], ActivityNet [12], or Thumos [13]. Most successful architectures in the field are based on the so-called two-stream model [14], processing a single RGB frame and optical-flow input in two separate CNNs with a late fusion in the upper layers. Over the last few years, many approaches have extended this idea by processing a stack of input frames in both streams, thus extending the temporal window of the architecture from 1 to up to 128 input frames per stream. To further capture the temporal correlation in the input over time, those architectures usually make use of 3D convolutions as, e.g., in I3D [1], S3D [15], and ResNet3D [2], usually leading to a large parameter space to train. Another way to capture temporal relations has been proposed in [5], [3], and [4]. Those architectures mainly build on the idea of processing videos in the form of multiple segments and then fusing them at the higher layers of the networks. The first approach with this pattern was the Temporal Segment Network (TSN) proposed by Wang et al. [5]. The idea of TSN has been extended by Temporal Relation Networks (TRN) [3], which apply the idea of relational networks to the modeling of temporal relations between observations in videos. Another approach for capturing temporal context has been proposed by Temporal Shift Modules (TSM) [4], which shift part of the channels along the temporal dimension, thereby allowing information to be exchanged among neighboring frames. More complex approaches have been tried as well, e.g.
in the context of non-local neural networks [7]. Our temporal aggregation module is based on depthwise 1×1 convolutions to capture temporal dependencies across frames effectively. Separable convolutions are considered in approaches such as [15, 16] to reduce the costly computation in 3D convolutional models. More recently, the SlowFast Network [17] uses a dual-pathway network to process a video at both slow and fast frame rates. The fast pathway is made lightweight, similar to the Little-Net in our proposed architecture; however, our approach reduces computation based on both a lightweight architecture and a low image resolution. Furthermore, the recent work Timeception [18] applies the concept of "Inception" to the temporal domain for capturing long-range temporal dependencies in a video. The Timeception layers involve group convolutions at different time scales, while our TAM layers only use depthwise convolution. As a result, Timeception has significantly more parameters than the TAM (10% vs. 0.1% of the total model parameters).

3 Our Approach

We aim at developing efficient and effective video representations for video understanding. To address the computational challenge imposed by the desired long input to a model, we propose a new video architecture based on the Big-Little network (bLNet) [8] for learning video features. We first give a brief recap of bLNet in Section 3.1. We then show, in Section 3.2, how to extend bLNet to an efficient video architecture that allows for seeing more frames with less computation and memory. An example of the proposed network architecture can be found in the supplementary material (Section A). To make temporal modeling more effective in our approach, we further develop a temporal aggregation module (TAM) to capture short-term as well as long-term temporal dependencies across frames. Our method is implemented as a separate network module and integrated seamlessly with the proposed architecture to learn a hierarchical temporal representation for action recognition. We detail this method in Section 3.3.

3.1 Recap of Big-Little Network

The Big-Little Net, abbreviated as bLNet in [8], is a CNN architecture for learning strong feature representations by combining multi-scale image information. The bLNet processes an image at different resolutions using a dual-path network, but with a low computational load thanks to a clever design. The key idea is to have a high-complexity subnet (Big-Net) and a low-cost one (Little-Net) operate in parallel on the low-resolution and high-resolution versions of an image, respectively. By such a design, the two subnets learn features complementary to each other while using less computation. The two branches are merged at the end of each network layer to fuse the information from the two scales and form a stronger image representation. The bLNet approach demonstrates improvements in model efficiency and performance on both object and speech recognition, using popular architectures such as ResNet, ResNeXt and SEResNeXt. More details on bLNet can be found in the original paper. In this work, we mainly adopt bLResNet-50 and bLResNet-101 as backbones for our proposed architecture.

3.2 Big-Little Video Network as Video Representation

We describe our architecture in the context of 2D convolutions. However, our approach is not specific to 2D convolutions and is potentially extendable to any architecture based on 3D convolutions. The approach of Temporal Segment Networks (TSN) [5] provides a generic framework for learning video representations.
With a shared 2D ConvNet as backbone, TSN performs frame-level predictions and then aggregates the results into a final video-level prediction (Fig. 1a)). The TSN framework is efficient and has been successfully adopted by some recent approaches for action recognition such as TRN [3] and TSM [4]. Given its efficiency, we also choose TSN as the underlying video framework for our work.

Let F = {f_t | t = 1, ..., n} be a set of sampled input frames from a video. We divide F into two groups, namely the odd frames F_odd = {f_k ∈ F | k mod 2 ≠ 0} at half of the input image resolution, and the even frames F_even = {f_k ∈ F | k mod 2 = 0} at the full input image resolution. For convenience, from now on, F_odd are referred to as big frames and F_even as little frames. Note that the big branch can take either frame of a pair as input, with the other frame going to the little branch. In TSN, all input frames are ordered as a batch of size n, where the t-th element corresponds to the t-th frame. We denote the input and output feature maps of the t-th frame at the k-th layer of the model by x_t^k ∈ R^{C×W×H} and y_t^k ∈ R^{C×W×H}, respectively. Whenever possible, we omit k for clarity.

The bLNet can be directly plugged into TSN as the backbone network for learning a video-level representation. We refer to this architecture as TSN-bLNet to differentiate it from the vanilla TSN (Fig. 1b)). This network fully enjoys the efficiency of bLNet, cutting the computational costs down by 1.6∼2 times according to [8]. Mathematically, the output y_t can be written as

y_t = F(\mathrm{net}_B([x_t]_{1/2}) + \mathrm{net}_L(x_t), \theta_t). \qquad (1)

Here [·]_s is an operator scaling a tensor up or down by a factor of s in the spatial domain; net_B and net_L are the Big-Net and Little-Net of the bLNet mentioned above; and θ_t are the model parameters. Following [8], F indicates an additional residual block applied after merging the big and little branches to stabilize and enhance the combined feature representation.

The architecture described above only learns features from a single frame, so there are no interactions between frames. Alternatively, we can feed the odd and even frames separately into the big and little branches so that each branch obtains complementary information from different frames. This idea is illustrated in Fig. 1c) and the output y_t in this case can be expressed as

y_t = \begin{cases} F(\mathrm{net}_B([x_t]_{1/2}) + \mathrm{net}_L(x_{t+1}), \theta_t), & \text{if } t \bmod 2 \neq 0 \\ y_{t-1}, & \text{otherwise} \end{cases} \qquad (2)

While the modification proposed above is simple, it leads to a new video architecture, which we call Big-Little-Video-Net, or bLVNet for short. The bLVNet differs from TSN-bLNet in two distinct ways. Firstly, without increasing any computation, it can take twice as many input frames as TSN-bLNet. We shall demonstrate the benefit of leveraging more frames for temporal modeling in Section 4. Furthermore, the bLVNet has 1.5∼2.0× fewer FLOPs than TSN while seeing twice as many frames as TSN, thanks to the efficiency of the dual-path network. Secondly, the merging of the two branches in bLVNet now happens on two different frames carrying temporal information. We call this type of temporal interaction local fusion, since it only captures temporal relations between two adjacent frames. In spite of that, local fusion gives rise to a significant performance boost for recognition, as shown later in Section 4.3.
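As an illustration of the local fusion in Eq. (2), here is a minimal PyTorch sketch. The class and argument names (net_big, net_little, res_block) are hypothetical placeholders for the bLNet components, and we assume net_big returns features at the same spatial size as net_little so that the two branch outputs can be summed; this is a reconstruction for illustration, not the authors' released implementation.

```python
import torch.nn as nn
import torch.nn.functional as F


class LocalFusion(nn.Module):
    """Sketch of the local temporal fusion in Eq. (2): the big branch sees an
    odd frame at half resolution, the little branch sees the neighboring even
    frame at full resolution, and the even frame reuses the odd frame's output."""

    def __init__(self, net_big, net_little, res_block):
        super().__init__()
        self.net_big = net_big        # deep branch, fed half-resolution frames
        self.net_little = net_little  # compact branch, fed full-resolution frames
        self.res_block = res_block    # residual block F applied after merging

    def forward(self, frames):
        # frames: list [x_1, ..., x_n] of per-frame tensors of shape (N, C, H, W);
        # an even number of frames is assumed for simplicity
        outputs = []
        for t in range(0, len(frames) - 1, 2):
            x_odd, x_even = frames[t], frames[t + 1]
            big = self.net_big(F.interpolate(x_odd, scale_factor=0.5))
            little = self.net_little(x_even)
            y = self.res_block(big + little)
            outputs.extend([y, y])  # y_{t+1} = y_t, the "otherwise" case of Eq. (2)
        return outputs
```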
1. What are the strengths and weaknesses of the paper regarding its simplicity, technical approach, and innovation? 2. How does the reviewer assess the contribution of the paper, particularly in comparison to other works such as Timeception? 3. What are the limitations of the proposed schemes in terms of memory and efficiency gains? 4. How significant is the extension of the TAM to 3D CNNs? 5. Are there any concerns regarding the adoption of the proposed framework by the community?
Review
Review
Strengths:
* Simplicity. Both the bLVNet and the TAM are simple models, easy to implement and probably fairly straightforward to train. This is a good property.
* Paper is well-written and the technical approach is easy to comprehend.
* Although the TAM is demonstrated using a frame-based 2D CNN, it is straightforward to extend to 3D CNNs, with potential further gains in accuracy.
* Comprehensive evaluation on 3 large-scale video datasets shows the memory/efficiency/accuracy gains enabled by the two proposed schemes (bLVNet and TAM).
Weaknesses:
* Technical innovation is fairly limited. The bLVNet is a straightforward extension of bLNet (an image model) to video. The TAM involves the use of 1D temporal convolution and depthwise convolution; both mechanisms have been widely leveraged before. On the other hand, the paper does not make bold novelty claims and recognizes the contribution as being more empirical than technical. The TAM shares many similarities with Timeception [Hussein et al., CVPR 19], which was not yet published at the time of this submission and thus does not diminish the value of this work. Nevertheless, given the many analogies between these concurrent approaches, it'd be advisable to discuss their relations in future versions (or the camera-ready version) of the paper.
* While the memory/efficiency gains are convincingly demonstrated, they are not substantial enough to be a game-changer in the practice of training video understanding models. Due to the overhead of setting up the proposed framework (even though quite simple), adoption by the community may be fairly limited.
Final rating:
- After having read the other reviews and the author responses, I decide to maintain my initial rating (6). The contribution of this work is mostly empirical. The stronger results compared to more complex models and the promise to release the code imply that this work deserves to be known, even if fairly incremental.
NIPS
Title More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation Abstract Current state-of-the-art models for video action recognition are mostly based on expensive 3D ConvNets. This results in a need for large GPU clusters to train and evaluate such architectures. To address this problem, we present an lightweight and memory-friendly architecture for action recognition that performs on par with or better than current architectures by using only a fraction of resources. The proposed architecture is based on a combination of a deep subnet operating on low-resolution frames with a compact subnet operating on high-resolution frames, allowing for high efficiency and accuracy at the same time. We demonstrate that our approach achieves a reduction by 3 ∼ 4 times in FLOPs and ∼ 2 times in memory usage compared to the baseline. This enables training deeper models with more input frames under the same computational budget. To further obviate the need for large-scale 3D convolutions, a temporal aggregation module is proposed to model temporal dependencies in a video at very small additional computational costs. Our models achieve strong performance on several action recognition benchmarks including Kinetics, Something-Something and Moments-in-time. The code and models are available at https://github.com/IBM/bLVNet-TAM. 1 Introduction Current state-of-the-art approaches for video action recognition are based on convolutional neural networks (CNNs). These include the best performing 3D models, such as I3D [1] and ResNet3D [2], and some effective 2D models, such as Temporal Relation Networks (TRN) [3] and Temporal Shift Modules (TSM) [4]. A CNN-based model usually considers a sequence of frames as input, obtained through either uniform or dense sampling from a video [1, 5]. In general, Longer input sequences yield better recognition results. However, one problem arising for a model requesting more input frames is that the GPU resources required for training and inference also significantly increase in both memory and time. For example, the top-performing I3D models [1] on the Kinetics [6] dataset were trained with 64 frames on a cluster of 32 GPUs, and the non-local network [7] even uses 128 frames as input. Another problem for action recognition is the lack of effective methods for temporal modeling when moving away from 3D spatiotemporal convolutions. While 2D convolutional models are more resource-friendly than their 3D counterparts, they lack expressiveness over time and thus cannot take much benefit from richer input data. In this paper, we present an efficient and memory-friendly spatio-temporal representation for action recognition, which enables training of deeper models while allowing for more input frames. The first part of our approach is inspired by the Big-Little-Net architecture (bLNet [8]). We propose a new 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. video architecture that has two network branches with different complexities: one branch processing low-resolution frames in a very deep subnet, and another branch processing high-resolution frames in a compact subnet. The two branches complement each other through merging at the end of each network layer. With such a design, our approach can process twice as many frames as the baseline model without compromising efficiency. We refer to this architecture as “Big-Little-Video-Net” (bLVNet). 
In light of the limited ability of capturing temporal dependencies in bLVNet, we further develop an effective method to exploit temporal relations across frames by a so called “Depthwise Temporal Aggregation Module” (TAM). The method enables the exchange of temporal information between frames by weighted channel-wise aggregation. This aggregation is made learnable with 1×1 depthwise convolution, and implemented as an independent network module. The temporal aggregation module can be easily integrated into the proposed network architecture to progressively learn spatio-temporal patterns in a hierarchical way. Moreover, the module is extremely compact and adds only negligible computational costs and parameters to bLVNet. Our main contributions lie in the following two interconnected aspects: (1) We propose a lightweight video architecture based on dual-path network to learn video features, and (2) we develop a temporal aggregation module to enable effective temporal modeling without the need for computationally expensive 3D convolutions. We evaluate our approach on the Kinetics-400 [6], Something-Something [9] and Moments-intime [10] datasets. The evaluation shows that bLVNet-TAM successfully allows us to train actionclassification models with deeper backbones (i.e., ResNet-101) as well as more (up to 64) input frames, using a single compute node with 8 Tesla V100 GPUs. Our comprehensive experiments demonstrate that our approach achieves highly competitive results on all datasets while maintaining efficiency. Especially, it establishes a new state-of-the-art result on Something-Something and Moments-in-time by outperforming previous approaches in the literature by a large margin. 2 Related Work Activity classification has always been a challenging research topic, with first attempts reaching back by almost two decades [11]; deep-learning architectures nowadays achieve tremendous recognition rates on various challenging tasks, such as Kinetics [1], ActivityNet [12], or Thumos [13]. Most successful architectures in the field are usually based on the so-called two-stream model [14], processing a single RGB frame and optical-flow input in two separate CNNs with a late fusion in the upper layers. Over the last years, many approaches extend this idea by processing a stack of input frames in both streams, thus extending the temporal window of the architecture form 1 to up to 128 input frames per stream. To further capture the temporal correlation in the input over time, those architectures usually make use of 3D convolutions as, e.g., in I3D [1], S3D [15], and ResNet3D [2], usually leading to a large-scale parameter space to train. Another way to capture temporal relations has been proposed by [5], [3], and [4]. Those architectures mainly build on the idea of processing videos in the form of multiple segments, and then fusing them at the higher layers of the networks. The first approach with this pattern was the so-called Temporal Segment Networks (TSN) proposed by Wang et al. [5]. The idea of TSN has been extended by Temporal Relation Networks (TRN) [3], which apply the idea of relational networks to the modeling of temporal relations between observations in videos. Another approach for capturing temporal contexts has been proposed by Temporal Shift Modules (TSM) [4]. This approach shifts part of the channels along the temporal dimension, thereby allowing for information to be exchanged among neighboring frames. More complex approaches have been tried as well, e.g. 
in the context of nonlocal neural networks [7]. Our temporal aggregation module is based on depthwise 1×1 convolutions to capture temporal dependencies across frames effectively. Separate convolutions are considered in approaches such as [15, 16] to reduce costly computation in 3D convolutional models. More recently, SlowFast Network [17] uses a dual-pathway network to process a video at both slow and fast frame rates. The fast pathway is made lightweight, similar to Little Net in our proposed architecture. However, our approach reduces computation based on both a lightweight architecture and low image resolution. Furthermore, the recent work Timeception [18] applies the concept of “Inception" to temporal domain for capturing long-range temporal dependencies in a video. The Timeception layers involve group convolutions at different time scales while our TAM layers only use depthwise convolution. As a result, the Timeception has significantly more parameters than the TAM (10% vs. 0.1% of the total model parameters). 3 Our Approach We aim at developing efficient and effective video representations for video understanding. To address the computational challenge imposed by the desired long input to a model, we propose a new video architecture based on the Big-Little network (bLNet) [8] for learning video features.We first give a brief recap of bLNet in Section 3.1. We then show, in Section 3.2, how to extend bLNet to an efficient video architecture that allows for seeing more frames with less computation and memory. An example of the proposed network architecture can be found in the supplementary material (Section A). To make temporal modeling more effective in our approach, we further develop a temporal aggregation module (TAM) to capture short-term as well as long-term temporal dependencies across frames. Our method is implemented as a separate network module and integrated with the proposed architecture seamlessly to learn a hierarchical temporal representation for action recognition. We detail this method in Section 3.3. 3.1 Recap of Big-Little Network The Big-Little Net, abbreviated as bLNet in [8], is a CNN architecture for learning strong feature representations by combining multi-scale image information. The bLNet processes an image at different resolutions using a dual-path network, but with low computational loads based on a clever design. The key idea is to have a high-complexity subnet (Big-Net) along with a low-cost one (Little-Net) operate on the low-scale and high-scale parts of an image in parallel. By such a design, the two subnets learn features complementary to each other while using less computation. The two branches are merged at the end of each network layer to fuse the low-scale and high-scale information so as to form a stronger image representation. The bLNet approach demonstrates improvement of model efficiency and performance on both object and speech recognition, using popular architectures such as ResNet, ResNeXt and SEResNeXt. More details on bLNet can be found in the original paper. In this work, we mainly adopt bLResNet-50 and bLResNet-101 as backbone for our proposed architecture. 3.2 Big-Little Video Network as Video Representation We describe our architecture in the context of 2D convolutions. However our approach is not specific to 2D convolutions and potentially extendable to any architecture based on 3D convolutions. The approach of Temporal Segment Networks (TSN) [5] provides a generic framework for learning video representations. 
With a shared 2D ConvNet as backbone, TSN performs frame-level predictions and then aggregates the results into a final video-level prediction (Fig. 1a)). The framework of TSN is efficient and has been successfully adopted by some recent approaches for action recognition such as TRN [3] and TSM [4]. Given its efficiency, we also choose TSN as the underlying video framework for our work. Let F = {ft|t = 1 · · ·n} be a set of sampled input frames from a video. We divide F into two groups, namely odd frames Fodd = {fk ∈ F| mod (k, 2) 6= 0} at half of the input image resolution, and even frames Feven = {fk ∈ F| mod (k, 2) = 0} at the input image resolution. For convenience, from now on, Fodd is referred to as big frames and Feven as little frames. Note that big branch can take either of a pair of frames as input and the other frame goes to the little branch. In TSN, all input frames are ordered as a batch of size n, where the tth element corresponds to the tth frame. We denote the input and output feature maps of the tth frame at the kth layer of the model by xkt ∈ RC×W×H and ykt ∈ RC×W×H , respectively. Whenever possible, we omit k for clarity. The bLNet can be directly plugged into TSN as the backbone network for learning video-level representation. We refer to this architecture as TSN-bLNet to differentiate it from the vanilla TSN (Fig. 1b)). This network fully enjoys the efficiency of bLNet, cutting the computational costs down by 1.6 ∼ 2 times according to [8]. Mathematically, the output yt can be written as yt = F(netB([xt]1/2) + netL(xt), θt). (1) Here [·]s is an operator scaling a tensor up or down by a factor of s in the spatial domain; netB and netL are the Big-Net and Little-Net in the bLNet aforementioned; and θt are the model parameters. Following [8], F indicates an additional residual block applied after merging the big and little branches to stabilize and enhance the combined feature representation. The architecture described above only learns features from a single frame, so there are no interactions between frames. Alternatively, we can feed the odd and even frames separately into the big and little branches so that each branch obtains complementary information from different frames. This idea is illustrated in Fig. 1c) and the output yt in this case can be expressed by yt = { F(netB(bxtc1/2) + netL(xt+1), θt), if mod (t, 2) 6= 0 yt−1, otherwise (2) While the modification proposed above is simple, it leads to a new video architecture, which is called Big-Little-Video-Net, or bLVNet for short. The bLVNet makes two distinct differences from TSN-bLNet. Firstly, without increasing any computation, it can take input frames two times as many as TSN-bLNet. We shall demonstrate the benefit of leveraging more frames for temporal modeling in Section 4. Furthermore, the bLVNet has 1.5 ∼ 2.0× fewer FLOPs than TSN while seeing frames twice as many as TSN, thanks to the efficiency of the dual-path network. Secondly, the merging of the two branches in bLVNet now happens on two different frames carrying temporal information. We call this type of temporal interaction by local fusion, since it only captures temporal relations between two adjacent frames. In spite of that, local fusion gives rise to a significant performance boost for recognition, as shown later in Section 4.3. 3.3 Temporal Aggregation Module Temporal modeling is a challenging problem for video understanding. 
Theoretically, adding a recurrent layer such as LSTM [19] on top of a 2D ConvNet seems like a promising means to capture temporal ordering and long-term dependencies in actions. Nonetheless, such approaches are not practically competent with 3D ConvNets [1], which use spatio-temporal filters to learn hierarchical feature representations. One issue with 3D models is that they are heavy in parameters and costly in computation, making them hard to train. Even though some approaches like S3D [15] and R(2+1)D [16] alleviates this issue by separating a 3D convolution filter into a 2D spatial component followed by a 1D temporal component, they are in general still more expensive than 2D ConvNet models. With the efficient bLVNet architecture described above, our goal is to further improve its spatiotemporal representation by effective temporal modeling. The local fusion in bLVNet only exploits temporal relations between neighbored frames. To address this limitation, we develop a method to capture short-term as well as long-term dependencies across frames. Our basic idea is to fuse temporal information at each time instance by weighted channel-wise aggregation. As detailed below, this idea can be efficiently implemented as a network module to progressively learn spatio-temporal patterns in a hierarchical way. Let yt be the output (i.e. neural activation) of the tth frame ft at a layer of the network (see Eq. 2). To model the temporal dependencies between ft and its neighbors, we aggregate the activations of all the frames within a temporal range r around ft. A weight is learned for each channel of the activations to indicate its relevance. Specifically, the aggregation results can be written as ŷt = ReLU( j=br/2c∑ j=−br/2c wj ⊗ yt+j), (3) where ⊗ indicates the channel-wise multiplication and wj ∈ RC is the weights. The ⊗ is defined as: for a vector v = [v1 v2 · · · vC ] and a tensor M = [m1 m2 · · · mC ] with C feature channels, v ⊗M = [v1 ∗m1 v2 ∗m2 · · · vC ∗mC ]. We implement the temporal aggregation as a network module (Fig. 2). It involves three steps as follows, 1. apply 1×1 depthwise convolution r times to n input tensors to form an output matrix of size r × n; 2. shift the ith row left (or right) by |i− br/2c| positions if i > br/2c (or i ≤ br/2c) and if needed, pad leading or trailing zero tensors in the front or at the end; 3. perform temporal aggregation along the column to generate the output. The aggregation module(TAM), highlighted as a red box in Fig. 1d), is inserted as a separate layer after the local temporal fusion in the bLVNet, resulting in the final bLVNet-TAM architecture. Obviously none of the steps in the implementation above involve costly computation, so the module is fairly fast. A node in the network initially only sees r − 1 neighbors. As the network goes deeper, the amount of context that the node involves in the input grows quickly, similar to how the receptive field of a neuron is enlarged in a CNN. In such a manner, long-range temporal dependencies are thus potentially captured. For this reason, the temporal aggregation is also called global temporal fusion here, as opposed to the local temporal fusion discussed above. The work of TSM [4] has also applied temporal shifting to swap feature channels between neighboring frames. In such a case, TSM can be treated as a special case of our method where the weights are empirically set rather than learned from data. 
In Section 4.3, we demonstrate that the proposed TAM is more effective than TSM for temporal modeling under different video architectures. TAM is also related to S3D [15] and R(2+1)D [16] in that TAM is independent of spatial convolutions. However, TAM is based on depthwise convolution, thus has fewer parameters and less computation than S3D and R(2+1)D. The TAM can also be integrated into 3D convolutions such as C3D [20] and I3D [1] to further enhance the temporal modeling capability that already exists in these models. Due to the difference in how temporal data is presented between 2D-based and 3D-based models, the temporal shifting now needs to operate on feature channels within a tensor instead of on tensors themselves. 4 Experiments 4.1 Experimental Setup Datasets. We evaluate our approach on three large-scale datasets for video recognition, including the widely used Something-Something (Version 1 and Version 2) [9], Kinetics-400 [6] and the recent Moments-in-time dataset [10]. They are herein referred to as SS-V1, SS-V2, Kinetics-400 and Moments, respectively. Something-Something is a dataset containing videos of 174 types of predefined human-object interactions with everyday objects. The version 1 and 2 include 108k and 220k videos, respectively. This dataset focuses on human-object interactions in a rather simple setup with no scene contexts to be exploited for recognition. Instead temporal relationships are as important as appearance for reasoning about the interactions. Because of this, the dataset serves as a good benchmark for evaluating the efficacy of temporal modeling, such as proposed in our approach. Kinetics-400 [6] has emerged as a standard benchmark for action recognition after UCF101 [21] and HMDB [22], but on a significantly larger scale. The dataset consists of 240k training videos and 20k validation videos, with each video trimmed to around 10 seconds. It has a total of 400 human action categories. Moments-in-time [10] is a recent collection of one million labeled videos, involving actions from people, animals, objects or natural phenomena. It has 339 classes and each video clip is trimmed to 3 seconds long. Data Augmentation. During training, we follow the data augmentation used in TSN [5] to augment the video with different sizes spatially and flip the video horizontally with 50% probability. Furthermore, since our models are finetuned on pretrained ImageNet, we normalize the data with the mean and standard deviation of the ImageNet images. The model input is formed by uniform sampling, which first divides a video into n uniform segments and then selects one random frame from each segment as the input. During inference, we resize the smaller side of an image to 256 and then crop a centered 224×224 region. The center frame of each segment in uniform sampling is picked as the input. On SomethingSomething and Moments, our results are based on the single-crop and single-clip setting. On Kinetics-400, we use the common practice of multi-crop and multi-clip for evaluation. Training Details. Since all the three datasets are large-scale, we train the models in a progressive way. For each type of backbone (for example, bLResNet-50), we first finetune a base model on ImageNet with a minimum input length (i.e. 8×2 in our case) using 50 epochs. We adopt the Nesterov momentum optimizer with an initial weight of 0.01, a weight decay of 0.0005 and a momentum of 0.9. 
Training Details. Since all three datasets are large-scale, we train the models in a progressive way. For each type of backbone (for example, bLResNet-50), we first finetune a base model from ImageNet-pretrained weights with the minimum input length (i.e. 8×2 in our case) for 50 epochs. We adopt the Nesterov momentum optimizer with an initial learning rate of 0.01, a weight decay of 0.0005 and a momentum of 0.9. We then finetune a new model with longer input (for example, 16×2) on top of the corresponding base model, but with 25 epochs only. In this case, the initial learning rate is set to 0.01 on Something-Something and 0.005 on Kinetics and Moments. The learning rate is decreased by a factor of 10 at the 10-th and 20-th epoch, respectively. This strategy allows us to significantly reduce the training time needed for all the models evaluated in our experiments. All our models were trained on a server with 8 GPU cards and a total of 128 GB of GPU memory. We set the total batch size to 64 whenever possible. For models that require more memory to train, we adjust the batch size accordingly to the maximum number allowed. 4.2 Main Results Something-Something. We first report our results on the validation sets of the Something-Something datasets in Table 1 and Table 2. With a moderately deep backbone, bLResNet-50, our approach outperforms all 3D models on SS-V1 while using far fewer input frames (8×2) and being substantially more efficient. TSM [4] was previously the best approach on Something-Something. Under the same backbone (i.e. ResNet-50), our approach is better than TSM on both SS-V1 and SS-V2 while being more efficient (i.e. our 8×2 model has 1.4 times fewer FLOPs than an 8-frame TSM model). When empowered with a stronger backbone, bLResNet-101, our approach achieves even better results at 32×2 frames (53.1% top-1 accuracy on SS-V1, and 65.2% on SS-V2), establishing a new state-of-the-art on Something-Something. Notably, these results, while based on RGB information only, are superior to those obtained from the best two-stream models at no additional computational cost. This strongly demonstrates the effectiveness of our approach for temporal modeling. We further evaluated our models on the test set of Something-Something. Our results are consistently better than the best results reported by the other approaches in comparison, including two-stream models. Kinetics-400. Kinetics-400 is one of the most popular benchmarks for action recognition. Currently the best-performing models on this dataset are all based on 3D convolutions. However, it has been shown in the literature that temporal ordering in this dataset does not seem to be as crucial as RGB information for recognition. For example, as experimented in S3D [15], a model trained on normal time-order data performs well on time-reversed data on Kinetics. In accordance with this, our approach (3 crops and 3 clips) mainly performs on par with or better than the current large-scale architectures, but without outperforming them as clearly as on the Something-Something datasets, where the temporal relations are more essential for an overall understanding of the video content. Moments. We finally evaluate the proposed architecture on the Moments dataset [10], a large-scale action dataset with about three times more training samples than Kinetics-400. Since Moments is relatively new and results reported on it are limited, we only compare our results with those reported in the Moments paper [10]. As can be seen from Table 4, our approach outperforms all the single-stream models as well as the ensemble one. We hope our models provide stronger baseline results for future reference on this challenging dataset. It is also noted that our model trained with 16×2 frames produces only slightly better top-1 accuracy than the model trained with 8×2 frames.
We speculate that this has to do with the fact that the Moments clips are only as short as 3 seconds, so choosing a finer temporal granularity has only a limited impact on this dataset. 4.3 Ablation Studies In this section, we conduct ablation studies to provide more insights about our main ideas. Is temporal aggregation effective? We validate the efficacy of the proposed temporal aggregation module (TAM), which is considered a global fusion method (Section 3.3). Local fusion here refers to the branch merging in the dual-path network (Section 3.2). We compare TAM with the temporal shift module used in TSM [4] in Table 5 under two different video architectures: TSN and the bLVNet proposed in this work. TAM demonstrates clear advantages over TSM, outperforming it by over 2% under both architectures. Interestingly, the bLVNet baseline proposed here, with local temporal fusion, almost doubles the performance of a TSN baseline, improving the accuracy from 17.4% to 33.6%. On top of that, TAM boosts the performance by another 13% in both cases, suggesting that TAM is complementary to local fusion. This further confirms the significance of temporal reasoning on the Something-Something dataset. Does seeing more frames help? One of the main contributions of this work is an efficient video architecture that makes it possible to train deeper models with more input frames using moderate GPU resources. Fig. 3a) shows consistent improvement of our approach on SS-V1 as the number of input frames increases. A similar trend in our results can be observed on Kinetics-400 in Table 3. On the other hand, the almost flat curve from TSN suggests that a model without effective temporal modeling cannot benefit much from more input frames. Memory Usage. We compare the memory usage between our approach based on bLResNet-50 and TSN based on ResNet-50. As shown in Fig. 3b), our approach is more memory-friendly than TSN, achieving a saving of ∼2× at the same number of input frames. The larger batch size allowed for training under the same computational budget is critical for our approach to obtain better models and reduce training time. 5 Conclusion We presented an efficient and memory-friendly video architecture for learning video representations. The proposed architecture allows for twice as many input frames as the baseline while using less computation and memory. This enables training of deeper models with richer input under the same GPU resources. We further developed a temporal aggregation method to capture temporal dependencies effectively across frames. Our models achieve strong performance on several action recognition benchmarks, and establish a new state-of-the-art on the Something-Something dataset.
1. What is the main contribution of the paper, and how does it relate to previous works in the field? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its originality and quality? 3. Are there any concerns or questions regarding the paper's clarity, significance, and relevance to the field? 4. How does the reviewer assess the impact of the work, and what potential limitations or trade-offs should be considered?
Review
Review 1. Originality: WEAK a) I think this is the weakest part of this paper. Almost all of the contributions of this work have been individually explored. For example: b) The idea of parallel pathways for low-res/high-res processing was explored in SlowFast, 2-stream networks etc. Granted, the authors use a somewhat different design from the ICLR'19 paper, where the spatial resolution is changed across the pathways, but the core idea is fairly well explored. c) The aggregation layer (TAM) is essentially TSM [4] with learned weights, and gets a few percentage points extra in performance. d) Missing related work: Aggregating context temporally for video representation learning has been explored in many previous works, which would be good to report in the related work. I point to some here. - Aggregating using VLAD/Fisher vectors etc: Action recognition with stacked fisher vectors (ECCV'14), Learnable pooling with Context Gating for video classification (CVPR'17), ActionVLAD (CVPR'17), SeqVLAD (TIP'18) etc - Aggregation using attention: Attention Clusters (CVPR'18), Video Action Transformer Network (CVPR'19), Long-Term Feature Banks for Detailed Video Understanding (CVPR'19) etc - Other temporal modeling architectures: Timeception for Complex Action Recognition (CVPR'19), Videos as space-time region graphs (ECCV'18) etc 2. Quality: GOOD I think the authors do a good job of conducting thorough experiments, and comparing the performance of recent works along with computational/memory costs. The ablations are useful as well. 3. Clarity: WEAK Quite a few aspects of the model were not immediately clear to me. I would encourage the authors to clarify in the rebuttal: a) Splitting the video into odd/even frames, setting odd as big and even as little: This seems very ad hoc. Why enforce this rule? Why not just use pairs of frames and use one at lower resolution and the other at higher? Is there a reason odd frames in the video must be bigger? b) What is the train-time complexity of the model? Since the aggregation layer has to be trained, and needs at least "r" temporally-shifted clips at the same time, it would limit the training batch size (something like TSN would not have that issue). Is that a limiting factor at all? I would like to see more discussion on that aspect in the final version. c) L221: What is "single-crop single-frame" testing? I assume it is done in TSN style -- so for the SS-V1 model, which uses 32x2 frames, you have 32 segments at test time and use a pair of frames from each segment (odd and even). d) If my understanding in (c) is correct, then what is the "multi-crop" setup used in Kinetics? How many frames are being used in Table 3? e) I am assuming the "Frames" column in the tables reports the *TOTAL* frames used in inference, including multiple crops etc. Is that correct? 4. Significance: MODERATE While the work doesn't significantly improve on the state of the art, it does seem to propose a cheaper alternative. That can be very useful for research groups with limited resources working on related areas, if the code is made available. However, it's not clear from the paper whether the code for reproducing the reported results will be released. Final rating ======== I have looked through the other reviews and author feedback. I appreciate the authors' efforts in responding to my concerns, and clarifying parts of the paper. As all reviewers note, the technical novelty of the work is limited, though the good performance on standard benchmarks with lower computation might be valuable.
Given the newer results in the rebuttal and the promise to release code, I am upgrading my rating to 6. However, I still think the writing and presentation need quite a bit more work to explain their approach and setup clearly.
NIPS
Title Video Frame Interpolation without Temporal Priors Abstract Video frame interpolation, which aims to synthesize non-existent intermediate frames in a video sequence, is an important research topic in computer vision. Existing video frame interpolation methods have achieved remarkable results under specific assumptions, such as instant or known exposure time. However, in complicated real-world situations, the temporal priors of videos, i.e., frames per second (FPS) and frame exposure time, may vary across different camera sensors. When test videos are taken under different exposure settings from the training ones, the interpolated frames will suffer significant misalignment problems. In this work, we solve the video frame interpolation problem in a general situation, where input frames can be acquired under uncertain exposure (and interval) time. Unlike previous methods that can only be applied to a specific temporal prior, we derive a general curvilinear motion trajectory formula from four consecutive sharp frames or two consecutive blurry frames without temporal priors. Moreover, utilizing constraints within adjacent motion trajectories, we devise a novel optical flow refinement strategy for better interpolation results. Finally, experiments demonstrate that one well-trained model is enough for synthesizing high-quality slow-motion videos under complicated real-world situations. Codes are available at https://github.com/yjzhang96/UTI-VFI. 1 Introduction Video frame interpolation aims to synthesize non-existent intermediate frames and thereby provide a visually fluid video sequence. It has broad application prospects, such as slow-motion production [13], frame rate up-conversion [3] and novel-view rendering [6]. Many state-of-the-art video interpolation methods [1, 12, 17, 34] aim to estimate the object motion and occlusion with the assistance of optical flow. Through refining forward and backward motion flows among several frames, these methods can directly warp pixels to synthesize the desired intermediate frames.
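As a concrete illustration of the flow-based warping that such methods rely on, below is a minimal backward-warping sketch. It assumes PyTorch tensors and a flow field expressed from the target frame to the source frame, and it is not code taken from any of the cited methods.

import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Sample `frame` at locations displaced by `flow`.
    frame: (N, C, H, W) source image.
    flow:  (N, 2, H, W) per-pixel displacement in pixels from each target pixel
           to its source location (channel 0 = horizontal, channel 1 = vertical)."""
    n, _, h, w = frame.shape
    # base pixel grid
    xs = torch.arange(w, device=frame.device).view(1, 1, 1, w).expand(n, 1, h, w)
    ys = torch.arange(h, device=frame.device).view(1, 1, h, 1).expand(n, 1, h, w)
    coords = torch.cat((xs, ys), dim=1).float() + flow          # (N, 2, H, W)
    # normalize coordinates to [-1, 1] as required by grid_sample
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                         # (N, H, W, 2)
    return F.grid_sample(frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

Flow-based interpolation methods typically warp both neighboring frames toward the target time with such an operator and then blend the two warped results.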
To achieve this goal, some popular datasets, consisting of either image triplets or 240fps high-frame-rate videos, are collected as the ground truth of real-world motions. Meanwhile, to evaluate the performance of the proposed methods, the well-trained model is tested using frames collected in a similar way. Although significant improvements have been demonstrated by the experiments of recent works, people may ask whether the same (or similar) performance can be achieved in complicated real-world situations. To comprehensively discuss this question, we first revisit the principle of video frame acquisition. As illustrated in Fig. 1 (a), the frame acquisition process usually includes two phases: an exposure phase and a readout phase. In the exposure phase, the shutter opens for a duration of t0 so that the photosensitive sensor is exposed. In the readout phase, the camera reads the charge on the pixel array and converts the signal to obtain the pixel values. Depending on the camera technology, the readout phase can be either overlapped or non-overlapped with the exposure phase. Here, Fig. 1 (a) is an example of non-overlapped exposure. For ease of discussion, we define the time interval between two exposures as t1. Thus, a complete shutter period is defined as the time period t0 + t1. Correspondingly, frames per second (FPS) is defined as the reciprocal of the shutter period. Note that t1 cannot be eliminated because of the intrinsic requirements of the sensor. Meanwhile, t0 cannot be too short compared to the shutter period, otherwise the video will look visually discontinuous. The exposure time t0 and the interval time t1 (or the FPS 1/(t0 + t1)) are two important parameters of a camera sensor, and they can vary widely across different cameras [4]. Therefore, when we perform frame interpolation on real-world videos, the following challenges should be further considered: 1) Due to the existence of exposure time, the movement of the camera and objects may produce motion/dynamic blur within a video frame. Directly performing the interpolation between blurry frames would lead to inferior visual results. More severe blur usually occurs in lower frame-rate videos, since the exposure time is relatively long. 2) Simply combining deblurring and video interpolation techniques may not handle blurry video frames well. For blurry video frames, we should not only focus on inter-frame interpolation, but also perform intra-frame interpolation. 3) Note that t0 and t1 may vary due to the limitations of equipment or different exposure settings, so the number of interpolated frames and the corresponding motion trajectories will vary accordingly. For example, in the instance of Fig. 1 (a), if we want to up-convert the FPS by 10 times, we should interpolate 7 frames underlying each blurry frame, and 3 frames between two consecutive frames. Similarly, the estimation of the motion trajectory must account for the uneven time intervals. According to our observations, most existing works cannot overcome these three challenges simultaneously. Although the most recent works [13, 27] manage to handle motion blur in video interpolation, they are trained on a specific exposure setting and can be hard to generalize to different situations. To address these issues, in this work, we consider the video frame interpolation problem in a more general situation and aim to deliver more accurate interpolation results.
Specifically, given a video sequence as input, we first train a second-order residual key-states restoration network to synthesize the start and end states of each frame, e.g. L0 and L1 in Fig. 1 (b). If there is zero movement (misalignment) between the two states, the video frame is regarded as an instant frame (i.e. without blur). Otherwise, the exposure time cannot be ignored, and both inter- and intra-frame interpolation are performed. Moreover, following the same assumption as [34], i.e. that the acceleration of motion remains consistent across consecutive frames, we apply the quadratic model [18, 34] to the general video acquisition situation. We derive the general curvilinear motion representation without temporal priors from four consecutive key-states, such as L0, L1, L2 and L3 in Fig. 1 (b). Meanwhile, the relationship between t0 and t1 can be determined by the displacements between key-states, i.e. S01, S12 and S23. In addition, to reduce the adverse effects caused by inferior optical flow estimation, we further refine the optical flows with the derived trajectory priors. Finally, the refined optical flows are utilized to perform high-quality intermediate frame synthesis. Overall, in this paper, we make the following contributions: 1) We propose a restoration network to synthesize the start and end states of the input video frames. This network is able to handle different exposure settings and remove blur in the original video clip; 2) We derive a curvilinear motion representation that is sensitive to different exposure settings, thereby providing more accurate frame alignment for uncertain time interval interpolation; 3) We further refine the optical flow with the trajectory priors to improve the interpolation results. We construct different datasets to simulate different exposure settings in real scenarios. Comprehensive experiments on these datasets and on real-world videos demonstrate the effectiveness of our proposed framework. 2 Related Works Video frame interpolation. Most popular video interpolation methods utilize optical flow [12, 34, 17, 1, 2, 35] to predict the motion for the interpolated frame. Some methods [23, 22, 9] estimate space-varying and separable convolution filters for each pixel, and synthesize the interpolated pixel from a convolution between two adjacent patches. Xu et al. [34] propose a quadratic interpolation that allows the interpolated motion to be curvilinear instead of uniform and linear. However, all these methods encounter difficulties when processing blurry videos, since the optical flow/motion estimation will be inaccurate. Video/Image deblurring. Conventional video deblurring methods [5, 10, 33] usually apply a deconvolution algorithm with the assistance of image priors or regularizations. To make full use of adjacent frames, Hyun et al. [11] utilize inter-frame optical flow to estimate blur kernels. Ren et al. [25] also apply optical flow to facilitate the segmentation result. More recently, deep convolutional neural networks (CNNs) have been applied to bypass the restriction of blur type or image priors [19, 30, 7, 20, 16, 36], and enable an end-to-end training scheme by introducing synthetic real-world scene datasets [19, 30]. To exploit the temporal relationship, Nah et al. [20] propose a recurrent neural network (RNN) to iteratively update the hidden state for output frames.
Wang et al. [32] devise a pyramid, cascading and deformable alignment module to conduct better frame alignment at the feature level, and their method won first place in the NTIRE19 video deblurring challenge [21]. There are also some works [37, 14, 24] that learn to extract a video clip from a blurry image, which can be considered a combination of image deblurring and intra-frame interpolation. Joint video deblurring and interpolation. Recent methods [13, 27] have been proposed to address the blurry video interpolation problem. Among them, Jin et al. [13] first extract several keyframes, and then interpolate the middle frame from two adjacent frames. Meanwhile, Shen et al. [27] propose a joint interpolation method, where they simultaneously output the deblurred frame and the interpolated frame in a pyramid framework. Both methods pre-define a specific setting for the blurry video exposure mechanism, and may fail when applied to videos acquired with other equipment or other camera settings. 3 The Proposed Video Interpolation Scheme To address the aforementioned challenges of video frame interpolation without temporal priors, in this section we introduce the proposed interpolation scheme in detail. Firstly, to overcome the problem caused by the uncertainty of the time interval, we derive a new quadratic formula for different exposure settings. Then, utilizing the motion flow priors contained in the formula, we further refine the estimated optical flow for more accurate time interval and trajectory estimation. Finally, we introduce the second-order residual learning strategy for restoring key-states from input frame sequences. 3.1 From equal time interval to uncertain time interval To interpolate an intermediate frame Lt between two consecutive frames L1 and L2, optical-flow-based video interpolation methods [12, 17, 34] aim to estimate the optical flow from frame L1 to Lt, or from frame L2 to Lt. Recently, inspired by [18], Xu et al. [34] relaxed the constraint on motion from linear displacement to a quadratic curvilinear model, which corresponds to acceleration-aware motion: S1t = (S12 − S01)/2 × t² + (S12 + S01)/2 × t, (1) where Sab denotes the displacement of pixels from frame a to frame b, calculated from optical flow. In order to keep the pixel coordinates aligned in each optical flow map, the start point of these optical flows should be the same. In general, the displacements are calculated as Ŝ12 = f1→2, Ŝ01 = −f1→0, where fa→b denotes the optical flow from frame a to frame b. However, Eq.(1) is based on the equal time interval assumption. This assumption is not applicable to the general situation where the time intervals t0 and t1 may vary. Here, we define a shutter period as one unit of time, and the ratio between t1 and t0 as λ, i.e. t0 + t1 = 1, t1/t0 = λ. Different from [34], which employs three neighboring frames to calculate the quadratic trajectory, we take four consecutive key-states into consideration, as shown in Fig. 1 (b). Naturally, if the time intervals become unknown, four key-states (i.e. three flows) are required to determine a unique quadratic movement. If we assume the acceleration remains constant from frame L0 to L3, then we can express S01, S12 and S23 with the velocity and acceleration: 2S12 = (2v1 + a·t1) × t1, S01 + S23 = (2v1 + a·t1) × t0. (2) This equation set indicates that the vector S12 has the same direction as the vector sum S01 + S23. In addition, we can derive the time interval ratio λ as: λ = t1/t0 = 2S12/(S01 + S23). (3)
Thus, we are able to solve for t0 and t1 under the condition that t0 + t1 = 1. Further deriving the velocity and acceleration of the movement, we can get the expression of the trajectory between frame L1 and frame L2: S1t = (λ + 1)(S23 − S01)/2 × t² + (λ·S01 + (S01 + S23)/2) × t, t ∈ (t_L0, t_L1). (4) Note that when the time intervals are equal, i.e. λ = 1, our Eq.(4) reduces to Eq.(1), i.e. the QVI interpolation [34] is a special case of our framework. 3.2 Optical flow refinement As shown in Eq.(3) and Eq.(4), we can obtain the flow S1t using the pixel displacements among the four key-states. In order to keep the positions aligned, S23 should be represented as: Ŝ23 = f1→3 − f1→2, (5) which denotes the movement of the pixels of frame L1 from time t_L2 to t_L3. In practice, we estimate all the optical flows using the state-of-the-art PWC-Net [31]. However, directly using the flow calculated by Eq.(5) does not work well in our situation, since there may be serious errors from two sources: 1) the flow estimation error owing to the long time interval between frame 1 and frame 3, i.e. in f1→3; 2) the pixel misalignment when we conduct the vector subtraction. Therefore, we propose a flow refinement network FR to acquire a refined flow Ŝ′23. Since it is hard to obtain the ground truth of the target flow S23, we employ the trajectory prior implied in Eq.(2) and Eq.(3) as our penalty. Specifically, Ŝ01 + Ŝ23 and Ŝ12 should satisfy the following two implicit constraints: 1) the two vectors have the same direction; 2) since λ is a constant, the ratio of the two vectors should be uniform across the image. With these two priors as constraints, we are able to correct the value of one optical flow when the other two are fixed. Note that, although f1→0 and f1→2 are also estimates of pixel displacements, they deliver more accurate motion estimation than Ŝ23. Therefore, the refinement process can be formulated as: Ŝ′23 = FR(f1→0, f1→2, Ŝ23). (6) We use a U-Net [26] with skip connections to learn the mapping from the original flow to the refined output. The aforementioned priors are implicitly encoded into our loss function, where we utilize f1→0 and f1→2 to constrain the output flow. The loss function is calculated as: Lr = |Ŝ′23 − (2/λ · f1→2 + f1→0)|_1. (7) Finally, the refined Ŝ′23 can be substituted into Eq.(4) to compute a more accurate S1t.
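To make Eqs. (3), (4) and (7) more concrete, here is a minimal NumPy sketch that estimates λ from the displacement fields and evaluates the trajectory S1t. The robust median estimate of λ, the array layout and the function name are our own assumptions rather than details from the paper.

import numpy as np

def lambda_and_trajectory(f10, f12, s23, t):
    """f10, f12: optical flow from L1 to L0 and from L1 to L2, shape (H, W, 2).
    s23: the (refined) displacement S'_23 expressed in the coordinates of L1.
    t:   normalized time at which the trajectory S_1t is evaluated."""
    s01 = -f10   # S_01 = -f_{1->0}
    s12 = f12    # S_12 = f_{1->2}

    # Eq. (3): lambda = 2*S_12 / (S_01 + S_23). Since lambda should be constant
    # over the image, reduce the per-pixel magnitude ratio to a single value.
    denom = s01 + s23
    ratio = 2.0 * np.linalg.norm(s12, axis=-1) / (np.linalg.norm(denom, axis=-1) + 1e-8)
    lam = float(np.median(ratio))

    # Eq. (4): S_1t = (lam+1)(S_23 - S_01)/2 * t^2 + (lam*S_01 + (S_01 + S_23)/2) * t
    s1t = (lam + 1.0) * (s23 - s01) / 2.0 * t ** 2 \
          + (lam * s01 + (s01 + s23) / 2.0) * t
    return lam, s1t

The same bookkeeping also gives the refinement target in Eq. (7): under the constant-acceleration assumption, S23 should equal (2/λ)·f1→2 + f1→0, which is exactly the quantity the refined flow is penalized against.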
3.3 Second-order residual learning for key-states restoration Our principle for choosing the key-states is to ensure that they are unambiguous under different exposure settings. For each input frame, we attempt to restore its instant states at the start and the end of the exposure, since their physical meaning is consistent across different exposure settings. For sharp images (i.e. without motion blur), the start and end states should be the same. For blurry frames, the start and end states define the boundary of the motion blur, which makes them easier to restore. In addition, this choice shortens the temporal range for the subsequent interpolation, which leads to more accurate interpolation results. More discussion can be found in the experiment section. As shown in Fig. 2, we propose a second-order residual learning pipeline to extract the key-states from the input frames. Firstly, in order to avoid the temporal ambiguity of the start and end states, four consecutive frames are fed into the network F1. Utilizing the implicit motion direction contained in the input sequence, the network is trained to synthesize residuals that are summed with the input blurry frames, delivering the start and end states of B1 and B2. This process can be formulated as: (L̂_i^s, L̂_i^e) = F1(Bseq) + Bi, i = 1, 2, (8) where (L̂_i^s, L̂_i^e) denotes the estimated instant start and end states, respectively, and Bseq denotes the input sequence {B0, · · · , B3}. In the experiments, although the network F1 achieves reasonable performance, we find that it still suffers from some limitations. Firstly, the network takes four inputs to eliminate the temporal ambiguity, which somewhat decreases its deblurring capability. Secondly, the fitting ability of the residual is relatively poor when modeling more severe blur. To address these issues, we further improve the deblurring performance by introducing second-order residual learning. Specifically, we refer to Eq.(8) as the first-order residual, and derive the second-order residual learning as: (L̂_i^s, L̂_i^e) = F2(Bi, F1(Bseq) + Bi) + F1(Bseq) + Bi, i = 1, 2. (9) Here, the network F2 aims to synthesize a higher-order residual of the target mapping. Since the temporal order of L̂_i^s and L̂_i^e has already been determined by F1, F2 can focus on restoring a pair of key-states. In our experiments, this structure improves the PSNR by around 1.5 dB. 4 Experiments In this section, we introduce the datasets used for training and testing, and the training configuration of our models. Then we compare the proposed framework with state-of-the-art methods both quantitatively and qualitatively. Finally, we carry out an ablation study of our proposed components. 4.1 Datasets To simulate real-world situations and build datasets for more general video interpolation, we synthesize low-frame-rate videos from sharp high-frame-rate video sequences. Considering the video acquisition principle discussed before, we average several consecutive frames taken by a 240fps camera to simulate one frame taken by a low-frame-rate camera. As with all existing blurry image/video datasets, such synthesis is feasible if the relative motion between the camera and the object is not too large to produce ‘ghosting’ artifacts. Meanwhile, we discard several consecutive frames to simulate the time interval during which the shutter is closed. In this way, we create videos filmed under different exposure settings by altering the number of frames averaged and discarded. Specifically, we denote the number of exposure frames as m and the number of discarded frames as n, so that m + n frames constitute a shutter period. We set m + n = 10 to down-sample the original 240fps videos to 24fps, which is a common FPS setting in daily life. For fair comparisons, we set m to odd numbers (m = 5, 7, 9), since most other methods require the middle frame as ground truth. We apply this synthesis rule to both the GoPro dataset [19] and the Adobe240 dataset [30], and name the synthetic datasets “dataset-m-n”. Finally, we obtain “GoPro-5-5”, “GoPro-7-3”, “GoPro-9-1” and “Adobe240-5-5”, “Adobe240-7-3”, “Adobe240-9-1”, respectively. In addition, we also provide the datasets “GoPro-5-3” and “GoPro-7-1” to perform a fair comparison with [27], since it can only upsample the frame rate by multiples of 2. Note that other video interpolation datasets such as UCF101 [29] and Vimeo-90k [35] are not applicable for our comparison, since they only provide sharp frame triplets.
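The “dataset-m-n” synthesis just described can be summarized with a short sketch. The following Python illustration assumes frames stored as float arrays in [0, 1] and is only meant to convey the averaging-and-discarding procedure, not the exact preprocessing used for the released datasets.

import numpy as np

def synthesize_low_fps(frames_240fps, m=7, n=3):
    """Average m consecutive 240fps frames to mimic one blurry exposure and
    discard the next n frames to mimic the closed-shutter interval, so that
    m + n = 10 turns a 240fps clip into a 24fps blurry clip ("dataset-7-3").
    Returns the blurry frames together with the sharp frames they average."""
    period = m + n
    blurry, sharp_groups = [], []
    for start in range(0, len(frames_240fps) - m + 1, period):
        exposure = np.stack(frames_240fps[start:start + m], axis=0)
        blurry.append(exposure.mean(axis=0))   # temporal average ~ motion blur
        sharp_groups.append(exposure)          # ground-truth latent frames
    return blurry, sharp_groups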
4.2 Implementation details To train the key-states restoration network, we first train the network F1 for 200 epochs and then jointly train the networks F1 and F2 for another 200 epochs. For the optical flow refinement network, 100 epochs are enough for convergence. We use the Adam [15] solver for optimization, with β1 = 0.9, β2 = 0.999 and ε = 10⁻⁸. The learning rate is initially set to 10⁻⁴ and linearly decayed to 0. All weights are initialized using Xavier initialization [8], and biases are initialized to 0. In total, we have 34.4 million parameters to train. In the test phase, it takes 0.23s and 0.18s to run a single forward pass for the key-states restoration network and the interpolation network, respectively, on an NVIDIA GeForce GTX 1080 Ti graphics card. 4.3 Comparison with the state-of-the-art methods Comparison methods. We employ two types of interpolation solutions for comparison. The first is the cascade model, which concatenates a deblurring model with a video frame interpolation model. Specifically, we combine the state-of-the-art image/video deblurring methods Gao et al. [7] and EDVR [32] with the state-of-the-art multi-frame interpolation methods QVI [34] and Super SloMo [12]. We follow the implementation of their officially released code in all the experiments. The other is the joint model, i.e. TNTT and BIN proposed by Jin et al. [13] and Shen et al. [27], respectively. These methods jointly conduct deblurring and frame rate upsampling. Since these two methods are devised for specific exposure settings, we make some workarounds to carry out a fairer and more reasonable comparison. Since the original TNTT [13] model needs to iteratively interpolate the middle frame to fill the vacant indices, we devise a specific interpolation sequence for each exposure setting, denoted TNTT*. In addition, since BIN [27] is devised to up-convert the frame rate by a factor of 2, which serves a similar function to our key-states restoration module, we compare its initial results with our first-stage outputs. For multi-frame interpolation results, we iteratively interpolate the outputs of BIN and obtain the “8x frame rate” results. For this comparison, we prepare the datasets “5-3” and “7-1” as two different exposure settings of 30fps video. We re-train and test the BIN model using the mixed datasets “5-3” and “7-1”. In contrast, our model is only trained on the mixed datasets “5-5”, “7-3” and “9-1”. Here, we also test our well-trained model on the “5-3” and “7-1” settings; the experiments in Table 3 show the strong generalization ability of our proposed framework. Blurry video interpolation. As shown in Table 1, Table 2 and Table 3, both our deblurring and our overall interpolation perform favorably against previous methods. In addition, several important observations can be made from these results. Firstly, in the deblurring phase, previous video deblurring methods have great difficulty maintaining good performance on our datasets with different exposure settings. For example, the original TNTT, which is trained on “GoPro-9-1”, generalizes poorly to the other test sets. Moreover, even when trained on our mixed datasets, EDVR deteriorates significantly from dataset “5-5” to datasets “7-3” and “9-1”. For the final interpolation results, we can see that cascade models are sub-optimal in terms of overall performance. Although the deblurring module achieves a high PSNR, there is a loss of about 3 dB in the following interpolation stage. This may be mainly caused by the long temporal scope between two consecutive input frames.
A similar conclusion is also drawn in the works of [13, 27]. On the contrary, the joint models can usually achieve more accurate interpolation results. However, we observe that the interpolation performance of TNTT/TNTT* deteriorates heavily in the exposure setting “5-5” (from 32.49 to 28.39 on the GoPro dataset). This is mainly because the iterative synthesis of the middle frame may lead to sub-optimal results in the inter-frame interpolation. The same conclusion can be drawn from Table 3: the BIN model performs worse when it attempts to further interpolate the middle frame between former outputs. To visualize the comparison intuitively, we show two typical examples in Fig. 3. The first row shows that previous methods fail to generate a visually clear intermediate frame. This is either because they fail to restore a sharp frame in the deblurring phase, e.g. EDVR [32] and TNTT [13], or because the frame becomes blurry when interpolated from adjacent frames, e.g. Gao [7]+QVI [34], or BIN [27]. In the second row, we use the Sobel operator [28] to extract the contour of the interpolated results and overlap it with the contour of the ground truth. The red lines represent the ground-truth contour and the blue ones the synthesized outputs; a pinker and clearer overlapped image therefore indicates a more accurate interpolation result. As we can see, our interpolated frame shows the best overlap with the ground-truth image. Moreover, we shot 10 real 30 FPS videos using a cellphone camera, and generated the interpolated high-frame-rate video results with our method as well as with TNTT and BIN. Since there is no objective criterion to compare the generation quality, a user study was conducted for a fair comparison. According to more than 1k responses collected from Amazon Mechanical Turk, 78.4% of people think our results are better than TNTT’s, and 87.6% of people prefer ours over BIN’s results. The real-world video interpolation results are provided in our supplementary video. Uncertain time interval interpolation (sharp frames). To further validate the effectiveness of our proposed uncertain time interval interpolation algorithm, we compare different interpolation strategies when calculating the essential flow S1t. To construct videos with different time interval ratios, we sample the original high-frame-rate GoPro dataset with different sampling intervals, e.g. sequentially sampling one frame with alternating intervals of 6 frames and 2 frames to obtain the dataset with λ = 7/3. We compare our uncertain time interval algorithm (Model UTI) and its refined version (Model UTI-refine) with the original QVI [34] model, which is derived under λ = 1. We also provide a model GT whose optical flow is calculated with the ground-truth λ. Table 4 shows that our UTI and UTI-refine perform favorably against the QVI model except in the situation when λ = 5/5, which is owing to the optical flow estimation error in S23. However, we can see that the performance of QVI deteriorates more severely than ours when the value of λ deviates from 1. Also, the results show that our refinement network significantly improves the performance. 4.4 Ablation study To see the effectiveness of our designed modules, we perform the following extensive experiments. For the key-state restoration phase, we compare models using different structures/input frames with the proposed model. As we can see in Table 5, compared to the first-order residual, the model with the second-order residual increases the PSNR by around 1.5 dB.
Also, a model that simply cascades another stage-I architecture, i.e. without B1, B2 as input, performs worse than our proposed structure, suggesting that the original blurry information is essential for the second-order residual learning. Both ablation experiments show that our second-order residual is effective in refining the output of the first stage. For the interpolation phase, we have already analyzed the contribution of uncertain time interval interpolation in Table 4. Here, we evaluate the contribution of the flow refinement module. We fix the key-state restoration network and compare the interpolation outputs of the model with refinement (Model refine) and the model without refinement (Model w/o refine). As shown in Table 5, the model with refinement outperforms the model without refinement by around 0.6 dB. This improvement indicates that Ŝ23 becomes more accurate after refinement. 5 Conclusion In this work, we propose a method to tackle the video frame interpolation problem without knowing the temporal priors. Taking the relationship between exposure time and shutter period into consideration, we derive a general quadratic interpolation strategy without temporal priors. We also devise a key-states restoration network to extract temporally unambiguous sharp content from blurry frames. Our proposed method is practical for synthesizing a high-frame-rate sharp video from low-frame-rate blurry videos with different exposure settings. However, there are still limitations in our work; e.g., our uncertain time interval motion trajectory can only be derived when the acceleration remains constant. Though this assumption approximates most situations within a short exposure time interval (around 1/20 s), more challenging movements, such as motion with varying acceleration, exist in real scenarios. We hope to relax this assumption and obtain a more accurate trajectory estimation in future work. Broader Impact Video frame interpolation (VFI), which aims to overcome the temporal limitation of camera sensors, is a popular and important technology in a wide range of video processing tasks. For example, it could produce slow-motion videos without professional high-speed cameras, and it could perform frame rate up-conversion (or video restoration) for archival footage. However, existing VFI research mainly applies to videos with pre-defined temporal priors, such as sharp video frames or blurry videos with known exposure settings. This may largely limit its performance in complicated real-world situations. To the best of our knowledge, the video frame interpolation framework introduced in this paper is the first attempt to overcome these limitations. Our proposed technique may potentially benefit a series of real-world applications and users. On the one hand, it could be more practical and convenient for users who want to convert their own videos to slow motion, since they are not required to figure out the video sources, i.e. the complicated parameters of the camera sensors. On the other hand, it could reduce the workload of VFI-related applications, i.e. there would be no need to retrain new models for different exposure settings. Since video frame interpolation aims at video restoration and up-conversion (i.e. the output video shares the same content as the given video), our method is unlikely to cause negative ethical impact, provided we do not consider the content of the input video. Acknowledgments and Disclosure of Funding This work was supported in part by the Australian Research Council Projects: FL-170100117, DP-180103424, IH-180100002 and IC-190100031.
1. What is the focus and contribution of the paper regarding frame deblurring and interpolation? 2. What are the strengths of the proposed approach, particularly in its novel perspective and decomposition of the key instant frame state? 3. What are the weaknesses of the paper, especially regarding its experimental setup and comparison with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper looks into the mechanism of frame acquisition that leads to frame blurriness and low frame rate, and proposes a new perspective on joint frame deblurring and frame interpolation for various video exposure scenarios. The state transition model is an interesting and novel way to resolve the joint problem. In detail, the curvilinear representation and optical flow refinement are proposed to achieve better frame quality than the state-of-the-art method. The idea of the state transition model is somewhat novel to the community. Although the paper has provided experimental results on uncertain time interval interpolation, it is still unknown how the model will behave on real-world blurry and low-frame-rate videos. Overall, I highly recommend that the paper be accepted. Below we summarize the main advantages and flaws of this paper. Strengths + The paper shows the frame acquisition process when capturing moving objects, depicting the origins of motion blur and low frame rate. + The key instant frame state is decomposed from the frame capturing process and helps the later construction of the neural networks. + The experiments are sufficient and the ablation studies have validated the effectiveness of the proposed modules and the applicability to various exposure scenes. Weaknesses - The synthesis of the dataset may not be consistent with the actual blurry video acquisition process. Nevertheless, BIN shares the same inconsistency problem. Thus, do you ever apply your trained model to actual videos? What are the limitations? - The model size and running speed of the proposed method are not compared with existing methods such as BIN. - The term "generalized" in the title does not suit the context of this paper very well, because on the opposite side of "generalized", "scenario-specific" might come to people's minds; however, is there a scenario-specific video frame interpolation?
NIPS
Title Video Frame Interpolation without Temporal Priors Abstract Video frame interpolation, which aims to synthesize non-exist intermediate frames in a video sequence, is an important research topic in computer vision. Existing video frame interpolation methods have achieved remarkable results under specific assumptions, such as instant or known exposure time. However, in complicated realworld situations, the temporal priors of videos, i.e., frames per second (FPS) and frame exposure time, may vary from different camera sensors. When test videos are taken under different exposure settings from training ones, the interpolated frames will suffer significant misalignment problems. In this work, we solve the video frame interpolation problem in a general situation, where input frames can be acquired under uncertain exposure (and interval) time. Unlike previous methods that can only be applied to a specific temporal prior, we derive a general curvilinear motion trajectory formula from four consecutive sharp frames or two consecutive blurry frames without temporal priors. Moreover, utilizing constraints within adjacent motion trajectories, we devise a novel optical flow refinement strategy for better interpolation results. Finally, experiments demonstrate that one well-trained model is enough for synthesizing high-quality slow-motion videos under complicated real-world situations. Codes are available on https://github. com/yjzhang96/UTI-VFI. N/A Video frame interpolation, which aims to synthesize non-exist intermediate frames in a video sequence, is an important research topic in computer vision. Existing video frame interpolation methods have achieved remarkable results under specific assumptions, such as instant or known exposure time. However, in complicated realworld situations, the temporal priors of videos, i.e., frames per second (FPS) and frame exposure time, may vary from different camera sensors. When test videos are taken under different exposure settings from training ones, the interpolated frames will suffer significant misalignment problems. In this work, we solve the video frame interpolation problem in a general situation, where input frames can be acquired under uncertain exposure (and interval) time. Unlike previous methods that can only be applied to a specific temporal prior, we derive a general curvilinear motion trajectory formula from four consecutive sharp frames or two consecutive blurry frames without temporal priors. Moreover, utilizing constraints within adjacent motion trajectories, we devise a novel optical flow refinement strategy for better interpolation results. Finally, experiments demonstrate that one well-trained model is enough for synthesizing high-quality slow-motion videos under complicated real-world situations. Codes are available on https://github. com/yjzhang96/UTI-VFI. 1 Introduction Video frame interpolation aims to synthesize non-exist intermediate frames and thereby provides a visually fluid video sequence. It has broad application prospects, such as slow motion production [13], up-converting frame rate [3] and novel-view rendering [6]. Many state-of-the-art video interpolation methods [1, 12, 17, 34] aim to estimate the object motion and occlusion with the assistance of optical flow. Through refining forward and backward motion flows among several frames, these methods can directly warp pixels to synthesize desired intermediate frames. 
To achieve this goal, some popular datasets, of which either triplet images or 240fps highframe-rate videos, are collected as the ground-truth of real-world motions. Meanwhile, to evaluate the performance of proposed methods, the well-trained model is tested using frames collected in a similar way. Although significant improvement has demonstrated by experiments of recent works, people may ask if the same (or similar) performance can be achieved in complicated real-world situations. * indicates equal contribution. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. To comprehensively discuss this question, we first revisit the principle of video frame acquisition. As illustrated in Fig. 1 (a), the frame acquisition process usually includes two phases: exposure phase and readout phase. In the exposure phase, the shutter opens for a duration of t0 so that the photosensitive sensor is exposed. In the readout phase, the camera reads the charge on the pixel array and convert the signal to get the pixel value. Considering different technologies of cameras, the readout phase could be either overlapped or non-overlapped with the exposure phase. Here, Fig. 1 (a) is an example of non-overlapped exposure. For easy discussing, we define the time interval between two exposures as t1. Thus, a complete shutter period is defined as the time period t0 + t1. Correspondingly, frames per second (FPS) is defined as the reciprocal of the shutter period. Note that, t1 cannot be eliminated because of the intrinsic demand of the sensor. Meanwhile, t0 cannot be too short compared to shutter period, otherwise it will produce a visually discontinuous video. The exposure time t0 and the interval time t1 (or FPS 1t0+t1 ) are two important parameters of a camera sensor, and they could vary largely across different cameras [4]. Therefore, when we perform frame interpolation on real-world videos, following challenges should be further considered: 1) Due to the existence of exposure time, the movement of the camera and object may produce motion/dynamic blur within a video frame. Directly performing the interpolation between blurry frames would lead to inferior visual results. A more severe blur would usually occur in the lower frame-rate video since the exposure time is relatively long. 2) Simply combining deblurring and video interpolation techniques may not handle the blurry video frames well. For blurry video frames, we should not only focus on the inter-frame interpolation, but also perform the intra-frame interpolation. 3) Note that t0 and t1 may vary due to the limitation of equipment or different exposure settings, the number of interpolated frames and corresponding motion trajectories will vary accordingly. For example, in the instance of Fig 1 (a), if we want to up-convert the FPS by 10 times, we should interpolate 7 frames underlying each blurry frame, and 3 frames between the two consecutive frames. Similarly, the estimation of the motion trajectory must consider the uneven time intervals. According to our observation, most existing works cannot overcome these three challenges simultaneously. Although the most recent works [13, 27] manage to solve the problem of motion blur in video interpolation, they are trained on the specific exposure setting and could be hard to generalize to different situations. To address these issues, in this work, we consider the video frame interpolation problem in a more general situation and aim to deliver more accurate interpolation results. 
Specifically, giving a video sequence as the input, we first train a second-order residual key-states restoration network to synthesize the start and the end states for each frame, e.g. L0 and L1 in Fig. 1 (b). If there exists zero movement (misalignment) between two states, the video frame is regarded as one instant frame (i.e. without blur). Otherwise, the exposure time cannot be ignored, and both inter- and intra-interpolation are performed. Moreover, following the same assumption as [34], i.e. the acceleration of motion remains consistent during consecutive frames, we apply the quadratic model [18, 34] to the general video acquisition situation. We derive the general curvilinear motion representation without temporal priors from consecutive four key-states, such as L0, L1, L2 and L3 in Fig. 1 (b). Meanwhile, the relationship between t0 and t1 can be determined by the displacements between key-states, i.e. S01, S12 and S23. In addition, to reduce the adverse effects caused by inferior optical-flow estimation, we further refine the optical flows with the derived trajectory priors. Finally, the refined optical flows are utilized to perform high-quality intermediate frame synthesis. Overall, in this paper, we make following contributions: 1) We propose a restoration network to synthesize start and end states of the input video frames. This network is able to handle different exposure settings, and remove blur in the original video clip; 2) We derive a curvilinear motion representation which is sensitive to different exposure settings, thereby providing a more accurate frame alignment for uncertain time interval interpolation; 3) We further refine the optical flow with the trajectory priors to improve the interpolation results. We construct different datasets to simulate the different exposure settings in real scenarios. Comprehensive experiments on these datasets and real-world vidoes demonstrate the effectiveness of our proposed framework. 2 Related Works Video frame interpolation. Most popular video interpolation methods utilize optical flow [12, 34, 17, 1, 2, 35] to predict the motion for the interpolated frame. Some methods [23, 22, 9] estimate space-varying and separable convolution filters for each pixel, and synthesize the interpolated pixel from a convolution between two adjacent patches. Xu et al. [34] proposes a quadratic interpolation to allow the interpolated motion to be curvilinear instead of being uniform and linear. However, all these methods will encounter difficulties when processing the blurry video since the optical flow/motion estimation will be inaccurate. Video/Image deblurring. Conventional video deblurring methods [5, 10, 33] usually apply the deconvolution algorithm with the assistance of image priors or regulations. To make full use of adjacent frames, Hyun et al. [11] utilize inter-frame optical flow to estimate blur kernels. Ren et al. [25] also apply optical flow to facilitate the segmentation result. More recently, deep convolutional neural networks (CNN) have been applied to bypass the restriction of blur type or image priors [19, 30, 7, 20, 16, 36], and enable an end-to-end training scheme by introducing the synthetic realworld scene datasets [19, 30]. To exploit the temporal relationship, Nah et al. [20] propose a recurrent neural network (RNN) to iteratively update the hidden state for output frames. Wang et al. 
[32] devise a pyramid, cascading and deformable alignment module to conduct a better frame alignment in feature level, and their method won the first place in the NITRE19 video deblurring challenge [21]. There are also some works [37, 14, 24] learning to extract a video clip from a blurry image, which can be considered as a combination of image deblurring with intra-frame interpolation. Joint video deblurring and interpolation. Recent methods [13, 27] have been proposed to address the blurry video interpolation problem. Among them, Jin et al. [13] first extract several keyframes, and then interpolate the middle frame from two adjacent frames. Meantime, Shen et al. [27] proposed a joint interpolation method, where they simultaneously output the deblurred frame and interpolated frame in a pyramid framework. Both these methods have pre-defined a specific setting for the blurry video exposure mechanism, which may fail when applied to a video acquired from other equipment or other camera settings. 3 The Proposed Video Interpolation Scheme To address the aforementioned challenges of video frame interpolation without temporal priors, in this section, we introduce the proposed interpolation scheme in detail. Firstly, to overcome the problem caused by the uncertainty of the time interval, we derive a new quadratic formula for different exposure settings. Then, utilizing the motion flow priors contained in the formula, we further refine the estimated optical flow for more accurate time interval and trajectory estimation. Finally, we introduce the second-order residual learning strategy for key-states restoration from input frame sequences. 3.1 From equal time interval to uncertain time interval To interpolate intermediate frame Lt between two consecutive frames L1 and L2, the optical flow based video interpolation methods [12, 17, 34] aim to estimate the optical flow from frame L1 to Lt, or frame L2 to Lt. Recently, inspired by [18], Xu et al. [34] have relaxed the constrains of motion from linear displacement to quadratic curvilinear, which corresponds to acceleration-aware motions: S1t = (S12 − S01)/2× t2 + (S12 + S01)/2× t, (1) where Sab means the displacement of pixels from frame a to frame b, and it is calculated by optical flow. In order to keep the pixel coordinates aligned in each optical flow map, the start point of these optical flows should be the same. In general, the displacements are calculated as Ŝ12 = f1→2, Ŝ01 = −f1→0, where fa→b denotes the optical flow from frame a to frame b. However, Eq.(1) is based on the equal time interval assumption. This assumption is not applicable to the general situation where the time intervals t0 and t1 may vary accordingly. Here, we define a shutter period as one unit time, and the ratio between t1 and t0 is λ, i.e. t0 + t1 = 1, t1/t0 = λ. Different from [34] which employs three neighboring frames to calculate the quadratic trajectory, we take four consecutive key-states into consideration as shown in Fig. 1 (b). Naturally, if the time intervals become unknown, four key-states (i.e. three flows) are requested to determine a unique quadratic movement. If we assume the acceleration remains constant from frame L0 to L3, then we can express S01,S12 and S23 with velocity and acceleration: 2S12 = (2v1 + at1)× t1, S01 + S23 = (2v1 + at1)× t0. (2) This equation set indicates that vector S12 has a same direction with vector resultant S01 + S23. In addition, we can derive the time interval ratio λ as: λ = t1 t0 = 2S12 S01 + S23 . 
Thus, we can solve for $t_0$ and $t_1$ under the condition that $t_0 + t_1 = 1$. Further deriving the velocity and acceleration of the movement, we obtain the expression for the trajectory between frame $L_1$ and frame $L_2$:

$S_{1t} = \frac{(\lambda + 1)(S_{23} - S_{01})}{2} t^2 + \left(\lambda S_{01} + \frac{S_{01} + S_{23}}{2}\right) t, \quad t \in (t_{L_0}, t_{L_1}). \quad (4)$

Note that when the time intervals are equal, i.e. $\lambda = 1$, our Eq.(4) reduces to Eq.(1); in other words, the QVI interpolation [34] is a special case of our framework.

3.2 Optical flow refinement

As shown in Eq.(3) and (4), we can obtain the flow $S_{1t}$ from the pixel displacements among the four key-states. In order to keep the positions aligned, $S_{23}$ should be represented as:

$\hat{S}_{23} = f_{1\to 3} - f_{1\to 2}, \quad (5)$

which denotes the movement of the pixels of frame $L_1$ from time $t_{L_2}$ to $t_{L_3}$. In practice, we estimate all the optical flows using the state-of-the-art PWC-Net [31]. However, directly using the flow calculated by Eq.(5) does not work well in our situation, since there are two major sources of error: 1) the flow estimation error owing to the long time interval between frame 1 and frame 3, i.e. $f_{1\to 3}$; 2) the pixel misalignment introduced by the vector subtraction. Therefore, we propose a flow refinement network $\mathcal{F}_R$ to obtain the refined flow $\hat{S}'_{23}$. Since it is hard to obtain the ground truth of the target flow $S_{23}$, we employ the trajectory prior implied in Eq.(2) and (3) as our penalty. Specifically, $\hat{S}_{01} + \hat{S}_{23}$ and $\hat{S}_{12}$ should satisfy two implicit constraints: 1) the two vectors have the same direction; 2) since $\lambda$ is a constant, the ratio of the two vectors should be uniform across the image. With these two priors as constraints, we are able to correct the value of one optical flow when the other two are fixed. Note that, although $f_{1\to 0}$ and $f_{1\to 2}$ are also estimates of pixel displacements, they deliver more accurate motion estimation than $\hat{S}_{23}$. Therefore, the refinement process can be formulated as:

$\hat{S}'_{23} = \mathcal{F}_R(f_{1\to 0}, f_{1\to 2}, \hat{S}_{23}). \quad (6)$

We use a U-Net [26] with skip connections to learn the mapping from the original flow to the refined output. The aforementioned priors are implicitly encoded into our loss function, where we utilize $f_{1\to 0}$ and $f_{1\to 2}$ to constrain the output flow. The loss function is calculated as:

$\mathcal{L}_r = \left| \hat{S}_{23} - \left(\tfrac{2}{\lambda} f_{1\to 2} + f_{1\to 0}\right) \right|_1. \quad (7)$

Finally, the refined $\hat{S}'_{23}$ can be substituted into Eq.(4) to compute a more accurate $S_{1t}$.
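To make the curvilinear model concrete, here is a small NumPy sketch (ours, not the authors' released code) that evaluates the displacement of Eq.(4) and the trajectory-prior target used in the loss of Eq.(7). We read Eq.(4) with the time offset t measured from state L1 in shutter-period units, which is an interpretation on our part; setting lam = 1.0 recovers the quadratic model of [34] up to the re-parameterization of t.

import numpy as np

def curvilinear_displacement(s01, s23, lam, t):
    # Eq.(4): S_{1t} under constant acceleration, with the shutter period normalized to 1.
    # s01, s23: displacements over the two exposures, shape (H, W, 2); lam = t1 / t0.
    quad = (lam + 1.0) * (s23 - s01) / 2.0          # acceleration term a / 2
    lin = lam * s01 + (s01 + s23) / 2.0             # velocity term v_1 at state L1
    return quad * t ** 2 + lin * t

def trajectory_prior_target(f_1to2, f_1to0, lam):
    # Target flow implied by Eq.(2)-(3): S_23 = (2 / lambda) * f_{1->2} + f_{1->0}.
    # Eq.(7) penalizes the L1 distance between this target and the S_23 estimate
    # (we assume the refined output of F_R is the quantity being constrained).
    return (2.0 / lam) * f_1to2 + f_1to0

For instance, evaluating curvilinear_displacement(s01, s23_refined, lam, t) on a grid of t values between the two key-states gives the intermediate displacement fields that are then used for frame synthesis.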
3.3 Second-order residual learning for key-states restoration

Our principle for choosing key-states is to ensure that they are unambiguous under different exposure settings. For each input frame, we attempt to restore the instantaneous states at the start and end of its exposure, since their physical meaning is consistent across exposure settings. For sharp frames (i.e. without motion blur), the start and end states are identical. For blurry frames, the start and end states define the boundary of the motion blur, which makes them easier to restore. In addition, our choice shortens the temporal range for the subsequent interpolation, which leads to more accurate interpolation results. More discussion can be found in the experiment section.

As shown in Fig. 2, we propose a second-order residual learning pipeline to extract the key-states from input frames. First, to avoid the temporal ambiguity of the start and end states, four consecutive frames are fed into the network $\mathcal{F}_1$. Utilizing the implicit motion direction contained in the input sequence, the network is trained to synthesize residuals that are added to the input blurry frames, delivering the start and end states of $B_1$ and $B_2$. This process can be formulated as:

$(\hat{L}^s_i, \hat{L}^e_i) = \mathcal{F}_1(B_{seq}) + B_i, \quad i = 1, 2, \quad (8)$

where $(\hat{L}^s_i, \hat{L}^e_i)$ denotes the estimated instantaneous start and end states, respectively, and $B_{seq}$ denotes the input sequence $\{B_0, \cdots, B_3\}$. In our experiments, although the network $\mathcal{F}_1$ achieves reasonable performance, it still suffers from some limitations. First, the network takes four inputs to eliminate the temporal ambiguity, which somewhat decreases its deblurring capability. Second, the fitting ability of the residual is relatively poor when modeling more severe blur. To address these issues, we further improve the deblurring performance by introducing second-order residual learning. Specifically, we refer to Eq.(8) as the first-order residual, and derive the second-order residual learning as:

$(\hat{L}^s_i, \hat{L}^e_i) = \mathcal{F}_2\big(B_i, \mathcal{F}_1(B_{seq}) + B_i\big) + \mathcal{F}_1(B_{seq}) + B_i, \quad i = 1, 2. \quad (9)$

Here, the network $\mathcal{F}_2$ aims to synthesize a higher-order residual of the target mapping. Since the temporal order of $\hat{L}^s_i$ and $\hat{L}^e_i$ has already been determined by $\mathcal{F}_1$, $\mathcal{F}_2$ can focus on restoring a single pair of key-states. In our experiments, this structure improves the PSNR by around 1.5 dB.

4 Experiments

In this section, we introduce the datasets used for training and testing, and the training configuration of our models. We then compare the proposed framework with state-of-the-art methods both quantitatively and qualitatively. Finally, we carry out an ablation study of our proposed components.

4.1 Datasets

To simulate real-world situations and build datasets for more general video interpolation, we synthesize low-frame-rate videos from sharp high-frame-rate video sequences. Following the video acquisition principle discussed above, we average several consecutive frames taken by a 240 fps camera to simulate one frame taken by a low-frame-rate camera. As in existing blurry image/video datasets, such synthesis is feasible as long as the relative motion between camera and object is not so large as to produce 'ghosting' artifacts. Meanwhile, we discard several consecutive frames to simulate the shutter-closed time interval. In this way, we create videos filmed under different exposure settings by altering the numbers of averaged and discarded frames. Specifically, we denote the number of exposure frames as m and the number of discarded frames as n, so that m + n frames constitute one shutter period. We set m + n = 10 to down-sample the original 240 fps videos to 24 fps, a common frame rate in daily use. For fair comparison, we set m to odd numbers (m = 5, 7, 9), since most other methods require the middle frame as ground truth. We apply this synthesis rule to both the GoPro dataset [19] and the Adobe240 dataset [30], and name the synthetic datasets "dataset-m-n", yielding "GoPro-5-5", "GoPro-7-3", "GoPro-9-1" and "Adobe240-5-5", "Adobe240-7-3", "Adobe240-9-1", respectively. In addition, we also provide the datasets "GoPro-5-3" and "GoPro-7-1" to perform a fair comparison with [27], since it can only upsample the frame rate by multiples of 2. Note that other video interpolation datasets such as UCF101 [29] and Vimeo-90k [35] are not applicable for our comparison, since they only provide sharp frame triplets.
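The "dataset-m-n" construction described above can be sketched as follows. This is a simplified illustration assuming the 240 fps frames are already loaded as arrays; file I/O, the GoPro/Adobe240 directory layout, and the exact ground-truth packaging are omitted, and the function and variable names are ours.

import numpy as np

def synthesize_dataset_m_n(frames_240fps, m=5, n=5):
    # Average m consecutive 240 fps frames to simulate one exposure and discard the
    # next n frames to simulate the shutter-closed interval, so m + n frames form one
    # shutter period (m + n = 10 down-samples 240 fps to 24 fps, as in "GoPro-5-5").
    blurry_frames, key_states = [], []
    period = m + n
    for start in range(0, len(frames_240fps) - period + 1, period):
        exposure = np.stack(frames_240fps[start:start + m]).astype(np.float32)
        blurry_frames.append(exposure.mean(axis=0))
        # The first and last frames of the exposure serve as ground-truth start and
        # end key-states for training the restoration network of Section 3.3.
        key_states.append((frames_240fps[start], frames_240fps[start + m - 1]))
    return blurry_frames, key_states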
4.2 Implementation details

To train the key-states restoration network, we first train the network $\mathcal{F}_1$ for 200 epochs and then jointly train $\mathcal{F}_1$ and $\mathcal{F}_2$ for another 200 epochs. For the optical flow refinement network, 100 epochs are sufficient for convergence. We use the Adam solver [15] for optimization, with $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$. The learning rate is initially set to $10^{-4}$ and linearly decayed to 0. All weights are initialized with Xavier initialization [8], and biases are initialized to 0. In total, the model has 34.4 million trainable parameters. At test time, a single forward pass takes 0.23 s for the key-states restoration network and 0.18 s for the interpolation network on an NVIDIA GeForce GTX 1080 Ti graphics card.

4.3 Comparison with the state-of-the-art methods

Comparison methods. We employ two types of interpolation solutions for comparison. The first is the cascade model, which concatenates a deblurring model with a video frame interpolation model. Specifically, we combine the state-of-the-art image/video deblurring methods of Gao et al. [7] and EDVR [32] with the state-of-the-art multi-frame interpolation methods QVI [34] and Super SloMo [12]. We follow the officially released code of each method in all experiments. The second is the joint models TNTT and BIN, proposed by Jin et al. [13] and Shen et al. [27] respectively, which jointly perform deblurring and frame-rate upsampling. Since these two methods are devised for specific exposure settings, we introduce some workarounds to carry out a fairer and more reasonable comparison. Because the original TNTT [13] model needs to iteratively interpolate the middle frame to fill the vacant indices, we devise a specific interpolation sequence for each exposure setting, denoted TNTT*. In addition, since BIN [27] is devised to up-convert the frame rate by a factor of 2, which plays a similar role to our key-states restoration module, we compare its initial results with our first-stage outputs. For multi-frame interpolation results, we iteratively interpolate the outputs of BIN to obtain "8x frame rate" results. For this comparison, we prepare the datasets "5-3" and "7-1" as two different exposure settings of 30 fps videos. We re-train and test the BIN model on the mixed "5-3" and "7-1" datasets, whereas our model is trained only on the mixed "5-5", "7-3" and "9-1" datasets. We also test this well-trained model on the "5-3" and "7-1" settings; the experiments in Table 3 show the strong generalization ability of our proposed framework.

Blurry video interpolation. As shown in Table 1, Table 2 and Table 3, both our deblurring and our overall interpolation perform favorably against previous methods. Several important observations can be made from these results. First, in the deblurring phase, previous video deblurring methods have great difficulty maintaining strong performance on our datasets with different exposure settings. For example, the original TNTT, which is trained on "GoPro-9-1", generalizes poorly to the other test sets. Moreover, even when trained on our mixed datasets, EDVR deteriorates significantly from dataset "5-5" to datasets "7-3" and "9-1". For the final interpolation results, we can see that cascade models are sub-optimal in terms of overall performance. Although the deblurring module achieves a high PSNR, about 3 dB is lost in the subsequent interpolation stage. This is mainly caused by the long temporal scope between two consecutive input frames.
A similar conclusion is drawn in [13, 27]. In contrast, the joint models usually achieve more accurate interpolation results. However, we observe that the interpolation performance of TNTT/TNTT* deteriorates heavily in the "5-5" exposure setting (from 32.49 to 28.39 on the GoPro dataset). This is mainly because the iterative synthesis of the middle frame may lead to sub-optimal results in the inter-frame interpolation. The same conclusion can be drawn from Table 3: the BIN model performs worse when it further interpolates the middle frame between its previous outputs. To visualize the comparison intuitively, we show two typical examples in Fig. 3. The first row shows that previous methods fail to generate a visually clear intermediate frame, either because they fail to restore a sharp frame in the deblurring phase, e.g. EDVR [32] and TNTT [13], or because the frame becomes blurry when interpolated from adjacent frames, e.g. Gao [7]+QVI [34] or BIN [27]. In the second row, we use the Sobel operator [28] to extract the contour of each interpolated result and overlap it with the contour of the ground truth. Red lines represent ground-truth contours and blue lines the synthesized outputs; the pinker and sharper the overlapped image, the more accurate the interpolation result. As can be seen, our interpolated frame shows the best overlap with the ground-truth image. Moreover, we shot 10 real 30 fps videos with a mobile phone camera and generated interpolated high-frame-rate results with our method, as well as with TNTT and BIN. Since there is no objective criterion for comparing the generation quality on these videos, a user study was conducted for a fair comparison. According to more than 1,000 responses collected on Amazon Mechanical Turk, 78.4% of participants think our results are better than TNTT's, and 87.6% prefer ours over BIN's. The real-world video interpolation results are provided in our supplementary video.

Uncertain time interval interpolation (sharp frames). To further validate the effectiveness of our proposed uncertain time interval interpolation algorithm, we compare different interpolation strategies for calculating the essential flow $S_{1t}$. To construct videos with different time interval ratios, we sample the original high-frame-rate GoPro dataset with different sampling intervals, e.g. sequentially sampling one frame with alternating intervals of 6 frames and 2 frames yields the dataset with $\lambda = 7/3$. We compare our uncertain time interval algorithm (Model UTI) and its refined version (Model UTI-refine) with the original QVI [34] model, which is derived under $\lambda = 1$. We also provide a model GT, in which the optical flow is calculated with the ground-truth $\lambda$. Table 4 shows that our UTI and UTI-refine perform favorably against the QVI model except when $\lambda = 5/5$, which is due to the optical flow estimation error in $S_{23}$. However, the performance of QVI deteriorates much more severely than ours as $\lambda$ deviates from 1. The results also show that our refinement network significantly improves performance.

4.4 Ablation study

To verify the effectiveness of our designed modules, we perform the following experiments. For the key-state restoration phase, we compare models using different structures/input frames with the proposed model. As shown in Table 5, compared to the first-order residual, the model with the second-order residual increases the PSNR by around 1.5 dB.
In addition, a model that simply cascades another copy of the stage-I architecture, i.e. without $B_1$ and $B_2$ as inputs, performs worse than our proposed structure, suggesting that the original blurry information is essential for the second-order residual learning. Both ablation experiments show that our second-order residual is effective in refining the output of the first stage. For the interpolation phase, we have already analyzed the contribution of uncertain time interval interpolation in Table 4. Here, we evaluate the contribution of the flow refinement module. We fix the key-state restoration network and compare the interpolation outputs of the model with refinement (Model refine) and without refinement (Model w/o refine). As shown in Table 5, the model with refinement outperforms the model without refinement by around 0.6 dB. This improvement indicates that $\hat{S}_{23}$ becomes more accurate after refinement.

5 Conclusion

In this work, we propose a method to tackle the video frame interpolation problem without knowing temporal priors. Taking the relationship between exposure time and shutter period into consideration, we derive a general quadratic interpolation strategy that requires no temporal prior. We also devise a key-states restoration network to extract temporally unambiguous sharp content from blurry frames. Our proposed method makes it practical to synthesize a high-frame-rate sharp video from low-frame-rate blurry videos with different exposure settings. However, our work still has limitations; e.g., our uncertain time interval motion trajectory can only be derived when the acceleration remains constant. Although this assumption approximates most situations within a short exposure interval (around 1/20 s), more challenging movements such as variable-acceleration motion do exist in real scenarios. We hope to relax this assumption and obtain more accurate trajectory estimation in future work.

Broader Impact

Video frame interpolation (VFI), which aims to overcome the temporal limitations of camera sensors, is a popular and important technology in a wide range of video processing tasks. For example, it can produce slow-motion videos without professional high-speed cameras, and it can perform frame-rate up-conversion (or video restoration) for archival footage. However, existing VFI research mainly applies to videos with pre-defined temporal priors, such as sharp video frames or blurry videos with known exposure settings, which may largely limit its performance in complicated real-world situations. To the best of our knowledge, the video frame interpolation framework introduced in this paper makes the first attempt to overcome these limitations. Our proposed technique may benefit a range of real-world applications and users. On the one hand, it is more practical and convenient for users who want to convert their own videos to slow motion, since they are not required to know the video source, i.e. the detailed parameters of the camera sensor. On the other hand, it can reduce the workload of VFI-related applications, as new models need not be retrained for different exposure settings. Since video frame interpolation aims at video restoration and up-conversion (i.e. the output video shares the same content as the given video), our method is unlikely to cause negative ethical impact, leaving aside the content of the input video itself.

Acknowledgments and Disclosure of Funding

This work was supported in part by the Australian Research Council Projects: FL-170100117, DP-180103424, IH-180100002 and IC-190100031.
1. What is the focus of the paper regarding video frame interpolation? 2. What are the strengths of the proposed approach, particularly in its novelty and problem-solving? 3. What are the weaknesses of the paper, especially regarding its experimental procedures and limitations? 4. How does the reviewer assess the effectiveness of the proposed method on real videos? 5. Are there any concerns about the physical accuracy of the synthesis procedure used in the experiments?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper addresses the video frame interpolation problem. It explicitly models the exposure time and readout time to handle scenarios with uncertain exposure time. It presents a curvilinear motion trajectory formula and a novel optical flow refinement network for better results. Experiments show that the proposed method outperforms competitive methods quantitatively on synthetic datasets. Overall, there is sufficient novelty in the problem setting and the proposed solution. However, the experiments were only performed on synthetic datasets, and the synthesis procedure is not physically correct. Without seeing how the proposed method performs on real videos, it is not easy to judge its effectiveness.

Strengths
The paper has good novelty. It observes that direct interpolation between blurry frames leads to inferior results, and that a naïve combination of deblurring and interpolation cannot handle blurry videos well. In addition to analyzing the exposure time and readout time explicitly, the proposed method also generalizes the quadratic motion to curvilinear motion and designs the optical flow refinement network. In the quantitative evaluation, the proposed method outperforms methods that simply combine deblurring and interpolation as well as those performing the two tasks jointly.

Weaknesses
The procedure for synthesizing frames with different exposure times from a 240fps camera is not physically correct. For example, to obtain a 24fps video with a 1/48-s exposure time, the procedure averages five consecutive frames to simulate the exposure time and discards the next five frames for the readout time. Since each of the five averaged frames does not have a full 1/240-second exposure (part of that time is readout), the five frames do not span a continuous exposure of 1/48 second, and averaging them cannot accurately simulate a frame with a 1/48-s exposure time. The experiments were only carried out on synthetic datasets. As pointed out above, the synthesis procedure is not physically correct, so it is not easy to judge how the proposed method performs on real videos. The performance improvement is more significant in the 5-5 setting, but it is much less evident for the other two settings.
NIPS
Title
Video Frame Interpolation without Temporal Priors

Abstract
Video frame interpolation, which aims to synthesize non-existent intermediate frames in a video sequence, is an important research topic in computer vision. Existing video frame interpolation methods have achieved remarkable results under specific assumptions, such as instant or known exposure time. However, in complicated real-world situations, the temporal priors of videos, i.e., frames per second (FPS) and frame exposure time, may vary across camera sensors. When test videos are taken under exposure settings different from the training ones, the interpolated frames suffer significant misalignment problems. In this work, we solve the video frame interpolation problem in a general setting, where input frames can be acquired under uncertain exposure (and interval) times. Unlike previous methods that can only be applied to a specific temporal prior, we derive a general curvilinear motion trajectory formula from four consecutive sharp frames or two consecutive blurry frames without temporal priors. Moreover, utilizing constraints within adjacent motion trajectories, we devise a novel optical flow refinement strategy for better interpolation results. Finally, experiments demonstrate that one well-trained model is enough to synthesize high-quality slow-motion videos under complicated real-world situations. Code is available at https://github.com/yjzhang96/UTI-VFI.

1 Introduction
Video frame interpolation aims to synthesize non-existent intermediate frames and thereby provide a visually fluid video sequence. It has broad application prospects, such as slow-motion production [13], frame-rate up-conversion [3] and novel-view rendering [6]. Many state-of-the-art video interpolation methods [1, 12, 17, 34] estimate object motion and occlusion with the assistance of optical flow. By refining forward and backward motion flows across several frames, these methods can directly warp pixels to synthesize the desired intermediate frames.
To achieve this goal, several popular datasets, consisting of either image triplets or 240 fps high-frame-rate videos, have been collected as ground truth for real-world motions. To evaluate the performance of proposed methods, the trained models are then tested on frames collected in a similar way. Although recent works have demonstrated significant improvements in such experiments, one may ask whether the same (or similar) performance can be achieved in complicated real-world situations.

To discuss this question comprehensively, we first revisit the principle of video frame acquisition. As illustrated in Fig. 1 (a), the frame acquisition process usually includes two phases: an exposure phase and a readout phase. In the exposure phase, the shutter opens for a duration $t_0$ so that the photosensitive sensor is exposed. In the readout phase, the camera reads the charge on the pixel array and converts the signal to obtain the pixel values. Depending on the camera technology, the readout phase may or may not overlap with the exposure phase; Fig. 1 (a) shows an example of non-overlapped exposure. For ease of discussion, we define the time interval between two exposures as $t_1$. Thus, a complete shutter period is the time $t_0 + t_1$, and the frames per second (FPS) is defined as the reciprocal of the shutter period. Note that $t_1$ cannot be eliminated because of intrinsic sensor requirements, and $t_0$ cannot be too short relative to the shutter period, otherwise the video will appear visually discontinuous. The exposure time $t_0$ and the interval time $t_1$ (or the FPS $\frac{1}{t_0 + t_1}$) are two important parameters of a camera sensor, and they can vary greatly across different cameras [4]. Therefore, when we perform frame interpolation on real-world videos, the following challenges must be considered: 1) Due to the exposure time, the movement of the camera and objects may produce motion/dynamic blur within a video frame. Directly interpolating between blurry frames leads to inferior visual results, and more severe blur usually occurs in lower-frame-rate videos since the exposure time is relatively long. 2) Simply combining deblurring and video interpolation techniques may not handle blurry video frames well: for blurry frames, we should not only perform inter-frame interpolation but also intra-frame interpolation. 3) Since $t_0$ and $t_1$ may vary due to equipment limitations or different exposure settings, the number of interpolated frames and the corresponding motion trajectories vary accordingly. For example, in the instance of Fig. 1 (a), if we want to up-convert the FPS by 10 times, we should interpolate 7 frames underlying each blurry frame and 3 frames between two consecutive frames. Similarly, the estimation of the motion trajectory must account for the uneven time intervals. According to our observations, most existing works cannot overcome these three challenges simultaneously. Although the most recent works [13, 27] manage to handle motion blur in video interpolation, they are trained on a specific exposure setting and can be hard to generalize to other situations. To address these issues, in this work we consider the video frame interpolation problem in a more general setting and aim to deliver more accurate interpolation results.
1. What is the focus and contribution of the paper on video frame interpolation? 2. What are the strengths of the proposed approach, particularly in terms of its ability to handle unknown exposure times? 3. What are the weaknesses of the paper, especially in comparison to other works in the area of frame interpolation? 4. Do you have any concerns regarding the assumption of constant acceleration in the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper presents a method for interpolating frames that may be blurry and/or have unknown exposure times. The idea is to train networks to recover sharp beginning and end points in time for the video frames, to compute multi-frame flow assuming constant acceleration, and then to use this to interpolate the frames. The authors show that the training and test data need not have the same exposure times, and they show good results in comparison to several other methods on a few datasets.

Strengths
The strength of this approach is that it does not make as many assumptions as previous work about the exposure times of the input frames, and it can effectively jointly deblur and interpolate video frames.

Weaknesses
The main weakness of this paper is that in some cases the improvements over previous work are not that large, in terms of both visual results (Figure 3) and PSNR/SSIM. There is a lot of work in the area of frame interpolation, and I think the approach here is not groundbreaking. Another smaller concern is how reasonable the constant acceleration assumption is in practice. Does it hold often? I imagine it may break down in real scenarios.
NIPS
Title Video Frame Interpolation without Temporal Priors Abstract Video frame interpolation, which aims to synthesize non-exist intermediate frames in a video sequence, is an important research topic in computer vision. Existing video frame interpolation methods have achieved remarkable results under specific assumptions, such as instant or known exposure time. However, in complicated realworld situations, the temporal priors of videos, i.e., frames per second (FPS) and frame exposure time, may vary from different camera sensors. When test videos are taken under different exposure settings from training ones, the interpolated frames will suffer significant misalignment problems. In this work, we solve the video frame interpolation problem in a general situation, where input frames can be acquired under uncertain exposure (and interval) time. Unlike previous methods that can only be applied to a specific temporal prior, we derive a general curvilinear motion trajectory formula from four consecutive sharp frames or two consecutive blurry frames without temporal priors. Moreover, utilizing constraints within adjacent motion trajectories, we devise a novel optical flow refinement strategy for better interpolation results. Finally, experiments demonstrate that one well-trained model is enough for synthesizing high-quality slow-motion videos under complicated real-world situations. Codes are available on https://github. com/yjzhang96/UTI-VFI. N/A Video frame interpolation, which aims to synthesize non-exist intermediate frames in a video sequence, is an important research topic in computer vision. Existing video frame interpolation methods have achieved remarkable results under specific assumptions, such as instant or known exposure time. However, in complicated realworld situations, the temporal priors of videos, i.e., frames per second (FPS) and frame exposure time, may vary from different camera sensors. When test videos are taken under different exposure settings from training ones, the interpolated frames will suffer significant misalignment problems. In this work, we solve the video frame interpolation problem in a general situation, where input frames can be acquired under uncertain exposure (and interval) time. Unlike previous methods that can only be applied to a specific temporal prior, we derive a general curvilinear motion trajectory formula from four consecutive sharp frames or two consecutive blurry frames without temporal priors. Moreover, utilizing constraints within adjacent motion trajectories, we devise a novel optical flow refinement strategy for better interpolation results. Finally, experiments demonstrate that one well-trained model is enough for synthesizing high-quality slow-motion videos under complicated real-world situations. Codes are available on https://github. com/yjzhang96/UTI-VFI. 1 Introduction Video frame interpolation aims to synthesize non-exist intermediate frames and thereby provides a visually fluid video sequence. It has broad application prospects, such as slow motion production [13], up-converting frame rate [3] and novel-view rendering [6]. Many state-of-the-art video interpolation methods [1, 12, 17, 34] aim to estimate the object motion and occlusion with the assistance of optical flow. Through refining forward and backward motion flows among several frames, these methods can directly warp pixels to synthesize desired intermediate frames. 
To achieve this goal, some popular datasets, of which either triplet images or 240fps highframe-rate videos, are collected as the ground-truth of real-world motions. Meanwhile, to evaluate the performance of proposed methods, the well-trained model is tested using frames collected in a similar way. Although significant improvement has demonstrated by experiments of recent works, people may ask if the same (or similar) performance can be achieved in complicated real-world situations. * indicates equal contribution. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. To comprehensively discuss this question, we first revisit the principle of video frame acquisition. As illustrated in Fig. 1 (a), the frame acquisition process usually includes two phases: exposure phase and readout phase. In the exposure phase, the shutter opens for a duration of t0 so that the photosensitive sensor is exposed. In the readout phase, the camera reads the charge on the pixel array and convert the signal to get the pixel value. Considering different technologies of cameras, the readout phase could be either overlapped or non-overlapped with the exposure phase. Here, Fig. 1 (a) is an example of non-overlapped exposure. For easy discussing, we define the time interval between two exposures as t1. Thus, a complete shutter period is defined as the time period t0 + t1. Correspondingly, frames per second (FPS) is defined as the reciprocal of the shutter period. Note that, t1 cannot be eliminated because of the intrinsic demand of the sensor. Meanwhile, t0 cannot be too short compared to shutter period, otherwise it will produce a visually discontinuous video. The exposure time t0 and the interval time t1 (or FPS 1t0+t1 ) are two important parameters of a camera sensor, and they could vary largely across different cameras [4]. Therefore, when we perform frame interpolation on real-world videos, following challenges should be further considered: 1) Due to the existence of exposure time, the movement of the camera and object may produce motion/dynamic blur within a video frame. Directly performing the interpolation between blurry frames would lead to inferior visual results. A more severe blur would usually occur in the lower frame-rate video since the exposure time is relatively long. 2) Simply combining deblurring and video interpolation techniques may not handle the blurry video frames well. For blurry video frames, we should not only focus on the inter-frame interpolation, but also perform the intra-frame interpolation. 3) Note that t0 and t1 may vary due to the limitation of equipment or different exposure settings, the number of interpolated frames and corresponding motion trajectories will vary accordingly. For example, in the instance of Fig 1 (a), if we want to up-convert the FPS by 10 times, we should interpolate 7 frames underlying each blurry frame, and 3 frames between the two consecutive frames. Similarly, the estimation of the motion trajectory must consider the uneven time intervals. According to our observation, most existing works cannot overcome these three challenges simultaneously. Although the most recent works [13, 27] manage to solve the problem of motion blur in video interpolation, they are trained on the specific exposure setting and could be hard to generalize to different situations. To address these issues, in this work, we consider the video frame interpolation problem in a more general situation and aim to deliver more accurate interpolation results. 
Specifically, given a video sequence as input, we first train a second-order residual key-states restoration network to synthesize the start and end states of each frame, e.g., L0 and L1 in Fig. 1 (b). If there is no movement (misalignment) between the two states, the video frame is regarded as an instant frame (i.e., without blur). Otherwise, the exposure time cannot be ignored, and both inter- and intra-frame interpolation are performed. Moreover, following the same assumption as [34], i.e., that the acceleration of the motion remains consistent across consecutive frames, we apply the quadratic model [18, 34] to the general video acquisition setting. We derive a general curvilinear motion representation without temporal priors from four consecutive key-states, such as L0, L1, L2 and L3 in Fig. 1 (b). Meanwhile, the relationship between t0 and t1 can be determined from the displacements between key-states, i.e., S01, S12 and S23. In addition, to reduce the adverse effects of inferior optical-flow estimation, we further refine the optical flows with the derived trajectory priors. Finally, the refined optical flows are used to synthesize high-quality intermediate frames. Overall, in this paper we make the following contributions: 1) We propose a restoration network to synthesize the start and end states of the input video frames. This network can handle different exposure settings and removes blur from the original video clip; 2) We derive a curvilinear motion representation that adapts to different exposure settings, thereby providing more accurate frame alignment for interpolation with uncertain time intervals; 3) We further refine the optical flow with the trajectory priors to improve the interpolation results. We construct different datasets to simulate the different exposure settings found in real scenarios. Comprehensive experiments on these datasets and on real-world videos demonstrate the effectiveness of our proposed framework.

2 Related Works
Video frame interpolation. Most popular video interpolation methods utilize optical flow [12, 34, 17, 1, 2, 35] to predict the motion of the interpolated frame. Some methods [23, 22, 9] estimate space-varying and separable convolution filters for each pixel, and synthesize each interpolated pixel from a convolution between two adjacent patches. Xu et al. [34] propose a quadratic interpolation that allows the interpolated motion to be curvilinear instead of uniform and linear. However, all these methods encounter difficulties when processing blurry videos, since the optical flow/motion estimation becomes inaccurate.
Video/Image deblurring. Conventional video deblurring methods [5, 10, 33] usually apply deconvolution algorithms with the assistance of image priors or regularizers. To make full use of adjacent frames, Hyun et al. [11] utilize inter-frame optical flow to estimate blur kernels. Ren et al. [25] also apply optical flow to facilitate segmentation. More recently, deep convolutional neural networks (CNNs) have been applied to bypass the restriction to specific blur types or image priors [19, 30, 7, 20, 16, 36], enabling an end-to-end training scheme by introducing synthetic real-world scene datasets [19, 30]. To exploit the temporal relationship, Nah et al. [20] propose a recurrent neural network (RNN) to iteratively update the hidden state across output frames.
Wang et al. [32] devise a pyramid, cascading and deformable alignment module to perform better frame alignment at the feature level, and their method won first place in the NTIRE 2019 video deblurring challenge [21]. There are also works [37, 14, 24] that learn to extract a video clip from a single blurry image, which can be considered as a combination of image deblurring and intra-frame interpolation.
Joint video deblurring and interpolation. Recent methods [13, 27] have been proposed to address the blurry video interpolation problem. Among them, Jin et al. [13] first extract several keyframes and then interpolate the middle frame from two adjacent frames. Meanwhile, Shen et al. [27] propose a joint interpolation method that simultaneously outputs the deblurred frame and the interpolated frame in a pyramid framework. Both methods presuppose a specific setting of the blurry video exposure mechanism, and may therefore fail when applied to videos acquired with other equipment or other camera settings.

3 The Proposed Video Interpolation Scheme
To address the aforementioned challenges of video frame interpolation without temporal priors, in this section we introduce the proposed interpolation scheme in detail. First, to overcome the problem caused by the uncertainty of the time interval, we derive a new quadratic formula that covers different exposure settings. Then, utilizing the motion flow priors contained in this formula, we further refine the estimated optical flow for more accurate time interval and trajectory estimation. Finally, we introduce the second-order residual learning strategy for restoring key-states from input frame sequences.

3.1 From equal time intervals to uncertain time intervals
To interpolate an intermediate frame L_t between two consecutive frames L_1 and L_2, optical-flow-based video interpolation methods [12, 17, 34] estimate the optical flow from frame L_1 to L_t, or from frame L_2 to L_t. Recently, inspired by [18], Xu et al. [34] relaxed the constraint on the motion from linear displacement to a quadratic curvilinear model, which corresponds to acceleration-aware motion:
$$S_{1t} = \frac{S_{12} - S_{01}}{2}\, t^2 + \frac{S_{12} + S_{01}}{2}\, t, \quad (1)$$
where $S_{ab}$ denotes the displacement of pixels from frame a to frame b and is calculated from optical flow. In order to keep the pixel coordinates aligned across the optical flow maps, all of these flows should share the same start point. In general, the displacements are calculated as $\hat{S}_{12} = f_{1\to 2}$ and $\hat{S}_{01} = -f_{1\to 0}$, where $f_{a\to b}$ denotes the optical flow from frame a to frame b. However, Eq. (1) relies on the equal-time-interval assumption, which does not hold in the general situation where the time intervals t0 and t1 may vary. Here, we define one shutter period as one unit of time and the ratio between t1 and t0 as λ, i.e., $t_0 + t_1 = 1$ and $t_1 / t_0 = \lambda$. Different from [34], which employs three neighboring frames to calculate the quadratic trajectory, we take four consecutive key-states into consideration, as shown in Fig. 1 (b). Naturally, when the time intervals are unknown, four key-states (i.e., three flows) are required to determine a unique quadratic movement. If we assume the acceleration remains constant from frame L_0 to L_3, we can express S_01, S_12 and S_23 in terms of velocity and acceleration:
$$2 S_{12} = (2 v_1 + a t_1)\, t_1, \qquad S_{01} + S_{23} = (2 v_1 + a t_1)\, t_0. \quad (2)$$
This equation set indicates that the vector $S_{12}$ has the same direction as the resultant vector $S_{01} + S_{23}$. In addition, we can derive the time interval ratio λ as
$$\lambda = \frac{t_1}{t_0} = \frac{2 S_{12}}{S_{01} + S_{23}}. \quad (3)$$
We can thus solve for t0 and t1 under the condition $t_0 + t_1 = 1$. Further deriving the velocity and acceleration of the movement, we obtain the expression of the trajectory between frame L_1 and frame L_2:
$$S_{1t} = \frac{(\lambda + 1)(S_{23} - S_{01})}{2}\, t^2 + \Big(\lambda S_{01} + \frac{S_{01} + S_{23}}{2}\Big)\, t, \quad t \in (t_{L_0}, t_{L_1}). \quad (4)$$
Note that when the time intervals are equal, i.e., λ = 1, our Eq. (4) degenerates to Eq. (1), i.e., the QVI interpolation [34] is a special case of our framework.
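To make the derivation above concrete, the following is a minimal NumPy sketch of how Eqs. (3)-(4) could be evaluated from dense displacement maps. It is an illustration under our own assumptions, not the authors' implementation: in particular, Eq. (3) is a ratio of vectors, and reducing it to a single scalar λ by taking the median of per-pixel magnitude ratios is merely one possible choice.

```python
import numpy as np

def estimate_lambda(S01, S12, S23, eps=1e-6):
    """Scalar estimate of lambda = t1 / t0 from Eq. (3).

    S01, S12, S23: displacement maps of shape (H, W, 2), all expressed in the
    coordinate frame of L1 (S01 = -f_{1->0}, S12 = f_{1->2}, S23 from Eq. (5)).
    Eq. (3) is a ratio of vectors; here we reduce it to a scalar by taking the
    median of per-pixel magnitude ratios -- one possible choice, not the paper's.
    """
    num = 2.0 * np.linalg.norm(S12, axis=-1)
    den = np.linalg.norm(S01 + S23, axis=-1) + eps
    return float(np.median(num / den))

def quadratic_displacement(S01, S23, lam, t):
    """Per-pixel displacement S_1t at normalized time t, following Eq. (4)."""
    quad = 0.5 * (lam + 1.0) * (S23 - S01)   # acceleration-related term
    lin = lam * S01 + 0.5 * (S01 + S23)      # velocity-related term
    return quad * t ** 2 + lin * t

# toy usage with random flows
H, W = 4, 4
S01, S12, S23 = (np.random.randn(H, W, 2) for _ in range(3))
lam = estimate_lambda(S01, S12, S23)
S1t = quadratic_displacement(S01, S23, lam, t=0.5)
```

In the full pipeline, the refined flow described in Section 3.2 below would be used in place of the raw S23 when evaluating Eq. (4).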
3.2 Optical flow refinement
As shown in Eqs. (3) and (4), we can obtain the flow $S_{1t}$ from the pixel displacements among the four key-states. In order to keep the positions aligned, $S_{23}$ should be represented as
$$\hat{S}_{23} = f_{1\to 3} - f_{1\to 2}, \quad (5)$$
which denotes the movement of the pixels of frame L_1 from time $t_{L_2}$ to $t_{L_3}$. In practice, we estimate all optical flows using the state-of-the-art PWC-Net [31]. However, directly using the flow calculated by Eq. (5) does not work well in our situation, since it may contain serious errors from two sources: 1) the flow estimation error caused by the long time interval between frame 1 and frame 3, i.e., in $f_{1\to 3}$; 2) the pixel misalignment introduced when we perform the vector subtraction. Therefore, we propose a flow refinement network $F_R$ to obtain a refined flow $\hat{S}'_{23}$. Since it is hard to obtain the ground truth of the target flow $S_{23}$, we employ the trajectory priors implied by Eqs. (2) and (3) as our penalization. Specifically, $\hat{S}_{01} + \hat{S}_{23}$ and $\hat{S}_{12}$ should satisfy the following two implicit constraints: 1) the two vectors have the same direction; 2) since λ is a constant, the ratio between the two vectors should be uniform across the image. With these two priors as constraints, we are able to correct one optical flow when the other two are fixed. Note that, although $f_{1\to 0}$ and $f_{1\to 2}$ are also estimates of pixel displacements, they deliver more accurate motion estimation than $\hat{S}_{23}$. Therefore, the refinement process can be formulated as
$$\hat{S}'_{23} = F_R(f_{1\to 0}, f_{1\to 2}, \hat{S}_{23}). \quad (6)$$
We use a U-Net [26] with skip connections to learn the mapping from the original flow to the refined output. The aforementioned priors are implicitly encoded in our loss function, where we utilize $f_{1\to 0}$ and $f_{1\to 2}$ to constrain the output flow. The loss function is calculated as
$$L_r = \big|\hat{S}'_{23} - \big(\tfrac{2}{\lambda} f_{1\to 2} + f_{1\to 0}\big)\big|_1. \quad (7)$$
Finally, the refined $\hat{S}'_{23}$ can be substituted into Eq. (4) to compute a more accurate $S_{1t}$.
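As a sketch of how the refinement step of Eqs. (6)-(7) might look in code, the snippet below uses a small convolutional stand-in for the U-Net refinement network; the module definition, tensor shapes and the fixed scalar λ are our own assumptions, and the L1 penalty is applied to the refined flow as in Eq. (7).

```python
import torch
import torch.nn as nn

class FlowRefiner(nn.Module):
    """Small convolutional stand-in for the U-Net refinement network F_R of Eq. (6).

    Inputs f_{1->0}, f_{1->2} and the raw estimate S23_hat (each B x 2 x H x W);
    the network predicts a residual correction to S23_hat.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, f10, f12, s23_hat):
        return s23_hat + self.net(torch.cat([f10, f12, s23_hat], dim=1))

def trajectory_prior_loss(s23_refined, f10, f12, lam):
    """L1 penalty of Eq. (7): the refined S23 should match (2/lambda) f_{1->2} + f_{1->0}."""
    target = (2.0 / lam) * f12 + f10
    return (s23_refined - target).abs().mean()

# toy usage
B, H, W = 2, 64, 64
f10, f12, s23_hat = (torch.randn(B, 2, H, W) for _ in range(3))
refiner = FlowRefiner()
s23_refined = refiner(f10, f12, s23_hat)
loss = trajectory_prior_loss(s23_refined, f10, f12, lam=1.5)
loss.backward()
```

A real implementation would use a full U-Net with skip connections and obtain λ from Eq. (3) rather than fixing it by hand.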
3.3 Second-order residual learning for key-states restoration
Our principle for choosing the key-states is to ensure that they are unambiguous under different exposure settings. For each input frame, we attempt to restore the instant states at the start and the end of the exposure, since their physical meaning is consistent across exposure settings. For sharp frames (i.e., without motion blur), the start and end states are the same. For blurry frames, the start and end states define the boundary of the motion blur, which makes them easier to restore. In addition, this choice shortens the temporal range for the subsequent interpolation, which leads to more accurate interpolation results. More discussion can be found in the experiment section. As shown in Fig. 2, we propose a second-order residual learning pipeline to extract the key-states from the input frames. Firstly, in order to avoid the temporal ambiguity of the start and end states, four consecutive frames are fed into the network $F_1$. Utilizing the implicit motion direction contained in the input sequence, the network is trained to synthesize residuals that are summed with the input blurry frames, delivering the start and end states of $B_1$ and $B_2$. This process can be formulated as
$$(\hat{L}^s_i, \hat{L}^e_i) = F_1(B_{seq}) + B_i, \quad i = 1, 2, \quad (8)$$
where $(\hat{L}^s_i, \hat{L}^e_i)$ denote the estimated instant start and end states, respectively, and $B_{seq}$ denotes the input sequence $\{B_0, \dots, B_3\}$. In our experiments, although the network $F_1$ achieves reasonable performance, we find that it still suffers from some limitations. Firstly, the network takes four inputs to eliminate the temporal ambiguity, which somewhat decreases its deblurring capability. Secondly, the fitting ability of the residual is relatively poor when modeling more severe blur. To address these issues, we further improve the deblurring performance by introducing second-order residual learning. Specifically, we refer to Eq. (8) as the first-order residual, and derive the second-order residual learning as
$$(\hat{L}^s_i, \hat{L}^e_i) = F_2\big(B_i,\, F_1(B_{seq}) + B_i\big) + F_1(B_{seq}) + B_i, \quad i = 1, 2. \quad (9)$$
Here, the network $F_2$ aims to synthesize the higher-order residual of the target mapping. Since the temporal order of $\hat{L}^s_i$ and $\hat{L}^e_i$ has already been determined by $F_1$, $F_2$ can focus on restoring a single pair of key-states. In our experiments, this structure improves the PSNR by around 1.5 dB.

4 Experiments
In this section, we introduce the datasets used for training and testing, and the training configuration of our models. We then compare the proposed framework with state-of-the-art methods both quantitatively and qualitatively. Finally, we carry out an ablation study of our proposed components.

4.1 Datasets
To simulate real-world situations and build datasets for more general video interpolation, we synthesize low-frame-rate videos from sharp high-frame-rate video sequences. Considering the video acquisition principle discussed above, we average several consecutive frames taken by a 240fps camera to simulate one frame taken by a low-frame-rate camera. As for all existing blurry image/video datasets, such synthesis is feasible if the relative motion between the camera and the object is not so large as to produce 'ghosting' artifacts. Meanwhile, we discard several consecutive frames to simulate the time interval during which the shutter is closed. In this way, we create videos filmed under different exposure settings by altering the number of frames averaged and discarded. Specifically, we denote the number of exposure frames as m and the number of discarded frames as n, so that m + n frames constitute one shutter period. We set m + n = 10 to down-sample the original 240 fps videos to 24 fps, a common frame rate in daily life. For fair comparisons, we set m to odd numbers (m = 5, 7, 9), since most other methods require the middle frame as ground truth. We apply this synthesis rule to both the GoPro dataset [19] and the Adobe240 dataset [30], and name the resulting synthetic datasets "dataset-m-n", obtaining "GoPro-5-5", "GoPro-7-3", "GoPro-9-1" and "Adobe240-5-5", "Adobe240-7-3", "Adobe240-9-1", respectively. In addition, we also provide the datasets "GoPro-5-3" and "GoPro-7-1" to perform a fair comparison with [27], since it can only upsample the frame rate by multiples of 2. Note that other video interpolation datasets, such as UCF101 [29] and Vimeo-90k [35], are not applicable to our comparison, since they only provide sharp frame triplets.
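The synthesis rule above can be summarized in a short script. The sketch below is a simplified illustration (our own variable names, no boundary handling, averaging directly in pixel space) of how a "dataset-m-n" sequence could be generated from a 240fps clip.

```python
import numpy as np

def synthesize_low_fps(frames_240fps, m, n):
    """Simulate one "dataset-m-n" sequence: every shutter period covers m + n
    source frames; the first m are averaged to mimic the exposure (motion blur),
    the remaining n are discarded to mimic the closed-shutter interval."""
    blurry, key_states = [], []
    period = m + n
    for start in range(0, len(frames_240fps) - period + 1, period):
        exposure = frames_240fps[start:start + m]
        blurry.append(np.mean(exposure, axis=0))
        key_states.append((exposure[0], exposure[-1]))   # sharp start / end states
    return blurry, key_states

# e.g. "GoPro-7-3": average 7 frames, discard 3, turning 240 fps into 24 fps
video = [np.random.rand(64, 64, 3) for _ in range(240)]
blurry_frames, keys = synthesize_low_fps(video, m=7, n=3)
```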
4.2 Implementation details
To train the key-states restoration network, we first train the network F1 for 200 epochs and then jointly train the networks F1 and F2 for another 200 epochs. To train the optical flow refinement network, 100 epochs are enough for convergence. We use the Adam [15] optimizer with β1 = 0.9, β2 = 0.999 and ε = 10−8. The learning rate is initially set to 10−4 and linearly decayed to 0. All weights are initialized using Xavier initialization [8], and biases are initialized to 0. In total, there are 34.4 million trainable parameters. At test time, a single forward pass takes 0.23s for the key-states restoration network and 0.18s for the interpolation network on an NVIDIA GeForce GTX 1080 Ti graphics card.

4.3 Comparison with the state-of-the-art methods
Comparison methods. We employ two types of interpolation solutions for comparison. The first is the cascade model, which concatenates a deblurring model with a video frame interpolation model. Specifically, we combine the state-of-the-art image/video deblurring methods of Gao et al. [7] and EDVR [32] with the state-of-the-art multi-frame interpolation methods QVI [34] and Super SloMo [12]. We follow the officially released code of these methods in all experiments. The second is the joint models TNTT and BIN, proposed by Jin et al. [13] and Shen et al. [27], respectively, which jointly perform deblurring and frame-rate upsampling. Since these two methods are designed for a specific exposure setting, we introduce some workarounds to carry out a fairer and more reasonable comparison. Since the original TNTT [13] model needs to iteratively interpolate the middle frame to fill the vacant indices, we devise a specific interpolation sequence for each exposure setting, denoted TNTT*. In addition, since BIN [27] is designed to up-convert the frame rate by a factor of 2, which plays a role similar to our key-states restoration module, we compare its initial results with our first-stage outputs. For multi-frame interpolation results, we iteratively interpolate the outputs of BIN to obtain the "8x frame rate" results. For this comparison, we prepare the datasets "5-3" and "7-1" as two different exposure settings of 30 fps video. We re-train and test the BIN model on the mixed datasets "5-3" and "7-1", whereas our model is only trained on the mixed datasets "5-5", "7-3" and "9-1". We also test our trained model on the "5-3" and "7-1" settings; the experiments in Table 3 show the strong generalization ability of our proposed framework.
Blurry video interpolation. As shown in Table 1, Table 2 and Table 3, both our deblurring and our overall interpolation perform favorably against previous methods. In addition, several important observations can be made from these results. Firstly, in the deblurring phase, previous video deblurring methods have great difficulty maintaining good performance on our datasets with different exposure settings. For example, the original TNTT, which is trained on "GoPro-9-1", generalizes poorly to the other test sets. Moreover, even when trained on our mixed datasets, EDVR deteriorates significantly from dataset "5-5" to datasets "7-3" and "9-1". For the final interpolation results, we can see that cascade models are sub-optimal in terms of overall performance: although their deblurring modules achieve high PSNR, about 3 dB is lost in the subsequent interpolation stage. This may be mainly caused by the long temporal scope between two consecutive input frames.
A similar conclusion was also reached in [13, 27]. On the contrary, the joint models usually achieve more accurate interpolation results. However, we observe that the interpolation performance of TNTT/TNTT* deteriorates heavily under the exposure setting "5-5" (from 32.49 to 28.39 on the GoPro dataset). This is mainly because the iterative synthesis of the middle frame may lead to sub-optimal results in the inter-frame interpolation. The same conclusion can be drawn from Table 3: the BIN model performs poorly when it attempts to further interpolate the middle frame between its previous outputs. To visualize the comparison intuitively, we show two typical examples in Fig. 3. The first row shows that previous methods fail to generate a visually clear intermediate frame, either because they fail to restore a sharp frame in the deblurring phase, e.g., EDVR [32] and TNTT [13], or because the frame becomes blurry when interpolated from adjacent frames, e.g., Gao [7]+QVI [34] and BIN [27]. In the second row, we use the Sobel operator [28] to extract the contours of the interpolated results and overlap them with the contours of the ground truth. Red lines represent the ground-truth contours and blue lines the synthesized outputs; the pinker and clearer the overlapped image, the more accurate the interpolation result. As we can see, our interpolated frame overlaps best with the ground-truth image. Moreover, we shot 10 real 30 FPS videos with a phone camera and generated the interpolated high-frame-rate results with our method, as well as with TNTT and BIN. Since there is no objective criterion to compare the generation quality, we conducted a user study for a fair comparison. According to more than 1k responses collected on Amazon Mechanical Turk, 78.4% of participants think our results are better than TNTT's, and 87.6% prefer ours over BIN's results. The real-world video interpolation results are provided in our supplementary video.
Uncertain time interval interpolation (sharp frames). To further validate the effectiveness of our proposed uncertain time interval interpolation algorithm, we compare different interpolation strategies for calculating the essential flow S_1t. To construct videos with different time interval ratios, we sample the original high-frame-rate GoPro dataset with different sampling intervals, e.g., sequentially sampling one frame with alternating intervals of 6 and 2 frames to obtain the dataset with λ = 7/3. We compare our uncertain time interval algorithm (Model UTI) and its refined version (Model UTI-refine) with the original QVI [34] model, which is derived under λ = 1. We also provide a model GT, for which the optical flow is calculated with the ground-truth λ. Table 4 shows that our UTI and UTI-refine perform favorably against the QVI model except when λ = 5/5, which is owing to the optical flow estimation error in S_23. However, the performance of QVI deteriorates more severely than ours when the value of λ deviates from 1. The results also show that our refinement network significantly improves the performance.
4.4 Ablation study
To assess the effectiveness of our designed modules, we perform the following extensive experiments. For the key-state restoration phase, we compare models using different structures/input frames with the proposed model. As we can see in Table 5, compared to the first-order residual, the model with the second-order residual increases the PSNR by around 1.5 dB. Also, the model that simply cascades another copy of the stage-I architecture, i.e.,
without B1 and B2 as input, performs worse than our proposed structure, suggesting that the original blurry information is essential for the second-order residual learning. Both ablation experiments show that our second-order residual is effective in refining the output of the first stage. For the interpolation phase, we have already analyzed the contribution of the uncertain time interval interpolation in Table 4. Here, we evaluate the contribution of the flow refinement module. We fix the key-state restoration network and compare the interpolation outputs of the model with refinement (Model refine) and without refinement (Model w/o refine). As shown in Table 5, the model with refinement outperforms the model without refinement by around 0.6 dB. This improvement indicates that $\hat{S}_{23}$ becomes more accurate after refinement.

5 Conclusion
In this work, we propose a method to tackle the video frame interpolation problem without knowing temporal priors. Taking the relationship between exposure time and shutter period into consideration, we derive a general quadratic interpolation strategy that requires no temporal prior. We also devise a key-states restoration network to extract temporally unambiguous sharp content from blurry frames. Our proposed method can synthesize a high-frame-rate sharp video from low-frame-rate blurry videos with different exposure settings. However, our work still has limitations; e.g., our uncertain time interval motion trajectory can only be derived when the acceleration remains constant. Although this assumption approximates most situations over a short exposure interval (around 1/20 s), more challenging movements, such as motion with variable acceleration, do occur in real scenarios. We hope to relax this assumption and obtain more accurate trajectory estimation in future work.

Broader Impact
Video frame interpolation (VFI), which aims to overcome the temporal limitations of camera sensors, is a popular and important technology in a wide range of video processing tasks. For example, it can produce slow-motion videos without professional high-speed cameras, and it can perform frame rate up-conversion (or video restoration) for archival footage. However, existing VFI research mainly applies to videos with pre-defined temporal priors, such as sharp video frames or blurry videos with known exposure settings, which may largely limit its performance in complicated real-world situations. To the best of our knowledge, the video frame interpolation framework introduced in this paper makes the first attempt to overcome these limitations. Our proposed technique may potentially benefit a range of real-world applications and users. On the one hand, it could be more practical and convenient for users who want to convert their own videos to slow motion, since they are not required to know the video source, i.e., the detailed parameters of the camera sensor. On the other hand, it could reduce the workload of VFI-related applications, since new models would not need to be retrained for different exposure settings. Since video frame interpolation aims at video restoration and up-conversion (i.e., the output video shares the same content as the input video), our method should not cause negative ethical impact, setting aside the content of the input video itself.

Acknowledgments and Disclosure of Funding
This work was supported in part by the Australian Research Council Projects: FL-170100117, DP-180103424, IH-180100002 and IC-190100031.
1. What is the focus and contribution of the paper regarding video frame interpolation? 2. What are the strengths of the proposed method, particularly in refining motion trajectory and training restoration and optical flow refinement models? 3. What are the weaknesses of the paper, such as concerns about the generalization ability and unclear derivations? 4. Do you have any questions regarding the restoration network, its two-stage process, and the improvement in dB? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors proposed a video frame interpolation method which can be generalized to different exposure time ratio. The main contributions of the paper is deriving a generalized motion trajectory, and proposing an optical flow refinement network trained with the derived constrains. The experiment results show the effectiveness of the method, and the visual quality is promising. Strengths The authors refine the motion trajectory computed from uncertain exposure interval. The derivation looks fine and the models implementing the new interpolation formula obtain a good visual and quantitative performance. Training the restoration model and optical flow refinement model with synthetic data is shown to be easily generalized to different settings even real inputs. Weaknesses Some of the concerns are below: 1. The claims of proposing a 'generalized' interpolation may be too strong. What could be the real cases which cannot be resolved by the proposed methods should also be discussed. I believe exposure setting could be only one of the problem of the poor generalization ability. 2. It's unclear to me how equation(2) in line 103 is derived. And in line 109, how equ(4) can be degraded to equ(1) given using different frames? 3. For the restoration network, it seems the network is just trying to achieve multi-frame deblurring with a two-stage process. What exactly the function of the two-stage network? Could the author show some outputs from different stages? In line 154, the improvement in dB can only come from more parameters in the model, but not the intuitive idea illustrated in the paper. Please double check it or visualize it. 4. Also for the restoration network, in line 147, what is the temporal ambiguity, and why the authors utilize four frames but not just 2 frame? In line 143, does the author mean B1 and B2? 5. More results on real videos should be reported. Currently, the results are reported on synthetic data and most previous methods do not share the same assumptions as this paper. Results on real videos in the supplementary material look fine, but related contents are missing in the paper, especially user study.
NIPS
Title
On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them

Abstract
We analyze the influence of adversarial training on the loss landscape of machine learning models. To this end, we first provide analytical studies of the properties of adversarial loss functions under different adversarial budgets. We then demonstrate that the adversarial loss landscape is less favorable to optimization, due to increased curvature and more scattered gradients. Our conclusions are validated by numerical analyses, which show that training under large adversarial budgets impedes the escape from suboptimal random initialization, causes non-vanishing gradients and makes the model find sharper minima. Based on these observations, we show that a periodic adversarial scheduling (PAS) strategy can effectively overcome these challenges, yielding better results than vanilla adversarial training while being much less sensitive to the choice of learning rate.

1 Introduction
State-of-the-art deep learning models have been found to be vulnerable to adversarial attacks [18, 34, 45]. Imperceptible perturbations of the input can make the model produce wrong predictions with high confidence. This raises concerns about deep learning's deployment in safety-critical applications. Although many training algorithms have been proposed to counter such adversarial attacks, most of them were observed to fail when facing stronger attacks [4, 10]. Adversarial training [33] is one of the few exceptions, so far remaining effective and thus popular. It uses adversarial examples generated with the attacker's scheme to update the model parameters. However, adversarial training and its variants [2, 6, 24, 42, 53] have been found to have a much larger generalization gap [37] and to require larger model capacity for convergence [49]. Although recent works [6, 40] show that the adversarial training error reduces to almost 0% with a large enough model and that the generalization gap can be narrowed by using more training data, convergence in adversarial training remains much slower than in vanilla training on clean data. This indicates discrepancies in the underlying optimization landscapes. While much work has studied the loss landscape of deep networks in vanilla training [12, 13, 14, 15, 31], such an analysis in adversarial training remains unaddressed. Here we study optimization in adversarial training. Vanilla training can be considered as a special case where no perturbation is allowed, i.e., a zero adversarial budget. Therefore, we focus on the impact of the adversarial budget size on the loss landscape.
In this context, we investigate from a theoretical and empirical perspective how different adversarial budget sizes affect the loss landscape to make optimization more challenging. Our analyses start with linear models and then generalize to nonlinear deep learning ones. We study the whole training process and identify different behaviors in the early and final stages of training. Based on our observations, we then introduce a scheduling strategy for the adversarial budget during training. We empirically show this scheme to yield better performance and to be less sensitive to the learning rate than vanilla adversarial training.

Contributions. Our contributions can be summarized as follows. 1) From a theoretical perspective, we show that, for linear models, adversarial training under a large enough budget produces a constant classifier. For general nonlinear models, we identify the existence of an abrupt change in the adversarial examples, which makes the loss landscape less smooth. This causes severe gradient scattering and slows down the convergence of training. 2) Our numerical analysis shows that training under large adversarial budgets hinders the model from escaping suboptimal initial regions, while also causing large non-vanishing gradients in the final stage of training. Furthermore, by Hessian analysis, we evidence that the minima reached in the adversarial loss landscape are sharper when the adversarial budget is bigger. 3) We show that a periodic adversarial scheduling (PAS) strategy, corresponding to a cyclic adversarial budget scheduling scheme with warmup, addresses these challenges. Specifically, it makes training less sensitive to the choice of learning rate and yields better robust accuracy than vanilla adversarial training without any computational overhead.

Notation and Terminology. We use plain letters, bold lowercase letters and bold uppercase letters to represent scalars, vectors and matrices, respectively. ‖v‖ represents the Euclidean norm of a vector v, and [K] is an abbreviation for the set {0, 1, 2, ..., K−1}. In a classification problem $\{(x_i, y_i)\}_{i=1}^N$, where $(x_i, y_i) \in \mathbb{R}^m \times [K]$, the classifier consists of a logit function $f: \mathbb{R}^m \to \mathbb{R}^K$, which is usually a neural network, and a risk function $\ell: \mathbb{R}^K \times [K] \to \mathbb{R}$, which is the softmax cross-entropy loss. The adversarial budget $S^{(p)}_\epsilon(x)$ of a data point x, whose size is ε, is defined based on an $l_p$ norm constraint $\{x' \mid \|x - x'\|_p \le \epsilon\}$, and we use $S_\epsilon(x)$ to denote the $l_\infty$ constraint for simplicity. Given the model parameters θ ∈ Θ, we use $g(x, \theta): \mathbb{R}^m \times \Theta \to \mathbb{R}$ to denote the loss function for an individual data point, ignoring the label y for simplicity. If we use $L_\epsilon(\theta)$ to denote the adversarial loss function under the adversarial budget $S^{(p)}_\epsilon(x)$, adversarial training solves the min-max problem
$$\min_\theta L_\epsilon(\theta) := \frac{1}{N}\sum_{i=1}^N g_\epsilon(x_i, \theta) \quad \text{where} \quad g_\epsilon(x_i, \theta) := \max_{x'_i \in S^{(p)}_\epsilon(x_i)} g(x'_i, \theta). \quad (1)$$
$L(\theta) := L_0(\theta)$ is the vanilla loss function. If ε ≠ 0, the adversarial example $x'_i$, i.e., the worst-case input in $S^{(p)}_\epsilon(x_i)$, depends on the model parameters. We call the landscapes of the functions $L(\theta)$ and $L_\epsilon(\theta)$ the vanilla and adversarial loss landscape, respectively. Similarly, we use $E(\theta)$ and $E_\epsilon(\theta)$ to represent the clean error and the robust error under the adversarial budget $S^{(p)}_\epsilon(x)$. In this paper, we call a function smooth if it is C1-continuous. We use θ0 to denote the initial parameters. "Initial plateau" or "suboptimal region in the early stage of training" refers to parameters that are close to the initial ones and have similar performance. "Vanilla training" means training on clean input data, while "vanilla adversarial training" refers to the popular adversarial training method of [33].
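To make the min-max objective of Eq. (1) concrete, the following is a minimal PyTorch sketch of one PGD-based adversarial training step under an l∞ budget. It illustrates the standard procedure of [33] rather than the authors' exact code; the step size, the number of PGD iterations and the omission of input-range clipping are our own simplifications.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, step_size, n_steps):
    """Approximate the inner maximization of Eq. (1) under an l_inf budget S_eps(x)."""
    delta = torch.empty_like(x).uniform_(-eps, eps)   # random start inside the budget
    delta.requires_grad_(True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()          # ascent step on the loss
            delta.clamp_(-eps, eps)                   # project back onto the budget
    return (x + delta).detach()                       # clipping to the valid input range omitted

def adversarial_training_step(model, optimizer, x, y, eps):
    """One outer step of the min-max problem in Eq. (1): minimize the loss on PGD examples."""
    x_adv = pgd_attack(model, x, y, eps, step_size=2.5 * eps / 10, n_steps=10)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```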
2 Related Work
Adversarial Robustness. In this work, we focus on white-box attacks, in which the attackers have access to the model parameters. Compared with black-box attacks, white-box attacks better solve the inner maximization problem in (1). In this context, [18] proposes the fast gradient sign method (FGSM), which perturbs the input in the direction of its gradient: $x' = x + \epsilon\, \mathrm{sign}(\nabla_x L(\theta))$. Projected gradient descent (PGD) [33] extends FGSM by running it iteratively with a smaller step size and projecting the perturbation back onto the adversarial budget. Furthermore, PGD introduces randomness by starting at a random initial point inside the adversarial budget. As a result, PGD generates much stronger adversarial examples than FGSM and is believed to be the strongest attack utilizing the network's first-order information [33]. When it comes to robustness against attacks, some methods have been proposed to train provably robust models by linear approximation [5, 29, 48], semi-definite programming [36], interval bound propagation [20] or randomized smoothing [8, 39]. However, these methods either only apply to a specific type of network, have a significant computational overhead, or are unstable. Furthermore, compared with adversarial training, they have been found to over-regularize the model and significantly decrease the clean accuracy [54]. As a result, we focus on PGD-based adversarial training, which first generates adversarial examples x′ by PGD and then uses x′ to optimize the model parameters θ. In all our experiments, the adversarial loss landscape is approximated by the loss of the adversarial examples found by PGD.
Loss Landscape of Deep Neural Networks. Many existing works study the vanilla loss landscape of the objective function in deep learning. This is challenging, because the objective L(θ) of a deep neural network is a high-dimensional nonconvex function, of which we know very few properties. [26] proves the nonexistence of poor local minima for general deep nonlinear networks. [30] shows that stochastic gradient descent (SGD) can almost surely escape saddle points and converge to a local minimum. For over-parameterized ReLU networks, SGD is highly likely to find a monotonically decreasing trajectory from the initialization point to the global optimum [38]. Furthermore, some works have studied the geometric properties of local minima in the loss landscape of neural networks. In this context, [27, 35] empirically show that sharp minima usually have larger generalization gaps than flat ones. Specifically, to improve generalization, [51] uses adversarial training to avoid converging to sharp minima in large-batch training. However, the correspondence between sharp minima and poor generalization is based on empirical findings and is sometimes controversial. For example, [11] shows counterexamples in ReLU networks obtained by rescaling the parameters and claims that sharp minima can generalize as well as flat ones. Moreover, different minima of the loss function have been found to be well connected. That is, there exist hyper-curves connecting different minima that are flat in the loss landscape [12, 14].
[55] further shows that the learned path connection can help to effectively repair models that are vulnerable to backdoor or error-injection attacks. Recently, some methods have been proposed to visualize the loss landscape [31, 44], leading to the observation that networks of different architectures have surprisingly different landscapes. Compared with chaotic landscapes, smooth and locally near-convex landscapes make gradient-based optimization much easier. All of the above-mentioned works, however, focus on networks optimized with vanilla training. Here, by contrast, we study the case of adversarial training.

3 Theoretical Analysis
In this section, we conduct an analytical study of the difference between $L_\epsilon(\theta)$ and $L(\theta)$. We start with linear classification models and then discuss general nonlinear ones.

3.1 Linear Classification Models
For the simple but special case of logistic regression, i.e., K = 2, we can write the analytical form of $L_\epsilon(\theta)$. We defer the detailed discussion of this case to Appendix A.1, and here focus on linear multiclass classification, i.e., K ≥ 3. We parameterize the model by $\mathbf{W} := \{\mathbf{w}_i\}_{i=1}^K \in \mathbb{R}^{m \times K}$ and use $f(\mathbf{W}) = [\mathbf{w}_1^T x, \mathbf{w}_2^T x, ..., \mathbf{w}_K^T x]$ as the logit function. Therefore, the vanilla loss function is convex, as $g(x, \mathbf{W}) = \log\big(1 + \sum_{j \neq y} \exp\big((\mathbf{w}_j - \mathbf{w}_y)^T x\big)\big)$. Although $g_\epsilon(x, \mathbf{W})$ is also convex, it is no longer smooth everywhere, and it is difficult to write a unified expression for $g_\epsilon(x, \mathbf{W})$. We therefore start with the version space $V_\epsilon$ of $g_\epsilon(x, \mathbf{W})$, defined as
$$V_\epsilon = \big\{\mathbf{W} \,\big|\, (\mathbf{w}_i - \mathbf{w}_y)^T x' \le 0,\ \forall i \in [K],\ x' \in S_\epsilon(x)\big\}.$$
By definition, $V_\epsilon$ is the smallest convex closed set containing all solutions that are robust under the adversarial budget $S_\epsilon(x)$. The proposition below states that the version space $V_\epsilon$ shrinks for larger values of ε.
Proposition 1. Given the definition of the version space $V_\epsilon$, we have $V_{\epsilon_2} \subseteq V_{\epsilon_1}$ when $\epsilon_1 \le \epsilon_2$.
The proof of Proposition 1 is straightforward; we provide it in Appendix B.1. In addition to $V_\epsilon$, we define the set $T_\epsilon$ as
$$T_\epsilon = \big\{\mathbf{W} \,\big|\, 0 \in \arg\min_\gamma g_\epsilon(x, \gamma \mathbf{W})\big\}.$$
$T_\epsilon$ is the set of all directions in which the optimal point is the origin; that is, the corresponding models in these directions are all no better than a constant classifier. Although we cannot write the set $T_\epsilon$ in roster notation, we show in the theorem below that $T_\epsilon$ becomes larger as ε increases.
Theorem 1. Given the definition of $T_\epsilon$, we have $T_{\epsilon_2} \subseteq T_{\epsilon_1}$ when $\epsilon_1 \ge \epsilon_2$. In addition, there exists $\bar\epsilon$ such that $\forall \epsilon \ge \bar\epsilon$, $T_\epsilon = \mathbb{R}^{m \times K}$. In this case, $0 \in \arg\min_{\mathbf{W}} g_\epsilon(x, \mathbf{W})$.
We defer the proof of Theorem 1 to Appendix B.2, where we also provide a lower bound on $\bar\epsilon$. Theorem 1 indicates that when the adversarial budget is large enough, the optimal point is the origin. In this case, we obtain a constant classifier, and training completely fails. $L_\epsilon(\mathbf{W})$ is the average of $g_\epsilon(x, \mathbf{W})$ over the dataset, so Theorem 1 and Proposition 1 still apply if we replace g with L in the definitions of $V_\epsilon$ and $T_\epsilon$. For nonlinear models like deep neural networks, these conclusions no longer hold, because $g_\epsilon(x, \theta)$ is no longer convex. Nevertheless, our experiments in Section 4.1 evidence the same phenomena as indicated by the theoretical analysis above: larger ε makes it harder for the optimizer to escape the initial suboptimal region, and in some cases training fails and we obtain a constant classifier in the end.

3.2 General Nonlinear Classification Models
For deep nonlinear neural networks, we cannot write the analytical form of $g(x, \theta)$ or $g_\epsilon(x, \theta)$. To analyze such models, we follow [43] and assume smoothness of the function g.
Assumption 1.
The function g satisfies the following Lipschitzian smoothness conditions:
$$\|g(x, \theta_1) - g(x, \theta_2)\| \le L_\theta \|\theta_1 - \theta_2\|, \quad \|\nabla_\theta g(x, \theta_1) - \nabla_\theta g(x, \theta_2)\| \le L_{\theta\theta} \|\theta_1 - \theta_2\|, \quad \|\nabla_\theta g(x_1, \theta) - \nabla_\theta g(x_2, \theta)\| \le L_{\theta x} \|x_1 - x_2\|_p. \quad (2)$$
Based on this, we study the smoothness of $L_\epsilon(\theta)$.
Proposition 2. If Assumption 1 holds, then we have¹
$$\|L_\epsilon(\theta_1) - L_\epsilon(\theta_2)\| \le L_\theta \|\theta_1 - \theta_2\|, \qquad \|\nabla_\theta L_\epsilon(\theta_1) - \nabla_\theta L_\epsilon(\theta_2)\| \le L_{\theta\theta} \|\theta_1 - \theta_2\| + 2\epsilon L_{\theta x}. \quad (3)$$
The proof is provided in Appendix B.3, where we also show that the upper bound in Proposition 2 is tight and can be achieved in the worst case. Proposition 2 shows that the first-order smoothness of the objective function is preserved under adversarial attacks, but the second-order smoothness is not. That is to say, gradients in arbitrarily small neighborhoods in the θ-space can change discontinuously. This unsatisfying second-order property arises from the maximization operator in the definitions of $g_\epsilon$ and $L_\epsilon$. For the function $g_\epsilon(x, \theta)$, the non-smooth points are those where the optimal adversarial example x′ changes abruptly in a sufficiently small neighborhood. Formally, let $\theta_1$ and $x'_1$ denote the model parameters and the corresponding optimal adversarial example, and assume that the parameter gradients differ for different inputs. If there exists a positive number a > 0 such that, for all δ > 0, we can find $\theta_2 \in \{\theta \mid \|\theta - \theta_1\| \le \delta\}$ whose corresponding optimal adversarial example $x'_2$ satisfies $\|x'_1 - x'_2\|_p > a$, then $\lim_{\theta \to \theta_1} \nabla_\theta g_\epsilon(x, \theta) \ne \nabla_\theta g_\epsilon(x, \theta_1)$. $L_\epsilon(\theta)$ is the aggregation of $g_\epsilon(x, \theta)$ over the dataset, so it also has such non-smooth points. In addition, as the $2\epsilon L_{\theta x}$ term in the second inequality of (3) indicates, the adversarial examples can change more under a larger adversarial budget. As a result, the (sub)gradients $\nabla_\theta L_\epsilon(\theta)$ can change more abruptly within a neighborhood of the parameter space; that is, the (sub)gradients are more scattered in the adversarial loss landscape. Figure 1 provides a 2D sketch of the non-smoothness introduced by adversarial training. The red curve represents the vanilla loss function $g(x, \theta)$. Under adversarial perturbation, the loss landscape fluctuates within the light blue band, and the blue curve represents the worst case we can encounter in the adversarial setting, i.e., $g_\epsilon(x, \theta)$. We can see that the blue curve is no longer smooth at the point where θ = 0. Importantly, as the light blue band becomes wider under a larger adversarial budget, the corresponding non-smooth point becomes sharper, which means that the difference between the gradients on both sides of the non-smooth point becomes larger. Based on Proposition 2, we show in the following theorem that the non-smoothness introduced by adversarial training makes optimization by stochastic gradient descent (SGD) more difficult.
Theorem 2. Let Assumption 1 hold, let the stochastic gradient $\nabla_\theta \hat{L}_\epsilon(\theta_t)$ be unbiased and have bounded variance, and let the SGD update $\theta_{t+1} = \theta_t - \alpha_t \nabla_\theta \hat{L}_\epsilon(\theta_t)$ use a constant step size $\alpha_t = \alpha = \frac{1}{L_{\theta\theta}\sqrt{T}}$ for T iterations. Given the trajectory of the parameters during optimization $\{\theta_t\}_{t=1}^T$, we can bound the asymptotic probability of large gradients, for a sufficiently large value of T, as
$$\forall \gamma \ge 2, \quad P\big(\|\nabla_\theta L_\epsilon(\theta_t)\| > \gamma \epsilon L_{\theta x}\big) < \frac{4}{\gamma^2 - 2\gamma + 4}. \quad (4)$$
¹Strictly speaking, $L_\epsilon(\theta)$ is not differentiable at some points, so $\nabla_\theta L_\epsilon(\theta)$ might be ill-defined. In this paper, we use $\nabla_\theta L_\epsilon(\theta)$ for simplicity. Nevertheless, the inequality holds for any subgradient $v \in \partial_\theta L_\epsilon(\theta)$.
We provide the proof in Appendix B.4. In vanilla training, ε is 0 and $L(\theta)$ is smooth, so (4) implies that $\lim_{t \to +\infty} \|\nabla_\theta L(\theta_t)\| = 0$ almost surely.
This is consistent with the fact that SGD converges to a critical point for non-convex smooth functions. By contrast, in adversarial training, i.e., ε > 0, we cannot guarantee convergence to a critical point. Instead, the gradients are non-vanishing, and we can only bound the probability of obtaining gradients whose magnitude is larger than $2\epsilon L_{\theta x}$. For a fixed value of $C := \gamma \epsilon L_{\theta x}$ larger than $2\epsilon L_{\theta x}$, inequality (4) indicates that the probability $P(\|\nabla_\theta L_\epsilon(\theta_t)\| > C)$ increases quadratically with ε. In deep learning practice, activation functions like sigmoid, tanh and ELU [7] satisfy the second-order smoothness condition in Assumption 1, but the most popular ReLU function does not. Nevertheless, adversarial training still causes gradient scattering and makes the optimization more difficult. That is, the bound on $\|\nabla_\theta L_\epsilon(\theta_1) - \nabla_\theta L_\epsilon(\theta_2)\|$ still increases with ε, and the parameter gradients change abruptly in the adversarial loss landscape. We provide a more detailed discussion of this phenomenon in Appendix A.2, which shows that our analysis and conclusions easily extend to the ReLU case. The second-order Lipschitz constant indicates the magnitude of the gradient change for a unit change in the parameters and is therefore a good quantitative metric of gradient scattering. In practice, we are more interested in the effective local Lipschitz constant, which only considers the neighborhood of the current parameters, than in the global Lipschitz constant. In this case, the effective local second-order Lipschitz constant can be estimated by the top eigenvalues of the Hessian matrix $\nabla^2_\theta L_\epsilon(\theta)$.

4 Numerical Analysis
In this section, we conduct experiments on MNIST and CIFAR10 to empirically validate the theorems of Section 3. Detailed experimental settings are provided in Appendix C.1. Unless specified otherwise, we use LeNet models on MNIST and ResNet18 models on CIFAR10 in this and the following sections. Our code is available at https://github.com/liuchen11/AdversaryLossLandscape.

4.1 Gradient Magnitude
In Section 3.1, we have shown that, for linear models under large ε, the training algorithm gets stuck at the origin and yields a constant classifier. For deep nonlinear models, the initial values of the parameters are close to the origin under most popular initialization schemes [17, 22]. Although Theorem 1 is not applicable here, we are still interested in investigating how effective gradient-based optimization is at escaping from the suboptimal initial parameters. To this end, we track the norm of the stochastic gradient $\|\nabla_\theta \hat{L}_\epsilon(\theta)\|$, the robust error $E_\epsilon(\theta)$ on the training set and the distance from the initial point $\|\theta - \theta_0\|$ during the first 2000 mini-batch updates for CIFAR10 models. Figures 2a, 2b and 2c evidence a clear difference between models trained with different values of ε. When ε is small, the gradient magnitude is larger and the model parameters move faster. Correspondingly, the training error decreases faster, which means that the model quickly escapes the initial suboptimal region. By contrast, when ε is large, the gradients are small and the model gets stuck in the initial region. This implies that the loss landscape under a large adversarial budget impedes the escape from initial suboptimal plateaus in the early stage of training. For ReLU networks, adversarially-trained models have been found to have sparser weights and intermediate activations [9], i.e., they have more dead neurons. Dead neurons are implicitly favored by adversarial training, because their output is independent of the input perturbation.
Note that training fails when all the neurons in one layer are dead for all training instances. The model is then effectively broken into two parts by this dead layer: the preceding layers are no longer trained, because all gradients are blocked, and the following layers do not depend on the input and thus give constant outputs. In essence, training is then stuck in a parameter subspace that only contains constant classifiers. In practice, this usually happens when the model has small width and the value of ε is large. This is consistent with previous findings that adversarial training needs higher model capacity [33] and that overly strong adversarial examples are harmful in the early stage of training [46]. Theorem 2 indicates that the gradients are non-vanishing in adversarial training and are more likely to have large magnitude under large values of ε. This is validated by Figure 2d, in which we report the norm of the stochastic gradient $\|\nabla_\theta \hat{L}_\epsilon(\theta)\|$ during the last 2000 mini-batch updates for CIFAR10 models. In vanilla training, the gradient is almost zero at the end, indicating that the optimizer finds a critical point; in this case, $\|\nabla_\theta \hat{L}_\epsilon(\theta)\|$ is dominated by the variance introduced by stochasticity. However, $\|\nabla_\theta \hat{L}_\epsilon(\theta)\|$ increases with ε: when ε is larger, $\|\nabla_\theta \hat{L}_\epsilon(\theta)\|$ is also larger and non-vanishing, indicating that the model is still bouncing around the parameter space at the end of training. The decreased gradient magnitude in the initial suboptimal region and the increased gradient magnitude in the final near-minimum region indicate that the adversarial loss landscape is not favorable to optimization when we train under large adversarial budgets. Additional results on MNIST models are provided in Figure 8 of Appendix C.2.1, where the same observations can be made.

4.2 Hessian Analysis
To study the effective local Lipschitz constant of $L_\epsilon(\theta)$, we analyze the Hessian spectrum of models trained under different values of ε. It is known that the curvature in the neighborhood of the model parameters is dominated by the top eigenvalues of the Hessian matrix $\nabla^2 L_\epsilon(\theta)$. To this end, we use the power iteration method, as in [51], to iteratively estimate the top 20 eigenvalues and the corresponding eigenvectors of the Hessian matrix. Furthermore, to discard the effect of the scale of the function $L_\epsilon(\theta)$ for different ε, we estimate the scale of $L_\epsilon(\theta)$ by randomly sampling θ and normalize the top Hessian eigenvalues by the average value of $L_\epsilon(\theta)$ over these random samples. In addition, we show the learning curve of $L_\epsilon(\theta)$ on the training set during training in Figure 11 of Appendix C.2.2; it clearly shows a similar magnitude of $L_\epsilon(\theta)$ for different values of ε. In Figure 3, we show the top 20 Hessian eigenvalues, both before and after normalization, of CIFAR10 models trained under different adversarial budgets. We also provide 3D visualizations of the neighborhood in the directions of the top 2 eigenvectors in Figure 12 of Appendix C.2.2. It is clear that the local effective second-order Lipschitz constant of the obtained model consistently increases with the value of ε. That is, the minima found in $L_\epsilon(\theta)$ are sharper under larger ε.
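As a concrete reference for the power iteration procedure used here, the snippet below is a minimal single-eigenvalue sketch based on Hessian-vector products. It is our own illustration: the analysis in the paper estimates the top 20 eigenvalues (which would additionally require deflation or a Lanczos-style routine) and evaluates the Hessian of the adversarial loss, i.e., on PGD examples.

```python
import torch

def top_hessian_eigenvalue(loss_fn, params, n_iters=20):
    """Estimate the largest eigenvalue of the Hessian of loss_fn() w.r.t. params
    by power iteration with Hessian-vector products (the Hessian is never formed).

    loss_fn: callable returning a scalar loss built from `params`
    params:  list of parameter tensors with requires_grad=True
    """
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(n_iters):
        norm = torch.sqrt(sum((x * x).sum() for x in v))
        v = [x / norm for x in v]
        loss = loss_fn()                                        # rebuild the graph each iteration
        grads = torch.autograd.grad(loss, params, create_graph=True)
        g_dot_v = sum((g * x).sum() for g, x in zip(grads, v))
        hv = torch.autograd.grad(g_dot_v, params)               # Hessian-vector product H v
        eig = sum((h * x).sum() for h, x in zip(hv, v)).item()  # Rayleigh quotient v^T H v
        v = [h.detach() for h in hv]
    return eig
```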
To validate the claim of Section 3.2 that non-smoothness arises from abrupt changes of the adversarial examples, we study the similarity of the adversarial perturbations generated for different model parameter values in a small neighborhood. Specifically, we perturb the model parameters θ in opposite directions to θ + av and θ − av, where v is a unit vector and a is a scalar. Let $x'_{av}$ and $x'_{-av}$ represent the adversarial examples generated with the corresponding model parameters. We then calculate the average cosine similarity between the perturbations $x'_{av} - x$ and $x'_{-av} - x$ over the training set. The results for CIFAR10 models are provided in Figure 4. To account for the random start in PGD, we run each experiment 4 times and report the average value; the variances of all experiments are smaller than 0.005 and thus not shown in the figure. Note that, when v is a random unit vector, the robust error $E_\epsilon(\theta)$ of the parameters θ ± av on both the training and test sets remains unchanged for different values of a, indicating a flat landscape in the direction v. The adversarial examples in this case are mostly similar and have very high cosine similarity. By contrast, if v is the top eigenvector of the Hessian matrix, i.e., the most curved direction, we see a sharp increase in the robust error $E_\epsilon(\theta)$ as we increase a. Correspondingly, the cosine similarity between the adversarial perturbations is much lower, which indicates dramatic changes of the adversarial examples. We perform the same experiments on MNIST models in Appendix C.2.2, with the same observations.

5 Periodic Adversarial Scheduling
In Sections 3 and 4, we have theoretically and empirically shown that the adversarial loss landscape becomes less favorable to optimization under large adversarial budgets. In this section, we introduce a simple adversarial budget scheduling scheme to overcome these problems. Inspired by the learning rate warmup heuristic used in deep learning [19, 25], we introduce warmup for the adversarial budget. Let d be the current epoch index and D the length of the warmup period. We define a cosine scheduler $\epsilon_{cos}$ and a linear scheduler $\epsilon_{lin}$, parameterized by $\epsilon_{max}$ and $\epsilon_{min}$, as
$$\epsilon_{cos}(d) = \frac{1}{2}\Big(1 - \cos\frac{d}{D}\pi\Big)(\epsilon_{max} - \epsilon_{min}) + \epsilon_{min}, \qquad \epsilon_{lin}(d) = (\epsilon_{max} - \epsilon_{min})\frac{d}{D} + \epsilon_{min}. \quad (5)$$
We clip $\epsilon_{cos}(d)$ and $\epsilon_{lin}(d)$ between 0 and $\epsilon_{target}$, the target value of ε. If $\epsilon_{min} \le 0$ and $\epsilon_{max} > \epsilon_{target}$, the value of ε starts from 0, gradually increases to $\epsilon_{target}$ and then remains constant. This warmup strategy allows us to overcome the fact, highlighted in the previous sections, that adversarial training is more sensitive to the learning rate under a large budget, because the gradients are more scattered. This is evidenced by Figure 5, which compares the robust test error of MNIST models trained with different adversarial budget scheduling schemes. For all models, we use ε = 0.4 and report results after 100 epochs with different but constant learning rates in Adam [28]. Our linear and cosine schedulers perform better than using a constant value of ε during training and yield good performance over a broader range of learning rates: in the small learning rate regime, they speed up training; in the large learning rate regime, they stabilize training and avoid divergence. Note that, as shown in Appendix C.2.3, warmup of the learning rate does not yield similar benefits. As shown in [25], periodic learning rates enable model ensembling to improve the performance. Here, we follow the same strategy, but also for the adversarial budget. To this end, we divide the training phase into several periods and store one model at the end of each period; we make final predictions based on the ensemble of these models. This periodic scheme has no computational overhead. We call it periodic adversarial scheduling (PAS).
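A minimal Python sketch of the warmup scheduler in Eq. (5) is given below. The function signature, the handling of epochs beyond the warmup period and the example values of $\epsilon_{min}$/$\epsilon_{max}$ are our own choices; in PAS, the epoch index d would be taken relative to the current period.

```python
import math

def eps_schedule(d, D, eps_min, eps_max, eps_target, mode="cos"):
    """Adversarial budget warmup of Eq. (5), clipped to [0, eps_target].

    d: current epoch index (in PAS, relative to the current period),
    D: warmup length in epochs.  With eps_min <= 0 and eps_max > eps_target,
    the budget starts at 0, grows to eps_target, and then stays constant.
    """
    d = min(d, D)   # after the warmup period the budget keeps its final value
    if mode == "cos":
        eps = 0.5 * (1.0 - math.cos(math.pi * d / D)) * (eps_max - eps_min) + eps_min
    else:           # "lin"
        eps = (eps_max - eps_min) * d / D + eps_min
    return min(max(eps, 0.0), eps_target)

# example: budgets for a 100-epoch run with a 30-epoch warmup towards eps_target = 0.4
schedule = [eps_schedule(d, D=30, eps_min=-0.1, eps_max=0.5, eps_target=0.4) for d in range(100)]
```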
As before, we run experiments on MNIST and CIFAR10. For MNIST, we train each model for 100 epochs and do not use periodic scheduling for the learning rate, which we found not to improve the results even with a constant adversarial budget. For CIFAR10, we train each model for 200 epochs. When there are no learning rate resets, our results refer to the final model after 200 epochs. When using a periodic learning rate, we divide the 200 epochs into 3 periods, i.e., we reset the learning rate and the adversarial budget after 100 and 150 epochs, and compute the results using an ensemble of these 3 models. The values of the learning rate and of the adversarial budget size are calculated based on the ratio of the current epoch index to the current period length. We provide more details about the hyper-parameter settings in Appendix C.1. We compare different adversarial budget schedulers under different tasks and settings, and evaluate the robustness of the trained models against different kinds of attacks. First, we evaluate the models under the PGD attack used during training (PGD), i.e., 50-iteration PGD for MNIST models and 10-iteration PGD for CIFAR10 models. Then, we increase the number of iterations in PGD and compute the robust error under 100-iteration PGD. To address the issue of suboptimal step sizes, we also evaluate our models using the state-of-the-art AutoPGD attack [10], which searches for the optimal step sizes; we run AutoPGD for 100 iterations, based either on the cross-entropy loss (APGD100 CE) or on the difference-of-logits-ratio loss (APGD100 DLR). To rule out gradient masking, we also run the state-of-the-art black-box SquareAttack [3] for 5000 iterations (Square5K). The hyperparameter details are deferred to Appendix C.1. The results are summarized in Table 1, where we compare the clean and robust accuracy under different adversarial attacks on the test set. It is clear that our proposed cosine and linear schedulers yield better performance, in both clean and robust accuracy, than using a constant adversarial budget in all cases. For MNIST, warmup not only makes training robust to different choices of learning rate, but also improves the final robust accuracy. For CIFAR10, the model ensembling enabled by the periodic scheduler improves the robust accuracy.

6 Discussion
Model capacity. In addition to the size of the adversarial budget, the capacity of the model also greatly affects the adversarial loss landscape and thus the performance of adversarial training. Adversarial training needs higher model capacity in two respects: if we decrease the model capacity, adversarial training fails to converge while vanilla training still works [33]; if we increase the model capacity, the robust accuracy of adversarial training continues to rise while the clean accuracy of normal training saturates [50]. Furthermore, we show in Appendix C.2.4 that smaller models are more likely to have dead layers because of their lower dimensionality. As a result, warmup of the adversarial budget is also necessary for small models. In many cases, the parameter space of small models contains good minima in terms of robustness, but adversarial training with a constant value of ε fails to find them. For example, one can obtain small but robust models by pruning large ones [21, 52].
Architecture. The network architecture encodes the parameterization of the model, so it greatly affects the adversarial loss landscape.
For example, in Table 1, ResNet18 has fewer trainable parameters but better performance than VGG on CIFAR10, indicating that ResNet18 has a better parameterization in terms of robustness. Since the optimal architecture for adversarial robustness is not necessarily the same as the one for clean accuracy, we believe that finding architectures inherently favorable to adversarial training is an interesting but challenging topic for future research. Connectivity of minima. Local minima in the vanilla loss landscape are well-connected [12, 14]: there exist flat hyper-curves connecting them. In Appendix C.2.5, we study the connectivity of converged model parameters in the adversarial setting. We find that the parameters of two adversarially trained models are less connected in the adversarial loss landscape than in the vanilla setting. That is, the path connecting them needs to go over suboptimal regions. Adversarial example generation. We approximate the adversarial loss using adversarial examples generated by PGD, which is a good estimate of the inner maximization in (1). PGD-based adversarial training updates the model parameters using near-optimal adversarial examples. However, recent works [41, 47] have shown that robust models can also be trained with suboptimal adversarial examples, which are faster to obtain. The formulation of these methods differs from (1), because the inner maximization problem is not approximately solved. Understanding why models (partially) trained on suboptimal adversarial examples are resistant to stronger adversarial examples requires further investigation. 7 Conclusion We have studied the properties of the loss landscape under adversarial training. We have shown that the adversarial loss landscape is non-smooth and not favorable to optimization, due to the dependency of adversarial examples on the model parameters. Furthermore, we have empirically evidenced that large adversarial budgets slow down training in the early stages and impede convergence in the end. Finally, we have demonstrated the advantages of warmup and periodic scheduling of the adversarial budget size during training. They make training more robust to different choices of learning rate and yield better performance than vanilla adversarial training. 8 Broader Impact The existence of adversarial examples has raised serious concerns about the deployment of deep learning models in safety-sensitive domains, such as medical imaging [32] and autonomous navigation [1]. In these domains, as in many others, adversarial training remains the most popular, effective, and general method to train robust models. By studying the nature of optimization in adversarial training and proposing solutions to overcome the underlying challenges, our work has potential for high societal impact in these fields. Although the robust accuracy is much lower than the clean accuracy so far, the intrinsic properties of adversarial training we have discovered open up future research directions to improve its performance. From an ecological perspective, however, we acknowledge that the higher computational cost of adversarial training translates to a higher carbon footprint than vanilla training. Nevertheless, we believe that the potential societal benefits of robustness to attacks outweigh this drawback. 9 Acknowledgements We thankfully acknowledge the support of the Hasler Foundation (Grant No. 16076) for this work.
1. What is the main contribution of the paper regarding adversarial training? 2. What are the strengths of the proposed approach, particularly in its theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its experimental evaluation? 4. How can the authors improve their experimental evaluation to provide more convincing results? 5. Can the authors extend their analysis to other activation functions beyond ReLU?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors conducted a novel analysis of adversarial training. In particular, they showed that adversarial training with a large perturbation budget results in gradient scattering and slow convergence. The results are intuitive yet novel. To address this issue, the authors proposed using a periodic learning rate schedule, a known technique that helps optimization in vanilla settings. The experiments showed that the periodic schedule leads to a non-trivial improvement in robustness over adversarial training. Strengths - A novel analysis of the adversarial training technique; a theoretical explanation of why adversarial training with larger perturbations is unstable and converges slowly. - A non-trivial improvement of adversarial training using a periodic adversarial budget schedule. Weaknesses - As the authors have noted, the second-order gradient of the ReLU function is zero. The theory presented in this paper does not explain why gradient scattering happens in neural networks with ReLU activation functions. - The experimental evaluation should be improved. In the experiments, the authors used a PGD attack with 0.01 step size and eps/.01 + 10 iterations (MNIST) and 10 iterations and eps / 4 for CIFAR10. These are definitely non-standard parameters for the attack. The authors might consider fixing the number of steps and performing a grid search for the optimal step size to find the best attack for each epsilon. In addition, to ensure that gradient masking is not happening, the authors should add evaluation with 100 random restarts and evaluation using black-box attacks.
NIPS
Title On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them Abstract We analyze the influence of adversarial training on the loss landscape of machine learning models. To this end, we first provide analytical studies of the properties of adversarial loss functions under different adversarial budgets. We then demonstrate that the adversarial loss landscape is less favorable to optimization, due to increased curvature and more scattered gradients. Our conclusions are validated by numerical analyses, which show that training under large adversarial budgets impedes the escape from suboptimal random initialization, causes non-vanishing gradients and makes the model find sharper minima. Based on these observations, we show that a periodic adversarial scheduling (PAS) strategy can effectively overcome these challenges, yielding better results than vanilla adversarial training while being much less sensitive to the choice of learning rate. 1 Introduction State-of-the-art deep learning models have been found to be vulnerable to adversarial attacks [18, 34, 45]. Imperceptible perturbations of the input can make the model produce wrong predictions with high confidence. This raises concerns about deep learning’s deployment in safety-critical applications. Although many training algorithms have been proposed to counter such adversarial attacks, most of them were observed to fail when facing stronger attacks [4, 10]. Adversarial training [33] is one of the few exceptions, so far remaining effective and thus popular. It uses adversarial examples generated with the attacker’s scheme to update the model parameters. However, adversarial training and its variants [2, 6, 24, 42, 53] have been found to have a much larger generalization gap [37] and to require larger model capacity for convergence [49]. Although recent works [6, 40] show that the adversarial training error reduces to almost 0% with a large enough model and that the generalization gap can be narrowed by using more training data, convergence in adversarial training remains much slower than in vanilla training on clean data. This indicates discrepancies in the underlying optimization landscapes. While much work has studied the loss landscape of deep networks in vanilla training [12, 13, 14, 15, 31], such an analysis in adversarial training remains unaddressed. Here we study optimization in adversarial training. Vanilla training can be considered as a special case where no perturbation is allowed, i.e., a zero adversarial budget. Therefore, we focus on the impact of the adversarial budget size on the loss landscape.
In this context, we investigate from a theoretical and empirical perspective how different adversarial budget sizes affect the loss landscape and make optimization more challenging. Our analyses start with linear models and then generalize to nonlinear deep learning ones. We study the whole training process and identify different behaviors in the early and final stages of training. Based on our observations, we then introduce a scheduling strategy for the adversarial budget during training. We empirically show this scheme to yield better performance and to be less sensitive to the learning rate than vanilla adversarial training. Contributions. Our contributions can be summarized as follows. 1) From a theoretical perspective, we show that, for linear models, adversarial training under a large enough budget produces a constant classifier. For general nonlinear models, we identify the existence of an abrupt change in the adversarial examples, which makes the loss landscape less smooth. This causes severe gradient scattering and slows down the convergence of training. 2) Our numerical analysis shows that training under large adversarial budgets hinders the model from escaping suboptimal initial regions, while also causing large non-vanishing gradients in the final stage of training. Furthermore, by Hessian analysis, we evidence that the minima reached in the adversarial loss landscape are sharper when the adversarial budget is bigger. 3) We show that a periodic adversarial scheduling (PAS) strategy, corresponding to a cyclic adversarial budget scheduling scheme with warmup, addresses these challenges. Specifically, it makes training less sensitive to the choice of learning rate and yields better robust accuracy than vanilla adversarial training without any computational overhead. Notation and Terminology. We use plain letters, bold lowercase letters and bold uppercase letters to represent scalars, vectors and matrices, respectively. ‖v‖ represents the Euclidean norm of vector v and [K] is an abbreviation of the set {0, 1, 2, ..., K − 1}. In a classification problem {(xi, yi)}Ni=1, where (xi, yi) ∈ Rm × [K], the classifier consists of a logit function f : Rm → RK, which is usually a neural network, and a risk function ℓ : RK × [K] → R, which is the softmax cross-entropy loss. The adversarial budget S(p)ε(x) of a data point x, whose size is ε, is defined based on an lp norm-based constraint {x′ | ‖x − x′‖p ≤ ε}, and we use Sε(x) to denote the l∞ constraint for simplicity. Given the model parameters θ ∈ Θ, we use g(x, θ) : Rm × Θ → R to denote the loss function for an individual data point, ignoring the label y for simplicity. If we use Lε(θ) to denote the adversarial loss function under the adversarial budget S(p)ε(x), adversarial training solves the min-max problem $\min_\theta L_\epsilon(\theta) := \frac{1}{N}\sum_{i=1}^{N} g_\epsilon(x_i, \theta)$, where $g_\epsilon(x_i, \theta) := \max_{x'_i \in S^{(p)}_\epsilon(x_i)} g(x'_i, \theta)$. (1) L(θ) := L0(θ) is the vanilla loss function. If ε ≠ 0, the adversarial example x′i, i.e., the worst-case input in S(p)ε(xi), depends on the model parameters. We call the landscapes of the functions L(θ) and Lε(θ) the vanilla and adversarial loss landscape, respectively. Similarly, we use E(θ) and Eε(θ) to represent the clean error and the robust error under the adversarial budget S(p)ε(x). In this paper, we call a function smooth if it is C1-continuous. We use θ0 to denote the initial parameters.
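As a rough illustration of how the inner maximization in (1) is approximated in practice, the following is a minimal PyTorch sketch of an l∞ PGD attack with a random start; the function name and hyper-parameters (step size, iteration count, ε) are illustrative assumptions rather than the exact settings used in the experiments.

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps, step_size, n_steps):
    """Approximate the inner maximization of Eq. (1) under an l_inf budget S_eps(x):
    random start inside the budget, signed gradient ascent, projection back onto the ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()   # ascent step on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto S_eps(x)
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid pixel range
        x_adv = x_adv.detach()
    return x_adv

# Usage sketch for one adversarial training step (names are illustrative):
#   x_adv = pgd_attack(model, torch.nn.functional.cross_entropy, x, y,
#                      eps=8/255, step_size=2/255, n_steps=10)
#   loss = torch.nn.functional.cross_entropy(model(x_adv), y)
#   loss.backward(); optimizer.step()
```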
“Initial plateau” or “suboptimal region in the early stage of training” indicate the parameters that are close to the initial ones and have similar performance. “Vanilla training” means training based on clean input data, while “vanilla adversarial training” represents the popular adversarial training method in [33]. 2 Related Work Adversarial Robustness. In this work, we focus on white-box attacks, in which the attackers have access to the model parameters. Compared with black-box attacks, white box attacks better solve the inner maximization problem in (1). In this context, [18] proposes the fast gradient sign method (FGSM) to perturb the input in the direction of its gradient: x′ = x + sign(OxL(θ)). Projected gradient descent (PGD) [33] extends FGSM by iteratively running it with a smaller step size and projecting the perturbation back to the adversarial budget. Furthermore, PGD introduces randomness by starting at a random initial point inside the adversarial budget. As a result, PGD generates much stronger adversarial examples than FGSM and is believed to be the strongest attack utilizing the network’s first order information [33]. When it comes to robustness against attacks, some methods have been proposed to train provably robust models by linear approximation [5, 29, 48], semi-definite programming [36], interval bound propagation [20] or randomized smoothing [8, 39]. However, these methods either only apply to a specific type of network, have a significant computational overhead, or are unstable. Furthermore, compared with adversarial training, they have been found to over-regularize the model and significantly decrease the clean accuracy [54]. As a result, we focus on PGD-based adversarial training, which first generates adversarial examples x′ by PGD and then uses x′ to optimize the model parameters θ. In all our experiments, the adversarial loss landscape is approximated by the loss of adversarial examples found by PGD. Loss Landscape of Deep Neural Networks. Many existing works focus on the vanilla loss landscape of the objective function in deep learning. It is challenging, because the objective L(θ) of a deep neural network is a high-dimensional nonconvex function, of which we only know very few properties. [26] proves the nonexistence of poor local minima for general deep nonlinear networks. [30] shows that stochastic gradient descent (SGD) can almost surely escape the saddle points and converge to a local minimum. For over-parameterized ReLU networks, SGD is highly likely to find a monotonically decreasing trajectory from the initialization point to the global optimum [38]. Furthermore, some works have studied the geometric properties of local minima in the loss landscape of neural networks. In this context, [27, 35] empirically show that sharp minima usually have larger generalization gaps than flat ones. Specifically, to improve generalization, [51] uses adversarial training to avoid converging to sharp minima in large batch training. However, the correspondence between sharp minima and poor generalization is based on empirical findings and sometimes controversial. For example, [11] shows counterexamples in ReLU networks by rescaling the parameters and claims that sharp minima can generalize as well as flat ones. Moreover, different minima of the loss function have been found to be well-connected. That is, there exist hyper-curves connecting different minima that are flat in the loss landscape [12, 14]. 
[55] further shows that the learned path connection can help us to effectively repair models that are vulnerable to backdoor or error-injection attacks. Recently, some methods have been proposed to visualize the loss landscape [31, 44], leading to the observation that networks of different architectures have surprisingly different landscapes. Compared with chaotic landscapes, smooth and locally near-convex landscapes make gradient-based optimization much easier. All of the above-mentioned works, however, focus on networks that have been optimized with vanilla training. Here, by contrast, we study the case of adversarial training. 3 Theoretical Analysis In this section, we conduct an analytical study of the difference between Lε(θ) and L(θ). We start with linear classification models and then discuss general nonlinear ones. 3.1 Linear Classification Models For the simple but special case of logistic regression, i.e., K = 2, we can write the analytical form of Lε(θ). We defer the detailed discussion of this case to Appendix A.1, and here focus on linear multiclass classification, i.e., K ≥ 3. We parameterize the model by W := {wi}Ki=1 ∈ Rm×K and use $f(W) = [w_1^T x, w_2^T x, ..., w_K^T x]$ as the logit function. Therefore, the vanilla loss function is convex, as $g(x, W) = \log\big(1 + \sum_{j \neq y} \exp((w_j - w_y)^T x)\big)$. Although gε(x, W) is also convex, it is no longer smooth everywhere. It is then difficult to write a unified expression for gε(x, W). So we start with the version space Vε of gε(x, W), defined as $V_\epsilon = \{W \mid (w_i - w_y)^T x' \le 0, \ \forall i \in [K], \ x' \in S_\epsilon(x)\}$. By definition, Vε is the smallest convex closed set containing all solutions robust under the adversarial budget Sε(x). The proposition below states that the version space Vε shrinks with larger values of ε. Proposition 1. Given the definition of the version space Vε, then Vε2 ⊆ Vε1 when ε1 ≤ ε2. The proof of Proposition 1 is very straightforward; we put it in Appendix B.1. In addition to Vε, we define the set Tε as $T_\epsilon = \{W \mid 0 \in \arg\min_\gamma g_\epsilon(x, \gamma W)\}$. Tε is the set of all directions in which the optimal point is the origin; that is, the corresponding models in this direction are all no better than a constant classifier. Although we cannot write the set Tε in roster notation, we show in the theorem below that Tε becomes larger as ε increases. Theorem 1. Given the definition of Tε, then Tε2 ⊆ Tε1 when ε1 ≥ ε2. In addition, there exists ε̄ such that ∀ε ≥ ε̄, Tε = Rm×K. In this case, 0 ∈ arg minW gε(x, W). We defer the proof of Theorem 1 to Appendix B.2, where we also provide a lower bound for ε̄. Theorem 1 indicates that when the adversarial budget is large enough, the optimal point is the origin. In this case, we will get a constant classifier, and training completely fails. Lε(W) is the average of gε(x, W) over the dataset, so Theorem 1 and Proposition 1 still apply if we replace g with L in the definitions of Vε and Tε. For nonlinear models like deep neural networks, these conclusions will not hold because gε(x, θ) is no longer convex. Nevertheless, our experiments in Section 4.1 evidence the same phenomena as indicated by the theoretical analysis above. A larger ε makes it harder for the optimizer to escape the initial suboptimal region. In some cases, training fails, and we obtain a constant classifier in the end. 3.2 General Nonlinear Classification Models For deep nonlinear neural networks, we cannot write the analytical form of g(x, θ) or gε(x, θ). To analyze such models, we follow [43] and assume the smoothness of the function g. Assumption 1.
The function g satisfies the following Lipschitzian smoothness conditions: $\|g(x, \theta_1) - g(x, \theta_2)\| \le L_\theta \|\theta_1 - \theta_2\|$, $\|\nabla_\theta g(x, \theta_1) - \nabla_\theta g(x, \theta_2)\| \le L_{\theta\theta} \|\theta_1 - \theta_2\|$, $\|\nabla_\theta g(x_1, \theta) - \nabla_\theta g(x_2, \theta)\| \le L_{\theta x} \|x_1 - x_2\|_p$. (2) Based on this, we study the smoothness of Lε(θ). Proposition 2. If Assumption 1 holds, then we have (see Footnote 1) $\|L_\epsilon(\theta_1) - L_\epsilon(\theta_2)\| \le L_\theta \|\theta_1 - \theta_2\|$, $\|\nabla_\theta L_\epsilon(\theta_1) - \nabla_\theta L_\epsilon(\theta_2)\| \le L_{\theta\theta} \|\theta_1 - \theta_2\| + 2\epsilon L_{\theta x}$. (3) The proof is provided in Appendix B.3, in which we can see that the upper bound in Proposition 2 is tight and can be achieved in the worst cases. Proposition 2 shows that the first-order smoothness of the objective function is preserved under adversarial attacks, but the second-order smoothness is not. That is to say, gradients in arbitrarily small neighborhoods in the θ-space can change discontinuously. The unsatisfying second-order property arises from the maximization operator defined in the functions gε and Lε. For the function gε(x, θ), the non-smooth points are those where the optimal adversarial example x′ changes abruptly in a sufficiently small neighborhood. Formally, we use θ1 and x′1 to represent the model parameters and the corresponding optimal adversarial example. We assume different gradients of the model parameters for different inputs. If there exists a positive number a > 0 such that, ∀δ > 0, we can find θ2 ∈ {θ | ‖θ − θ1‖ ≤ δ} whose corresponding optimal adversarial example x′2 satisfies ‖x′1 − x′2‖p > a, then limθ→θ1 ∇θgε(x, θ) ≠ ∇θgε(x, θ1). Lε(θ) is the aggregation of gε(x, θ) over the dataset, so it also has such non-smooth points. In addition, as the 2εLθx term in the second inequality of (3) indicates, the adversarial examples can change more under a larger adversarial budget. As a result, the (sub)gradients ∇θLε(θ) can change more abruptly in the neighborhood of the parameter space. That is, the (sub)gradients are more scattered in the adversarial loss landscape. Figure 1 provides a 2D sketch diagram showing the non-smoothness introduced by adversarial training. The red curve represents the vanilla loss function g(x, θ). Under adversarial perturbation, the loss landscape fluctuates within the light blue band. Then, the blue curve represents the worst case we can encounter in the adversarial setting, i.e., gε(x, θ). We can see that the blue curve is no longer smooth at the point where θ = 0. Importantly, as the light blue band becomes wider under a larger adversarial budget, the corresponding non-smooth point becomes sharper, which means that the difference between the gradients on both sides of the non-smooth point becomes larger. Based on Proposition 2, we show in the following theorem that the non-smoothness introduced by adversarial training makes optimization by stochastic gradient descent (SGD) more difficult. Theorem 2. Let Assumption 1 hold, the stochastic gradient $\nabla_\theta \hat{L}_\epsilon(\theta_t)$ be unbiased and have bounded variance, and the SGD update $\theta_{t+1} = \theta_t - \alpha_t \nabla_\theta \hat{L}_\epsilon(\theta_t)$ use a constant step size $\alpha_t = \alpha = \frac{1}{L_{\theta\theta}\sqrt{T}}$ for T iterations. Given the trajectory of the parameters during optimization $\{\theta_t\}_{t=1}^{T}$, we can bound the asymptotic probability of large gradients for a sufficiently large value of T as $\forall \gamma \ge 2, \ P\big(\|\nabla_\theta L_\epsilon(\theta_t)\| > \gamma \epsilon L_{\theta x}\big) < \frac{4}{\gamma^2 - 2\gamma + 4}$. (4) Footnote 1: Strictly speaking, Lε(θ) is not differentiable at some points, so ∇θLε(θ) might be ill-defined. In this paper, we use ∇θLε(θ) for simplicity. Nevertheless, the inequality holds for any subgradient v ∈ ∂θLε(θ). We provide the proof in Appendix B.4. In vanilla training, ε is 0 and L(θ) is smooth, and (4) implies that limt→+∞ ‖∇θL(θt)‖ = 0 almost surely.
This is consistent with the fact that SGD converges to a critical point for non-convex smooth functions. By contrast, in adversarial training, i.e., ε > 0, we cannot guarantee convergence to a critical point. Instead, the gradients are non-vanishing, and we can only bound the probability of obtaining gradients whose magnitude is larger than 2εLθx. For a fixed value of C := γεLθx larger than 2εLθx, the inequality (4) indicates that the probability P(‖∇θLε(θt)‖ > C) increases quadratically with ε. In deep learning practice, activation functions like sigmoid, tanh and ELU [7] satisfy the second-order smoothness in Assumption 1, but the most popular ReLU function does not. Nevertheless, adversarial training still causes gradient scattering and makes the optimization more difficult. That is, the bound on ‖∇θLε(θ1) − ∇θLε(θ2)‖ still increases with ε, and the parameter gradients change abruptly in the adversarial loss landscape. We provide a more detailed discussion of this phenomenon in Appendix A.2, which shows that our analysis and conclusions easily extend to the ReLU case. The second-order Lipschitz constant indicates the magnitude of the gradient change for a unit change in parameters. Therefore, it is a good quantitative metric of gradient scattering. In practice, we are more interested in the effective local Lipschitz constant, which only considers the neighborhood of the current parameters, than in the global Lipschitz constant. In this case, the effective local second-order Lipschitz constant can be estimated by the top eigenvalues of the Hessian matrix ∇²θLε(θ). 4 Numerical Analysis In this section, we conduct experiments on MNIST and CIFAR10 to empirically validate the theorems in Section 3. Detailed experimental settings are provided in Appendix C.1. Unless specified otherwise, we use LeNet models on MNIST and ResNet18 models on CIFAR10 in this and the following sections. Our code is available at https://github.com/liuchen11/AdversaryLossLandscape. 4.1 Gradient Magnitude In Section 3.1, we have shown that the training algorithm will get stuck at the origin and yield a constant classifier for linear models under large ε. For deep nonlinear models, the initial value of the parameters is close to the origin under most popular initialization schemes [17, 22]. Although Theorem 1 is not applicable here, we are still interested in investigating how effective gradient-based optimization is at escaping from the suboptimal initial parameters. To this end, we track the norm of the stochastic gradient ‖∇θL̂ε(θ)‖, the robust error Eε(θ) on the training set and the distance from the initial point ‖θ − θ0‖ during the first 2000 mini-batch updates for CIFAR10 models. Figures 2a, 2b and 2c evidence a clear difference between the models trained with different values of ε. When ε is small, the gradient magnitude is larger, and the model parameters move faster. Correspondingly, the training error decreases faster, which means that the model quickly escapes the initial suboptimal region. By contrast, when ε is large, the gradients are small, and the model gets stuck in the initial region. This implies that the loss landscape under a large adversarial budget impedes the escape from initial suboptimal plateaus in the early stage of training. For ReLU networks, adversarially-trained models have been found to have sparser weights and intermediate activations [9], i.e., they have more dead neurons. Dead neurons are implicitly favored by adversarial training, because their output is independent of the input perturbation.
Note that training fails when all the neurons in one layer are dead for all training instances. The model is then effectively broken into two parts by this dead layer: the preceding layers will no longer be trained because the gradients are all blocked; the following layers do not depend on the input and thus give constant outputs. In essence, training is then stuck in a parameter space that only includes constant classifiers. In practice, this usually happens when the model has small width and the value of ε is large. This is consistent with previous findings that adversarial training needs higher model capacity [33] and that too strong adversarial examples are harmful in the early stage of training [46]. Theorem 2 indicates that the gradients are non-vanishing in adversarial training and are more likely to have a large magnitude under large values of ε. This is validated by Figure 2d, in which we report the norm of the stochastic gradient ‖∇θL̂ε(θ)‖ in the last 2000 mini-batch updates for CIFAR10 models. In vanilla training, the gradient is almost zero in the end, indicating that the optimizer finds a critical point. In this case, ‖∇θL̂ε(θ)‖ is dominated by the variance introduced by stochasticity. However, ‖∇θL̂ε(θ)‖ increases with ε. When ε is larger, ‖∇θL̂ε(θ)‖ is also larger and non-vanishing, indicating that the model is still bouncing around the parameter space at the end of training. The decreased gradient magnitude in the initial suboptimal region and the increased gradient magnitude in the final near-minimum region indicate that the adversarial loss landscape is not favorable to optimization when we train under large adversarial budgets. Additional results on MNIST models are provided in Figure 8 of Appendix C.2.1, where the same observations can be made. 4.2 Hessian Analysis To study the effective local Lipschitz constant of Lε(θ), we analyze the Hessian spectrum of models trained under different values of ε. It is known that the curvature in the neighborhood of the model parameters is dominated by the top eigenvalues of the Hessian matrix ∇²Lε(θ). To this end, we use the power iteration method as in [51] to iteratively estimate the top 20 eigenvalues and the corresponding eigenvectors of the Hessian matrix. Furthermore, to discard the effect of the scale of the function Lε(θ) for different ε, we estimate the scale of Lε(θ) by randomly sampling θ. We then normalize the top Hessian eigenvalues by the average value of Lε(θ) on these random samples. In addition, we show the learning curve of Lε(θ) on the training set during training in Figure 11 of Appendix C.2.2. It clearly shows similar magnitudes of Lε(θ) for different values of ε. In Figure 3, we show the top 20 Hessian eigenvalues, both before and after normalization, of CIFAR10 models under different adversarial budgets. We also provide 3D visualizations of the neighborhood in the directions of the top 2 eigenvectors in Figure 12 of Appendix C.2.2. It is clear that the local effective second-order Lipschitz constant of the obtained model consistently increases with the value of ε. That is, the minima found in Lε(θ) are sharper under larger ε. To validate the claim in Section 3.2 that non-smoothness arises from abrupt changes of the adversarial examples, we study the similarity of adversarial perturbations generated by different model parameter values in a small neighborhood. Specifically, we perturb the model parameters θ in opposite directions, to θ + av and θ − av, where v is a unit vector and a is a scalar.
Let x′av and x′−av represent the adversarial examples generated by the corresponding model parameters. We then calculate the average cosine similarity between the perturbation x′av − x and x′−av − x over the training set. The results on CIFAR10 models are provided in Figure 4. To account for the random start in PGD, we run each experiment 4 times and report the average value. The variances of all experiments are smaller than 0.005 and thus not shown in the figure. Note that, when v is a random unit vector, the robust error E (θ) of the parameters θ ± av on both the training and test sets remains unchanged for different values of a, indicating a flat landscape in the direction v. The adversarial examples in this case are mostly similar and have very high cosine similarity. By contrast, if v is the top eigenvector of the Hessian matrix, i.e., the most curvy direction, then we see a sharp increase in the robust error E (θ) when we increase a. Correspondingly, the cosine similarity between the adversarial perturbations is much lower, which indicates dramatic changes of the adversarial examples. We perform the same experiments on MNIST models in Appendix C.2.2 with the same observations. 5 Periodic Adversarial Scheduling In Sections 3 and 4, we have theoretically and empirically shown that the adversarial loss landscape becomes less favorable to optimization under large adversarial budgets. In this section, we introduce a simple adversarial budget scheduling scheme to overcome these problems. Inspired by the learning rate warmup heuristic used in deep learning [19, 25], we introduce warmup for the adversarial budget. Let d be the current epoch index and D be the warmup period’s length. We define a cosine scheduler cos and a linear scheduler lin, parameterized by max and min, as cos(d) = 1 2 (1− cos d D π)( max − min) + min, lin(d) = ( max − min) d D + min . (5) We clip cos(d) and lin(d) between 0 and target, the target value of . If min ≤ 0 and max > target, the value of starts from 0, gradually increases to target and remains constant then. This warmup strategy allows us to overcome the fact, highlighted in the previous sections, that adversarial training is more sensitive to the learning rate under a large budget because the gradients are more scattered. This is evidenced by Figure 5, which compares the robust test error of MNIST models relying on different adversarial budget scheduling schemes. For all models, we used = 0.4, and report results after 100 epochs with different but constant learning rates in Adam [28]. Our linear and cosine schedulers perform better than using a constant value of during training and yield good performance for a broader range of learning rates: in the small learning rate regime, they speed up training; in the large learning rate regime, they stabilize training and avoid divergence. Note that, as shown in Appendix C.2.3, warmup of the learning rate does not yield similar benefits. As shown in [25], periodic learning rates enable model ensembling to improve the performance. Here, we can follow the same strategy but also for the adversarial budget. To this end, we divide the training phase into several periods and store one model at the end of each period. We make final predictions based on the ensemble of these models. This periodic scheme has no computational overhead. We call it periodic adversarial scheduling (PAS). As before, we run experiments on MNIST and CIFAR10. 
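The ensembling used by PAS can be sketched as follows; averaging softmax probabilities over the stored period-end snapshots is our own assumption here (the text above does not specify whether logits, probabilities, or votes are combined), and all names are illustrative.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """PAS-style ensembling sketch: average the softmax outputs of the models stored
    at the end of each training period and predict the argmax class."""
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in models], dim=0)
    return probs.mean(dim=0).argmax(dim=1)

# Usage sketch (illustrative): `models` holds the snapshots saved at the period
# boundaries, e.g. after epochs 100, 150 and 200 for CIFAR10.
# preds = ensemble_predict(models, x_batch)
```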
For MNIST, we train each model for 100 epochs and do not use a periodic scheduling for the learning rate, which we found not to improve the results even if we use a constant adversarial budget. For CIFAR10, we train each model for 200 epochs. When there are no learning rate resets, our results indicate the final model after 200 epochs. When using a periodic learning rate, we divide the 200 epochs into 3 periods, i.e., we reset the learning rate and the adversarial budget after 100 and 150 epochs, and compute the results using an ensemble of these 3 models. The value of learning rate and the adversarial budget size are calculated based on the ratio of the current epoch index to the current period length. We provide more details about hyper-parameter settings in Appendix C.1. We compare different scheduler in adversarial budget under different tasks and settings. We evaluate the robustness of our trained models by different kinds of attacks. First we evaluate the models under the PGD attack used in training (PGD), i.e., 50-iteration PGD for MNIST models and 10-iteration PGD for CIFAR10 models. Then, we increase the number of iterations in PGD and compute the robust error under 100-iteration PGD. To solve the issue of suboptimal step size, we also evaluate our models using the state-of-the-art AutoPGD attack [10], which search for the optimal step sizes. We run AutoPGD for 100 iterations for evaluation, based on either cross-entropy loss (APGD100 CE) or the difference of logit ratio loss (APGD100 DLR). To avoid gradient masking, we also run the state-of-the-art black-box SquareAttack [3] for 5000 iterations (Square5K). The hyperparameter details are defered in Appendix C.1. The results are summarized in Table 1, where we compare the clean and robust accuracy under different adversarial attacks on the test set. It is clear that our proposed cosine or linear schedulers yield better performance, in both clean accuracy and robust accuracy, than using a constant adversarial budget in all cases. For MNIST, warmup not only makes training robust to different choices of learning rate, but also improves the final robust accuracy. For CIFAR10, model ensembling enabled by the periodic scheduler improves the robust accuracy. 6 Discussion Model capacity. In addition to the size of the adversarial budget, the capacity of the model also greatly affects the adversarial loss landscape and thus the performance of adversarial training. Adversarial training needs higher model capacity in two aspects: if we decrease the model capacity, adversarial training will fail to converge while vanilla training still works [33]; if we increase the model capacity, the robust accuracy of adversarial training continues to rise while the clean accuracy of normal training saturates [50]. Furthermore, we show in Appendix C.2.4 that smaller models are more likely to have dead layers because of their lower dimensionality. As a result, warmup in adversarial budget is also necessary for small models. In many cases, the parameter space of small models has good minima in terms of robustness, but adversarial training with a constant value of fails to find them. For example, one can obtain small but robust models by pruning large ones [21, 52]. Architecture. The network architecture encodes the parameterization of the model, so it greatly affects the adversarial loss landscape. 
For example, in Table 1, ResNet18 has fewer trainable parameters but better performance than VGG on CIFAR10, indicating that ResNet18 has a better parameterization in terms of robustness. Since the optimal architecture for adversarial robustness is not necessarily the same as the one for clean accuracy, we believe that finding architectures inherently favorable to adversarial training is an interesting but challenging topic for future research. Connectivity of minima. Local minima in the vanilla loss landscape are well-connected [12, 14]: there exist flat hyper curves connecting them. In Appendix C.2.5, we study the connectivity of converged model parameters in the adversarial setting. We find that the parameters of two adversarially trained models are less connected in the adversarial loss landscape than in the vanilla setting. That is, the path connecting them needs to go over suboptimal regions. Adversarial example generation We approximate the adversarial loss using adversarial examples generated by PGD, which is a good estimate of the inner maximization in (1). PGD-based adversarial training updates model parameters by near-optimal adversarial examples. However, recent works [41, 47] have shown that robust models can also be trained by suboptimal adversarial examples, which are faster to obtain. The formulation of these methods differs from (1), because the inner maximization problem is not approximately solved. Understanding why models (partially) trained on suboptimal adversarial examples are resistant to stronger adversarial examples needs more investigation. 7 Conclusion We have studied the properties of the loss landscape under adversarial training. We have shown that the adversarial loss landscape is non-smooth and not favorable to optimization, due to the dependency of adversarial examples on the model parameters. Furthermore, we have empirically evidenced that large adversarial budgets slow down training in the early stages and impedes convergence in the end. Finally, we have demonstrated the advantages of warmup and periodic scheduling of the adversarial budget size during training. They make training more robust to different choices of learning rate and yield better performance than vanilla adversarial training. 8 Broader Impact The existence of adversarial examples has raised serious concerns about the deployment of deep learning models in safety-sensitive domains, such as medical imaging [32] and autonomous navigation [1]. In these domains, as in many others, adversarial training remains the most popular, effective, and general method to train robust models. By studying the nature of optimization in adversarial training and proposing solutions to overcome the underlying challenges, our work has potential for high societal impact in these fields. Although the robust accuracy is much lower than the clean accuracy so far, the intrinsic properties of adversarial training we have discovered open up future research directions to improve its performance. From an ecological perspective, however, we acknowledge that the higher computational cost of adversarial training translates to higher carbon footprint than vanilla training. Nevertheless, we believe that the potential societal benefits of robustness to attacks outweigh this drawback. 9 Acknowledgements We thankfully acknowledge the support of the Hasler Foundation (Grant No. 16076) for this work.
1. What is the primary contribution of the paper regarding the loss landscape of PGD-based adversarial training? 2. What are the strengths of the proposed approach, particularly in terms of its ability to explain the behavior of linear classifiers and the phenomenon of "gradient scattering"? 3. What are the weaknesses of the paper, such as the unclear definition of error and robust error, and the lack of proof or citation for the claim that T_eps is the complement of V_eps in binary classification? 4. Do you have any questions or concerns about the experimental results, such as the minimal improvement of the warm-up scheduling strategy over a constant LR schedule? 5. Would additional experiments or analyses, such as those involving linear classifiers on synthetic data or utilizing techniques like interval bound propagation or convex relaxations for analyzing the Hessian, enhance the paper's findings?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper studies the loss landscape of PGD-based adversarial training under the cross-entropy loss and compares this landscape with that of vanilla training of neural networks. They find that the landscape of adversarial training may be less favorable to optimization as the size of the allowable perturbation increases; to do so, they investigate linear models and they also analyze properties of SGD, which is commonly used to train such networks in the deep, nonlinear setting. Further, they propose a learning rate schedule and a method of chunking the training process to predict using an ensemble of models to lower the error rate in adversarial training. Strengths + Figure 5 is quite illustrative following Prop 2. I think it should go in the main text. This would much better help to bring across this "scattering" phenomenon, which I didn't fully get until I saw that picture. + This study is well-motivated. Indeed many works do look at the landscape of vanilla training, but much less is known about the landscape of adversarial training. + The linear analysis provides some interesting insights. In particular, it seems intuitive that we should first be able to explain the behavior of linear classifiers before understanding the more complex case of deep classifiers. The claim that linear classifiers tend to become constant for large enough adversarial budgets is quite interesting and should be of interest to the growing community of folks who are trying to understand the properties of adversarial training by studying the linear case. + This "gradient scattering" phenomenon is quite interesting and not necessarily intuitive. It should be of independent interest to the community and after reading it seems to me that this is the foremost result in the paper. I would like to see more work done here to further explain this phenomenon. + The experiments are relatively thorough and consider several well-known datasets and architectures. Weaknesses - What is the definition of error and robust error? Is it just 1-accuracy or something else? - Proposition 1 and its proof are not clear to me. In particular, the paragraph after Prop. 1 is confusing. You haven't yet even defined g_\eps(x, W), and it's unclear to me why the version space V_\eps is defined in terms of g_\eps(x,W). Can't we simply define the version space as a function of x and W rather than including the nebulously defined function g_\eps? Further, the "proof" seems strange to me. Why not just say the following: let \eps_1\leq\eps_2. Now let \tilde{W} be any element of V_{\eps_2}. Then \tilde{W} is robustly optimal for all x' \in S_{\eps_2}(x). Since by assumption \eps_2\geq\eps_1, it follows that S_{\eps_1} \subset S_{\eps_2}. Therefore, \tilde{W} is robustly optimal for all x''\in S_{\eps_1} and thus \tilde{W}\in V_{\eps_1}. Therefore, V_{\eps_2} \subset V_{\eps_1}. QED. This seems much more straightfoward and even more general than the "easy" proof that was presented. For example, while it seems true that g_\eps(x,W) <= log K due to the choice of the cross-entropy loss, but with the above proof you don't even need - The authors claim that T_\eps is the complement of V_\eps in binary classification. No proof is given and no previous work is cited. It's not obvious upon reading this why this claim is immediately true and not worthy of further comment. - On page 4, I am confused by the assumption "we assume different gradients of the model parameters for different inputs." Is this something that is valid to assume? 
It seems to be assuming the conclusion that you want, which is that \lim_{\theta\rightarrow\theta_1} \nabla_\theta g_\eps(x,\theta) \neq \nabla_\theta g_\eps(x,\theta_1). - To get the gradient discontinuity property that you are looking for after Prop 2, I think you should appeal to the fact that the second bound is _tight_. If it were not tight, then the conclusion of non-smoothness would not necessarily hold. (I see in the appendix that it is in fact tight, but the main text should appeal to this point.) - The idea of introducing a warm-up scheduling is interesting, but based on Table 1 it seems that it doesn't have a very significant effect. In particular, on CIFAR10 with VGG, its improvement over a constant LR schedule is minimal. And in the other cases the improvements are on the order of 1-2%. Thus I think the authors overclaim slightly when they say that their strategy "can effectively overcome these challenges." - It would have been nice to have some experiments with linear classifiers on synthetic data just to verify empirically the results in Theorem 1. - The Hessian sampling is interesting, but it would be nicer if there were something that could give guarantees for a local Lipschitz constant. In particular, the works that consider bounding the output set of a neural network using e.g. interval bound propagation or convex relaxations may be more useful here.
NIPS
Title On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them Abstract We analyze the influence of adversarial training on the loss landscape of machine learning models. To this end, we first provide analytical studies of the properties of adversarial loss functions under different adversarial budgets. We then demonstrate that the adversarial loss landscape is less favorable to optimization, due to increased curvature and more scattered gradients. Our conclusions are validated by numerical analyses, which show that training under large adversarial budgets impede the escape from suboptimal random initialization, cause non-vanishing gradients and make the model find sharper minima. Based on these observations, we show that a periodic adversarial scheduling (PAS) strategy can effectively overcome these challenges, yielding better results than vanilla adversarial training while being much less sensitive to the choice of learning rate. N/A We analyze the influence of adversarial training on the loss landscape of machine learning models. To this end, we first provide analytical studies of the properties of adversarial loss functions under different adversarial budgets. We then demonstrate that the adversarial loss landscape is less favorable to optimization, due to increased curvature and more scattered gradients. Our conclusions are validated by numerical analyses, which show that training under large adversarial budgets impede the escape from suboptimal random initialization, cause non-vanishing gradients and make the model find sharper minima. Based on these observations, we show that a periodic adversarial scheduling (PAS) strategy can effectively overcome these challenges, yielding better results than vanilla adversarial training while being much less sensitive to the choice of learning rate. 1 Introduction State-of-the-art deep learning models have been found to be vulnerable to adversarial attacks [18, 34, 45]. Imperceptible perturbations of the input can make the model produce wrong predictions with high confidence. This raises concerns about deep learning’s deployment in safety-critical applications. Although many training algorithms have been proposed to counter such adversarial attacks, most of them were observed to fail when facing stronger attacks [4, 10]. Adversarial training [33] is one of the few exceptions, so far remaining effective and thus popular. It uses adversarial examples generated with the attacker’s scheme to update the model parameters. However, adversarial training and its variants [2, 6, 24, 42, 53] have been found to have a much larger generalization gap [37] and to require larger model capacity for convergence [49]. Although recent works [6, 40] show that the adversarial training error reduces to almost 0% with a large enough model and that the generalization gap can be narrowed by using more training data, convergence in adversarial training remains much slower than in vanilla training on clean data. This indicates discrepancies in the underlying optimization landscapes. While much work has studied the loss landscape of deep networks in vanilla training [12, 13, 14, 15, 31], such an analysis in adversarial training remains unaddressed. Here we study optimization in adversarial training. Vanilla training can be considered as a special case where no perturbation is allowed, i.e., zero adversarial budget. Therefore, we focus on the impact of the adversarial budget size on the loss landscape. 
In this context, we investigate from a theoretical and empirical perspective how different adversarial budget sizes affect the loss landscape to make optimization more challenging. Our analyses start with linear models and then generalize to nonlinear deep learning ones. We study the whole training process and identify different behaviors in the early and final stages of training. Based on our observations, we then introduce a scheduling strategy for the adversarial budget during training. We empirically show this scheme to yield better performance and to be less sensitive to the learning rate than vanilla adversarial training. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Contributions. Our contributions can be summarized as follows. 1) From a theoretical perspective, we show that, for linear models, adversarial training under a large enough budget produces a constant classifier. For general nonlinear models, we identify the existence of an abrupt change in the adversarial examples, which makes the loss landscape less smooth. This causes severe gradient scattering and slows down the convergence of training. 2) Our numerical analysis shows that training under large adversarial budgets hinders the model to escape from suboptimal initial regions, while also causing large non-vanishing gradients in the final stage of training. Furthermore, by Hessian analysis, we evidence that the minima reached in the adversarial loss landscape are sharper when the adversarial budget is bigger. 3) We show that a periodic adversarial scheduling (PAS) strategy, corresponding to a cyclic adversarial budget scheduling scheme with warmup, addresses these challenges. Specifically, it makes training less sensitive to the choice of learning rate and yields better robust accuracy than vanilla adversarial training without any computational overhead. Notation and Terminology. We use plain letters, bold lowercase letters and bold uppercase letters to represent scalars, vectors and matrices, respectively. ‖v‖ represents the Euclidean norm of vector v and [K] is an abbreviation of the set {0, 1, 2, ...,K − 1}. In a classification problem {(xi, yi)}Ni=1, where (xi, yi) ∈ Rm × [K], the classifier consists of a logit function f : Rm → Rk, which is usually a neural network, and a risk function ` : Rk× [K]→ R, which is the softmax cross-entropy loss. The adversarial budget S(p) (x) of a data point x, whose size is , is defined based on an lp norm-based constraint {x′|‖x− x′‖p ≤ }, and we use S (x) to denote the l∞ constraint for simplicity. Given the model parameters θ ∈ Θ, we use g(x, θ) : Rm ×Θ→ R to denote the loss function for an individual data point, ignoring the label y for simplicity. If we use L (θ) to denote the adversarial loss function under adversarial budget S(p) (x), adversarial training solves the min-max problem min θ L (θ) := 1 N N∑ i=1 g (xi, θ) where g (xi, θ) := max x′i∈S (p) (xi) g(x′i, θ) . (1) L(θ) := L0(θ) is the vanilla loss function. If 6= 0, the adversarial example x′i, i.e., the worst-case input in S(p) (xi), depends on the model parameters. We call the landscape of functions L(θ) and L (θ) the vanilla and adversarial loss landscape, respectively. Similarly, we use E(θ) and E (θ) to represent the clean error and robust error under adversarial budget S(p) (x). In this paper, we call a function smooth if it is C1-continuous. We use θ0 to denote the initial parameters. 
“Initial plateau” or “suboptimal region in the early stage of training” indicate the parameters that are close to the initial ones and have similar performance. “Vanilla training” means training based on clean input data, while “vanilla adversarial training” represents the popular adversarial training method in [33]. 2 Related Work Adversarial Robustness. In this work, we focus on white-box attacks, in which the attackers have access to the model parameters. Compared with black-box attacks, white box attacks better solve the inner maximization problem in (1). In this context, [18] proposes the fast gradient sign method (FGSM) to perturb the input in the direction of its gradient: x′ = x + sign(OxL(θ)). Projected gradient descent (PGD) [33] extends FGSM by iteratively running it with a smaller step size and projecting the perturbation back to the adversarial budget. Furthermore, PGD introduces randomness by starting at a random initial point inside the adversarial budget. As a result, PGD generates much stronger adversarial examples than FGSM and is believed to be the strongest attack utilizing the network’s first order information [33]. When it comes to robustness against attacks, some methods have been proposed to train provably robust models by linear approximation [5, 29, 48], semi-definite programming [36], interval bound propagation [20] or randomized smoothing [8, 39]. However, these methods either only apply to a specific type of network, have a significant computational overhead, or are unstable. Furthermore, compared with adversarial training, they have been found to over-regularize the model and significantly decrease the clean accuracy [54]. As a result, we focus on PGD-based adversarial training, which first generates adversarial examples x′ by PGD and then uses x′ to optimize the model parameters θ. In all our experiments, the adversarial loss landscape is approximated by the loss of adversarial examples found by PGD. Loss Landscape of Deep Neural Networks. Many existing works focus on the vanilla loss landscape of the objective function in deep learning. It is challenging, because the objective L(θ) of a deep neural network is a high-dimensional nonconvex function, of which we only know very few properties. [26] proves the nonexistence of poor local minima for general deep nonlinear networks. [30] shows that stochastic gradient descent (SGD) can almost surely escape the saddle points and converge to a local minimum. For over-parameterized ReLU networks, SGD is highly likely to find a monotonically decreasing trajectory from the initialization point to the global optimum [38]. Furthermore, some works have studied the geometric properties of local minima in the loss landscape of neural networks. In this context, [27, 35] empirically show that sharp minima usually have larger generalization gaps than flat ones. Specifically, to improve generalization, [51] uses adversarial training to avoid converging to sharp minima in large batch training. However, the correspondence between sharp minima and poor generalization is based on empirical findings and sometimes controversial. For example, [11] shows counterexamples in ReLU networks by rescaling the parameters and claims that sharp minima can generalize as well as flat ones. Moreover, different minima of the loss function have been found to be well-connected. That is, there exist hyper-curves connecting different minima that are flat in the loss landscape [12, 14]. 
[55] further shows that the learned path connection can help us to effectively repair models that are vulnerable to backdoor or error-injection attacks. Recently, some methods have been proposed to visualize the loss landscape [31, 44], leading to the observation that networks of different architectures have surprisingly different landscapes. Compared with chaotic landscapes, smooth and locally near-convex landscapes make gradient-based optimization much easier. All of the above-mentioned works, however, focus on networks that have been optimized with vanilla training. Here, by contrast, we study the case of adversarial training. 3 Theoretical Analysis In this section, we conduct an analytical study of the difference between L (θ) and L(θ). We start with linear classification models and then discuss general nonlinear ones. 3.1 Linear Classification Models For the simple but special case of logistic regression, i.e., K = 2, we can write the analytical form of L (θ). We defer the detailed discussion of this case to Appendix A.1, and here focus on linear multiclass classification, i.e., K ≥ 3. We parameterize the model by W := {wi}Ki=1 ∈ Rm×K and use f(W) = [wT1 x,w T 2 x, ...,w T Kx] as the logit function. Therefore, the vanilla loss function is convex as g(x,W) = log ( 1 + ∑ j 6=y exp (wj−wy)Tx ) . Although g (x,W) is also convex, it is no longer smooth everywhere. It is then difficult to write a unified expression of g (x,W). So we start with the version space V of g (x,W) defined as V = { W ∣∣∣∣(wi −wy)x′ ≤ 0,∀i ∈ [K],x′ ∈ S (x)}. By definition, V is the smallest convex closed set containing all solutions robust under the adversarial budget S (x). The proposition below states that the version space V shrinks with larger values of . Proposition 1. Given the definition of the version space V , then V 2 ⊆ V 1 when 1 ≤ 2. The proof of Proposition 1 is very straightforward, we put it in Appendix B.1. In addition to V , we define the set T as T = { W ∣∣∣∣0 ∈ arg minγ g (x, γW)}. T is the set of all directions in which the optimal point is the origin; that is, the corresponding models in this direction are all no better than a constant classifier. Although we cannot write the set T in roster notation, we show in the theorem below that T becomes larger as increases. Theorem 1. Given the definition of T , then T 2 ⊆ T 1 when 1 ≥ 2. In addition, ∃̄ such that ∀ ≥ ̄, T = Rm×K . In this case, 0 ∈ arg minW g (x,W). We defer the proof of Theorem 1 to Appendix B.2, where we also provide a lower bound for ̄. Theorem 1 indicates that when the adversarial budget is large enough, the optimal point is the origin. In this case, we will get a constant classifier, and training completely fails. L (W) is the average of g (x,W) over the dataset, so Theorem 1 and Proposition 1 still apply if we replace g with L in the definition of V and T . For nonlinear models like deep neural networks, these conclusions will not hold because g (x, θ) is no longer convex. Nevertheless, our experiments in Section 4.1 evidence the same phenomena as indicated by the theoretical analysis above. Larger make it harder for the optimizer to escape the initial suboptimal region. In some cases, training fails, and we obtain a constant classifier in the end. 3.2 General Nonlinear Classification Models For deep nonlinear neural networks, we cannot write the analytical form of g(x, θ) or g (x, θ). To analyze such models, we follow [43] and assume the smoothness of the function g. Assumption 1. 
The function g satisfies the following Lipschitzian smoothness conditions:
‖g(x, θ1) − g(x, θ2)‖ ≤ Lθ ‖θ1 − θ2‖,
‖∇θg(x, θ1) − ∇θg(x, θ2)‖ ≤ Lθθ ‖θ1 − θ2‖,
‖∇θg(x1, θ) − ∇θg(x2, θ)‖ ≤ Lθx ‖x1 − x2‖p.   (2)
Based on this, we study the smoothness of Lε(θ). Proposition 2. If Assumption 1 holds, then we have¹
‖Lε(θ1) − Lε(θ2)‖ ≤ Lθ ‖θ1 − θ2‖,
‖∇θLε(θ1) − ∇θLε(θ2)‖ ≤ Lθθ ‖θ1 − θ2‖ + 2εLθx.   (3)
The proof is provided in Appendix B.3, in which we can see that the upper bound in Proposition 2 is tight and can be achieved in the worst cases. Proposition 2 shows that the first-order smoothness of the objective function is preserved under adversarial attacks, but the second-order smoothness is not. That is to say, gradients in arbitrarily small neighborhoods in the θ-space can change discontinuously. The unsatisfying second-order property arises from the maximization operator defined in the functions gε and Lε. For the function gε(x, θ), the non-smooth points are those where the optimal adversarial example x′ changes abruptly in a sufficiently small neighborhood. Formally, we use θ1 and x′1 to represent the model parameters and the corresponding optimal adversarial example. We assume the gradients with respect to the model parameters differ for different inputs. If there exists a positive number a > 0 such that, ∀δ > 0, we can find θ2 ∈ {θ | ‖θ − θ1‖ ≤ δ} whose corresponding optimal adversarial example x′2 satisfies ‖x′1 − x′2‖p > a, then limθ→θ1 ∇θgε(x, θ) ≠ ∇θgε(x, θ1). Lε(θ) is the aggregation of gε(x, θ) over the dataset, so it also has such non-smooth points. In addition, as the 2εLθx term in the second inequality of (3) indicates, the adversarial examples can change more under a larger adversarial budget. As a result, the (sub)gradients ∇θLε(θ) can change more abruptly in the neighborhood of the parameter space. That is, the (sub)gradients are more scattered in the adversarial loss landscape. Figure 1 provides a 2D sketch diagram showing the non-smoothness introduced by adversarial training. The red curve represents the vanilla loss function g(x, θ). Under adversarial perturbation, the loss landscape fluctuates within the light blue band. Then, the blue curve represents the worst case we can encounter in the adversarial setting, i.e., gε(x, θ). We can see that the blue curve is not smooth anymore at the point where θ = 0. Importantly, as the light blue band becomes wider under a larger adversarial budget, the corresponding non-smooth point becomes sharper, which means that the difference between the gradients on both sides of the non-smooth point becomes larger. Based on Proposition 2, we show in the following theorem that the non-smoothness introduced by adversarial training makes the optimization by stochastic gradient descent (SGD) more difficult. Theorem 2. Let Assumption 1 hold, the stochastic gradient ∇θL̂ε(θt) be unbiased and have bounded variance, and the SGD update θt+1 = θt − αt∇θL̂ε(θt) use a constant step size αt = α = 1/(Lθθ√T) for T iterations. Given the trajectory of the parameters during optimization {θt}_{t=1}^{T}, we can bound the asymptotic probability of large gradients for a sufficiently large value of T as
∀γ ≥ 2, P(‖∇θLε(θt)‖ > γεLθx) < 4 / (γ² − 2γ + 4).   (4)
¹Strictly speaking, Lε(θ) is not differentiable at some points, so ∇θLε(θ) might be ill-defined. In this paper, we use ∇θLε(θ) for simplicity. Nevertheless, the inequality holds for any subgradient v ∈ ∂θLε(θ).
We provide the proof in Appendix B.4. In vanilla training, ε is 0 and L(θ) is smooth, and (4) implies that limt→+∞ ‖∇θL(θt)‖ = 0 almost surely.
This is consistent with the fact that SGD converges to a critical point with non-convex smooth functions. By contrast, in adversarial training, i.e., ε > 0, we cannot guarantee convergence to a critical point. Instead, the gradients are non-vanishing, and we can only bound the probability of obtaining gradients whose magnitude is larger than 2εLθx. For a fixed value of C := γεLθx larger than 2εLθx, the inequality (4) indicates that the probability P(‖∇θLε(θt)‖ > C) increases quadratically with ε. In deep learning practice, activation functions like sigmoid, tanh and ELU [7] satisfy the second-order smoothness in Assumption 1, but the most popular ReLU function does not. Nevertheless, adversarial training still causes gradient scattering and makes the optimization more difficult. That is, the bound on ‖∇θLε(θ1) − ∇θLε(θ2)‖ still increases with ε, and the parameter gradients change abruptly in the adversarial loss landscape. We provide a more detailed discussion of this phenomenon in Appendix A.2, which shows that our analysis and conclusions easily extend to the ReLU case. The second-order Lipschitz constant indicates the magnitude of the gradient change for a unit change in parameters. Therefore, it is a good quantitative metric of gradient scattering. In practice, we are more interested in the effective local Lipschitz constant, which only considers the neighborhood of the current parameters, than in the global Lipschitz constant. In this case, the effective local second-order Lipschitz constant can be estimated by the top eigenvalues of the Hessian matrix ∇²θLε(θ). 4 Numerical Analysis In this section, we conduct experiments on MNIST and CIFAR10 to empirically validate the theorems in Section 3. Detailed experimental settings are provided in Appendix C.1. Unless specified, we use LeNet models on MNIST and ResNet18 models on CIFAR10 in this and the following sections. Our code is available at https://github.com/liuchen11/AdversaryLossLandscape. 4.1 Gradient Magnitude In Section 3.1, we have shown that the training algorithm will get stuck at the origin and yield a constant classifier for linear models under large ε. For deep nonlinear models, the initial value of the parameters is close to the origin under most popular initialization schemes [17, 22]. Although Theorem 1 is not applicable here, we are still interested in investigating how effective gradient-based optimization is at escaping from the suboptimal initial parameters. To this end, we track the norm of the stochastic gradient ‖∇θL̂ε(θ)‖, the robust error Eε(θ) on the training set, and the distance from the initial point ‖θ − θ0‖ during the first 2000 mini-batch updates for CIFAR10 models. Figures 2a, 2b and 2c evidence a clear difference between the models trained with different values of ε. When ε is small, the gradient magnitude is larger, and the model parameters move faster. Correspondingly, the training error decreases faster, which means that the model quickly escapes the initial suboptimal region. By contrast, when ε is large, the gradients are small, and the model gets stuck in the initial region. This implies that the loss landscape under a large adversarial budget impedes the escape from initial suboptimal plateaus in the early stage of training. For ReLU networks, adversarially-trained models have been found to have sparser weights and intermediate activations [9], i.e., they have more dead neurons. Dead neurons are implicitly favored by adversarial training, because the output is independent of the input perturbation.
Note that training fails when all the neurons in one layer are dead for all training instances. The model is then effectively broken into two parts by this dead layer: the preceding layers will no longer be trained because the gradients are all blocked; the following layers do not depend on the input and thus give constant outputs. In essence, training is then stuck in a parameter space that only includes constant classifiers. In practice, this usually happens when the model has small width and the value of ε is large. This is consistent with previous findings that adversarial training needs higher model capacity [33] and that too strong adversarial examples are harmful in the early stage of training [46]. Theorem 2 indicates that the gradients are non-vanishing in adversarial training and more likely to have large magnitude under large values of ε. This is validated by Figure 2d, in which we report the norm of the stochastic gradient ‖∇θL̂ε(θ)‖ in the last 2000 mini-batch updates for CIFAR10 models. In vanilla training, the gradient is almost zero in the end, indicating that the optimizer finds a critical point. In this case, ‖∇θL̂ε(θ)‖ is dominated by the variance introduced by stochasticity. However, ‖∇θL̂ε(θ)‖ increases with ε. When ε is larger, ‖∇θL̂ε(θ)‖ is also larger and non-vanishing, indicating that the model is still bouncing around the parameter space at the end of training. The decreased gradient magnitude in the initial suboptimal region and the increased gradient magnitude in the final near-minimum region indicate that the adversarial loss landscape is not favorable to optimization when we train under large adversarial budgets. Additional results on MNIST models are provided in Figure 8 of Appendix C.2.1, where the same observations can be made. 4.2 Hessian Analysis To study the effective local Lipschitz constant of Lε(θ), we analyze the Hessian spectrum of models trained under different values of ε. It is known that the curvature in the neighborhood of the model parameters is dominated by the top eigenvalues of the Hessian matrix ∇²θLε(θ). To this end, we use the power iteration method as in [51] to iteratively estimate the top 20 eigenvalues and the corresponding eigenvectors of the Hessian matrix. Furthermore, to discard the effect of the scale of the function Lε(θ) for different ε, we estimate the scale of Lε(θ) by randomly sampling θ. We then normalize the top Hessian eigenvalues by the average value of Lε(θ) on these random samples. In addition, we show the learning curve of Lε(θ) on the training set during training in Figure 11 of Appendix C.2.2. It clearly shows a similar magnitude of Lε(θ) for different values of ε. In Figure 3, we show the top 20 Hessian eigenvalues, both before and after normalization, of CIFAR10 models under different adversarial budgets. We also provide 3D visualizations of the neighborhood in the directions of the top 2 eigenvectors in Figure 12 of Appendix C.2.2. It is clear that the local effective second-order Lipschitz constant of the obtained model consistently increases with the value of ε. That is, the minima found in Lε(θ) are sharper under larger ε.
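As a concrete companion to this Hessian analysis, the sketch below shows how the top Hessian eigenvalue of the adversarial loss can be estimated by power iteration with Hessian-vector products in PyTorch. This is our own illustrative code rather than the authors' released implementation: the function names are ours, `loss` is assumed to be Lε evaluated on a mini-batch of PGD adversarial examples, and only the leading eigenvalue is computed; recovering the top-20 spectrum of Figure 3 would additionally require deflating against previously found eigenvectors and normalizing by the average loss at randomly sampled parameters, as described above.

```python
import torch

def top_hessian_eigenvalue(loss, params, n_iter=20):
    # First-order gradients with create_graph=True so that a second backward pass
    # through them yields Hessian-vector products.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    vec = torch.randn_like(flat_grad)
    vec /= vec.norm()
    eigenvalue = 0.0
    for _ in range(n_iter):
        hv = torch.autograd.grad(flat_grad @ vec, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        eigenvalue = torch.dot(vec, hv).item()  # Rayleigh quotient estimate
        vec = hv / (hv.norm() + 1e-12)          # power-iteration update
    return eigenvalue, vec
```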
To validate the claim in Section 3.2 that non-smoothness arises from abrupt changes of the adversarial examples, we study the similarity of adversarial perturbations generated by different model parameter values in a small neighborhood. Specifically, we perturb the model parameters θ in opposite directions, to θ + av and θ − av, where v is a unit vector and a is a scalar. Let x′+av and x′−av represent the adversarial examples generated by the corresponding model parameters. We then calculate the average cosine similarity between the perturbations x′+av − x and x′−av − x over the training set. The results on CIFAR10 models are provided in Figure 4. To account for the random start in PGD, we run each experiment 4 times and report the average value. The variances of all experiments are smaller than 0.005 and thus not shown in the figure. Note that, when v is a random unit vector, the robust error Eε of the parameters θ ± av on both the training and test sets remains unchanged for different values of a, indicating a flat landscape in the direction v. The adversarial examples in this case are mostly similar and have very high cosine similarity. By contrast, if v is the top eigenvector of the Hessian matrix, i.e., the most curvy direction, then we see a sharp increase in the robust error Eε when we increase a. Correspondingly, the cosine similarity between the adversarial perturbations is much lower, which indicates dramatic changes of the adversarial examples. We perform the same experiments on MNIST models in Appendix C.2.2, with the same observations. 5 Periodic Adversarial Scheduling In Sections 3 and 4, we have theoretically and empirically shown that the adversarial loss landscape becomes less favorable to optimization under large adversarial budgets. In this section, we introduce a simple adversarial budget scheduling scheme to overcome these problems. Inspired by the learning rate warmup heuristic used in deep learning [19, 25], we introduce warmup for the adversarial budget. Let d be the current epoch index and D be the warmup period’s length. We define a cosine scheduler εcos and a linear scheduler εlin, parameterized by εmax and εmin, as
εcos(d) = (1/2)(1 − cos(dπ/D))(εmax − εmin) + εmin,   εlin(d) = (εmax − εmin) d/D + εmin.   (5)
We clip εcos(d) and εlin(d) between 0 and εtarget, the target value of ε. If εmin ≤ 0 and εmax > εtarget, the value of ε starts from 0, gradually increases to εtarget, and then remains constant (a short implementation sketch of these schedulers is given below). This warmup strategy allows us to overcome the fact, highlighted in the previous sections, that adversarial training is more sensitive to the learning rate under a large budget because the gradients are more scattered. This is evidenced by Figure 5, which compares the robust test error of MNIST models relying on different adversarial budget scheduling schemes. For all models, we used ε = 0.4, and report results after 100 epochs with different but constant learning rates in Adam [28]. Our linear and cosine schedulers perform better than using a constant value of ε during training and yield good performance for a broader range of learning rates: in the small learning rate regime, they speed up training; in the large learning rate regime, they stabilize training and avoid divergence. Note that, as shown in Appendix C.2.3, warmup of the learning rate does not yield similar benefits. As shown in [25], periodic learning rates enable model ensembling to improve the performance. Here, we can follow the same strategy, but also for the adversarial budget. To this end, we divide the training phase into several periods and store one model at the end of each period. We make final predictions based on the ensemble of these models. This periodic scheme has no computational overhead. We call it periodic adversarial scheduling (PAS). As before, we run experiments on MNIST and CIFAR10.
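The warmup schedulers of Eq. (5) referenced above can be implemented in a few lines. The sketch below uses our own naming rather than the authors' code; holding the budget at εtarget once the warmup period D is over is our reading of the clipping described in the text.

```python
import math

def adversarial_budget(d, D, eps_min, eps_max, eps_target, mode="cosine"):
    """Adversarial budget for epoch d with warmup period D, following Eq. (5)."""
    t = min(d, D) / D  # warmup progress; hold the schedule after D epochs
    if mode == "cosine":
        eps = 0.5 * (1.0 - math.cos(math.pi * t)) * (eps_max - eps_min) + eps_min
    else:  # "linear"
        eps = (eps_max - eps_min) * t + eps_min
    return min(max(eps, 0.0), eps_target)  # clip to [0, eps_target]

# With eps_min <= 0 and eps_max > eps_target, the budget starts at 0, ramps up over
# the first D epochs, and then stays at eps_target.
schedule = [adversarial_budget(d, D=50, eps_min=-0.05, eps_max=0.5, eps_target=0.4)
            for d in range(100)]
```

For the periodic variant (PAS), d and D would be computed relative to the current period, and one model is stored at the end of each period for ensembling.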
For MNIST, we train each model for 100 epochs and do not use periodic scheduling for the learning rate, which we found not to improve the results even if we use a constant adversarial budget. For CIFAR10, we train each model for 200 epochs. When there are no learning rate resets, our results indicate the final model after 200 epochs. When using a periodic learning rate, we divide the 200 epochs into 3 periods, i.e., we reset the learning rate and the adversarial budget after 100 and 150 epochs, and compute the results using an ensemble of these 3 models. The value of the learning rate and the adversarial budget size are calculated based on the ratio of the current epoch index to the current period length. We provide more details about hyper-parameter settings in Appendix C.1. We compare different adversarial budget schedulers under different tasks and settings. We evaluate the robustness of our trained models against different kinds of attacks. First, we evaluate the models under the PGD attack used in training (PGD), i.e., 50-iteration PGD for MNIST models and 10-iteration PGD for CIFAR10 models. Then, we increase the number of iterations in PGD and compute the robust error under 100-iteration PGD. To address the issue of suboptimal step sizes, we also evaluate our models using the state-of-the-art AutoPGD attack [10], which searches for the optimal step size. We run AutoPGD for 100 iterations for evaluation, based on either the cross-entropy loss (APGD100 CE) or the difference of logits ratio loss (APGD100 DLR). To avoid gradient masking, we also run the state-of-the-art black-box SquareAttack [3] for 5000 iterations (Square5K). The hyperparameter details are deferred to Appendix C.1. The results are summarized in Table 1, where we compare the clean and robust accuracy under different adversarial attacks on the test set. It is clear that our proposed cosine and linear schedulers yield better performance, in both clean accuracy and robust accuracy, than using a constant adversarial budget in all cases. For MNIST, warmup not only makes training robust to different choices of learning rate, but also improves the final robust accuracy. For CIFAR10, model ensembling enabled by the periodic scheduler improves the robust accuracy. 6 Discussion Model capacity. In addition to the size of the adversarial budget, the capacity of the model also greatly affects the adversarial loss landscape and thus the performance of adversarial training. Adversarial training needs higher model capacity in two respects: if we decrease the model capacity, adversarial training will fail to converge while vanilla training still works [33]; if we increase the model capacity, the robust accuracy of adversarial training continues to rise while the clean accuracy of normal training saturates [50]. Furthermore, we show in Appendix C.2.4 that smaller models are more likely to have dead layers because of their lower dimensionality. As a result, warmup of the adversarial budget is also necessary for small models. In many cases, the parameter space of small models has good minima in terms of robustness, but adversarial training with a constant value of ε fails to find them. For example, one can obtain small but robust models by pruning large ones [21, 52]. Architecture. The network architecture encodes the parameterization of the model, so it greatly affects the adversarial loss landscape.
For example, in Table 1, ResNet18 has fewer trainable parameters but better performance than VGG on CIFAR10, indicating that ResNet18 has a better parameterization in terms of robustness. Since the optimal architecture for adversarial robustness is not necessarily the same as the one for clean accuracy, we believe that finding architectures inherently favorable to adversarial training is an interesting but challenging topic for future research. Connectivity of minima. Local minima in the vanilla loss landscape are well-connected [12, 14]: there exist flat hyper-curves connecting them. In Appendix C.2.5, we study the connectivity of converged model parameters in the adversarial setting. We find that the parameters of two adversarially trained models are less connected in the adversarial loss landscape than in the vanilla setting. That is, the path connecting them needs to go over suboptimal regions. Adversarial example generation. We approximate the adversarial loss using adversarial examples generated by PGD, which is a good estimate of the inner maximization in (1). PGD-based adversarial training updates the model parameters by near-optimal adversarial examples. However, recent works [41, 47] have shown that robust models can also be trained by suboptimal adversarial examples, which are faster to obtain. The formulation of these methods differs from (1), because the inner maximization problem is not approximately solved. Understanding why models (partially) trained on suboptimal adversarial examples are resistant to stronger adversarial examples needs more investigation. 7 Conclusion We have studied the properties of the loss landscape under adversarial training. We have shown that the adversarial loss landscape is non-smooth and not favorable to optimization, due to the dependency of adversarial examples on the model parameters. Furthermore, we have empirically evidenced that large adversarial budgets slow down training in the early stages and impede convergence in the end. Finally, we have demonstrated the advantages of warmup and periodic scheduling of the adversarial budget size during training. They make training more robust to different choices of learning rate and yield better performance than vanilla adversarial training. 8 Broader Impact The existence of adversarial examples has raised serious concerns about the deployment of deep learning models in safety-sensitive domains, such as medical imaging [32] and autonomous navigation [1]. In these domains, as in many others, adversarial training remains the most popular, effective, and general method to train robust models. By studying the nature of optimization in adversarial training and proposing solutions to overcome the underlying challenges, our work has potential for high societal impact in these fields. Although the robust accuracy is much lower than the clean accuracy so far, the intrinsic properties of adversarial training we have discovered open up future research directions to improve its performance. From an ecological perspective, however, we acknowledge that the higher computational cost of adversarial training translates to a higher carbon footprint than vanilla training. Nevertheless, we believe that the potential societal benefits of robustness to attacks outweigh this drawback. 9 Acknowledgements We thankfully acknowledge the support of the Hasler Foundation (Grant No. 16076) for this work.
1. What is the main contribution of the paper regarding adversarial training? 2. What are the strengths of the paper's mathematical analysis? 3. What are the weaknesses of the paper's experiments and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors provide a rigorous and formal mathematical analysis of the loss surfaces of adversarial training methods. Their analysis shows that adversarial training can fail to converge, and based on this finding they suggest a training schedule for adversarial training that can mitigate these failures. Strengths -The paper provides a rigorous mathematical analysis of the loss surfaces of adversarial training, which is a useful insight -The paper shows that there is mathematical proof that adversarial training can fail to converge -The paper suggests a good way of overcoming this using a periodic adversarial training schedule Weaknesses The only tests were performed on MNIST and CIFAR10 with two different networks (LeNet and VGG/Resnet18 respectively). It would have been more convincing to see how the method compares with different datasets using the same network to preclude the network structure from explaining any differences. Additionally, a larger and more realistic dataset like imagenet would have been more convincing. Post-rebuttal: A quick literature review gave several recent techniques performing adversarial training on imagenet, although they are designed specifically with efficiency in mind as an extension of standard adversarial training to make it feasible on this large dataset. Based on this and the general portion of the rebuttal, I am not sure my reviews are adequately addressed.
NIPS
Title On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them Abstract We analyze the influence of adversarial training on the loss landscape of machine learning models. To this end, we first provide analytical studies of the properties of adversarial loss functions under different adversarial budgets. We then demonstrate that the adversarial loss landscape is less favorable to optimization, due to increased curvature and more scattered gradients. Our conclusions are validated by numerical analyses, which show that training under large adversarial budgets impedes the escape from suboptimal random initializations, causes non-vanishing gradients and makes the model find sharper minima. Based on these observations, we show that a periodic adversarial scheduling (PAS) strategy can effectively overcome these challenges, yielding better results than vanilla adversarial training while being much less sensitive to the choice of learning rate. 1 Introduction State-of-the-art deep learning models have been found to be vulnerable to adversarial attacks [18, 34, 45]. Imperceptible perturbations of the input can make the model produce wrong predictions with high confidence. This raises concerns about deep learning’s deployment in safety-critical applications. Although many training algorithms have been proposed to counter such adversarial attacks, most of them were observed to fail when facing stronger attacks [4, 10]. Adversarial training [33] is one of the few exceptions, so far remaining effective and thus popular. It uses adversarial examples generated with the attacker’s scheme to update the model parameters. However, adversarial training and its variants [2, 6, 24, 42, 53] have been found to have a much larger generalization gap [37] and to require larger model capacity for convergence [49]. Although recent works [6, 40] show that the adversarial training error reduces to almost 0% with a large enough model and that the generalization gap can be narrowed by using more training data, convergence in adversarial training remains much slower than in vanilla training on clean data. This indicates discrepancies in the underlying optimization landscapes. While much work has studied the loss landscape of deep networks in vanilla training [12, 13, 14, 15, 31], such an analysis in adversarial training remains unaddressed. Here we study optimization in adversarial training. Vanilla training can be considered as a special case where no perturbation is allowed, i.e., zero adversarial budget. Therefore, we focus on the impact of the adversarial budget size on the loss landscape.
In this context, we investigate from a theoretical and empirical perspective how different adversarial budget sizes affect the loss landscape to make optimization more challenging. Our analyses start with linear models and then generalize to nonlinear deep learning ones. We study the whole training process and identify different behaviors in the early and final stages of training. Based on our observations, we then introduce a scheduling strategy for the adversarial budget during training. We empirically show this scheme to yield better performance and to be less sensitive to the learning rate than vanilla adversarial training. Contributions. Our contributions can be summarized as follows. 1) From a theoretical perspective, we show that, for linear models, adversarial training under a large enough budget produces a constant classifier. For general nonlinear models, we identify the existence of an abrupt change in the adversarial examples, which makes the loss landscape less smooth. This causes severe gradient scattering and slows down the convergence of training. 2) Our numerical analysis shows that training under large adversarial budgets hinders the model from escaping suboptimal initial regions, while also causing large non-vanishing gradients in the final stage of training. Furthermore, by Hessian analysis, we evidence that the minima reached in the adversarial loss landscape are sharper when the adversarial budget is bigger. 3) We show that a periodic adversarial scheduling (PAS) strategy, corresponding to a cyclic adversarial budget scheduling scheme with warmup, addresses these challenges. Specifically, it makes training less sensitive to the choice of learning rate and yields better robust accuracy than vanilla adversarial training without any computational overhead. Notation and Terminology. We use plain letters, bold lowercase letters and bold uppercase letters to represent scalars, vectors and matrices, respectively. ‖v‖ represents the Euclidean norm of vector v and [K] is an abbreviation of the set {0, 1, 2, ..., K − 1}. In a classification problem {(xi, yi)}_{i=1}^{N}, where (xi, yi) ∈ Rm × [K], the classifier consists of a logit function f : Rm → RK, which is usually a neural network, and a risk function ℓ : RK × [K] → R, which is the softmax cross-entropy loss. The adversarial budget S(p)ε(x) of a data point x, whose size is ε, is defined by the lp norm constraint {x′ | ‖x − x′‖p ≤ ε}, and we use Sε(x) to denote the l∞ constraint for simplicity. Given the model parameters θ ∈ Θ, we use g(x, θ) : Rm × Θ → R to denote the loss function for an individual data point, ignoring the label y for simplicity. If we use Lε(θ) to denote the adversarial loss function under the adversarial budget S(p)ε(x), adversarial training solves the min-max problem
min_θ Lε(θ) := (1/N) ∑_{i=1}^{N} gε(xi, θ),   where   gε(xi, θ) := max_{x′i ∈ S(p)ε(xi)} g(x′i, θ).   (1)
L(θ) := L0(θ) is the vanilla loss function. If ε ≠ 0, the adversarial example x′i, i.e., the worst-case input in S(p)ε(xi), depends on the model parameters. We call the landscapes of the functions L(θ) and Lε(θ) the vanilla and adversarial loss landscape, respectively. Similarly, we use E(θ) and Eε(θ) to represent the clean error and the robust error under the adversarial budget S(p)ε(x). In this paper, we call a function smooth if it is C1-continuous. We use θ0 to denote the initial parameters.
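To make the min-max problem in (1) concrete, the following PyTorch sketch performs one PGD-based adversarial training step under an l∞ budget of size ε. This is our own illustrative code, not the authors' implementation: the step-size heuristic, the default number of PGD iterations, and all names are assumptions, and the input x is assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, step_size, n_steps):
    # Inner maximization of (1): iterated FGSM with a random start, projected back
    # onto the l_inf ball of radius eps and onto the valid input range [0, 1].
    delta = (torch.rand_like(x) * 2 - 1) * eps
    for _ in range(n_steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach()
        delta = (x + delta).clamp(0.0, 1.0) - x  # keep x + delta in the data range
    return x + delta

def adversarial_training_step(model, optimizer, x, y, eps, n_steps=10):
    step_size = 2.5 * eps / n_steps  # a common heuristic, not taken from the paper
    x_adv = pgd_attack(model, x, y, eps, step_size, n_steps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)  # outer minimization on worst-case inputs
    loss.backward()
    optimizer.step()
    return loss.item()
```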
1. What is the main contribution of the paper regarding the loss landscape of adversarial training? 2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness? 3. What are the weaknesses of the paper, especially regarding its limitations in testing against other attack methods and lack of visualization? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper investigates the loss landscape of adversarial training and shows theoretical results that the landscape is spiky and hard to train on. More specifically, the authors prove that, under Lipschitz continuity assumptions on the loss and its gradients, the probability that the gradient norm is large increases quadratically with the norm of the adversarial perturbation. The empirical results show that even for activation functions like ReLU, the so-called "gradient scattering" phenomenon still exists. Additional analysis of the Hessian shows that it is indeed the gradient components corresponding to the large eigenvalues of the Hessian that are responsible for the non-smoothness of the adversarial training loss landscape. The paper further proposes to periodically reset the adversarial budget so that the optimization process can be smoothed (in an amortized way). The empirical results show that this simple periodic budget schedule significantly improves the performance. Strengths 1. Under simple assumptions, the authors prove that the optimization landscape is highly non-smooth and that large gradients are likely to appear more frequently as the perturbation norm grows. The authors also show empirical evidence that the gradient scattering phenomenon occurs. 2. The paper introduces a simple yet effective training schedule with almost no additional computational cost. Weaknesses 1. The authors only test the simplest attack method. It is not clear whether the method works against other attack methods. 2. A loss landscape visualization like [1] would better support the theoretical claims about the landscape. [1] Hao Li, Zheng Xu, Gavin Taylor, and Tom Goldstein. Visualizing the loss landscape of neural nets. arXiv preprint arXiv:1712.09913, 2017.
NIPS
Title Monocular Dynamic View Synthesis: A Reality Check Abstract We study the recent progress on dynamic view synthesis (DVS) from monocular video. Though existing approaches have demonstrated impressive results, we show a discrepancy between the practical capture process and the existing experimental protocols, which effectively leaks in multi-view signals during training. We define effective multi-view factors (EMFs) to quantify the amount of multi-view signal present in the input capture sequence based on the relative camera-scene motion. We introduce two new metrics: co-visibility masked image metrics and correspondence accuracy, which overcome the issue in existing protocols. We also propose a new iPhone dataset that includes more diverse real-life deformation sequences. Using our proposed experimental protocol, we show that the state-of-the-art approaches observe a 1-2 dB drop in masked PSNR in the absence of multi-view cues and a 4-5 dB drop when modeling complex motion. Code and data can be found at http://hangg7.com/dycheck. 1 Introduction Dynamic scenes are ubiquitous in our everyday lives – people moving around, cats purring, and trees swaying in the wind. The ability to capture 3D dynamic sequences in a “casual” manner, particularly through monocular videos taken by a smartphone in an uncontrolled environment, will be a cornerstone in scaling up 3D content creation, performance capture, and augmented reality. Recent works have shown promising results in dynamic view synthesis (DVS) from a monocular video [1, 2, 3, 4, 5, 6, 7, 8]. However, upon close inspection, we found that there is a discrepancy between the problem statement and the experimental protocol employed. As illustrated in Figure 1, the input data to these algorithms either contain frames that “teleport” between multiple camera viewpoints at consecutive time steps, which is impractical to capture from a single camera, or depict quasi-static scenes, which do not represent real-life dynamics. In this paper, we provide a systematic means of characterizing the aforementioned discrepancy and propose a better set of practices for model fitting and evaluation. Concretely, we introduce effective multi-view factors (EMFs) to quantify the amount of multi-view signal in a monocular sequence based on the relative camera-scene motion. With EMFs, we show that the current experimental protocols operate under an effectively multi-view regime. For example, our analysis reveals that the aforementioned practice of camera teleportation makes the existing capture setup akin to an Olympic runner taking a video of a moving scene without introducing any motion blur. The reason behind the existing experimental protocol is that monocular DVS is a challenging problem that is also hard to evaluate. Unlike static novel-view synthesis where one may simply evaluate on held-out views of the captured scene, in the dynamic case, since the scene changes over time, evaluation requires another camera that observes the scene from a different viewpoint at the same time. However, this means that the test views often contain regions that were never observed in the input sequence. Camera teleportation, i.e., constructing a temporal sequence by alternating samples from different cameras, addresses this issue at the expense of introducing multi-view cues, which are unavailable in the practical single-camera capture. † Work partially done as part of HG’s internship at Adobe.
We propose two sets of metrics to overcome this challenge without the use of camera teleportation. The first metric enables evaluating only on pixels that were seen in the input sequence by computing the co-visibility of every test pixel. The proposed co-visibility mask can be used to compute masked image metrics (PSNR, SSIM [9] and LPIPS [10]). While the masked image metrics measure the quality of rendering, they do not directly measure the quality of the inferred scene deformation. Thus, we also propose a second metric that evaluates the quality of established point correspondences by the percentage of correctly transferred keypoints (PCK-T) [11]. The correspondences may be evaluated between the input and test frames or even within the input frames, which enables evaluation on sequences that are captured with only a single camera. We conduct extensive evaluation on existing datasets [5, 7] as well as a new dataset that includes more challenging motion and diverse scenes. When tested on existing datasets without camera teleportation, the state-of-the-art methods observe a 1-2 dB drop in masked PSNR and a ~5% drop in PCK-T. When tested on complex motion with the proposed dataset, existing approaches observe another 4-5 dB drop in masked PSNR and a ~30% drop in PCK-T, suggesting a large room for improvement. We encourage future works to report EMFs on new data and adopt our experimental protocol to evaluate monocular DVS methods. Code and data are available at our project page. 2 Related work Non-rigid structure from motion (NR-SfM). Traditional NR-SfM tackles the task of dynamic 3D inference by fitting parametric 3D morphable models [12, 13, 14, 15, 16, 17, 18, 19], or fusing non-parametric depth scans of generic dynamic scenes [20, 21, 22, 23, 24]. All of these approaches aim to recover accurate surface geometry at each time step and their performance is measured with ground truth 3D geometry, or with 2D correspondences via PCK [25] when such ground truth is not available. In this paper, we analyze recent dynamic view synthesis methods whose goal is to generate a photo-realistic novel view. Due to their goal, these methods do not focus on evaluation against ground truth 3D geometry, but we take inspiration from prior NR-SfM works to evaluate the quality of the inferred 3D dynamic representation based on correspondences. We also draw inspiration from previous NR-SfM work that analyzed camera/object speed and 3D reconstruction quality [26, 27]. Monocular dynamic neural radiance fields (dynamic NeRFs). Dynamic NeRFs reconstruct moving scenes from multi-view inputs or given a pre-defined deformation template [28, 29, 30, 31, 32, 33, 34, 35]. In contrast, there is a series of recent works that seek to synthesize high-quality novel views of generic dynamic scenes given a monocular video [1, 2, 3, 4, 5, 6, 7, 8]. These works can be classified into two categories: a deformed scene is directly modeled as a time-varying NeRF in the world space [1, 4, 6] or as a NeRF in canonical space with a time-dependent deformation [2, 3, 5, 7, 8]. The evaluation protocol in these works is inherited from the original static-scene NeRF [36], which quantifies the rendering quality of held-out viewpoints using image metrics, e.g., PSNR. However, in dynamic scenes, PSNR from an unseen camera view may not be meaningful since the novel view may include regions that were never seen in the training view (unless the method can infer unseen regions using learning-based approaches).
Existing approaches resolve this issue by incorporating views from multiple cameras during training, which we show results in an effectively multi-view setup. We introduce metrics to measure the difficulty of an input sequence, as well as a monocular dataset with a new evaluation protocol and metrics, which show that existing methods have a large room for improvement. 3 Effective multi-view in a monocular video We consider the problem of dynamic view synthesis (DVS) from a monocular video. A monocular dynamic capture consists of a single camera observing a moving scene. The lack of simultaneous multi-view in the monocular video makes this problem more challenging than the multi-view setting, such as reconstructing moving people from multiple cameras [30, 34, 37]. Contrary to the conventional perception that the effect of multi-view is binary for a capture (single versus multiple cameras), we show that it can be characterized on a continuous spectrum. Our insight is that a monocular sequence contains effective multi-view cues when the camera moves much faster than the scene, though technically the underlying scene is observed only once at each time step. 3.1 Characterizing effective multi-view in a monocular video Although a monocular video only sees the scene from one viewpoint at a time, depending on the capture method, it can still contain cues that are effectively similar to those captured by a multi-view camera rig, which we call effective multi-view. As shown in Figure 2, when the scene moves significantly slower than the camera (to the far right end of the axis), the same scene is observed from multiple views, resulting in a multi-view capture. In this case, DVS reduces to a well-constrained multi-view stereo problem at each time step. Consider another case where the camera moves significantly faster than the scene, so that it observes roughly the same scene from different viewpoints. As the camera motion approaches infinity, this again reduces the monocular capture to a multi-view setup. We therefore propose to characterize the amount of multi-view cues by the relative camera-scene motion. 3.2 Quantifying effective multi-view in a monocular video For practicality, we propose two metrics, referred to as effective multi-view factors (EMFs). The first metric, the full EMF Ω, is defined as the ratio of the motion magnitude of the camera to that of the scene, which in theory characterizes the effective multi-view perfectly but in practice can be expensive and challenging to compute. The second metric, the angular EMF ω, is defined as the camera angular velocity around the scene look-at point, which only considers the camera motion; while approximate, it is easy to compute and characterizes object-centric captures well. Full EMF Ω: ratio of camera-scene motion magnitude. Consider a monocular video of a moving scene over a set of time steps $\mathcal{T}$. At each discrete time $t \in \mathcal{T}$, let the camera's 3D location be $\mathbf{o}_t$. We consider each point $\mathbf{x}_t$ on the domain of the observed scene surface $S^2_t \subset \mathbb{R}^3$. We define the camera-scene motion as the expected relative ratio, $\Omega = \mathbb{E}_{t,t+1 \in \mathcal{T}}\big[\, \mathbb{E}_{\mathbf{x}_t \in S^2_t}\big[ \frac{\|\mathbf{o}_{t+1} - \mathbf{o}_t\|}{\|\mathbf{x}_{t+1} - \mathbf{x}_t\|} \big] \big]$ (1), where the denominator $\mathbf{x}_{t+1} - \mathbf{x}_t$ denotes the 3D scene flow and the numerator $\mathbf{o}_{t+1} - \mathbf{o}_t$ denotes the 3D camera motion, both over one time step forward. The 3D scene flow can be estimated via the 2D dense optical flow field and the metric depth map when available, or a monocular depth map from off-the-shelf approaches [38, 39] in the general case.
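As a rough illustration of Eq. (1), the sketch below computes Ω with NumPy from per-frame camera centers and already-estimated 3D scene-flow vectors; how the scene flow is obtained (optical flow plus depth, as discussed above) is outside the sketch, and the small eps guarding against near-static points is an added assumption.

```python
import numpy as np

def full_emf(camera_centers, scene_flow, eps=1e-8):
    """Full EMF (Eq. 1): expected ratio of camera motion to scene motion.

    camera_centers: (T, 3) array, camera location o_t at each time step.
    scene_flow:     list of T-1 arrays, each (N_t, 3), giving x_{t+1} - x_t
                    for the observed surface points at time t.
    """
    ratios = []
    for t in range(len(camera_centers) - 1):
        cam_motion = np.linalg.norm(camera_centers[t + 1] - camera_centers[t])
        point_motion = np.linalg.norm(scene_flow[t], axis=-1)  # (N_t,)
        # Inner expectation over observed surface points at time t.
        ratios.append(np.mean(cam_motion / (point_motion + eps)))
    # Outer expectation over consecutive time steps.
    return float(np.mean(ratios))
```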
Please see the Appendix for more details. Note that Ω in theory captures the effective multi-view factor for any sequence. However, in practice, 3D scene flow estimation is an actively studied problem and may suffer from noisy or costly predictions. Angular EMF ω: camera angular velocity. We introduce a second metric ω that is easy to compute in practice. We make an additional assumption that the capture has a single look-at point in world space, which often holds true, particularly for captures involving a single centered subject. Specifically, given a look-at point $\mathbf{a}$ obtained by triangulating the optical axes of all cameras (as per [5]) and the frame rate $N$, the camera angular velocity ω is computed as a scaled expectation, $\omega = \mathbb{E}_{t,t+1 \in \mathcal{T}}\Big[ \arccos\Big( \frac{\langle \mathbf{a} - \mathbf{o}_t,\, \mathbf{a} - \mathbf{o}_{t+1} \rangle}{\|\mathbf{a} - \mathbf{o}_t\| \cdot \|\mathbf{a} - \mathbf{o}_{t+1}\|} \Big) \Big] \cdot N$ (2). Note that even though ω only considers the camera motion, it is indicative of effective multi-view in the majority of existing captures, which we describe in Section 4.1. For both Ω and ω, the larger the value, the more multi-view cues the sequence contains. For future works introducing new input sequences, we recommend always reporting the angular EMF for its simplicity and reporting the full EMF when possible. Next, we inspect the existing experimentation practices under the lens of effective multi-view. 4 Towards better experimentation practice In this section, we reflect on the existing datasets and find that they operate under the effective multi-view regime, with either teleporting camera motion or quasi-static scene motion. The reason behind the existing protocol is that monocular DVS is challenging from both the modeling and evaluation perspective. While the former challenge is well known, the latter is less studied, as we expand below. To overcome the existing challenge in evaluation and enable future research to experiment with casually captured monocular video, we propose a better toolkit, including two new metrics and a new dataset of complex motion in everyday life. 4.1 Closer look at existing datasets We investigate the datasets used for evaluation in D-NeRF [3], HyperNeRF [7], Nerfies [5], and NSFF [4]. Table 1 shows their statistics. We evaluate the amount of effective multi-view cues via the proposed EMFs, shown in Figure 3. We find that existing datasets have large EMF values on both metrics. For example, the HyperNeRF dataset has an ω as large as ~200◦/s. To put these numbers in context, a person imaging an object 3m away has to move at 1m/s to get an ω = 20◦/s (close to the statistics in the proposed dataset). Some datasets exhibit ω higher than 120◦/s, which is equivalent to a camera motion faster than the Olympic 100m sprint record, without incurring any motion blur. Visualizing the actual training data shown in Figure 1 reveals that existing datasets feature non-practical captures of either (1) teleporting/fast camera motion or (2) quasi-static/slow scene motion. The former is not representative of practical captures from a hand-held camera, e.g., a smartphone, while the latter is not representative of moving objects in daily life. Note that, out of the 23 multi-camera sequences that these prior works used for quantitative evaluation, 22 have teleporting camera motion, and 1 has quasi-static scene motion – the CURLS sequence shown in the 5th column of Figure 1. All 13 single-camera sequences from HyperNeRF [7] used for qualitative evaluation have quasi-static scene motion.
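The angular EMF ω of Eq. (2), which produces the per-dataset numbers quoted above, only needs the camera trajectory, the triangulated look-at point, and the frame rate. A minimal sketch, assuming these quantities are already available (the look-at triangulation itself is omitted), could look as follows.

```python
import numpy as np

def angular_emf(camera_centers, look_at, fps):
    """Angular EMF (Eq. 2): mean camera angular velocity (rad/s) around the look-at point.

    camera_centers: (T, 3) array, camera location o_t at each time step.
    look_at:        (3,) triangulated look-at point a.
    fps:            frame rate N of the capture.
    """
    v = look_at[None, :] - camera_centers               # (T, 3) vectors a - o_t
    v = v / np.linalg.norm(v, axis=-1, keepdims=True)   # normalize viewing directions
    # Angle between consecutive viewing directions (clip for numerical safety).
    cos_angle = np.clip(np.sum(v[:-1] * v[1:], axis=-1), -1.0, 1.0)
    return float(np.mean(np.arccos(cos_angle)) * fps)

# To report degrees per second, as in the tables discussed above:
# omega_deg = np.degrees(angular_emf(centers, a, fps))
```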
The four datasets also share a similar data protocol for generating effective multi-view input sequences from the original multi-camera rig capture. In Figure 4, we illustrate the teleporting protocol used in Nerfies [5] and HyperNeRF [7] as a canonical example. They sample alternating frames from two physical cameras (left and right in this case) mounted on a rig to create the training data. NSFF [4] samples alternating frames from 24 cameras based on the data released by Yoon et al. [40]. D-NeRF [3] experiments on synthetic dynamic scenes where cameras are randomly placed on a fixed hemisphere at every time step, in effect teleporting between 100-200 cameras. We encourage the reader to visit our project page to view the input videos from these datasets. Existing works adopt effective multi-view capture for two reasons. First, it makes monocular DVS more tractable. Second, it enables evaluating novel views on the full image, without worrying about the visibility of each test pixel, as all camera views were visible during training. We show this effect in Figure 5. When trained with camera teleportation (3rd column), the model can generate a high-quality full image from the test view. However, when trained without camera teleportation (4th column), the model struggles to hallucinate unseen pixels since NeRFs [5, 36] are not designed to predict completely unseen portions of the scene, unless they are specifically trained for generalization [41]. Next, we propose new metrics that enable evaluation without using camera teleportation. Note that when the model is trained without camera teleportation, the rendering quality also degrades, which we evaluate as well. 4.2 Our proposed metrics While the existing setup allows evaluating on the full rendered image from the test view, the performance under such an evaluation protocol, particularly with teleportation, confounds the efficacy of the proposed approaches with the multi-view signal present in the input sequence. To evaluate with an actual monocular setup, we propose two new metrics that evaluate only on seen pixels and measure the correspondence accuracy of the predicted deformation. Co-visibility masked image metrics. Existing works evaluate DVS models with image metrics on the full image, e.g., PSNR, SSIM [9] and LPIPS [10], following novel-view synthesis evaluation on static scenes [36]. However, in dynamic scenes, particularly for monocular capture with multi-camera validation, the test view contains regions that may not have been observed at all by the training camera. To circumvent this issue without resorting to camera teleportation, for each pixel in the test image, we propose co-visibility masking, which tests how many times a test pixel has been observed in the training images. Specifically, we use optical flow to compute correspondences between every test image and the training images, and only keep test pixels that have enough correspondences in the training images via thresholding. This results in a mask, illustrated in Figure 5, which we use to confine the image metrics. We follow the common practice from the image generation literature and adopt masked metrics, mPSNR and mLPIPS [42, 43]. Note that NSFF [4] adopts similar metrics but for evaluating the rendering quality on foreground versus background regions. We additionally report mSSIM by partial convolution [44], which only considers seen regions during its computation. More details are in the Appendix.
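To make the masked image metrics concrete, below is a minimal sketch of a co-visibility-masked PSNR. The construction of the mask itself (flow-based co-visibility counting and thresholding, as described above) is assumed to be done elsewhere, and images are assumed to lie in [0, 1].

```python
import numpy as np

def masked_psnr(pred, target, covis_mask, max_val=1.0):
    """PSNR computed only over co-visible pixels.

    pred, target: (H, W, 3) images with values in [0, max_val].
    covis_mask:   (H, W) boolean mask, True where the test pixel was seen
                  often enough in the training frames.
    """
    diff2 = (pred - target) ** 2
    mse = diff2[covis_mask].mean()          # average over seen pixels only
    return float(10.0 * np.log10(max_val ** 2 / mse))
```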
Using masked image metrics, we quantify the performance gap in rendering when a model is trained with or without multi-view cues in Section 5.1. Percentage of correctly transferred keypoints (PCK-T). Correspondences lie at the heart of traditional non-rigid reconstruction [21], but are overlooked in the current image-based evaluation. We propose to evaluate 2D correspondences across training frames with the percentage of correctly transferred keypoints (PCK-T) [11], which directly evaluates the quality of the inferred deformation. Specifically, we sparsely annotate 2D keypoints across input frames to ensure that each keypoint is fully observed during training. For correspondence readout from existing methods, we use either root finding [45] or scene flow chaining. Please see the Appendix for details on our keypoint annotation, correspondence readout, and metric computation. As shown in Figure 6, evaluating correspondences reveals that high-quality image rendering does not necessarily result in accurate correspondences, which indicates issues in the underlying surface, due to the ambiguous nature of the problem. 4.3 Proposed iPhone dataset Existing datasets can be rectified by removing camera teleportation and evaluated using the proposed metrics, as we do in Section 5.1. However, even after removing camera teleportation, the existing datasets are still not representative of practical in-the-wild capture. First, the existing datasets are limited in motion diversity. Second, the evaluation baseline in existing datasets is small, which can hide issues with incorrect deformation and the resulting geometry. For these reasons, we propose a new dataset called the iPhone dataset, shown in Figure 7. In contrast to existing datasets with repetitive object motion, we collect 14 sequences featuring non-repetitive motion, from various categories such as generic objects, humans, and pets. We deploy three cameras for multi-camera capture – one hand-held moving camera for training and two static cameras with a large baseline for evaluation. Furthermore, our iPhone dataset comes with metric depth from the lidar sensors, which we use to provide ground-truth depth for supervision. In Section 5.2, we show that depth supervision, together with other regularizations, is beneficial for training DVS models. Please see the Appendix for details on our multi-camera capture setup, data processing, and more visualizations. 5 Reality check: re-evaluating the state of the art In this section, we conduct a series of empirical studies to disentangle the recent progress in dynamic view synthesis (DVS) given a monocular video from effective multi-view in the training data. We evaluate current state-of-the-art methods when the effective multi-view factor (EMF) is low. Existing approaches and baselines. We consider the following state-of-the-art approaches for our empirical studies: NSFF [4], Nerfies [5] and HyperNeRF [7]. We choose them as canonical examples for other approaches [1, 2, 3, 6, 8, 34, 35], discussed in Section 2. We also evaluate time-conditioned NeRF (T-NeRF) as a common baseline [1, 3, 4]. Unlike the state-of-the-art methods, it is not possible to extract correspondences from a T-NeRF. A summary of these methods can be found in the Appendix. Datasets. We evaluate on the existing datasets as well as the proposed dataset. For existing datasets, we use the multi-camera captures accompanying Nerfies [5] and HyperNeRF [7] for evaluation.
Due to their similar capture protocol, we consider them as a single dataset in our experiment (denoted as the Nerfies-HyperNeRF dataset). It consists of 7 sequences in total, which we augment with keypoint annotations. Our dataset has 7 multi-camera captures and 7 single-camera captures. We evaluate novel-view synthesis on the multi-camera captures and correspondence on all captures. Our data adopts the data format from the Nerfies-HyperNeRF dataset, with additional support for depth and correspondence labels. All videos are at 480p resolution and all dynamic scenes are inward-facing. Masked image and correspondence metrics. Following Section 4.2, we evaluate co-visibility masked image metrics and the correspondence metric. We report masked image metrics: mPSNR, mSSIM [9, 44], and mLPIPS [4, 10, 42, 43]. We visualize the rendering results with the co-visibility mask. For the correspondence metric, we report the percentage of correctly transferred keypoints (PCK-T) [11] with threshold ratio α = 0.05. Additional visualizations of full image rendering and inferred correspondences can be found in the Appendix. Implementation details. We consolidate Nerfies [5] and HyperNeRF [7] in one codebase using JAX [46]. Compared to the original official code releases, our implementation aligns all training and evaluation details between models and allows correspondence readout. Our implementation reproduces the quantitative results in the original papers. We implement T-NeRF in the same codebase. For NSFF [4], we tried both the official code base [47] and a public third-party re-implementation [48], where the former fails to converge on our proposed iPhone dataset while the latter works well. We thus report results using the third-party re-implementation. However, note that both the original and the third-party implementation represent the dynamic scene in normalized device coordinates (NDC). As NDC is designed for forward-facing rather than inward-facing scenes, layered artifacts may appear due to its log-scale sampling rate in the world space, as shown in Figure 9. More details about aligning the training procedure and remaining differences are provided in the Appendix. Code, pretrained models, and data are available on the project page. 5.1 Reality check on the Nerfies-HyperNeRF dataset Impact of effective multi-view. We first study the impact of effective multi-view on the Nerfies-HyperNeRF [5, 7] dataset. In this experiment, we rectify the effective multi-view sequences by only using the left camera during training, as opposed to both the left and right cameras, as illustrated in Figure 4. We denote the original setting as "teleporting" and the rectified sequences as "non-teleporting". We train all approaches under these two settings with the same held-out validation frames and the same set of co-visibility masks computed from common training frames. In Figure 8 (Top), all methods perform better across all metrics when trained under the teleporting setting compared to the non-teleporting one, with the exception of PCK-T for NSFF. We conjecture that this is because NSFF has additional optical flow supervision, which is more accurate without camera teleportation. In Figure 8 (Bottom), we show qualitative results using Nerfies (we include visualizations of the other methods in the Appendix). Without effective multi-view, Nerfies fails at modeling physically plausible shapes for the broom and wires.
Our results show that the effective multi-view in the existing experimental protocol inflates the synthesis quality of prior methods, and that truly monocular captures are more challenging. Benchmark results without camera teleportation. In Table 2 and Figure 9, we report the quantitative and qualitative results under the non-teleporting setting. Note that our implementation of the T-NeRF baseline performs the best among all four evaluated models in terms of mPSNR and mSSIM. In Figure 9, we confirm this result since T-NeRF renders high-quality novel views for both sequences. HyperNeRF produces the most photorealistic renderings, measured by mLPIPS. However, it also produces distorted artifacts that do not align well with the ground truth (e.g., the incorrect shape in the CHICKEN sequence). 5.2 Reality check on the proposed iPhone dataset Ablation study on improving the state of the art. We find that existing methods perform poorly out-of-the-box on the proposed iPhone dataset with more diverse and complex real-life motions. In Figure 10 (Bottom), we demonstrate this finding with HyperNeRF [7], as it achieves the best mLPIPS on the Nerfies-HyperNeRF dataset. Shown in the 3rd column, HyperNeRF produces visually implausible results with ghosting effects. Thus, we explored incorporating additional regularizations from recent advances in neural rendering. Concretely, we consider the following: (+B) random background compositing [32]; (+D) a depth loss on the ray matching distance [1, 4]; and (+S) a sparsity regularization for the scene surface [49]. In Figure 10 (Top), we show quantitative results from the ablation. In Figure 10 (Bottom), we show visualizations of the impact of each regularization. Adding additional regularizations consistently boosts model performance. While we find the random background compositing regularization particularly helpful, extra depth supervision and surface regularization further improve the quality, e.g., the fan region of the paper windmill. Benchmarked results. In Figure 11, we show qualitative results from our benchmark using the best model settings from the ablation study, denoted as "++". Note that it is non-trivial to apply the same enhancements to NSFF due to its NDC formulation, so we keep it as-is. We visualize the lidar depth re-projection from the training view (1st column) to the test view (2nd column), as a reference for qualitative comparison (3rd column). Note that the white region is occluded from the input view, whereas the green region is occluded from most of the input video frames. We observe that existing approaches do not handle complex deformation well. For example, all models fail at fusing a valid human shape on the SPACE OUT sequence. In Table 3, we find a similar trend as on the Nerfies-HyperNeRF dataset: the baseline T-NeRF performs the best in terms of mPSNR and mSSIM while HyperNeRF produces the most photorealistic renderings in terms of mLPIPS. The overall synthesis quality and correspondence accuracy of all methods drop considerably compared to the results on the Nerfies-HyperNeRF dataset. Taking Nerfies as an example, it drops 4.4 dB in mPSNR, 69.6% in mLPIPS, and 40.1% in PCK-T. Our study suggests an opportunity for large improvement when modeling complex motion. 6 Discussion and recommendation for future works In this work, we expose issues in the common practice and establish systematic means to calibrate performance metrics of existing and future works, in the spirit of papers like [50, 51, 52].
We provide initial attempts toward characterizing the difficulty of a monocular video for dynamic view synthesis (DVS) in terms of effective multi-view factors (EMFs). In practice, there are other challenging factors such as variable appearance, lighting conditions, motion complexity, and more. We leave their characterization for future work. We recommend that future works visualize the input sequences and report EMFs when demonstrating their results. We also recommend that future works evaluate the correspondence accuracy and strive to establish better correspondences for DVS. Acknowledgements. We would like to thank Zhengqi Li and Keunhong Park for valuable feedback and discussions, and Matthew Tancik and Ethan Weber for proofreading. We are also grateful to our pets: Sriracha, Haru, and Mochi, for being good during capture. This project is generously supported in part by the CONIX Research Center, sponsored by DARPA, as well as the BDD and BAIR sponsors.
1. What is the focus of the paper regarding novel view synthesis from monocular videos? 2. What are the strengths of the proposed effective multi-view factors (EMF) and the two new metrics, Masked PSNR and Correspondence? 3. Do you have any concerns about the applicability of the correspondence metric? 4. How does the reviewer assess the impact of camera angular velocity on the proposed approach? 5. Why did the authors choose to turn off view direction and appearance encoding during training? 6. Are there any typos or minor errors in the review that should be addressed? 7. What are the limitations of the proposed approach that the author could discuss further?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studies effective metrics for evaluating recent works on novel view synthesis from monocular videos. The proposed effective multi-view factors (EMFs) measure the multi-view signals present in the evaluation of view synthesis. The paper also comes with a new dataset that tries to mitigate multi-view signals in the captures. Two new metrics are proposed to measure the quality of view synthesis and motion. Several state-of-the-art methods are further improved, but still struggle in cases with complex motion and little multi-view signal. Strengths And Weaknesses Strengths EMF The effective multi-view factors measure the amount of "multi-viewness" in the capture. Such "multi-viewness" makes the task of novel view synthesis on dynamic scenes easier, as the problem reduces to a multi-view setup. EMF consists of scene motion and camera angular velocity, and a higher value in either suggests strong multi-viewness in the capture. EMF basically tells how easy or difficult the capture is to reconstruct. Such a metric is nice to have in future dataset releases. Masked PSNR and Correspondence Two new metrics are proposed to further measure the quality of the synthesized image. Both are tailored for dynamic scenes, but I see them as useful in static scenes as well. Masked PSNR only calculates PSNR in the valid region, and PCK-T tells whether the predicted motion is correct. The design choices are smart. Weaknesses Correspondence I see that masked PSNR can be applied to any method as long as the ground truth pose and depth are available. However, the correspondence metric lacks generalizability. As shown in the paper, only methods that explicitly model motion can calculate the PCK-T score. It does not apply to methods like T-NeRF. Camera angular velocity Nerfies and HyperNeRF have a high camera angular velocity in Tbl. 1. As far as I see, this is because the capture switches frames between two cameras. If the captures are reordered in a way that uses frames from camera 1 first and camera 2 second (or only frames from camera 1), will Nerfies still work? If so, it will have a much smaller angular velocity and thus a lower EMF. It would be interesting to see how the method performs vs. EMF. No view direction or appearance encoding It is mentioned in L297 that both are turned off during training. But the goal of PSNR_M is to evaluate the quality of NVS in the seen region. In practice, view direction and appearance encoding are essential in synthesizing new views. Turning them off will hurt the PSNR_M score a lot. Is there a specific reason for doing so aside from overfitting? Typo L32 agnitude -> magnitude Questions See above Limitations Limitations are not discussed. The authors could discuss the use cases of EMF and the two new metrics further.
NIPS
Title Monocular Dynamic View Synthesis: A Reality Check Abstract We study the recent progress on dynamic view synthesis (DVS) from monocular video. Though existing approaches have demonstrated impressive results, we show a discrepancy between the practical capture process and the existing experimental protocols, which effectively leaks in multi-view signals during training. We define effective multi-view factors (EMFs) to quantify the amount of multi-view signal present in the input capture sequence based on the relative camera-scene motion. We introduce two new metrics: co-visibility masked image metrics and correspondence accuracy, which overcome the issue in existing protocols. We also propose a new iPhone dataset that includes more diverse real-life deformation sequences. Using our proposed experimental protocol, we show that the state-of-the-art approaches observe a 1-2 dB drop in masked PSNR in the absence of multi-view cues and 4-5 dB drop when modeling complex motion. Code and data can be found at http://hangg7.com/dycheck. 1 Introduction Dynamic scenes are ubiquitous in our everyday lives – people moving around, cats purring, and trees swaying in the wind. The ability to capture 3D dynamic sequences in a “casual” manner, particularly through monocular videos taken by a smartphone in an uncontrolled environment, will be a cornerstone in scaling up 3D content creation, performance capture, and augmented reality. Recent works have shown promising results in dynamic view synthesis (DVS) from a monocular video [1, 2, 3, 4, 5, 6, 7, 8]. However, upon close inspection, we found that there is a discrepancy between the problem statement and the experimental protocol employed. As illustrated in Figure 1, the input data to these algorithms either contain frames that “teleport” between multiple camera viewpoints at consecutive time steps, which is impractical to capture from a single camera, or depict quasi-static scenes, which do not represent real-life dynamics. In this paper, we provide a systematic means of characterizing the aforementioned discrepancy and propose a better set of practices for model fitting and evaluation. Concretely, we introduce effective multi-view factors (EMFs) to quantify the amount of multi-view signal in a monocular sequence based on the relative camera-scene motion. With EMFs, we show that the current experimental protocols operate under an effectively multi-view regime. For example, our analysis reveals that the aforementioned practice of camera teleportation makes the existing capture setup akin to an Olympic runner taking a video of a moving scene without introducing any motion blur. The reason behind the existing experimental protocol is that monocular DVS is a challenging problem that is also hard to evaluate. Unlike static novel-view synthesis where one may simply evaluate on held-out views of the captured scene, in the dynamic case, since the scene changes over time, evaluation requires another camera that observes the scene from a different viewpoint at the same time. However, this means that the test views often contain regions that were never observed in the input sequence. Camera teleportation, i.e., constructing a temporal sequence by alternating samples † Work partially done as part of HG’s internship at Adobe. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). from different cameras, addresses this issue at the expense of introducing multi-view cues, which are unavailable in the practical single-camera capture. 
We propose two sets of metrics to overcome this challenge without the use of camera teleportation. The first metric enables evaluating only on pixels that were seen in the input sequence by computing the co-visibility of every test pixel. The proposed co-visibility mask can be used to compute masked image metrics (PSNR, SSIM [9] and LPIPS [10]). While the masked image metrics measure the quality of rendering, they do not directly measure the quality of the inferred scene deformation. Thus, we also propose a second metric that evaluates the quality of established point correspondences by the percentage of correctly transferred keypoints (PCK-T) [11]. The correspondences may be evaluated between the input and test frames or even within the input frames, which enable evaluation on sequences that are captured with only a single camera. We conduct extensive evaluation on existing datasets [5, 7] as well as a new dataset that includes more challenging motion and diverse scenes. When tested on existing datasets without camera teleportation, the state-of-the-art methods observe a 1-2 dB drop in masked PSNR and ~5% drop in PCK-T. When tested on complex motion with the proposed dataset, existing approaches observe another 4-5 dB drop in masked PSNR and ~30% drop in PCK-T, suggesting a large room for improvement. We encourage future works to report EMFs on new data and adopt our experimental protocol to evaluate monocular DVS methods. Code and data are available at our project page. 2 Related work Non-rigid structure from motion (NR-SfM). Traditional NR-SfM tackles the task of dynamic 3D inference by fitting parametric 3D morphable models [12, 13, 14, 15, 16, 17, 18, 19], or fusing non-parametric depth scans of generic dynamic scenes [20, 21, 22, 23, 24]. All of these approaches aim to recover accurate surface geometry at each time step and their performance is measured with ground truth 3D geometry or 2D correspondences with PCK [25] when such ground truth is not available. In this paper, we analyze recent dynamic view synthesis methods whose goal is to generate a photo-realistic novel view. Due to their goal, these methods do not focus on evaluation against ground truth 3D geometry, but we take inspiration from prior NR-SfM works to evaluate the quality of the inferred 3D dynamic representation based on correspondences. We also draw inspiration from previous NR-SfM work that analyzed camera/object speed and 3D reconstruction quality [26, 27]. Monocular dynamic neural radiance fields (dynamic NeRFs). Dynamic NeRFs reconstruct moving scenes from multi-view inputs or given pre-defined deformation template [28, 29, 30, 31, 32, 33, 34, 35]. In contrast, there is a series of recent works that seek to synthesize high-quality novel views of generic dynamic scenes given a monocular video [1, 2, 3, 4, 5, 6, 7, 8]. These works can be classified into two categories: a deformed scene is directly modeled as a time-varying NeRF in the world space [1, 4, 6] or as a NeRF in canonical space with a time-dependent deformation [2, 3, 5, 7, 8]. The evaluation protocol in these works inherit from the original static-scene NeRF [36] that quantify the rendering quality of held-out viewpoints using image metrics, e.g., PSNR. However, in dynamic scenes, PSNR from an unseen camera view may not be meaningful since the novel view may include regions that were never seen in the training view (unless the method can infer unseen regions using learning based approaches). 
Existing approaches resolve this issue by incorporating views from multiple cameras during training, which we show results in an effectively multi-view setup. We introduce metrics to measure the difficulties of an input sequence, a monocular dataset with new evaluation protocol and metrics, which show that existing methods have a large room for improvement. 3 Effective multi-view in a monocular video We consider the problem of dynamic view synthesis (DVS) from a monocular video. A monocular dynamic capture consists of a single camera observing a moving scene. The lack of simultaneous multi-view in the monocular video makes this problem more challenging compared to the multi-view setting, such as reconstructing moving people from multiple cameras [30, 34, 37]. Contrary to the conventional perception that the effect of multi-view is binary for a capture (single versus multiple cameras), we show that it can be characterized on a continuous spectrum. Our insight is that a monocular sequence contains effective multi-view cues when the camera moves much faster than the scene, though technically the underlying scene is observed only once at each time step. 3.1 Characterizing effective multi-view in a monocular video Although a monocular video only sees the scene from one viewpoint at a time, depending on the capture method, it can still contains cues that are effectively similar to those captured by a multi-view camera rig, which we call as effective multi-view. As shown in Figure 2, when the scene moves significantly slower than the camera (to the far right end of the axis), the same scene is observed from multiple views, resulting in multiview capture. In this case, DVS re- duces to a well-constrained multi-view stereo problem at each time step. Consider another case where the camera moves significantly faster compared to the scene so that it observes roughly the same scene from different viewpoints. As the camera motion approaches the infinity this again reduces the monocular capture to a multi-view setup. We therefore propose to characterize the amount of multi-view cues by the relative camera-scene motion. 3.2 Quantifying effective multi-view in a monocular video For practicality, we propose two metrics, referred to as effective multi-view factors (EMFs). The first metric, full EMF Ω is defined as the relative ratio between the motion magnitude of the camera to the scene, which in theory characterizes the effective multi-view perfectly, but in practice can be expensive and challenging to compute. The second metric, angular EMF ω is defined as the camera angular velocity around the scene look-at point, which only considers the camera motion; while approximate, it is easy to compute and characterizes object-centric captures well. Full EMF Ω: ratio of camera-scene motion magnitude. Consider a monocular video of a moving scene over a set of time steps T . At each discrete time t ∈ T , let the camera’s 3D location be ot. We consider each point xt on the domain of observed scene surface S2t ⊂ R3. We define the camera-scene motion as the expected relative ratio, Ω = E t,t+1∈T [ E xt∈S2t [ ∥ot+1 − ot∥ ∥xt+1 − xt∥ ]] , (1) where the denominator xt+1−xt denotes the 3D scene flow and the the numerator ot+1−ot denotes the 3D camera motion, both over one time step forward. The 3D scene flow can be estimated via the 2D dense optical flow field and the metric depth map when available, or monocular depth map from off-the-shelf approaches [38, 39] in the general case. 
Please see the Appendix for more details. Note that Ω in theory captures the effective multi-view factor for any sequence. However, in practice, 3D scene flow estimation is an actively studied problem and may suffer from noisy or costly predictions. Angular EMF ω: camera angular velocity. We introduce a second metric ω that is easy to compute in practice. We make an additional assumption that the capture has a single look-at point in world space, which often holds true, particularly for captures involving a single centered subject. Specifically, given a look-at point a by triangulating the optical-axes of all cameras (as per [5]) and the frame rate N , the camera angular velocity ω is computed as a scaled expectation, ω = E t,t+1∈T [ arccos ( ⟨a− ot,a− ot+1⟩ ∥a− ot∥ · ∥a− ot+1∥ )] ·N. (2) Note that even though ω only considers the camera motion, it is indicative of effective multi-view in the majority of existing captures, which we describe in Section 4.1. For both Ω and ω, the larger the value, the more multi-view cue the sequence contains. For future works introducing new input sequences, we recommend always reporting angular EMF for its simplicity and reporting full EMF when possible. Next we inspect the existing experimentation practices under the lens of effective multi-view. 4 Towards better experimentation practice In this section, we reflect on the existing datasets and find that they operate under the effective multi-view regime, with either teleporting camera motion or quasi-static scene motion. The reason behind the existing protocol is that monocular DVS is challenging from both the modeling and evaluation perspective. While the former challenge is well known, the latter is less studied, as we expand below. To overcome the existing challenge in the evaluation and enable future research to experiment with casually captured monocular video, we propose a better toolkit, including two new metrics and a new dataset of complex motion in everyday lives. 4.1 Closer look at existing datasets We investigate the datasets used for evaluation in D-NeRF [3], HyperNeRF [7], Nerfies [5], and NSFF [4]. Table 1 shows their statistics. We evaluate the amount of effective multi-view cues via the proposed EMFs, shown in Figure 3. We find that existing datasets have large EMF values on both metrics. For example, the HyperNeRF dataset has an ω as large as ~200◦/s. To put these numbers in context, a person imaging an object 3m away has to move at 1m/s to get an ω = 20◦/s (close to the statistics in the proposed dataset). Some datasets exhibit ω higher than 120◦/s, which is equivalent to a camera motion faster than the Olympic 100m sprint record, without incurring any motion blur. Visualizing the actual training data shown in Figure 1 reveals that existing datasets feature non-practical captures of either (1) teleporting/fast camera motion or (2) quasi-static/slow scene motion. The former is not representative of practical captures from a hand-held camera, e.g., a smartphone, while the latter is not representative of moving objects in daily life. Note that, out of the 23 multi-camera sequences that these prior works used for quantitative evaluation, 22 have teleporting camera motion, and 1 has quasi-static scene motion – the CURLS sequence shown at the 5th column in Figure 1. All 13 single-camera sequences from HyperNeRF [7] used for qualitative evaluation have quasi-static scene motion. 
The four datasets also share a similar data protocol for generating effective multi-view input sequences from the original multi-camera rig capture. In Figure 4, we illustrate the teleporting protocol used in Nerfies [5] and HyperNeRF [7] as a canonical example. They sample alternating frames from two physical cameras (left and right in this case) mounted on a rig to create the training data. NSFF [4] samples alternating frames from 24 cameras based on the data released from Yoon et al. [40]. D-NeRF [3] experiments on synthetic dynamic scenes where cameras are randomly placed on a fixed hemisphere at every time step, in effect teleporting between 100-200 cameras. We encourage you to visit our project page to view the input videos from these datasets. Existing works adopt effective multi-view capture for two reasons. First, it makes monocular DVS more tractable. Second, it enables evaluating novel view on the full image, without worrying about the visibility of each test pixel, as all camera views were visible during training. We show this effect in Figure 5. When trained with camera teleportation (3rd column), the model can generate a high-quality full image from the test view. However, when trained without camera teleportation (4th column), the model struggles to hallucinate unseen pixels since NeRFs [5, 36] are not designed to predict completely unseen portions of the scene, unless they are specifically trained for generalization [41]. Next, we propose new metrics that enable evaluation without using camera teleportation. Note that when the model is trained without camera teleportation, the rendering quality also degrades, which we also evaluate. 4.2 Our proposed metrics While the existing setup allows evaluating on the full rendered image from the test view, the performance under such evaluation protocol, particularly with teleportation, confounds the efficacy of the proposed approaches and the multi-view signal present in the input sequence. To evaluate with an actual monocular setup, we propose two new metrics that evaluate only on seen pixels and measure the correspondence accuracy of the predicted deformation. Co-visibility masked image metrics. Existing works evaluate DVS models with image metrics on the full image, e.g., PSNR, SSIM [9] and LPIPS [10], following novel-view synthesis evaluation on static scenes [36]. However, in dynamic scenes, particularly for monocular capture with multi-camera validation, the test view contains regions that may not have been observed at all by the training camera. To circumvent this issue without resorting to camera teleportation, for each pixel in the test image, we propose co-visibility masking, which tests how many times a test pixel has been observed in the training images. Specifically, we use optical flow to compute correspondences between every test image and the training images, and only keep test pixels that have enough correspondences in the training images via thresholding. This results in a mask, illustrated in Figure 5, which we use to confine the image metrics. We follow the common practice from the image generation literature and adopt masked metrics, mPSNR and mLPIPS [42, 43]. Note that NSFF [4] adopts similar metrics but for evaluating the rendering quality on foreground versus background regions. We additionally report mSSIM by partial convolution [44], which only considers seen regions during its computation. More details are in the Appendix. 
Using masked image metrics, we quantify the performance gap in rendering when a model is trained with or without multi-view cues in Section 5.1. Percentage of correctly transferred keypoints (PCK-T). Correspondences lie at the heart of traditional non-rigid reconstruction [21], which is overlooked in the current image-based evaluation. We propose to evaluate 2D correspondences across training frames with the percentage of correctly transferred keypoints (PCK-T) [11], which directly evaluates the quality of the inferred deformation. Specifically, we sparsely annotate 2D keypoints across input frames to ensure that each keypoint is fully observed during training. For correspondence readout from existing methods, we use either root finding [45] or scene flow chaining. Please see the Appendix for details on our keypoint annotation, correspondence readout, and metric computation. As shown in Figure 6, evaluating correspondences reveal that high quality image rendering does not necessarily result in accurate correspondences, which indicates issues in the underlying surface, due to the ambiguous nature of the problem. 4.3 Proposed iPhone dataset Existing datasets can be rectified by removing camera teleportation and evaluated using the proposed metrics, as we do in Section 5.1. However, even after removing camera teleportation, the existing datasets are still not representative of practical in-the-wild capture. First, the existing datasets are limited in motion diversity. Second, the evaluation baseline in existing datasets is small, which can hide issues in incorrect deformation and resulting geometry. For these reasons, we propose a new dataset called the iPhone dataset shown in Figure 7. In contrast to existing datasets with repetitive object motion, we collect 14 sequences featuring non-repetitive motion, from various categories such as generic objects, humans, and pets. We deploy three cameras for multi-camera capture – one hand-held moving camera for training and two static cameras of large baseline for evaluation. Furthermore, our iPhone dataset comes with metric depth from the lidar sensors, which we use to provide ground-truth depth for supervision. In Section 5.2, we show that depth supervision, together with other regularizations, is beneficial for training DVS models. Please see the Appendix for details on our multi-camera capture setup, data processing, and more visualizations. 5 Reality check: re-evaluating the state of the art In this section, we conduct a series of empirical studies to disentangle the recent progress in dynamic view synthesis (DVS) given a monocular video from effective multi-view in the training data. We evaluate current state-of-the-art methods when the effective multi-view factor (EMF) is low. Existing approaches and baselines. We consider the following state-of-the-art approaches for our empirical studies: NSFF [4], Nerfies [5] and HyperNeRF [7]. We choose them as canonical examples for other approaches [1, 2, 3, 6, 8, 34, 35], discussed in Section 2. We also evaluate time-conditioned NeRF (T-NeRF) as a common baseline [1, 3, 4]. Unlike the state-of-the-art methods, it is not possible to extract correspondences from a T-NeRF. A summary of these methods can be found in the Appendix. Datasets. We evaluate on the existing datasets as well as the proposed dataset. For existing datasets, we use the multi-camera captures accompanying Nerfies [5] and HyperNeRF [7] for evalulation. 
Due to their similar capture protocol, we consider them as a single dataset in our experiment (denoted as the Nerfies-HyperNeRF dataset). It consists of 7 sequences in total, which we augment with keypoint annotations. Our dataset has 7 multi-camera captures and 7 single-camera captures. We evaluate novel-view synthesis on the multi-camera captures and correspondence on all captures. Our data adopts the data format from the Nerfies-HyperNeRF dataset, with additional support for depth and correspondence labels. All videos are at 480p resolution and all dynamic scenes are inward-facing. Masked image and correspondence metrics. Following Section 4.2, we evaluate co-visibility masked image metrics and the correspondence metric. We report masked image metrics: mPSNR, mSSIM [9, 44], and mLPIPS [4, 10, 42, 43]. We visualize the rendering results with the covisibility mask. For the correspondence metric, we report the percentage of correctly transferred keypoints (PCK-T) [11] with threshold ratio α = 0.05. Additional visualizations of full image rendering and inferred correspondences can be found in the Appendix. Implementation details. We consolidate Nerfies [5] and HyperNeRF [7] in one codebase using JAX [46]. Compared to the original official code releases, our implementation aligns all training and evaluation details between models and allows correspondence readout. Our implementation reproduces the quantitative results in the original papers. We implement T-NeRF in the same codebase. For NSFF [4], we tried both the official code base [47] and a public third-party re-implementation [48], where the former fails to converge on our proposed iPhone dataset while the latter works well. We thus report results using the third-party re-implementation. However, note that both the original and the third-party implementation represent the dynamic scene in normalized device coordinates (NDC). As NDC is designed for forward-facing but not considered inward-facing scenes, layered artifacts may appear due to its log-scale sampling rate in the world space, as shown in Figure 9. More details about aligning the training procedure and remaining differences are provided in the Appendix. Code, pretrained models, and data are available on the project page. 5.1 Reality check on the Nerfies-HyperNeRF dataset Impact of effective multi-view. We first study the impact of effective multi-view on the NerfiesHyperNeRF [5, 7] dataset. In this experiment, we rectify the effective multi-view sequences by only using the left camera during training as opposed to both the left and right cameras, illustrated in Figure 4. We denote the original setting as “teleporting” and the rectified sequences as “nonteleporting”. We train all approaches under these two settings with the same held-out validation frames and same set of co-visibility masks computed from common training frames. In Figure 8 (Top), all methods perform better across all metrics when trained under the teleporting setting compared to the non-teleporting one, with the exception of PCK-T for NSFF. We conjecture that this is because that NSFF has additional optical flow supervision, which is more accurate without camera teleportation. In Figure 8 (Bottom), we show qualitative results using Nerfies (we include visualizations of the other methods in the Appendix). Without effective multi-view, Nerfies fails at modeling physically plausible shape for broom and wires. 
Our results show that the effective multi-view in the existing experimental protocol inflates the synthesis quality of prior methods, and that truly monocular captures are more challenging. Benchmark results without camera teleportation. In Table 2 and Figure 9, we report the quantitative and qualitative results under the non-teleporting setting. Note that our implementation of the T-NeRF baseline performs the best among all four evaluated models in terms of mPSNR and mSSIM. In Figure 9, we confirm this result since T-NeRF renders high-quality novel views for both sequences. HyperNeRF produces the most photorealistic renderings, as measured by mLPIPS. However, it also produces distorted artifacts that do not align well with the ground truth (e.g., the incorrect shape in the CHICKEN sequence). 5.2 Reality check on the proposed iPhone dataset Ablation study on improving the state of the art. We find that existing methods perform poorly out-of-the-box on the proposed iPhone dataset with more diverse and complex real-life motions. In Figure 10 (Bottom), we demonstrate this finding with HyperNeRF [7], as it achieves the highest mLPIPS score on the Nerfies-HyperNeRF dataset. As shown in the 3rd column, HyperNeRF produces visually implausible results with ghosting effects. Thus we explored incorporating additional regularizations from recent advances in neural rendering. Concretely, we consider the following: (+B) random background compositing [32]; (+D) a depth loss on the ray matching distance [1, 4]; and (+S) a sparsity regularization for the scene surface [49]. In Figure 10 (Top), we show quantitative results from the ablation. In Figure 10 (Bottom), we show visualizations of the impact of each regularization. Adding additional regularizations consistently boosts model performance. While we find the random background compositing regularization particularly helpful, extra depth supervision and surface regularization further improve the quality, e.g., the fan region of the paper windmill. Benchmarked results. In Figure 11, we show qualitative results from our benchmark using the best model settings from the ablation study, denoted as “++”. Note that it is non-trivial to apply the same enhancements to NSFF due to its NDC formulation, so we keep it as-is. We visualize the lidar depth re-projection from the training view (1st column) to the test view (2nd column), as a reference for qualitative comparison (3rd column). Note that the white region is occluded from the input view, whereas the green region is occluded from most of the input video frames. We observe that existing approaches do not handle complex deformation well. For example, all models fail at fusing a valid human shape on the SPACE OUT sequence. In Table 3, we find a similar trend as in the Nerfies-HyperNeRF dataset: the baseline T-NeRF performs the best in terms of mPSNR and mSSIM, while HyperNeRF produces the most photorealistic renderings in terms of mLPIPS. The overall synthesis quality and correspondence accuracy of all methods drop considerably compared to the results on the Nerfies-HyperNeRF dataset. Taking Nerfies as an example, its performance degrades by 4.4 dB in mPSNR, 69.6% in mLPIPS, and 40.1% in PCK-T. Our study suggests an opportunity for large improvement when modeling complex motion. 6 Discussion and recommendation for future works In this work, we expose issues in the common practice and establish systematic means to calibrate performance metrics of existing and future works, in the spirit of papers like [50, 51, 52]. 
We provide initial attempts toward characterizing the difficulty of a monocular video for dynamic view synthesis (DVS) in terms of effective multi-view factors (EMFs). In practice, there are other challenging factors such as variable appearance, lighting conditions, motion complexity, and more. We leave their characterization for future work. We recommend that future works visualize the input sequences and report EMFs when demonstrating their results. We also recommend that future works evaluate correspondence accuracy and strive to establish better correspondences for DVS. Acknowledgements. We would like to thank Zhengqi Li and Keunhong Park for valuable feedback and discussions; Matthew Tancik and Ethan Weber for proofreading. We are also grateful to our pets: Sriracha, Haru, and Mochi, for being good during capture. This project is generously supported in part by the CONIX Research Center, sponsored by DARPA, as well as the BDD and BAIR sponsors.
1. What are the strengths and weaknesses of the paper regarding its contributions to dynamic 3D scene synthesis? 2. What is the significance of the proposed metric, EMF, and how does it relate to the quality of existing datasets? 3. How does the reviewer assess the validity of the new dataset and its suitability for evaluating dynamic 3D scene synthesis methods? 4. Do you have any concerns or questions about the methodology or results presented in the paper? 5. Are there any limitations mentioned in the paper that the reviewer did not discuss?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studied the problem of dynamic 3D scene synthesis from a monocular video sequence and found a flaw in existing datasets: slow-moving objects captured with a fast-moving camera are over-represented. The authors then proposed a new metric called effective multi-view factors (EMF) to quantify the amount of multi-view signal in the image sequence. The authors also introduced a new dataset with very low EMF and argued that the new dataset should be more suitable for evaluating dynamic 3D scene synthesis methods. Finally, the authors evaluated four representative algorithms on the new dataset and found a performance gap that had not been noticed on previous datasets. Strengths And Weaknesses Strengths: The authors delve into the characteristics of existing datasets and produce an innovative metric to evaluate the difficulty of a dataset in terms of monocular dynamics. Weaknesses: While the ability to build a 3D representation from monocular dynamics is desirable, real-world video sequences could also contain a fair proportion of high-EMF data. So the argument that low-EMF datasets are better for evaluation may not hold unconditionally. The difficulty of a dataset may also come from other aspects, such as shape complexity and the surface properties of objects. Questions When evaluating sequences with different EMF values, do existing methods always achieve lower loss on high-EMF sequences? Limitations The authors answered YES to the question “Did you describe the limitations of your work” but do not state which section contains it.
NIPS
Title Monocular Dynamic View Synthesis: A Reality Check Abstract We study the recent progress on dynamic view synthesis (DVS) from monocular video. Though existing approaches have demonstrated impressive results, we show a discrepancy between the practical capture process and the existing experimental protocols, which effectively leaks in multi-view signals during training. We define effective multi-view factors (EMFs) to quantify the amount of multi-view signal present in the input capture sequence based on the relative camera-scene motion. We introduce two new metrics: co-visibility masked image metrics and correspondence accuracy, which overcome the issue in existing protocols. We also propose a new iPhone dataset that includes more diverse real-life deformation sequences. Using our proposed experimental protocol, we show that the state-of-the-art approaches observe a 1-2 dB drop in masked PSNR in the absence of multi-view cues and 4-5 dB drop when modeling complex motion. Code and data can be found at http://hangg7.com/dycheck. 1 Introduction Dynamic scenes are ubiquitous in our everyday lives – people moving around, cats purring, and trees swaying in the wind. The ability to capture 3D dynamic sequences in a “casual” manner, particularly through monocular videos taken by a smartphone in an uncontrolled environment, will be a cornerstone in scaling up 3D content creation, performance capture, and augmented reality. Recent works have shown promising results in dynamic view synthesis (DVS) from a monocular video [1, 2, 3, 4, 5, 6, 7, 8]. However, upon close inspection, we found that there is a discrepancy between the problem statement and the experimental protocol employed. As illustrated in Figure 1, the input data to these algorithms either contain frames that “teleport” between multiple camera viewpoints at consecutive time steps, which is impractical to capture from a single camera, or depict quasi-static scenes, which do not represent real-life dynamics. In this paper, we provide a systematic means of characterizing the aforementioned discrepancy and propose a better set of practices for model fitting and evaluation. Concretely, we introduce effective multi-view factors (EMFs) to quantify the amount of multi-view signal in a monocular sequence based on the relative camera-scene motion. With EMFs, we show that the current experimental protocols operate under an effectively multi-view regime. For example, our analysis reveals that the aforementioned practice of camera teleportation makes the existing capture setup akin to an Olympic runner taking a video of a moving scene without introducing any motion blur. The reason behind the existing experimental protocol is that monocular DVS is a challenging problem that is also hard to evaluate. Unlike static novel-view synthesis where one may simply evaluate on held-out views of the captured scene, in the dynamic case, since the scene changes over time, evaluation requires another camera that observes the scene from a different viewpoint at the same time. However, this means that the test views often contain regions that were never observed in the input sequence. Camera teleportation, i.e., constructing a temporal sequence by alternating samples † Work partially done as part of HG’s internship at Adobe. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). from different cameras, addresses this issue at the expense of introducing multi-view cues, which are unavailable in the practical single-camera capture. 
We propose two sets of metrics to overcome this challenge without the use of camera teleportation. The first metric enables evaluating only on pixels that were seen in the input sequence by computing the co-visibility of every test pixel. The proposed co-visibility mask can be used to compute masked image metrics (PSNR, SSIM [9] and LPIPS [10]). While the masked image metrics measure the quality of rendering, they do not directly measure the quality of the inferred scene deformation. Thus, we also propose a second metric that evaluates the quality of established point correspondences by the percentage of correctly transferred keypoints (PCK-T) [11]. The correspondences may be evaluated between the input and test frames or even within the input frames, which enable evaluation on sequences that are captured with only a single camera. We conduct extensive evaluation on existing datasets [5, 7] as well as a new dataset that includes more challenging motion and diverse scenes. When tested on existing datasets without camera teleportation, the state-of-the-art methods observe a 1-2 dB drop in masked PSNR and ~5% drop in PCK-T. When tested on complex motion with the proposed dataset, existing approaches observe another 4-5 dB drop in masked PSNR and ~30% drop in PCK-T, suggesting a large room for improvement. We encourage future works to report EMFs on new data and adopt our experimental protocol to evaluate monocular DVS methods. Code and data are available at our project page. 2 Related work Non-rigid structure from motion (NR-SfM). Traditional NR-SfM tackles the task of dynamic 3D inference by fitting parametric 3D morphable models [12, 13, 14, 15, 16, 17, 18, 19], or fusing non-parametric depth scans of generic dynamic scenes [20, 21, 22, 23, 24]. All of these approaches aim to recover accurate surface geometry at each time step and their performance is measured with ground truth 3D geometry or 2D correspondences with PCK [25] when such ground truth is not available. In this paper, we analyze recent dynamic view synthesis methods whose goal is to generate a photo-realistic novel view. Due to their goal, these methods do not focus on evaluation against ground truth 3D geometry, but we take inspiration from prior NR-SfM works to evaluate the quality of the inferred 3D dynamic representation based on correspondences. We also draw inspiration from previous NR-SfM work that analyzed camera/object speed and 3D reconstruction quality [26, 27]. Monocular dynamic neural radiance fields (dynamic NeRFs). Dynamic NeRFs reconstruct moving scenes from multi-view inputs or given pre-defined deformation template [28, 29, 30, 31, 32, 33, 34, 35]. In contrast, there is a series of recent works that seek to synthesize high-quality novel views of generic dynamic scenes given a monocular video [1, 2, 3, 4, 5, 6, 7, 8]. These works can be classified into two categories: a deformed scene is directly modeled as a time-varying NeRF in the world space [1, 4, 6] or as a NeRF in canonical space with a time-dependent deformation [2, 3, 5, 7, 8]. The evaluation protocol in these works inherit from the original static-scene NeRF [36] that quantify the rendering quality of held-out viewpoints using image metrics, e.g., PSNR. However, in dynamic scenes, PSNR from an unseen camera view may not be meaningful since the novel view may include regions that were never seen in the training view (unless the method can infer unseen regions using learning based approaches). 
Existing approaches resolve this issue by incorporating views from multiple cameras during training, which we show results in an effectively multi-view setup. We introduce metrics to measure the difficulties of an input sequence, a monocular dataset with new evaluation protocol and metrics, which show that existing methods have a large room for improvement. 3 Effective multi-view in a monocular video We consider the problem of dynamic view synthesis (DVS) from a monocular video. A monocular dynamic capture consists of a single camera observing a moving scene. The lack of simultaneous multi-view in the monocular video makes this problem more challenging compared to the multi-view setting, such as reconstructing moving people from multiple cameras [30, 34, 37]. Contrary to the conventional perception that the effect of multi-view is binary for a capture (single versus multiple cameras), we show that it can be characterized on a continuous spectrum. Our insight is that a monocular sequence contains effective multi-view cues when the camera moves much faster than the scene, though technically the underlying scene is observed only once at each time step. 3.1 Characterizing effective multi-view in a monocular video Although a monocular video only sees the scene from one viewpoint at a time, depending on the capture method, it can still contains cues that are effectively similar to those captured by a multi-view camera rig, which we call as effective multi-view. As shown in Figure 2, when the scene moves significantly slower than the camera (to the far right end of the axis), the same scene is observed from multiple views, resulting in multiview capture. In this case, DVS re- duces to a well-constrained multi-view stereo problem at each time step. Consider another case where the camera moves significantly faster compared to the scene so that it observes roughly the same scene from different viewpoints. As the camera motion approaches the infinity this again reduces the monocular capture to a multi-view setup. We therefore propose to characterize the amount of multi-view cues by the relative camera-scene motion. 3.2 Quantifying effective multi-view in a monocular video For practicality, we propose two metrics, referred to as effective multi-view factors (EMFs). The first metric, full EMF Ω is defined as the relative ratio between the motion magnitude of the camera to the scene, which in theory characterizes the effective multi-view perfectly, but in practice can be expensive and challenging to compute. The second metric, angular EMF ω is defined as the camera angular velocity around the scene look-at point, which only considers the camera motion; while approximate, it is easy to compute and characterizes object-centric captures well. Full EMF Ω: ratio of camera-scene motion magnitude. Consider a monocular video of a moving scene over a set of time steps T . At each discrete time t ∈ T , let the camera’s 3D location be ot. We consider each point xt on the domain of observed scene surface S2t ⊂ R3. We define the camera-scene motion as the expected relative ratio, Ω = E t,t+1∈T [ E xt∈S2t [ ∥ot+1 − ot∥ ∥xt+1 − xt∥ ]] , (1) where the denominator xt+1−xt denotes the 3D scene flow and the the numerator ot+1−ot denotes the 3D camera motion, both over one time step forward. The 3D scene flow can be estimated via the 2D dense optical flow field and the metric depth map when available, or monocular depth map from off-the-shelf approaches [38, 39] in the general case. 
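As an illustration of Eq. (1), the following sketch estimates Ω once per-frame camera centers and per-point 3D scene flow are available; obtaining the scene flow itself (from 2D optical flow plus metric or monocular depth, as described above) is left outside the sketch. The function name, the discrete averaging, and the small epsilon guarding near-static points are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def full_emf(cam_centers, scene_flows, eps=1e-8):
    """Estimate the full EMF Omega of Eq. (1).

    cam_centers: (T, 3) array of camera centers o_t, one per time step.
    scene_flows: list of length T-1; scene_flows[t] is an (N_t, 3) array of
                 3D scene flow vectors x_{t+1} - x_t for points sampled on
                 the observed surface at time t.
    Returns the expected ratio of camera motion to scene motion.
    """
    cam_centers = np.asarray(cam_centers, dtype=np.float64)
    ratios = []
    for t, flow in enumerate(scene_flows):
        cam_motion = np.linalg.norm(cam_centers[t + 1] - cam_centers[t])
        point_motion = np.linalg.norm(np.asarray(flow, dtype=np.float64), axis=-1)
        # Inner expectation over surface points at time t.
        ratios.append(np.mean(cam_motion / (point_motion + eps)))
    # Outer expectation over consecutive time steps.
    return float(np.mean(ratios))
```

A large Ω indicates that the camera moves much faster than the scene, i.e., the capture is effectively multi-view.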
Please see the Appendix for more details. Note that Ω in theory captures the effective multi-view factor for any sequence. However, in practice, 3D scene flow estimation is an actively studied problem and may suffer from noisy or costly predictions. Angular EMF ω: camera angular velocity. We introduce a second metric ω that is easy to compute in practice. We make an additional assumption that the capture has a single look-at point in world space, which often holds true, particularly for captures involving a single centered subject. Specifically, given a look-at point a by triangulating the optical-axes of all cameras (as per [5]) and the frame rate N , the camera angular velocity ω is computed as a scaled expectation, ω = E t,t+1∈T [ arccos ( ⟨a− ot,a− ot+1⟩ ∥a− ot∥ · ∥a− ot+1∥ )] ·N. (2) Note that even though ω only considers the camera motion, it is indicative of effective multi-view in the majority of existing captures, which we describe in Section 4.1. For both Ω and ω, the larger the value, the more multi-view cue the sequence contains. For future works introducing new input sequences, we recommend always reporting angular EMF for its simplicity and reporting full EMF when possible. Next we inspect the existing experimentation practices under the lens of effective multi-view. 4 Towards better experimentation practice In this section, we reflect on the existing datasets and find that they operate under the effective multi-view regime, with either teleporting camera motion or quasi-static scene motion. The reason behind the existing protocol is that monocular DVS is challenging from both the modeling and evaluation perspective. While the former challenge is well known, the latter is less studied, as we expand below. To overcome the existing challenge in the evaluation and enable future research to experiment with casually captured monocular video, we propose a better toolkit, including two new metrics and a new dataset of complex motion in everyday lives. 4.1 Closer look at existing datasets We investigate the datasets used for evaluation in D-NeRF [3], HyperNeRF [7], Nerfies [5], and NSFF [4]. Table 1 shows their statistics. We evaluate the amount of effective multi-view cues via the proposed EMFs, shown in Figure 3. We find that existing datasets have large EMF values on both metrics. For example, the HyperNeRF dataset has an ω as large as ~200◦/s. To put these numbers in context, a person imaging an object 3m away has to move at 1m/s to get an ω = 20◦/s (close to the statistics in the proposed dataset). Some datasets exhibit ω higher than 120◦/s, which is equivalent to a camera motion faster than the Olympic 100m sprint record, without incurring any motion blur. Visualizing the actual training data shown in Figure 1 reveals that existing datasets feature non-practical captures of either (1) teleporting/fast camera motion or (2) quasi-static/slow scene motion. The former is not representative of practical captures from a hand-held camera, e.g., a smartphone, while the latter is not representative of moving objects in daily life. Note that, out of the 23 multi-camera sequences that these prior works used for quantitative evaluation, 22 have teleporting camera motion, and 1 has quasi-static scene motion – the CURLS sequence shown at the 5th column in Figure 1. All 13 single-camera sequences from HyperNeRF [7] used for qualitative evaluation have quasi-static scene motion. 
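To put Eq. (2) and the ω statistics above in concrete terms, the sketch below turns camera centers, a look-at point, and the frame rate into an angular EMF in degrees per second, matching the ~200°/s and 20°/s figures quoted above. The function name and the toy orbit in the usage example are assumptions, and triangulating the look-at point from the cameras' optical axes (as per [5]) is not shown.

```python
import numpy as np

def angular_emf(cam_centers, lookat, fps):
    """Estimate the angular EMF omega of Eq. (2), in degrees per second.

    cam_centers: (T, 3) array of camera centers o_t.
    lookat:      (3,) look-at point a, e.g., triangulated from optical axes.
    fps:         frame rate N of the capture.
    """
    cam_centers = np.asarray(cam_centers, dtype=np.float64)
    a = np.asarray(lookat, dtype=np.float64)
    v = a - cam_centers                                   # camera-to-look-at vectors
    v = v / np.linalg.norm(v, axis=-1, keepdims=True)
    # Angle between consecutive viewing directions, averaged over time.
    cos = np.clip(np.sum(v[:-1] * v[1:], axis=-1), -1.0, 1.0)
    mean_angle = np.mean(np.arccos(cos))                  # radians per frame
    return float(np.degrees(mean_angle * fps))

# Toy usage: a camera orbiting the look-at point by 1 degree per frame at
# 30 fps should give omega close to 30 deg/s.
angles = np.deg2rad(np.arange(10.0))
orbit = 3.0 * np.stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=-1)
print(angular_emf(orbit, lookat=[0.0, 0.0, 0.0], fps=30))  # ~30.0
```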
The four datasets also share a similar data protocol for generating effective multi-view input sequences from the original multi-camera rig capture. In Figure 4, we illustrate the teleporting protocol used in Nerfies [5] and HyperNeRF [7] as a canonical example. They sample alternating frames from two physical cameras (left and right in this case) mounted on a rig to create the training data. NSFF [4] samples alternating frames from 24 cameras based on the data released from Yoon et al. [40]. D-NeRF [3] experiments on synthetic dynamic scenes where cameras are randomly placed on a fixed hemisphere at every time step, in effect teleporting between 100-200 cameras. We encourage you to visit our project page to view the input videos from these datasets. Existing works adopt effective multi-view capture for two reasons. First, it makes monocular DVS more tractable. Second, it enables evaluating novel view on the full image, without worrying about the visibility of each test pixel, as all camera views were visible during training. We show this effect in Figure 5. When trained with camera teleportation (3rd column), the model can generate a high-quality full image from the test view. However, when trained without camera teleportation (4th column), the model struggles to hallucinate unseen pixels since NeRFs [5, 36] are not designed to predict completely unseen portions of the scene, unless they are specifically trained for generalization [41]. Next, we propose new metrics that enable evaluation without using camera teleportation. Note that when the model is trained without camera teleportation, the rendering quality also degrades, which we also evaluate. 4.2 Our proposed metrics While the existing setup allows evaluating on the full rendered image from the test view, the performance under such evaluation protocol, particularly with teleportation, confounds the efficacy of the proposed approaches and the multi-view signal present in the input sequence. To evaluate with an actual monocular setup, we propose two new metrics that evaluate only on seen pixels and measure the correspondence accuracy of the predicted deformation. Co-visibility masked image metrics. Existing works evaluate DVS models with image metrics on the full image, e.g., PSNR, SSIM [9] and LPIPS [10], following novel-view synthesis evaluation on static scenes [36]. However, in dynamic scenes, particularly for monocular capture with multi-camera validation, the test view contains regions that may not have been observed at all by the training camera. To circumvent this issue without resorting to camera teleportation, for each pixel in the test image, we propose co-visibility masking, which tests how many times a test pixel has been observed in the training images. Specifically, we use optical flow to compute correspondences between every test image and the training images, and only keep test pixels that have enough correspondences in the training images via thresholding. This results in a mask, illustrated in Figure 5, which we use to confine the image metrics. We follow the common practice from the image generation literature and adopt masked metrics, mPSNR and mLPIPS [42, 43]. Note that NSFF [4] adopts similar metrics but for evaluating the rendering quality on foreground versus background regions. We additionally report mSSIM by partial convolution [44], which only considers seen regions during its computation. More details are in the Appendix. 
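The co-visibility test described above boils down to counting, for every test pixel, how many training images yield a valid flow-based correspondence to it, and keeping pixels whose count clears a threshold. The sketch below shows only that final step; the optical-flow matching is abstracted into a boolean input, and the minimum-match threshold is a placeholder rather than the value used in the paper.

```python
import numpy as np

def covisibility_mask(matched_per_train, min_matches=5):
    """Build the co-visibility mask for one test image.

    matched_per_train: (K, H, W) boolean array; entry [k, i, j] is True if
        test pixel (i, j) found a valid optical-flow correspondence in the
        k-th training image (the flow matching itself is not shown here).
    min_matches: minimum number of training frames in which a pixel must be
        observed to be evaluated (placeholder threshold).
    Returns a boolean (H, W) mask used to confine mPSNR, mSSIM, and mLPIPS.
    """
    counts = np.asarray(matched_per_train, dtype=bool).sum(axis=0)
    return counts >= min_matches
```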
Using masked image metrics, we quantify the performance gap in rendering when a model is trained with or without multi-view cues in Section 5.1. Percentage of correctly transferred keypoints (PCK-T). Correspondences lie at the heart of traditional non-rigid reconstruction [21], which is overlooked in the current image-based evaluation. We propose to evaluate 2D correspondences across training frames with the percentage of correctly transferred keypoints (PCK-T) [11], which directly evaluates the quality of the inferred deformation. Specifically, we sparsely annotate 2D keypoints across input frames to ensure that each keypoint is fully observed during training. For correspondence readout from existing methods, we use either root finding [45] or scene flow chaining. Please see the Appendix for details on our keypoint annotation, correspondence readout, and metric computation. As shown in Figure 6, evaluating correspondences reveal that high quality image rendering does not necessarily result in accurate correspondences, which indicates issues in the underlying surface, due to the ambiguous nature of the problem. 4.3 Proposed iPhone dataset Existing datasets can be rectified by removing camera teleportation and evaluated using the proposed metrics, as we do in Section 5.1. However, even after removing camera teleportation, the existing datasets are still not representative of practical in-the-wild capture. First, the existing datasets are limited in motion diversity. Second, the evaluation baseline in existing datasets is small, which can hide issues in incorrect deformation and resulting geometry. For these reasons, we propose a new dataset called the iPhone dataset shown in Figure 7. In contrast to existing datasets with repetitive object motion, we collect 14 sequences featuring non-repetitive motion, from various categories such as generic objects, humans, and pets. We deploy three cameras for multi-camera capture – one hand-held moving camera for training and two static cameras of large baseline for evaluation. Furthermore, our iPhone dataset comes with metric depth from the lidar sensors, which we use to provide ground-truth depth for supervision. In Section 5.2, we show that depth supervision, together with other regularizations, is beneficial for training DVS models. Please see the Appendix for details on our multi-camera capture setup, data processing, and more visualizations. 5 Reality check: re-evaluating the state of the art In this section, we conduct a series of empirical studies to disentangle the recent progress in dynamic view synthesis (DVS) given a monocular video from effective multi-view in the training data. We evaluate current state-of-the-art methods when the effective multi-view factor (EMF) is low. Existing approaches and baselines. We consider the following state-of-the-art approaches for our empirical studies: NSFF [4], Nerfies [5] and HyperNeRF [7]. We choose them as canonical examples for other approaches [1, 2, 3, 6, 8, 34, 35], discussed in Section 2. We also evaluate time-conditioned NeRF (T-NeRF) as a common baseline [1, 3, 4]. Unlike the state-of-the-art methods, it is not possible to extract correspondences from a T-NeRF. A summary of these methods can be found in the Appendix. Datasets. We evaluate on the existing datasets as well as the proposed dataset. For existing datasets, we use the multi-camera captures accompanying Nerfies [5] and HyperNeRF [7] for evalulation. 
Due to their similar capture protocol, we consider them as a single dataset in our experiment (denoted as the Nerfies-HyperNeRF dataset). It consists of 7 sequences in total, which we augment with keypoint annotations. Our dataset has 7 multi-camera captures and 7 single-camera captures. We evaluate novel-view synthesis on the multi-camera captures and correspondence on all captures. Our data adopts the data format from the Nerfies-HyperNeRF dataset, with additional support for depth and correspondence labels. All videos are at 480p resolution and all dynamic scenes are inward-facing. Masked image and correspondence metrics. Following Section 4.2, we evaluate co-visibility masked image metrics and the correspondence metric. We report masked image metrics: mPSNR, mSSIM [9, 44], and mLPIPS [4, 10, 42, 43]. We visualize the rendering results with the covisibility mask. For the correspondence metric, we report the percentage of correctly transferred keypoints (PCK-T) [11] with threshold ratio α = 0.05. Additional visualizations of full image rendering and inferred correspondences can be found in the Appendix. Implementation details. We consolidate Nerfies [5] and HyperNeRF [7] in one codebase using JAX [46]. Compared to the original official code releases, our implementation aligns all training and evaluation details between models and allows correspondence readout. Our implementation reproduces the quantitative results in the original papers. We implement T-NeRF in the same codebase. For NSFF [4], we tried both the official code base [47] and a public third-party re-implementation [48], where the former fails to converge on our proposed iPhone dataset while the latter works well. We thus report results using the third-party re-implementation. However, note that both the original and the third-party implementation represent the dynamic scene in normalized device coordinates (NDC). As NDC is designed for forward-facing but not considered inward-facing scenes, layered artifacts may appear due to its log-scale sampling rate in the world space, as shown in Figure 9. More details about aligning the training procedure and remaining differences are provided in the Appendix. Code, pretrained models, and data are available on the project page. 5.1 Reality check on the Nerfies-HyperNeRF dataset Impact of effective multi-view. We first study the impact of effective multi-view on the NerfiesHyperNeRF [5, 7] dataset. In this experiment, we rectify the effective multi-view sequences by only using the left camera during training as opposed to both the left and right cameras, illustrated in Figure 4. We denote the original setting as “teleporting” and the rectified sequences as “nonteleporting”. We train all approaches under these two settings with the same held-out validation frames and same set of co-visibility masks computed from common training frames. In Figure 8 (Top), all methods perform better across all metrics when trained under the teleporting setting compared to the non-teleporting one, with the exception of PCK-T for NSFF. We conjecture that this is because that NSFF has additional optical flow supervision, which is more accurate without camera teleportation. In Figure 8 (Bottom), we show qualitative results using Nerfies (we include visualizations of the other methods in the Appendix). Without effective multi-view, Nerfies fails at modeling physically plausible shape for broom and wires. 
Our results show that the effective multi-view in the existing experimental protocol inflates the synthesis quality of prior methods, and that truly monocular captures are more challenging. Benchmark results without camera teleportation. In Table 2 and Figure 9, we report the quantitative and qualitative results under the non-teleporting setting. Note that our implementation of the T-NeRF baseline performs the best among all four evaluated models in terms of mPSNR and mSSIM. In Figure 9, we confirm this result since T-NeRF renders high-quality novel view for both sequences. HyperNeRF produces the most photorealistic renderings, measured by mLPIPS. However it also produces distorted artifacts that do not align well with the ground truth (e.g., the incorrect shape in the CHICKEN sequence). 5.2 Reality check on the proposed iPhone dataset Ablation study on improving the state of the art. We find that existing methods perform poorly out-of-the-box on the proposed iPhone dataset with more diverse and complex real-life motions. In Figure 10 (Bottom), we demonstrate this finding with HyperNeRF [7] for it achieves the highest mLPIPS metric on the Nerfies-HyperNeRF dataset. Shown in the 3rd column, HyperNeRF produces visually implausible results with ghosting effects. Thus we explored incorporating additional regularizations from recent advances in neural rendering. Concretely, we consider the following: (+B) random background compositing [32]; (+D) a depth loss on the ray matching distance [1, 4]; and (+S) a sparsity regularization for scene surface [49]. In Figure 10 (Top), we show quantitative results from the ablation. In Figure 10 (Bottom), we show visualizations of the impact of each regularization. Adding additional regularizations consistently boosts model performance. While we find the random background compositing regularizations particularly helpful, extra depth supervision and surface regularization further improve the quality, e.g., the fan region of the paper windmill. Benchmarked results. In Figure 11, we show qualitative results from our benchmark using the best model settings from the ablation study, denoted as “++”. Note that it is non-trivial to apply the same enhancements to NSFF for its NDC formulation so we keep it as-is. We visualize the lidar depth re-projection from the training view (1st column) to the test view (2nd column), as a reference for qualitative comparison (3rd column). Note that the white region is occluded from the input view, whereas the green region is occluded from the most of input video frames. We observe that existing approaches do not handle complex deformation well. For example, all models fail at fusing a valid human shape on the SPACE OUT sequence. In Table 3, we find a similar trend as in the Nerfies-HyperNeRF dataset: the baseline T-NeRF performs the best in terms of mPSNR and mSSIM while HyperNeRF produces the most photorealistic renderings in terms of mLPIPS. The overall synthesis quality and correspondence accuracy of all methods drop considerably compared to the results on the Nerfies-HyperNeRF dataset. Taking Nerfies as an example, it drops 4.4 dB in mPSNR, 69.6% in mLPIPS, and 40.1% in PCK-T. Our study suggests an opportunity for large improvement when modeling complex motion. 6 Discussion and recommendation for future works In this work, we expose issues in the common practice and establish systematic means to calibrate performance metrics of existing and future works, in the spirit of papers like [50, 51, 52]. 
We provide initial attempts toward characterizing the difficulty of a monocular video for dynamic view synthesis (DVS) in terms of effective multi-view factors (EMFs). In practice, there are other challenging factors such as variable appearance, lighting condition, motion complexity and more. We leave their characterization for future works. We recommend future works to visualize the input sequences and report EMFs when demonstrating the results. We also recommend future works to evaluate the correspondence accuracy and strive for establishing better correspondences for DVS. Acknowledgements. We would like to thank Zhengqi Li and Keunhong Park for valuable feedback and discussions; Matthew Tancik and Ethan Weber for proofreading. We are also grateful to our pets: Sriracha, Haru, and Mochi, for being good during capture. This project is generously supported in part by the CONIX Research Center, sponsored by DARPA, as well as the BDD and BAIR sponsors.
1. What is the main contribution of the paper regarding dynamic 3D scene recovery from monocular videos? 2. What are the strengths of the proposed approach, particularly in quantifying multi-view cues and evaluating metrics? 3. Are there any limitations or weaknesses in the paper, such as the suitability of the proposed approach for the benchmark track? 4. How were the keypoints selected for each training sequence, and what were the criteria for selecting them? 5. Why did the authors not discuss any limitations of their approach in the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper does a complete review of existing approaches that recover dynamic 3D scenes from monocular videos, especially Nerfies, HyperNeRF and NSFF. It studies the camera trajectory and proposes a metric, Effective Multi-view Factors (EMF), to quantify multi-view cues in a dynamic scene with moving cameras. Existing datasets typically contain strong multi-view cues. Therefore, a new dataset captured with an iPhone is introduced with little multi-view signal. Additional metrics such as masked PSNR and PCK-T are also introduced to ensure the fairness of evaluation. The new benchmark brings additional challenges for existing approaches to dynamic 3D capture. Strengths And Weaknesses Strengths: Quantifying multi-view cues using EMF is neat. Different datasets in dynamic 3D capture propose different data, including different camera trajectories, and EMF quantifies the difficulty of all of them. I also like the PCK-T correspondence metric besides normal PSNR. Right now a lot of novel view synthesis approaches focus on PSNR, but PSNR does not directly reflect how well the model understands the 3D world. PCK-T is more explicit, and I think reporting both masked PSNR and PCK-T is a good idea. Supp webpage is fantastic. Thank you! Weaknesses: I don't see any weaknesses, although I think the paper is more like a new benchmark -- it might be more suitable for the benchmark track. Additional comments (no need to address): Page 8 footnote: "We find that this code base performs better than the original code release" is missing a period. Questions (L213-215) "Each training sequence is annotated with 10 to 20 keypoints in every 10 frames" -- how do you select these keypoints? Limitations Limitations are not discussed in the paper.
NIPS
Title Monocular Dynamic View Synthesis: A Reality Check Abstract We study the recent progress on dynamic view synthesis (DVS) from monocular video. Though existing approaches have demonstrated impressive results, we show a discrepancy between the practical capture process and the existing experimental protocols, which effectively leaks in multi-view signals during training. We define effective multi-view factors (EMFs) to quantify the amount of multi-view signal present in the input capture sequence based on the relative camera-scene motion. We introduce two new metrics: co-visibility masked image metrics and correspondence accuracy, which overcome the issue in existing protocols. We also propose a new iPhone dataset that includes more diverse real-life deformation sequences. Using our proposed experimental protocol, we show that the state-of-the-art approaches observe a 1-2 dB drop in masked PSNR in the absence of multi-view cues and 4-5 dB drop when modeling complex motion. Code and data can be found at http://hangg7.com/dycheck. 1 Introduction Dynamic scenes are ubiquitous in our everyday lives – people moving around, cats purring, and trees swaying in the wind. The ability to capture 3D dynamic sequences in a “casual” manner, particularly through monocular videos taken by a smartphone in an uncontrolled environment, will be a cornerstone in scaling up 3D content creation, performance capture, and augmented reality. Recent works have shown promising results in dynamic view synthesis (DVS) from a monocular video [1, 2, 3, 4, 5, 6, 7, 8]. However, upon close inspection, we found that there is a discrepancy between the problem statement and the experimental protocol employed. As illustrated in Figure 1, the input data to these algorithms either contain frames that “teleport” between multiple camera viewpoints at consecutive time steps, which is impractical to capture from a single camera, or depict quasi-static scenes, which do not represent real-life dynamics. In this paper, we provide a systematic means of characterizing the aforementioned discrepancy and propose a better set of practices for model fitting and evaluation. Concretely, we introduce effective multi-view factors (EMFs) to quantify the amount of multi-view signal in a monocular sequence based on the relative camera-scene motion. With EMFs, we show that the current experimental protocols operate under an effectively multi-view regime. For example, our analysis reveals that the aforementioned practice of camera teleportation makes the existing capture setup akin to an Olympic runner taking a video of a moving scene without introducing any motion blur. The reason behind the existing experimental protocol is that monocular DVS is a challenging problem that is also hard to evaluate. Unlike static novel-view synthesis where one may simply evaluate on held-out views of the captured scene, in the dynamic case, since the scene changes over time, evaluation requires another camera that observes the scene from a different viewpoint at the same time. However, this means that the test views often contain regions that were never observed in the input sequence. Camera teleportation, i.e., constructing a temporal sequence by alternating samples † Work partially done as part of HG’s internship at Adobe. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). from different cameras, addresses this issue at the expense of introducing multi-view cues, which are unavailable in the practical single-camera capture. 
We propose two sets of metrics to overcome this challenge without the use of camera teleportation. The first metric enables evaluating only on pixels that were seen in the input sequence by computing the co-visibility of every test pixel. The proposed co-visibility mask can be used to compute masked image metrics (PSNR, SSIM [9] and LPIPS [10]). While the masked image metrics measure the quality of rendering, they do not directly measure the quality of the inferred scene deformation. Thus, we also propose a second metric that evaluates the quality of established point correspondences by the percentage of correctly transferred keypoints (PCK-T) [11]. The correspondences may be evaluated between the input and test frames or even within the input frames, which enable evaluation on sequences that are captured with only a single camera. We conduct extensive evaluation on existing datasets [5, 7] as well as a new dataset that includes more challenging motion and diverse scenes. When tested on existing datasets without camera teleportation, the state-of-the-art methods observe a 1-2 dB drop in masked PSNR and ~5% drop in PCK-T. When tested on complex motion with the proposed dataset, existing approaches observe another 4-5 dB drop in masked PSNR and ~30% drop in PCK-T, suggesting a large room for improvement. We encourage future works to report EMFs on new data and adopt our experimental protocol to evaluate monocular DVS methods. Code and data are available at our project page. 2 Related work Non-rigid structure from motion (NR-SfM). Traditional NR-SfM tackles the task of dynamic 3D inference by fitting parametric 3D morphable models [12, 13, 14, 15, 16, 17, 18, 19], or fusing non-parametric depth scans of generic dynamic scenes [20, 21, 22, 23, 24]. All of these approaches aim to recover accurate surface geometry at each time step and their performance is measured with ground truth 3D geometry or 2D correspondences with PCK [25] when such ground truth is not available. In this paper, we analyze recent dynamic view synthesis methods whose goal is to generate a photo-realistic novel view. Due to their goal, these methods do not focus on evaluation against ground truth 3D geometry, but we take inspiration from prior NR-SfM works to evaluate the quality of the inferred 3D dynamic representation based on correspondences. We also draw inspiration from previous NR-SfM work that analyzed camera/object speed and 3D reconstruction quality [26, 27]. Monocular dynamic neural radiance fields (dynamic NeRFs). Dynamic NeRFs reconstruct moving scenes from multi-view inputs or given pre-defined deformation template [28, 29, 30, 31, 32, 33, 34, 35]. In contrast, there is a series of recent works that seek to synthesize high-quality novel views of generic dynamic scenes given a monocular video [1, 2, 3, 4, 5, 6, 7, 8]. These works can be classified into two categories: a deformed scene is directly modeled as a time-varying NeRF in the world space [1, 4, 6] or as a NeRF in canonical space with a time-dependent deformation [2, 3, 5, 7, 8]. The evaluation protocol in these works inherit from the original static-scene NeRF [36] that quantify the rendering quality of held-out viewpoints using image metrics, e.g., PSNR. However, in dynamic scenes, PSNR from an unseen camera view may not be meaningful since the novel view may include regions that were never seen in the training view (unless the method can infer unseen regions using learning based approaches). 
Existing approaches resolve this issue by incorporating views from multiple cameras during training, which we show results in an effectively multi-view setup. We introduce metrics to measure the difficulties of an input sequence, a monocular dataset with new evaluation protocol and metrics, which show that existing methods have a large room for improvement. 3 Effective multi-view in a monocular video We consider the problem of dynamic view synthesis (DVS) from a monocular video. A monocular dynamic capture consists of a single camera observing a moving scene. The lack of simultaneous multi-view in the monocular video makes this problem more challenging compared to the multi-view setting, such as reconstructing moving people from multiple cameras [30, 34, 37]. Contrary to the conventional perception that the effect of multi-view is binary for a capture (single versus multiple cameras), we show that it can be characterized on a continuous spectrum. Our insight is that a monocular sequence contains effective multi-view cues when the camera moves much faster than the scene, though technically the underlying scene is observed only once at each time step. 3.1 Characterizing effective multi-view in a monocular video Although a monocular video only sees the scene from one viewpoint at a time, depending on the capture method, it can still contains cues that are effectively similar to those captured by a multi-view camera rig, which we call as effective multi-view. As shown in Figure 2, when the scene moves significantly slower than the camera (to the far right end of the axis), the same scene is observed from multiple views, resulting in multiview capture. In this case, DVS re- duces to a well-constrained multi-view stereo problem at each time step. Consider another case where the camera moves significantly faster compared to the scene so that it observes roughly the same scene from different viewpoints. As the camera motion approaches the infinity this again reduces the monocular capture to a multi-view setup. We therefore propose to characterize the amount of multi-view cues by the relative camera-scene motion. 3.2 Quantifying effective multi-view in a monocular video For practicality, we propose two metrics, referred to as effective multi-view factors (EMFs). The first metric, full EMF Ω is defined as the relative ratio between the motion magnitude of the camera to the scene, which in theory characterizes the effective multi-view perfectly, but in practice can be expensive and challenging to compute. The second metric, angular EMF ω is defined as the camera angular velocity around the scene look-at point, which only considers the camera motion; while approximate, it is easy to compute and characterizes object-centric captures well. Full EMF Ω: ratio of camera-scene motion magnitude. Consider a monocular video of a moving scene over a set of time steps T . At each discrete time t ∈ T , let the camera’s 3D location be ot. We consider each point xt on the domain of observed scene surface S2t ⊂ R3. We define the camera-scene motion as the expected relative ratio, Ω = E t,t+1∈T [ E xt∈S2t [ ∥ot+1 − ot∥ ∥xt+1 − xt∥ ]] , (1) where the denominator xt+1−xt denotes the 3D scene flow and the the numerator ot+1−ot denotes the 3D camera motion, both over one time step forward. The 3D scene flow can be estimated via the 2D dense optical flow field and the metric depth map when available, or monocular depth map from off-the-shelf approaches [38, 39] in the general case. 
Please see the Appendix for more details. Note that Ω in theory captures the effective multi-view factor for any sequence. However, in practice, 3D scene flow estimation is an actively studied problem and may suffer from noisy or costly predictions. Angular EMF ω: camera angular velocity. We introduce a second metric ω that is easy to compute in practice. We make an additional assumption that the capture has a single look-at point in world space, which often holds true, particularly for captures involving a single centered subject. Specifically, given a look-at point a by triangulating the optical-axes of all cameras (as per [5]) and the frame rate N , the camera angular velocity ω is computed as a scaled expectation, ω = E t,t+1∈T [ arccos ( ⟨a− ot,a− ot+1⟩ ∥a− ot∥ · ∥a− ot+1∥ )] ·N. (2) Note that even though ω only considers the camera motion, it is indicative of effective multi-view in the majority of existing captures, which we describe in Section 4.1. For both Ω and ω, the larger the value, the more multi-view cue the sequence contains. For future works introducing new input sequences, we recommend always reporting angular EMF for its simplicity and reporting full EMF when possible. Next we inspect the existing experimentation practices under the lens of effective multi-view. 4 Towards better experimentation practice In this section, we reflect on the existing datasets and find that they operate under the effective multi-view regime, with either teleporting camera motion or quasi-static scene motion. The reason behind the existing protocol is that monocular DVS is challenging from both the modeling and evaluation perspective. While the former challenge is well known, the latter is less studied, as we expand below. To overcome the existing challenge in the evaluation and enable future research to experiment with casually captured monocular video, we propose a better toolkit, including two new metrics and a new dataset of complex motion in everyday lives. 4.1 Closer look at existing datasets We investigate the datasets used for evaluation in D-NeRF [3], HyperNeRF [7], Nerfies [5], and NSFF [4]. Table 1 shows their statistics. We evaluate the amount of effective multi-view cues via the proposed EMFs, shown in Figure 3. We find that existing datasets have large EMF values on both metrics. For example, the HyperNeRF dataset has an ω as large as ~200◦/s. To put these numbers in context, a person imaging an object 3m away has to move at 1m/s to get an ω = 20◦/s (close to the statistics in the proposed dataset). Some datasets exhibit ω higher than 120◦/s, which is equivalent to a camera motion faster than the Olympic 100m sprint record, without incurring any motion blur. Visualizing the actual training data shown in Figure 1 reveals that existing datasets feature non-practical captures of either (1) teleporting/fast camera motion or (2) quasi-static/slow scene motion. The former is not representative of practical captures from a hand-held camera, e.g., a smartphone, while the latter is not representative of moving objects in daily life. Note that, out of the 23 multi-camera sequences that these prior works used for quantitative evaluation, 22 have teleporting camera motion, and 1 has quasi-static scene motion – the CURLS sequence shown at the 5th column in Figure 1. All 13 single-camera sequences from HyperNeRF [7] used for qualitative evaluation have quasi-static scene motion. 
The four datasets also share a similar data protocol for generating effective multi-view input sequences from the original multi-camera rig capture. In Figure 4, we illustrate the teleporting protocol used in Nerfies [5] and HyperNeRF [7] as a canonical example. They sample alternating frames from two physical cameras (left and right in this case) mounted on a rig to create the training data. NSFF [4] samples alternating frames from 24 cameras based on the data released from Yoon et al. [40]. D-NeRF [3] experiments on synthetic dynamic scenes where cameras are randomly placed on a fixed hemisphere at every time step, in effect teleporting between 100-200 cameras. We encourage you to visit our project page to view the input videos from these datasets. Existing works adopt effective multi-view capture for two reasons. First, it makes monocular DVS more tractable. Second, it enables evaluating novel view on the full image, without worrying about the visibility of each test pixel, as all camera views were visible during training. We show this effect in Figure 5. When trained with camera teleportation (3rd column), the model can generate a high-quality full image from the test view. However, when trained without camera teleportation (4th column), the model struggles to hallucinate unseen pixels since NeRFs [5, 36] are not designed to predict completely unseen portions of the scene, unless they are specifically trained for generalization [41]. Next, we propose new metrics that enable evaluation without using camera teleportation. Note that when the model is trained without camera teleportation, the rendering quality also degrades, which we also evaluate. 4.2 Our proposed metrics While the existing setup allows evaluating on the full rendered image from the test view, the performance under such evaluation protocol, particularly with teleportation, confounds the efficacy of the proposed approaches and the multi-view signal present in the input sequence. To evaluate with an actual monocular setup, we propose two new metrics that evaluate only on seen pixels and measure the correspondence accuracy of the predicted deformation. Co-visibility masked image metrics. Existing works evaluate DVS models with image metrics on the full image, e.g., PSNR, SSIM [9] and LPIPS [10], following novel-view synthesis evaluation on static scenes [36]. However, in dynamic scenes, particularly for monocular capture with multi-camera validation, the test view contains regions that may not have been observed at all by the training camera. To circumvent this issue without resorting to camera teleportation, for each pixel in the test image, we propose co-visibility masking, which tests how many times a test pixel has been observed in the training images. Specifically, we use optical flow to compute correspondences between every test image and the training images, and only keep test pixels that have enough correspondences in the training images via thresholding. This results in a mask, illustrated in Figure 5, which we use to confine the image metrics. We follow the common practice from the image generation literature and adopt masked metrics, mPSNR and mLPIPS [42, 43]. Note that NSFF [4] adopts similar metrics but for evaluating the rendering quality on foreground versus background regions. We additionally report mSSIM by partial convolution [44], which only considers seen regions during its computation. More details are in the Appendix. 
Using masked image metrics, we quantify the performance gap in rendering when a model is trained with or without multi-view cues in Section 5.1. Percentage of correctly transferred keypoints (PCK-T). Correspondences lie at the heart of traditional non-rigid reconstruction [21], which is overlooked in the current image-based evaluation. We propose to evaluate 2D correspondences across training frames with the percentage of correctly transferred keypoints (PCK-T) [11], which directly evaluates the quality of the inferred deformation. Specifically, we sparsely annotate 2D keypoints across input frames to ensure that each keypoint is fully observed during training. For correspondence readout from existing methods, we use either root finding [45] or scene flow chaining. Please see the Appendix for details on our keypoint annotation, correspondence readout, and metric computation. As shown in Figure 6, evaluating correspondences reveal that high quality image rendering does not necessarily result in accurate correspondences, which indicates issues in the underlying surface, due to the ambiguous nature of the problem. 4.3 Proposed iPhone dataset Existing datasets can be rectified by removing camera teleportation and evaluated using the proposed metrics, as we do in Section 5.1. However, even after removing camera teleportation, the existing datasets are still not representative of practical in-the-wild capture. First, the existing datasets are limited in motion diversity. Second, the evaluation baseline in existing datasets is small, which can hide issues in incorrect deformation and resulting geometry. For these reasons, we propose a new dataset called the iPhone dataset shown in Figure 7. In contrast to existing datasets with repetitive object motion, we collect 14 sequences featuring non-repetitive motion, from various categories such as generic objects, humans, and pets. We deploy three cameras for multi-camera capture – one hand-held moving camera for training and two static cameras of large baseline for evaluation. Furthermore, our iPhone dataset comes with metric depth from the lidar sensors, which we use to provide ground-truth depth for supervision. In Section 5.2, we show that depth supervision, together with other regularizations, is beneficial for training DVS models. Please see the Appendix for details on our multi-camera capture setup, data processing, and more visualizations. 5 Reality check: re-evaluating the state of the art In this section, we conduct a series of empirical studies to disentangle the recent progress in dynamic view synthesis (DVS) given a monocular video from effective multi-view in the training data. We evaluate current state-of-the-art methods when the effective multi-view factor (EMF) is low. Existing approaches and baselines. We consider the following state-of-the-art approaches for our empirical studies: NSFF [4], Nerfies [5] and HyperNeRF [7]. We choose them as canonical examples for other approaches [1, 2, 3, 6, 8, 34, 35], discussed in Section 2. We also evaluate time-conditioned NeRF (T-NeRF) as a common baseline [1, 3, 4]. Unlike the state-of-the-art methods, it is not possible to extract correspondences from a T-NeRF. A summary of these methods can be found in the Appendix. Datasets. We evaluate on the existing datasets as well as the proposed dataset. For existing datasets, we use the multi-camera captures accompanying Nerfies [5] and HyperNeRF [7] for evalulation. 
Due to their similar capture protocol, we consider them as a single dataset in our experiments (denoted as the Nerfies-HyperNeRF dataset). It consists of 7 sequences in total, which we augment with keypoint annotations. Our dataset has 7 multi-camera captures and 7 single-camera captures. We evaluate novel-view synthesis on the multi-camera captures and correspondence on all captures. Our data adopts the data format from the Nerfies-HyperNeRF dataset, with additional support for depth and correspondence labels. All videos are at 480p resolution and all dynamic scenes are inward-facing.

Masked image and correspondence metrics. Following Section 4.2, we evaluate co-visibility masked image metrics and the correspondence metric. We report masked image metrics: mPSNR, mSSIM [9, 44], and mLPIPS [4, 10, 42, 43]. We visualize the rendering results with the co-visibility mask. For the correspondence metric, we report the percentage of correctly transferred keypoints (PCK-T) [11] with threshold ratio α = 0.05. Additional visualizations of full image rendering and inferred correspondences can be found in the Appendix.

Implementation details. We consolidate Nerfies [5] and HyperNeRF [7] in one codebase using JAX [46]. Compared to the original official code releases, our implementation aligns all training and evaluation details between models and allows correspondence readout. Our implementation reproduces the quantitative results in the original papers. We implement T-NeRF in the same codebase. For NSFF [4], we tried both the official code base [47] and a public third-party re-implementation [48], where the former fails to converge on our proposed iPhone dataset while the latter works well. We thus report results using the third-party re-implementation. However, note that both the original and the third-party implementation represent the dynamic scene in normalized device coordinates (NDC). As NDC is designed for forward-facing rather than inward-facing scenes, layered artifacts may appear due to its log-scale sampling rate in world space, as shown in Figure 9. More details about aligning the training procedure and the remaining differences are provided in the Appendix. Code, pretrained models, and data are available on the project page.

5.1 Reality check on the Nerfies-HyperNeRF dataset

Impact of effective multi-view. We first study the impact of effective multi-view on the Nerfies-HyperNeRF [5, 7] dataset. In this experiment, we rectify the effective multi-view sequences by only using the left camera during training, as opposed to both the left and right cameras, as illustrated in Figure 4. We denote the original setting as "teleporting" and the rectified sequences as "non-teleporting". We train all approaches under these two settings with the same held-out validation frames and the same set of co-visibility masks computed from common training frames. In Figure 8 (Top), all methods perform better across all metrics when trained under the teleporting setting compared to the non-teleporting one, with the exception of PCK-T for NSFF. We conjecture that this is because NSFF has additional optical flow supervision, which is more accurate without camera teleportation. In Figure 8 (Bottom), we show qualitative results using Nerfies (we include visualizations of the other methods in the Appendix). Without effective multi-view, Nerfies fails to model a physically plausible shape for the broom and wires.
Our results show that the effective multi-view in the existing experimental protocol inflates the synthesis quality of prior methods, and that truly monocular captures are more challenging.

Benchmark results without camera teleportation. In Table 2 and Figure 9, we report the quantitative and qualitative results under the non-teleporting setting. Note that our implementation of the T-NeRF baseline performs the best among all four evaluated models in terms of mPSNR and mSSIM. Figure 9 confirms this result: T-NeRF renders high-quality novel views for both sequences. HyperNeRF produces the most photorealistic renderings, as measured by mLPIPS. However, it also produces distorted artifacts that do not align well with the ground truth (e.g., the incorrect shape in the CHICKEN sequence).

5.2 Reality check on the proposed iPhone dataset

Ablation study on improving the state of the art. We find that existing methods perform poorly out-of-the-box on the proposed iPhone dataset with more diverse and complex real-life motions. In Figure 10 (Bottom), we demonstrate this finding with HyperNeRF [7], since it achieves the best mLPIPS on the Nerfies-HyperNeRF dataset. As shown in the 3rd column, HyperNeRF produces visually implausible results with ghosting effects. We thus explored incorporating additional regularizations from recent advances in neural rendering. Concretely, we consider the following: (+B) random background compositing [32]; (+D) a depth loss on the ray matching distance [1, 4]; and (+S) a sparsity regularization for the scene surface [49]. In Figure 10 (Top), we show quantitative results from the ablation. In Figure 10 (Bottom), we show visualizations of the impact of each regularization. Adding additional regularizations consistently boosts model performance. While we find the random background compositing regularization particularly helpful, extra depth supervision and surface regularization further improve the quality, e.g., the fan region of the paper windmill.

Benchmarked results. In Figure 11, we show qualitative results from our benchmark using the best model settings from the ablation study, denoted as "++". Note that it is non-trivial to apply the same enhancements to NSFF due to its NDC formulation, so we keep it as-is. We visualize the lidar depth re-projection from the training view (1st column) to the test view (2nd column) as a reference for qualitative comparison (3rd column). Note that the white region is occluded from the input view, whereas the green region is occluded in most of the input video frames. We observe that existing approaches do not handle complex deformation well. For example, all models fail to fuse a valid human shape on the SPACE OUT sequence. In Table 3, we find a similar trend as on the Nerfies-HyperNeRF dataset: the baseline T-NeRF performs the best in terms of mPSNR and mSSIM, while HyperNeRF produces the most photorealistic renderings in terms of mLPIPS. The overall synthesis quality and correspondence accuracy of all methods drop considerably compared to the results on the Nerfies-HyperNeRF dataset. Taking Nerfies as an example, it drops 4.4 dB in mPSNR, 69.6% in mLPIPS, and 40.1% in PCK-T. Our study suggests substantial room for improvement in modeling complex motion.

6 Discussion and recommendation for future works

In this work, we expose issues in the common practice and establish systematic means to calibrate performance metrics of existing and future works, in the spirit of papers like [50, 51, 52].
We provide initial attempts toward characterizing the difficulty of a monocular video for dynamic view synthesis (DVS) in terms of effective multi-view factors (EMFs). In practice, there are other challenging factors such as variable appearance, lighting conditions, motion complexity, and more. We leave their characterization to future work. We recommend that future works visualize the input sequences and report EMFs when demonstrating their results. We also recommend that future works evaluate correspondence accuracy and strive to establish better correspondences for DVS.

Acknowledgements. We would like to thank Zhengqi Li and Keunhong Park for valuable feedback and discussions; Matthew Tancik and Ethan Weber for proofreading. We are also grateful to our pets: Sriracha, Haru, and Mochi, for being good during capture. This project is generously supported in part by the CONIX Research Center, sponsored by DARPA, as well as the BDD and BAIR sponsors.
1. What is the focus and contribution of the paper regarding non-rigid novel view synthesis? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its reliance on a single-camera setup and the idea of Masked-PSNR? 3. Do you have any concerns or suggestions regarding the significance and impact of the paper's contribution? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations or potential drawbacks associated with the proposed solution?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes a dataset of monocular videos to evaluate non-rigid novel view synthesis methods. A factor for measuring the multi-view effect is proposed, and two evaluation metrics are proposed. Strengths And Weaknesses Strengths: The paper is well-organized, and it is easy to read. The datasets with evaluation metrics are proposed. Several methods of non-rigid novel view synthesis are evaluated on the proposed datasets. Weakness: Using a multi-camera setup is more reliable than single-camera setup for performance analysis. After all, the goal is to evaluate methods instead of training models, I don’t think that it is the drawback of existing methods. We do not always need to capture new data for non-rigid novel view synthesis evaluation, so it is difficult for me to understand the strengths of the proposed solution. The idea of Masked-PSNR is not surprising. First, if our goal is to evaluate the regions that are observed, it would be straightforward to mask other regions out. Second, we may also hope that the methods can predict and synthesize the regions that are not observed in training data. In this scenario, the mask should not be used. Overall, I don’t think that the masking is novel. Although Nerfies trained models on images sampled from two cameras, the method is able to train models using a single-camera setup. The factor of the multi-view effect is just a simple trick, and there exist many other variants if we want. It is correct, but I don’t think that it is sufficiently significant in this problem. Post rebuttal, After discussion, I agree with Reviewer vDbH, and I would strongly suggest the authors use the comments by vDbH in the final submission to position the work. Questions If the authors propose a new method, it is good to evaluate it in the proposed datasets and evaluation metrics. However, from my perspective, the contribution is not significant enough for presentation in NeurIPS. Limitations yes
NIPS
Title Structured Bayesian Pruning via Log-Normal Multiplicative Noise Abstract Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves generalization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In the paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g. removes neurons and/or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is computed in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer. 1 Introduction Deep neural networks are a flexible family of models which provides state-of-the-art results in many machine learning problems [14, 20]. However, this flexibility often results in overfitting. A common solution for this problem is regularization. One of the most popular ways of regularization is Binary Dropout [19] that prevents co-adaptation of neurons by randomly dropping them during training. An equally effective alternative is Gaussian Dropout [19] that multiplies the outputs of the neurons by Gaussian random noise. In recent years several Bayesian generalizations of these techniques have been developed, e.g. Variational Dropout [8] and Variational Spike-and-Slab Neural Networks [13]. These techniques provide theoretical justification of different kinds of Dropout and also allow for automatic tuning of dropout rates, which is an important practical result. Besides overfitting, compression and acceleration of neural networks are other important challenges, especially when memory or computational resources are restricted. Further studies of Variational Dropout show that individual dropout rates for each weight allow to shrink the original network architecture and result in a highly sparse model [16]. General sparsity provides a way of neural network compression, while the time of network evaluation may remain the same, as most modern DNN-oriented software can’t work with sparse matrices efficiently. At the same time, it is possible to achieve acceleration by enforcing structured sparsity in convolutional filters or data tensors. In the simplest case it means removing redundant neurons or convolutional filters instead of separate weights; but more complex patterns can also be considered. This way Group-wise Brain Damage 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. [10] employs group-wise sparsity in convolutional filters, Perforated CNNs [3] drop redundant rows from the intermediate dataframe matrices that are used to compute convolutions, and Structured Sparsity Learning [24] provides a way to remove entire convolutional filters or even layers in residual networks. 
These methods allow to obtain practical acceleration with little to no modifications of the existing software. In this paper, we propose a tool that is able to induce an arbitrary pattern of structured sparsity on neural network parameters or intermediate data tensors. We propose a dropout-like layer with a parametric multiplicative noise and use stochastic variational inference to tune its parameters in a Bayesian way. We introduce a proper analog of sparsity-inducing log-uniform prior distribution [8, 16] that allows us to formulate a correct probabilistic model and avoid the problems that come from using an improper prior. This way we obtain a novel Bayesian method of regularization of neural networks that results in structured sparsity. Our model can be represented as a separate dropout-like layer that allows for a simple and flexible implementation with almost no computational overhead, and can be incorporated into existing neural networks. Our experiments show that our model leads to high group sparsity level and significant acceleration of convolutional neural networks with negligible accuracy drop. We demonstrate the performance of our method on LeNet and VGG-like architectures using MNIST and CIFAR-10 datasets. 2 Related Work Deep neural networks are extremely prone to overfitting, and extensive regularization is crucial. The most popular regularization methods are based on injection of multiplicative noise over layer inputs, parameters or activations [8, 19, 22]. Different kinds of multiplicative noise have been used in practice; the most popular choices are Bernoulli and Gaussian distributions. Another type of regularization of deep neural networks is based on reducing the number of parameters. One approach is to use low-rank approximations, e.g. tensor decompositions [4, 17], and the other approach is to induce sparsity, e.g. by pruning [5] or L1 regularization [24]. Sparsity can also be induced by using the Sparse Bayesian Learning framework with empirical Bayes [21] or with sparsity-inducing priors [12, 15, 16]. High sparsity is one of the key factors for the compression of DNNs [5, 21]. However, in addition to compression it is beneficial to obtain acceleration. Recent papers propose different approaches to acceleration of DNNs, e.g. Spatial Skipped Convolutions [3] and Spatially Adaptive Computation Time [2] that propose different ways to reduce the number of computed convolutions, Binary Networks [18] that achieve speedup by using only 1 bit to store a single weight of a DNN, Low-Rank Expansions [6] that use low-rank filter approximations, and Structured Sparsity Learning [24] that allows to remove separate neurons or filters. As reported in [24] it is possible to obtain acceleration of DNNs by introducing structured sparsity, e.g. by removing whole neurons, filters or layers. However, non-adaptive regularization techniques require tuning of a huge number of hyperparameters that makes it difficult to apply in practice. In this paper we apply the Bayesian learning framework to obtain structured sparsity and focus on acceleration of neural networks. 3 Stochastic Variational Inference Given a probabilistic model p(y |x, ✓) we want to tune parameters ✓ of the model using training dataset D = {(xi, yi)}Ni=1. The prior knowledge about parameters ✓ is defined by prior distribution p(✓). Using the Bayes rule we obtain the posterior distribution p(✓ | D) = p(D | ✓)p(✓)/p(D). 
However, computing the posterior distribution using the Bayes rule usually involves computation of intractable integrals, so we need to use approximation techniques. One of the most widely used approximation techniques is Variational Inference. In this approach the unknown distribution $p(\theta\,|\,\mathcal{D})$ is approximated by a parametric distribution $q_\phi(\theta)$ by minimization of the Kullback-Leibler divergence $\mathrm{KL}(q_\phi(\theta)\,\|\,p(\theta\,|\,\mathcal{D}))$. Minimization of the KL divergence is equivalent to maximization of the variational lower bound $\mathcal{L}(\phi)$:

$$\mathcal{L}(\phi) = \mathcal{L}_{\mathcal{D}}(\phi) - \mathrm{KL}(q_\phi(\theta)\,\|\,p(\theta)), \quad (1)$$

where

$$\mathcal{L}_{\mathcal{D}}(\phi) = \sum_{i=1}^{N} \mathbb{E}_{q_\phi(\theta)} \log p(y_i\,|\,x_i, \theta). \quad (2)$$

$\mathcal{L}_{\mathcal{D}}(\phi)$ is a so-called expected log-likelihood function, which is intractable in the case of a complex probabilistic model $p(y\,|\,x, \theta)$. Following [8] we use the Reparametrization trick to obtain an unbiased differentiable minibatch-based Monte Carlo estimator of the expected log-likelihood. Here $N$ is the total number of objects, $M$ is the minibatch size, and $f(\phi, \varepsilon)$ provides samples from the approximate posterior $q_\phi(\theta)$ as a deterministic function of a non-parametric noise $\varepsilon \sim p(\varepsilon)$:

$$\mathcal{L}_{\mathcal{D}}(\phi) \simeq \mathcal{L}_{\mathcal{D}}^{SGVB}(\phi) = \frac{N}{M} \sum_{k=1}^{M} \log p\big(y_{i_k}\,|\,x_{i_k}, w_{i_k} = f(\phi, \varepsilon_{i_k})\big), \quad (3)$$

$$\mathcal{L}(\phi) \simeq \mathcal{L}^{SGVB}(\phi) = \mathcal{L}_{\mathcal{D}}^{SGVB}(\phi) - \mathrm{KL}(q_\phi(w)\,\|\,p(w)), \quad (4)$$

$$\nabla_\phi \mathcal{L}_{\mathcal{D}}(\phi) \simeq \nabla_\phi \mathcal{L}_{\mathcal{D}}^{SGVB}(\phi). \quad (5)$$

This way we obtain a procedure of approximate Bayesian inference where we solve optimization problem (4) by stochastic gradient ascent w.r.t. the variational parameters $\phi$. This procedure can be efficiently applied to Deep Neural Networks, and usually the computational overhead is very small as compared to ordinary DNNs. If the model $p(y\,|\,x, \theta, w)$ has another set of parameters $w$ that we do not want to be Bayesian about, we can still use the same variational lower bound objective:

$$\mathcal{L}(\phi, w) = \mathcal{L}_{\mathcal{D}}(\phi, w) - \mathrm{KL}(q_\phi(\theta)\,\|\,p(\theta)) \to \max_{\phi, w}, \quad (6)$$

where

$$\mathcal{L}_{\mathcal{D}}(\phi, w) = \sum_{i=1}^{N} \mathbb{E}_{q_\phi(\theta)} \log p(y_i\,|\,x_i, \theta, w). \quad (7)$$

This objective corresponds to the maximum likelihood estimation $w_{ML}$ of the parameters $w$, while finding the approximate posterior distribution $q_\phi(\theta) \approx p(\theta\,|\,\mathcal{D}, w_{ML})$. In this paper we denote the weights of the neural networks, the biases, etc. as $w$ and find their maximum likelihood estimate as described above. The parameters $\theta$ that undergo the Bayesian treatment are the noisy masks in the proposed dropout-like layer (SBP layer). They are described in the following section.

4 Group Sparsity with Log-normal Multiplicative Noise

Variational Inference with a sparsity-inducing log-uniform prior over the weights of a neural network is an efficient way to enforce general sparsity on weight matrices [16]. However, it is difficult to apply this approach to explicitly enforce structured sparsity. We introduce a dropout-like layer with a certain kind of multiplicative noise. We also make use of the sparsity-inducing log-uniform prior, but put it over the noise variables rather than the weights. By sharing those noise variables we can enforce group-wise sparsity with any form of groups.

4.1 Variational Inference for Group Sparsity Model

We consider a single dropout-like layer with an input vector $x \in \mathbb{R}^I$ that represents one object with $I$ features, and an output vector $y \in \mathbb{R}^I$ of the same size. The input vector $x$ is usually supposed to come from the activations of the preceding layer. The output vector $y$ would then serve as an input vector for the following layer. We follow the general way to build dropout-like layers (8). Each input feature $x_i$ is multiplied by a noise variable $\theta_i$ that comes from some distribution $p_{\text{noise}}(\theta)$.
For example, for Binary Dropout $p_{\text{noise}}(\theta)$ would be a fully factorized Bernoulli distribution with $p_{\text{noise}}(\theta_i) = \mathrm{Bernoulli}(p)$, and for Gaussian dropout it would be a fully-factorized Gaussian distribution with $p_{\text{noise}}(\theta_i) = \mathcal{N}(1, \alpha)$:

$$y_i = x_i \cdot \theta_i, \quad \theta \sim p_{\text{noise}}(\theta). \quad (8)$$

Note that if we have a minibatch $X^{M \times I}$ of $M$ objects, we would independently sample a separate noise vector $\theta^m$ for each object $x^m$. This would be the case throughout the paper, but for the sake of simplicity we would consider a single object $x$ in all following formulas. Also note that the noise $\theta$ is usually only sampled during the training phase. A common approximation during the testing phase is to use the expected value $\mathbb{E}\theta$ instead of sampling $\theta$. All implementation details are provided and discussed in Section 4.5.

We follow a Bayesian treatment of the variable $\theta$, as described in Section 3. In order to obtain a sparse solution, we choose the prior distribution $p(\theta)$ to be a fully-factorized improper log-uniform distribution. We denote this distribution as $\mathrm{LogU}_\infty(\cdot)$ to stress that it has infinite domain. This distribution is known for its sparsification properties and works well in practice for deep neural networks [16]:

$$p(\theta) = \prod_{i=1}^{I} p(\theta_i), \quad p(\theta_i) = \mathrm{LogU}_\infty(\theta_i) \propto \frac{1}{\theta_i}, \quad \theta_i > 0. \quad (9)$$

In order to train the model, i.e. perform variational inference, we need to choose an approximation family $q_\phi$ for the posterior distribution $p(\theta\,|\,\mathcal{D}) \approx q_\phi(\theta)$:

$$q_\phi(\theta) = \prod_{i=1}^{I} q(\theta_i\,|\,\mu_i, \sigma_i) = \prod_{i=1}^{I} \mathrm{LogN}(\theta_i\,|\,\mu_i, \sigma_i^2), \quad (10)$$

$$\theta_i \sim \mathrm{LogN}(\theta_i\,|\,\mu_i, \sigma_i^2) \iff \log\theta_i \sim \mathcal{N}(\log\theta_i\,|\,\mu_i, \sigma_i^2). \quad (11)$$

A common choice of variational distribution $q(\cdot)$ is a fully-factorized Gaussian distribution. However, for this particular model we choose $q(\theta)$ to be a fully-factorized log-normal distribution (10–11). To make this choice, we were guided by the following reasons:

• The log-uniform distribution is a specific case of the log-normal distribution when the parameter $\sigma$ goes to infinity and $\mu$ remains fixed. Thus we can guarantee that in the case of no data our variational approximation can be made exact. Hence this variational family has no "prior gap".

• We consider a model with multiplicative noise. The scale of this noise corresponds to its shift in the logarithmic space. By establishing the log-uniform prior we set no preferences on different scales of this multiplicative noise. The usual use of a Gaussian as a posterior immediately implies a very asymmetric, skewed distribution in the logarithmic space. Moreover, the log-uniform and Gaussian distributions have different supports, and that would require establishing two log-uniform distributions for positive and negative noises. In this case the Gaussian variational approximation would have a quite exotic bi-modal form (one mode in the log-space of positive noises and another one in the log-space of negative noises). On the other hand, the log-normal posterior for the multiplicative noise corresponds to a Gaussian posterior for the additive noise in the logarithmic scale, which is much easier to interpret.

• Log-normal noise is always non-negative during both the training and testing phases, therefore it does not change the sign of its input. This is in contrast to Gaussian multiplicative noise $\mathcal{N}(\theta_i\,|\,1, \alpha)$ that is a standard choice for Gaussian dropout and its modifications [8, 19, 23]. During the training phase Gaussian noise can take negative values, so the input to the following layer can be of arbitrary sign. However, during the testing phase the noise $\theta$ is equal to 1, so the input to the following layer is non-negative with many popular non-linearities (e.g.
ReLU, sigmoid, softplus). Although Gaussian dropout works well in practice, it is difficult to justify notoriously different input distributions during the training and testing phases.

• The log-normal approximate posterior is tractable. Specifically, the KL divergence term $\mathrm{KL}(\mathrm{LogN}(\theta\,|\,\mu, \sigma^2)\,\|\,\mathrm{LogU}_\infty(\theta))$ can be computed analytically.

The final loss function is presented in equation (12) and is essentially the original variational lower bound (4):

$$\mathcal{L}^{SGVB}(\phi) = \mathcal{L}_{\mathcal{D}}^{SGVB}(\mu, \sigma, W) - \mathrm{KL}(q(\theta\,|\,\mu, \sigma)\,\|\,p(\theta)) \to \max_{\mu, \sigma, W}, \quad (12)$$

where $\mu$ and $\sigma$ are the variational parameters, and $W$ denotes all other trainable parameters of the neural network, e.g. the weight matrices, the biases, batch normalization parameters, etc. Note that we can optimize the variational lower bound w.r.t. the parameters $\mu$ and $\sigma$ of the log-normal noise $\theta$. We do not fix the mean of the noise, thus making our variational approximation tighter.

4.2 Problems of Variational Inference with Improper Log-Uniform Prior

The log-normal posterior in combination with a log-uniform prior has a number of attractive features. However, the maximization of the variational lower bound with a log-uniform prior and a log-normal posterior is an ill-posed optimization problem. As the log-uniform distribution is an improper prior, the KL-divergence between a log-normal distribution $\mathrm{LogN}(\mu, \sigma^2)$ and a log-uniform distribution $\mathrm{LogU}_\infty$ is infinite for any finite value of the parameters $\mu$ and $\sigma$:

$$\mathrm{KL}\big(\mathrm{LogN}(x\,|\,\mu, \sigma^2)\,\|\,\mathrm{LogU}_\infty(x)\big) = C - \log\sigma, \quad C = +\infty. \quad (13)$$

A common way to tackle this problem is to consider the density of the log-uniform distribution to be equal to $\frac{C}{\theta}$ and to treat $C$ as some finite constant. This trick works well for the case of a Gaussian posterior distribution [8, 16]. The KL divergence between a Gaussian posterior and a log-uniform prior has an infinite gap, but can be calculated up to this infinite constant in a meaningful way [16]. However, for the case of the log-normal posterior the KL divergence is infinite for any finite values of the variational parameters, and is equal to zero for a fixed finite $\mu$ and infinite $\sigma$. As the data-term (3) is bounded for any value of the variational parameters, the only global optimum of the variational lower bound is achieved when $\mu$ is finite and fixed, and $\sigma$ goes to infinity. In this case the posterior distribution collapses into the prior distribution and the model fails to extract any information about the data. This effect is wholly caused by the fact that the log-uniform prior is an improper (non-normalizable) distribution, which makes the whole probabilistic model flawed.

4.3 Variational Inference with Truncated Approximation Family

Due to the improper prior the optimization problem becomes ill-posed. But do we really need to use an improper prior distribution? The most common number format that is used to represent the parameters of a neural network is the floating-point format. The floating-point format is only able to represent numbers from a limited range. For example, a single-precision variable can only represent numbers from the range $-3.4 \times 10^{38}$ to $+3.4 \times 10^{38}$, and the smallest possible positive number is equal to $1.2 \times 10^{-38}$. All of the probability mass of the improper log-uniform prior is concentrated beyond single precision (and essentially any practical floating-point precision), not to mention that the actual relevant range of values of neural network parameters is much smaller. It means that in practice this prior is not a good choice for software implementations of neural networks.
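As a quick sanity check of the single-precision range quoted above, the limits can be read directly from NumPy; this is only an illustration of standard IEEE 754 constants, not part of the proposed method.

```python
import numpy as np

info = np.finfo(np.float32)
print(info.max)   # largest representable single-precision value, about 3.4e38
print(info.tiny)  # smallest positive normalized single-precision value, about 1.2e-38
```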
We propose to use a truncated log-uniform distribution (14) as a proper analog of the log-uniform distribution. Here $I_{[a,b]}(x)$ denotes the indicator function for the interval $x \in [a, b]$. The posterior distribution should be defined on the same support as the prior distribution, so we also need to use a truncated log-normal distribution (14):

$$\mathrm{LogU}_{[a,b]}(\theta_i) \propto \mathrm{LogU}_\infty(\theta_i) \cdot I_{[a,b]}(\log\theta_i), \qquad \mathrm{LogN}_{[a,b]}(\theta_i) \propto \mathrm{LogN}(\theta_i\,|\,\mu_i, \sigma_i^2) \cdot I_{[a,b]}(\log\theta_i). \quad (14)$$

Our final model then can be formulated as follows:

$$y_i = x_i \cdot \theta_i, \qquad p(\theta_i) = \mathrm{LogU}_{[a,b]}(\theta_i), \qquad q(\theta_i\,|\,\mu_i, \sigma_i) = \mathrm{LogN}_{[a,b]}(\theta_i\,|\,\mu_i, \sigma_i^2). \quad (15)$$

Note that all the nice facts about the log-normal posterior distribution from Section 4.1 are also true for the truncated log-normal posterior. However, now we have a proper probabilistic model and the Stochastic Variational Inference can be performed correctly. Unlike (13), now the KL divergence term (16–17) can be calculated correctly for all valid values of the variational parameters (see Appendix A for details):

$$\mathrm{KL}(q(\theta\,|\,\mu, \sigma)\,\|\,p(\theta)) = \sum_{i=1}^{I} \mathrm{KL}(q(\theta_i\,|\,\mu_i, \sigma_i)\,\|\,p(\theta_i)), \quad (16)$$

$$\mathrm{KL}(q(\theta_i\,|\,\mu_i, \sigma_i)\,\|\,p(\theta_i)) = \log\frac{b-a}{\sqrt{2\pi e}\,\sigma_i} - \log\big(\Phi(\beta_i) - \Phi(\alpha_i)\big) - \frac{\alpha_i\phi(\alpha_i) - \beta_i\phi(\beta_i)}{2\big(\Phi(\beta_i) - \Phi(\alpha_i)\big)}, \quad (17)$$

where $\alpha_i = \frac{a - \mu_i}{\sigma_i}$, $\beta_i = \frac{b - \mu_i}{\sigma_i}$, and $\phi(\cdot)$ and $\Phi(\cdot)$ are the density and the CDF of the standard normal distribution. The reparameterization trick can also still be performed (18) using the inverse CDF of the truncated normal distribution (see Appendix B):

$$\theta_i = \exp\Big(\mu_i + \sigma_i\,\Phi^{-1}\big(\Phi(\alpha_i) + (\Phi(\beta_i) - \Phi(\alpha_i))\,y_i\big)\Big), \quad \text{where } y_i \sim U(y\,|\,0, 1). \quad (18)$$

The final loss and the set of parameters are the same as described in Section 4.1, and the training procedure remains the same.

4.4 Sparsity

The log-uniform prior is known to lead to a sparse solution [16]. In the variational dropout paper the authors interpret the parameter $\alpha$ of the multiplicative noise $\mathcal{N}(1, \alpha)$ as a Gaussian dropout rate and use it as a thresholding criterion for weight pruning. Unlike binary or Gaussian dropout, in the truncated log-normal model there is no "dropout rate" variable. However, we can use the signal-to-noise ratio $\mathbb{E}\theta / \sqrt{\mathrm{Var}(\theta)}$ (SNR) for thresholding:

$$\mathrm{SNR}(\theta_i) = \frac{\Phi(\sigma_i - \alpha_i) - \Phi(\sigma_i - \beta_i)}{\sqrt{\big(\Phi(\beta_i) - \Phi(\alpha_i)\big)\, e^{\sigma_i^2}\big(\Phi(2\sigma_i - \alpha_i) - \Phi(2\sigma_i - \beta_i)\big) - \big(\Phi(\sigma_i - \alpha_i) - \Phi(\sigma_i - \beta_i)\big)^2}}. \quad (19)$$

The SNR can be computed analytically; the derivation can be found in the appendix. It has a simple interpretation. If the SNR is low, the corresponding neuron becomes very noisy and its output no longer contains any useful information. If the SNR is high, it means that the neuron output contains little noise and is important for prediction. Therefore we can remove all neurons or filters with a low SNR and set their output to a constant zero.

4.5 Implementation details

We perform a minibatch-based stochastic variational inference for training. The training procedure looks as follows. On each training step we take a minibatch of $M$ objects and feed it into the neural network. Consider a single SBP layer with input $X^{M \times I}$ and output $Y^{M \times I}$. We independently sample a separate noise vector $\theta^m \sim q(\theta)$ for each object $x^m$ and obtain a noise matrix $\theta^{M \times I}$. The output matrix $Y^{M \times I}$ is then obtained by component-wise multiplication of the input matrix and the noise matrix: $y_{mi} = x_{mi} \cdot \theta_{mi}$. To be fully Bayesian, one would also sample and average over different dropout masks $\theta$ during testing, i.e. perform Bayesian ensembling. Although this procedure can be used to slightly improve the final accuracy, it is usually avoided.
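To make Equations (18)-(19) concrete, below is a minimal NumPy/SciPy sketch of reparameterized sampling from the truncated log-normal posterior and of the SNR used for pruning. It is an illustrative sketch rather than the released implementation: the SNR is computed here from the first two truncated moments (which should agree with the closed form in Eq. (19)), the truncation thresholds follow the a = -20, b = 0 setting used in the experiments, and all other values are placeholders.

```python
import numpy as np
from scipy.stats import norm

def sample_truncated_lognormal(mu, sigma, a, b, size, rng):
    """Reparameterized sample from LogN_[a,b](mu, sigma^2), cf. Eq. (18).

    log(theta) is a normal N(mu, sigma^2) truncated to [a, b]; we sample it via the
    inverse CDF of the truncated normal and then exponentiate.
    """
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    u = rng.uniform(size=size)                       # non-parametric noise y ~ U(0, 1)
    z = norm.ppf(norm.cdf(alpha) + (norm.cdf(beta) - norm.cdf(alpha)) * u)
    return np.exp(mu + sigma * z)

def truncated_lognormal_snr(mu, sigma, a, b):
    """SNR = E[theta] / sqrt(Var[theta]) for LogN_[a,b](mu, sigma^2), cf. Eq. (19)."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    z_mass = norm.cdf(beta) - norm.cdf(alpha)
    # First and second moments of the truncated log-normal (standard identities).
    m1 = np.exp(mu + 0.5 * sigma ** 2) * (norm.cdf(beta - sigma) - norm.cdf(alpha - sigma)) / z_mass
    m2 = np.exp(2 * mu + 2 * sigma ** 2) * (norm.cdf(beta - 2 * sigma) - norm.cdf(alpha - 2 * sigma)) / z_mass
    return m1 / np.sqrt(m2 - m1 ** 2)

# Placeholder variational parameters for 4 neurons, truncated to [a, b] = [-20, 0].
rng = np.random.default_rng(0)
mu = np.array([-0.1, -1.0, -5.0, -10.0])
sigma = np.array([0.3, 1.0, 3.0, 5.0])
snr = truncated_lognormal_snr(mu, sigma, a=-20.0, b=0.0)
keep = snr >= 1.0                                    # neurons with SNR < 1 get pruned
theta = sample_truncated_lognormal(mu, sigma, -20.0, 0.0, size=(3, 4), rng=rng)
print(snr, keep, theta.shape)
```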
Bayesian ensembling essentially requires sampling different copies of the neural network, which makes the evaluation $K$ times slower for averaging over $K$ samples. Instead, during the testing phase in most dropout-based techniques the noise variable $\theta$ is replaced with its expected value. In this paper we follow the same approach and replace all non-pruned $\theta_i$ with their expectations (20) during testing. The derivation of the expectation of the truncated log-normal distribution is presented in Appendix C:

$$\mathbb{E}\theta_i = \frac{\exp(\mu_i + \sigma_i^2/2)}{\Phi(\beta_i) - \Phi(\alpha_i)}\left[\Phi\!\left(\frac{\sigma_i^2 + \mu_i - a}{\sigma_i}\right) - \Phi\!\left(\frac{\sigma_i^2 + \mu_i - b}{\sigma_i}\right)\right]. \quad (20)$$

We tried to use Bayesian ensembling with this model, and experienced almost no gain in accuracy. It means that the variance of the learned approximate posterior distribution is low and does not provide a rich ensemble.

Throughout the paper we introduced the SBP dropout layer for the case when input objects are represented as one-dimensional vectors $x$. When defined like that, it would induce general sparsity on the input vector $x$. It works as intended for fully-connected layers, as a single input feature corresponds to a single output neuron of a preceding fully-connected layer and a single output neuron of the following layer. However, it is possible to apply the SBP layer in a more generic setting. Firstly, if the input object is represented as a multidimensional tensor $X$ with shape $I_1 \times I_2 \times \dots \times I_d$, the noise vector $\theta$ of length $I = I_1 \times I_2 \times \dots \times I_d$ can be reshaped into a tensor with the same shape. Then the output tensor $Y$ can be obtained as a component-wise product of the input tensor $X$ and the noise tensor $\theta$. Secondly, the SBP layer can induce any form of structured sparsity on this input tensor $X$. To do it, one would simply need to use a single random variable $\theta_i$ for the group of input features that should be removed simultaneously. For example, consider an input tensor $X^{H \times W \times C}$ that comes from a convolutional layer, $H$ and $W$ being the size of the image, and $C$ being the number of channels. Then, in order to remove redundant filters from the preceding layer (and at the same time redundant channels from the following layer), one needs to share the random variables $\theta$ in the following way:

$$y_{hwc} = x_{hwc} \cdot \theta_c, \qquad \theta_c \sim \mathrm{LogN}_{[a,b]}(\theta_c\,|\,\mu_c, \sigma_c^2). \quad (21)$$

Note that now there is one sample $\theta \in \mathbb{R}^C$ for one object $X^{H \times W \times C}$ on each training step. If the signal-to-noise ratio becomes lower than 1 for a component $\theta_c$, that would mean that we can permanently remove the $c$-th channel of the input tensor, and therefore delete the $c$-th filter from the preceding layer and the $c$-th channel from the following layer. All the experiments with convolutional architectures used this formulation of SBP. This is a general approach that is not limited to reducing the shape of the input tensor. It is possible to obtain any fixed pattern of group-wise sparsity using this technique. Similarly, the SBP layer can be applied in a DropConnect fashion. One would just need to multiply the weight tensor $W$ by a noise tensor $\theta$ of similar shape. The training procedure remains the same. It is still possible to enforce any structured sparsity pattern for the weight tensor $W$ by sharing the random variables as described above.

5 Experiments

We perform an evaluation on different supervised classification tasks and with different architectures of neural networks, including deep VGG-like architectures with batch normalization layers. For each architecture, we report the number of retained neurons and filters, and the obtained acceleration.
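The channel-wise noise sharing of Eq. (21) and the test-time replacement of Eqs. (19)-(20) can be sketched in a few lines of NumPy/SciPy. This is a simplified, framework-agnostic illustration, not the actual SBP layer: a real layer would also learn mu and sigma through the SGVB objective, and the parameter values below are placeholders.

```python
import numpy as np
from scipy.stats import norm

def trunc_lognormal_mean_snr(mu, sigma, a=-20.0, b=0.0):
    """Mean and SNR of LogN_[a,b](mu, sigma^2), computed from its first two moments."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    z = norm.cdf(beta) - norm.cdf(alpha)
    m1 = np.exp(mu + 0.5 * sigma ** 2) * (norm.cdf(beta - sigma) - norm.cdf(alpha - sigma)) / z
    m2 = np.exp(2 * mu + 2 * sigma ** 2) * (norm.cdf(beta - 2 * sigma) - norm.cdf(alpha - 2 * sigma)) / z
    return m1, m1 / np.sqrt(m2 - m1 ** 2)

def sbp_channel_forward(x, mu, sigma, a=-20.0, b=0.0, training=True, rng=None):
    """SBP layer with one noise variable per channel; x has shape (M, H, W, C).

    Training: sample theta_c per object and broadcast it over the spatial dimensions.
    Testing:  multiply by E[theta_c], zeroing channels whose SNR is below 1 (pruned).
    """
    M, H, W, C = x.shape
    if training:
        alpha, beta = (a - mu) / sigma, (b - mu) / sigma
        u = rng.uniform(size=(M, C))                      # one sample per object and channel
        z = norm.ppf(norm.cdf(alpha) + (norm.cdf(beta) - norm.cdf(alpha)) * u)
        theta = np.exp(mu + sigma * z)                    # reparameterized, cf. Eq. (18)
    else:
        mean, snr = trunc_lognormal_mean_snr(mu, sigma, a, b)
        theta = np.where(snr >= 1.0, mean, 0.0)[None, :]  # prune low-SNR channels
    return x * theta[:, None, None, :]                    # broadcast over H and W

# Toy usage: a batch of 2 feature maps of size 8x8 with 3 channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 8, 3))
mu, sigma = np.array([-0.2, -3.0, -8.0]), np.array([0.5, 2.0, 4.0])
print(sbp_channel_forward(x, mu, sigma, training=True, rng=rng).shape)
print(sbp_channel_forward(x, mu, sigma, training=False).shape)
```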
Our experiments show that Structured Bayesian Pruning leads to a high level of structured sparsity in convolutional filters and neurons of DNNs without significant accuracy drop. We also demonstrate that optimization w.r.t. the full set of variational parameters (µ, ) leads to improving model quality and allows us to perform sparsification in a more efficient way, as compared to tuning of only one free parameter that corresponds to the noise variance. As a nice bonus, we show that Structured Bayesian Pruning network does not overfit on randomly labeled data, that is a common weakness of non-bayesian dropout networks. The source code is available in Theano [7] and Lasagne, and also in TensorFlow [1] (https://github.com/necludov/group-sparsity-sbp). 5.1 Experiment Setup The truncation parameters a and b are the hyperparameters of our model. As our layer is meant for regularization of the model, we would like our layer not to amplify the input signal and restrict the noise ✓ to an interval [0, 1]. This choice corresponds to the right truncation threshold b set to 0. We find empirically that the left truncation parameter a does not influence the final result much. We use values a = 20 and b = 0 in all experiments. We define redundant neurons by the signal-to-noise ratio of the corresponding multiplicative noise ✓. See Section 4.4 for more details. By removing all neurons and filters with the SNR < 1 we experience no accuracy drop in all our experiments. SBP dropout layers were put after each convolutional layer to remove its filters, and before each fully-connected layer to remove its input neurons. As one filter of the last convolutional layer usually corresponds to a group of neurons in the following dense layer, it means that we can remove more input neurons in the first dense layer. Note that it means that we have two consecutive dropout layers between the last convolutional layer and the first fully-connected layer in CNNs, and a dropout layer before the first fully-connected layer in FC networks (see Fig. 2). 5.2 More Flexible Variational Approximation Usually during automatic training of dropout rates the mean of the noise distribution remains fixed. In the case of our model it is possible to train both mean and variance of the multiplicative noise. By using a more flexible distribution we obtain a tighter variational lower bound and a higher sparsity level. In order to demonstrate this effect, we performed an experiment on MNIST dataset with a fully connected neural network that contains two hidden layers with 1000 neurons each. The results are presented in Fig. 1. 5.3 LeNet5 and Fully-Connected Net on MNIST We compare our method with other sparsity inducing methods on the MNIST dataset using a fully connected architecture LeNet-500-300 and a convolutional architecture LeNet-5-Caffe. These networks were trained with Adam without any data augmentation. The LeNet-500-300 network was trained from scratch, and the LeNet-5-Caffe1 network was pretrained with weight decay. An illustration of trained SNR for the image features for the LeNet-500-3002 network is shown in Fig. 2. The final accuracy, group-wise sparsity levels and speedup for these architectures for different methods are shown in Table 1. 5.4 VGG-like on CIFAR-10 To prove that SBP scales to deep architectures, we apply it to a VGG-like network [25] that was adapted for the CIFAR-10 [9] dataset. 
The network consists of 13 convolutional and two fullyconnected layers, trained with pre-activation batch normalization and Binary Dropout. At the start of the training procedure, we use pre-trained weights for initialization. Results with different scaling of the number of units are presented in Table 2. We present results for two architectures with different scaling coefficient k 2 {1.0, 1.5} . For smaller values of scaling coefficient k 2 {0.25, 0.5} we obtain less sparse architecture since these networks have small learning capacities. Besides the results for the standard StructuredBP procedure, we also provide the results for SBP with KL scaling (StructuredBPa). Scaling the KL term of the variational lower bound proportional to the computational complexity of the layer leads to a higher sparsity level for the first layers, providing 1A modified version of LeNet5 from [11]. Caffe Model specification: https://goo.gl/4yI3dL 2Fully Connected Neural Net with 2 hidden layers that contains 500 and 300 neurons respectively. more acceleration. Despite the higher error values, we obtain the higher value of true variational lower bound during KL scaling, hence, we find its another local maximum. 5.5 Random Labels A recent work shows that Deep Neural Networks have so much capacity that they can easily memorize the data even with random labeling [26]. Binary dropout as well as other standard regularization techniques do not prevent the networks from overfitting in this scenario. However, recently it was shown that Bayesian regularization may help [16]. Following these works, we conducted similar experiments. We used a Lenet5 network on the MNIST dataset and a VGG-like network on CIFAR-10. Although Binary Dropout does not prevent these networks from overfitting, SBP decides to remove all neurons of the neural network and provides a constant prediction. In other words, in this case SBP chooses the simplest model that achieves the same testing error rate. This is another confirmation that Bayesian regularization is more powerful than other popular regularization techniques. 6 Conclusion We propose Structured Bayesian Pruning, or SBP, a dropout-like layer that induces multiplicative random noise over the output of the preceding layer. We put a sparsity-inducing prior over the noise variables and tune the noise distribution using stochastic variational inference. SBP layer can induce an arbitrary structured sparsity pattern over its input and provides adaptive regularization. We apply SBP to cut down the number of neurons and filters in convolutional neural networks and report significant practical acceleration with no modification of the existing software implementation of these architectures. Acknowledgments We would like to thank Christos Louizos and Max Welling for valuable discussions. Kirill Neklyudov and Arsenii Ashukha were supported by HSE International lab of Deep Learning and Bayesian Methods which is funded by the Russian Academic Excellence Project ’5-100’. Dmitry Molchanov was supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001). Dmitry Vetrov was supported by the Russian Science Foundation grant 17-11-01027.
1. What is the focus of the paper regarding Bayesian neural networks? 2. What are the strengths of the proposed approach, particularly in terms of group sparsity and technical soundness? 3. Do you have any concerns or suggestions regarding the choice of prior distributions and their connection to variational dropout? 4. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content? 5. Are there any questions or comments regarding the computational aspects and the approximation of posterior variances?
Review
Review Summary: The paper explores group sparsity in Bayesian neural networks. The authors use truncated log-uniform distributed random variables to turn off uninformative neurons/filters. Experiments demonstrating sparsity without significant generalization loss are presented. Clarity and Quality: The paper is sufficiently clear and easy to follow. The proposed group sparsity inducing priors are a natural extension of existing work interpreting Gaussian dropout as variational inference in BNNs. Truncation is a technically reasonable approach for dealing with issues stemming from the use of improper priors necessitated by this interpretation. Overall the paper is technically sound. Originality and Significance - The primary contribution of the paper is the use of truncated log-uniform priors over scale (noise) variables to induce group structured sparsity in BNNs. Automatic model selection in BNNs is an interesting problem and being able to turn off uninformative neurons and filters is a useful step in that direction. Other Comments: 1) The choice of dealing with improper log-uniform prior seems cumbersome. Alternate, sparsity inducing proper priors such as continuous relaxations of the spike and slab (Horseshoe variants) are more convenient to work with. Under an appropriate parameterization, they replace the hard constraints on the range of theta [0, 1], with a smooth penalty and hence an easier optimization problem. These alternate priors do weaken the connection to the variational interpretation of dropout, but so does truncation and the connection to variational dropout is not necessary for structured sparsity. 2) While it may be a common computational trick, summarizing the posterior with a point estimate for test predictions is fraught with difficulties when the posterior variance hasn’t collapsed to (nearly) zero. It will be useful to include an analysis of the posterior variances of theta_i, to ascertain the quality of approximation of Eqn. 18. In any case, the median is perhaps a better summary of the skewed log-normal distribution than the mean and may be worth exploring in place of Equation 18. 3) The discussion on Page 4 justifying the use of log-Normal family of variational approximations for theta is bordering on unnecessary. While Gaussian variational approximations are commonly used for approximating the distributions of rvs with support over the entire real line no sensible variational inference scheme uses Gaussian approximations for rvs with support on only the positive half of the real line. Log-Normals are a popular and well-known choice for such random variables.
NIPS
Title Structured Bayesian Pruning via Log-Normal Multiplicative Noise Abstract Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves generalization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In the paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g. removes neurons and/or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is computed in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer. 1 Introduction Deep neural networks are a flexible family of models which provides state-of-the-art results in many machine learning problems [14, 20]. However, this flexibility often results in overfitting. A common solution for this problem is regularization. One of the most popular ways of regularization is Binary Dropout [19] that prevents co-adaptation of neurons by randomly dropping them during training. An equally effective alternative is Gaussian Dropout [19] that multiplies the outputs of the neurons by Gaussian random noise. In recent years several Bayesian generalizations of these techniques have been developed, e.g. Variational Dropout [8] and Variational Spike-and-Slab Neural Networks [13]. These techniques provide theoretical justification of different kinds of Dropout and also allow for automatic tuning of dropout rates, which is an important practical result. Besides overfitting, compression and acceleration of neural networks are other important challenges, especially when memory or computational resources are restricted. Further studies of Variational Dropout show that individual dropout rates for each weight allow to shrink the original network architecture and result in a highly sparse model [16]. General sparsity provides a way of neural network compression, while the time of network evaluation may remain the same, as most modern DNN-oriented software can’t work with sparse matrices efficiently. At the same time, it is possible to achieve acceleration by enforcing structured sparsity in convolutional filters or data tensors. In the simplest case it means removing redundant neurons or convolutional filters instead of separate weights; but more complex patterns can also be considered. This way Group-wise Brain Damage 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. [10] employs group-wise sparsity in convolutional filters, Perforated CNNs [3] drop redundant rows from the intermediate dataframe matrices that are used to compute convolutions, and Structured Sparsity Learning [24] provides a way to remove entire convolutional filters or even layers in residual networks. 
These methods allow to obtain practical acceleration with little to no modifications of the existing software. In this paper, we propose a tool that is able to induce an arbitrary pattern of structured sparsity on neural network parameters or intermediate data tensors. We propose a dropout-like layer with a parametric multiplicative noise and use stochastic variational inference to tune its parameters in a Bayesian way. We introduce a proper analog of sparsity-inducing log-uniform prior distribution [8, 16] that allows us to formulate a correct probabilistic model and avoid the problems that come from using an improper prior. This way we obtain a novel Bayesian method of regularization of neural networks that results in structured sparsity. Our model can be represented as a separate dropout-like layer that allows for a simple and flexible implementation with almost no computational overhead, and can be incorporated into existing neural networks. Our experiments show that our model leads to high group sparsity level and significant acceleration of convolutional neural networks with negligible accuracy drop. We demonstrate the performance of our method on LeNet and VGG-like architectures using MNIST and CIFAR-10 datasets. 2 Related Work Deep neural networks are extremely prone to overfitting, and extensive regularization is crucial. The most popular regularization methods are based on injection of multiplicative noise over layer inputs, parameters or activations [8, 19, 22]. Different kinds of multiplicative noise have been used in practice; the most popular choices are Bernoulli and Gaussian distributions. Another type of regularization of deep neural networks is based on reducing the number of parameters. One approach is to use low-rank approximations, e.g. tensor decompositions [4, 17], and the other approach is to induce sparsity, e.g. by pruning [5] or L1 regularization [24]. Sparsity can also be induced by using the Sparse Bayesian Learning framework with empirical Bayes [21] or with sparsity-inducing priors [12, 15, 16]. High sparsity is one of the key factors for the compression of DNNs [5, 21]. However, in addition to compression it is beneficial to obtain acceleration. Recent papers propose different approaches to acceleration of DNNs, e.g. Spatial Skipped Convolutions [3] and Spatially Adaptive Computation Time [2] that propose different ways to reduce the number of computed convolutions, Binary Networks [18] that achieve speedup by using only 1 bit to store a single weight of a DNN, Low-Rank Expansions [6] that use low-rank filter approximations, and Structured Sparsity Learning [24] that allows to remove separate neurons or filters. As reported in [24] it is possible to obtain acceleration of DNNs by introducing structured sparsity, e.g. by removing whole neurons, filters or layers. However, non-adaptive regularization techniques require tuning of a huge number of hyperparameters that makes it difficult to apply in practice. In this paper we apply the Bayesian learning framework to obtain structured sparsity and focus on acceleration of neural networks. 3 Stochastic Variational Inference Given a probabilistic model p(y |x, ✓) we want to tune parameters ✓ of the model using training dataset D = {(xi, yi)}Ni=1. The prior knowledge about parameters ✓ is defined by prior distribution p(✓). Using the Bayes rule we obtain the posterior distribution p(✓ | D) = p(D | ✓)p(✓)/p(D). 
However, computing posterior distribution using the Bayes rule usually involves computation of intractable integrals, so we need to use approximation techniques. One of the most widely used approximation techniques is Variational Inference. In this approach the unknown distribution p(✓ | D) is approximated by a parametric distribution q (✓) by minimization of the Kullback-Leibler divergence KL(q (✓) k p(✓ | D)). Minimization of the KL divergence is equivalent to maximization of the variational lower bound L( ). L( ) = LD( ) KL(q (✓) k p(✓)), (1) where LD( ) = NX i=1 Eq (✓) log p(yi |xi, ✓) (2) LD( ) is a so-called expected log-likelihood function which is intractable in case of complex probabilistic model p(y |x, ✓). Following [8] we use the Reparametrization trick to obtain an unbiased differentiable minibatch-based Monte Carlo estimator of the expected log-likelihood. Here N is the total number of objects, M is the minibatch size, and f( , ") provides samples from the approximate posterior q (✓) as a deterministic function of a non-parametric noise " ⇠ p("). LD( ) ' LSGV BD ( ) = N M MX k=1 log p(yik |xik , wik = f( , "ik)) (3) L( ) ' LSGV B( ) = LSGV BD ( ) KL(q (w) k p(w)) (4) r LD( ) ' r LSGV BD ( ) (5) This way we obtain a procedure of approximate Bayesian inference where we solve optimization problem (4) by stochastic gradient ascent w.r.t. variational parameters . This procedure can be efficiently applied to Deep Neural Networks and usually the computational overhead is very small, as compared to ordinary DNNs. If the model p(y |x, ✓, w) has another set of parameters w that we do not want to be Bayesian about, we can still use the same variational lower bound objective: L( , w) = LD( , w) KL(q (✓) k p(✓)) ! max ,w , (6) where LD( , w) = NX i=1 Eq (✓) log p(yi |xi, ✓, w) (7) This objective corresponds the maximum likelihood estimation wML of parameters w, while finding the approximate posterior distribution q (✓) ⇡ p(✓ | D, wML). In this paper we denote the weights of the neural networks, the biases, etc. as w and find their maximum likelihood estimation as described above. The parameters ✓ that undergo the Bayesian treatment are the noisy masks in the proposed dropout-like layer (SBP layer). They are described in the following section. 4 Group Sparsity with Log-normal Multiplicative Noise Variational Inference with a sparsity-inducing log-uniform prior over the weights of a neural network is an efficient way to enforce general sparsity on weight matrices [16]. However, it is difficult to apply this approach to explicitly enforce structured sparsity. We introduce a dropout-like layer with a certain kind of multiplicative noise. We also make use of the sparsity-inducing log-uniform prior, but put it over the noise variables rather than weights. By sharing those noise variables we can enforce group-wise sparsity with any form of groups. 4.1 Variational Inference for Group Sparsity Model We consider a single dropout-like layer with an input vector x 2 RI that represents one object with I features, and an output vector y 2 RI of the same size. The input vector x is usually supposed to come from the activations of the preceding layer. The output vector y would then serve as an input vector for the following layer. We follow the general way to build dropoutlike layers (8). Each input feature xi is multiplied by a noise variable ✓i that comes from some distribution pnoise(✓). 
For example, for Binary Dropout pnoise(✓) would be a fully factorized Bernoulli distribution with pnoise(✓i) = Bernoulli(p), and for Gaussian dropout it would be a fully-factorized Gaussian distribution with pnoise(✓i) = N (1,↵). yi = xi · ✓i ✓ ⇠ pnoise(✓) (8) Note that if we have a minibatch XM⇥I of M objects, we would independently sample a separate noise vector ✓m for each object xm. This would be the case throughout the paper, but for the sake of simplicity we would consider a single object x in all following formulas. Also note that the noise ✓ is usually only sampled during the training phase. A common approximation during the testing phase is to use the expected value E✓ instead of sampling ✓. All implementation details are provided and discussed in Section 4.5. We follow a Bayesian treatment of the variable ✓, as described in Section 3. In order to obtain a sparse solution, we choose the prior distribution p(✓) to be a fully-factorized improper log-uniform distribution. We denote this distribution as LogU1(·) to stress that it has infinite domain. This distribution is known for its sparsification properties and works well in practice for deep neural networks [16]. p(✓) = IY i=1 p(✓i) p(✓i) = LogU1(✓i) / 1 ✓ i ✓i > 0 (9) In order to train the model, i.e. perform variational inference, we need to choose an approximation family q for the posterior distribution p(✓ | D) ⇡ q (✓). q (✓) = IY i=1 q(✓i |µi, i) = IY i=1 LogN(✓i |µi, 2i ) (10) ✓i ⇠ LogN(✓i |µi, 2i ) () log ✓i ⇠ N (log ✓i |µi, 2i ) (11) A common choice of variational distribution q(·) is a fully-factorized Gaussian distribution. However, for this particular model we choose q(✓) to be a fully-factorized log-normal distribution (10–11). To make this choice, we were guided by the following reasons: • The log-uniform distribution is a specific case of the log-normal distribution when the parameter goes to infinity and µ remains fixed. Thus we can guarantee that in the case of no data our variational approximation can be made exact. Hence this variational family has no "prior gap". • We consider a model with multiplicative noise. The scale of this noise corresponds to its shift in the logarithmic space. By establishing the log-uniform prior we set no preferences on different scales of this multiplicative noise. The usual use of a Gaussian as a posterior immediately implies very asymmetric skewed distribution in the logarithmic space. Moreover log-uniform and Gaussian distributions have different supports and that will require establishing two log-uniform distributions for positive and negative noises. In this case Gaussian variational approximation would have quite exotic bi-modal form (one mode in the log-space of positive noises and another one in the log-space of negative noises). On the other hand, the log-normal posterior for the multiplicative noise corresponds to a Gaussian posterior for the additive noise in the logarithmic scale, which is much easier to interpret. • Log-normal noise is always non-negative both during training and testing phase, therefore it does not change the sign of its input. This is in contrast to Gaussian multiplicative noise N (✓i | 1,↵) that is a standard choice for Gaussian dropout and its modifications [8, 19, 23]. During the training phase Gaussian noise can take negative values, so the input to the following layer can be of arbitrary sign. However, during the testing phase noise ✓ is equal to 1, so the input to the following layer is non-negative with many popular non-linearities (e.g. 
4.2 Problems of Variational Inference with Improper Log-Uniform Prior

The log-normal posterior in combination with a log-uniform prior has a number of attractive features. However, the maximization of the variational lower bound with a log-uniform prior and a log-normal posterior is an ill-posed optimization problem. As the log-uniform distribution is an improper prior, the KL divergence between a log-normal distribution LogN(µ, σ²) and a log-uniform distribution LogU_∞ is infinite for any finite value of the parameters µ and σ:

KL(LogN(x | µ, σ²) ‖ LogU_∞(x)) = C − log σ,    C = +∞    (13)

A common way to tackle this problem is to consider the density of the log-uniform distribution to be equal to C/θ and to treat C as some finite constant. This trick works well for the case of a Gaussian posterior distribution [8, 16]. The KL divergence between a Gaussian posterior and a log-uniform prior has an infinite gap, but can be calculated up to this infinite constant in a meaningful way [16]. However, for the case of the log-normal posterior the KL divergence is infinite for any finite values of the variational parameters, and is equal to zero for a fixed finite µ and infinite σ. As the data term (3) is bounded for any value of the variational parameters, the only global optimum of the variational lower bound is achieved when µ is finite and fixed, and σ goes to infinity. In this case the posterior distribution collapses into the prior distribution and the model fails to extract any information about the data. This effect is wholly caused by the fact that the log-uniform prior is an improper (non-normalizable) distribution, which makes the whole probabilistic model flawed.

4.3 Variational Inference with Truncated Approximation Family

Due to the improper prior the optimization problem becomes ill-posed. But do we really need to use an improper prior distribution? The most common number format used to represent the parameters of a neural network is the floating-point format, which can only represent numbers from a limited range. For example, a single-precision variable can only represent numbers in the range −3.4×10^38 to +3.4×10^38, and the smallest possible positive number is 1.2×10^−38. All of the probability mass of the improper log-uniform prior is concentrated beyond single precision (and essentially any practical floating-point precision), not to mention that the actual relevant range of values of neural network parameters is much smaller. This means that in practice this prior is not a good choice for a software implementation of neural networks.
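The quoted single-precision limits can be checked directly; this is standard NumPy and nothing here is specific to the paper.

```python
# Quick check of the float32 range mentioned above.
import numpy as np

info = np.finfo(np.float32)
print(info.max)    # ~3.4028235e+38, largest representable float32
print(info.tiny)   # ~1.1754944e-38, smallest positive normal float32
```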
We propose to use a truncated log-uniform distribution (14) as a proper analog of the log-uniform distribution. Here I_{[a,b]}(x) denotes the indicator function of the interval x ∈ [a, b]. The posterior distribution should be defined on the same support as the prior distribution, so we also need to use a truncated log-normal distribution (14):

LogU_{[a,b]}(θ_i) ∝ LogU_∞(θ_i) · I_{[a,b]}(log θ_i),    LogN_{[a,b]}(θ_i) ∝ LogN(θ_i | µ_i, σ_i²) · I_{[a,b]}(log θ_i)    (14)

Our final model can then be formulated as follows:

y_i = x_i · θ_i,    p(θ_i) = LogU_{[a,b]}(θ_i),    q(θ_i | µ_i, σ_i) = LogN_{[a,b]}(θ_i | µ_i, σ_i²)    (15)

Note that all the nice properties of the log-normal posterior distribution from Section 4.1 also hold for the truncated log-normal posterior. However, now we have a proper probabilistic model and Stochastic Variational Inference can be performed correctly. Unlike (13), the KL divergence term (16–17) can now be calculated correctly for all valid values of the variational parameters (see Appendix A for details):

KL(q(θ | µ, σ) ‖ p(θ)) = Σ_{i=1}^I KL(q(θ_i | µ_i, σ_i) ‖ p(θ_i))    (16)

KL(q(θ_i | µ_i, σ_i) ‖ p(θ_i)) = log( (b − a) / √(2πe σ_i²) ) − log( Φ(β_i) − Φ(α_i) ) − ( α_i φ(α_i) − β_i φ(β_i) ) / ( 2 (Φ(β_i) − Φ(α_i)) ),    (17)

where α_i = (a − µ_i)/σ_i, β_i = (b − µ_i)/σ_i, and φ(·) and Φ(·) are the density and the CDF of the standard normal distribution. The reparameterization trick can also still be performed (18) using the inverse CDF of the truncated normal distribution (see Appendix B):

θ_i = exp( µ_i + σ_i Φ^{−1}( Φ(α_i) + (Φ(β_i) − Φ(α_i)) y_i ) ),    where y_i ∼ U(y | 0, 1)    (18)

The final loss and the set of parameters are the same as described in Section 4.1, and the training procedure remains the same.

4.4 Sparsity

The log-uniform prior is known to lead to a sparse solution [16]. In the variational dropout paper the authors interpret the parameter α of the multiplicative noise N(1, α) as a Gaussian dropout rate and use it as a thresholding criterion for weight pruning. Unlike binary or Gaussian dropout, the truncated log-normal model has no "dropout rate" variable. However, we can use the signal-to-noise ratio Eθ / √Var(θ) (SNR) for thresholding:

SNR(θ_i) = ( Φ(σ_i − α_i) − Φ(σ_i − β_i) ) / √( (Φ(β_i) − Φ(α_i)) exp(σ_i²) (Φ(2σ_i − α_i) − Φ(2σ_i − β_i)) − (Φ(σ_i − α_i) − Φ(σ_i − β_i))² )    (19)

The SNR can be computed analytically; the derivation can be found in the appendix. It has a simple interpretation: if the SNR is low, the corresponding neuron becomes very noisy and its output no longer contains any useful information; if the SNR is high, the neuron output contains little noise and is important for prediction. Therefore we can remove all neurons or filters with a low SNR and set their output to a constant zero.
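The quantities in Eqs. (17)–(19) only involve the standard normal pdf and CDF, so they are straightforward to evaluate. Below is an illustrative SciPy sketch (our own helper, not the authors' released code; the truncation interval is passed in explicitly).

```python
# KL term (17), SNR (19) and one reparametrized sample (18) for truncated
# log-normal noise with parameters mu, sigma and log-space truncation [a, b].
import numpy as np
from scipy.stats import norm

def truncated_lognormal_stats(mu, sigma, a, b, rng=None):
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    z = norm.cdf(beta) - norm.cdf(alpha)                  # mass kept by the truncation

    # KL(q || p), Eq. (17)
    kl = (np.log((b - a) / np.sqrt(2 * np.pi * np.e * sigma ** 2))
          - np.log(z)
          - (alpha * norm.pdf(alpha) - beta * norm.pdf(beta)) / (2 * z))

    # signal-to-noise ratio, Eq. (19)
    num = norm.cdf(sigma - alpha) - norm.cdf(sigma - beta)
    den = z * np.exp(sigma ** 2) * (norm.cdf(2 * sigma - alpha) - norm.cdf(2 * sigma - beta)) - num ** 2
    snr = num / np.sqrt(den)

    # one reparametrized sample via the inverse CDF, Eq. (18)
    rng = rng or np.random.default_rng()
    y = rng.uniform(size=np.shape(mu))
    theta = np.exp(mu + sigma * norm.ppf(norm.cdf(alpha) + z * y))
    return kl, snr, theta
```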
4.5 Implementation details

We perform minibatch-based stochastic variational inference for training. The training procedure looks as follows. On each training step we take a minibatch of M objects and feed it into the neural network. Consider a single SBP layer with input X^{M×I} and output Y^{M×I}. We independently sample a separate noise vector θ^m ∼ q(θ) for each object x^m and obtain a noise matrix θ^{M×I}. The output matrix Y^{M×I} is then obtained by component-wise multiplication of the input matrix and the noise matrix: y_{mi} = x_{mi} · θ_{mi}.

To be fully Bayesian, one would also sample and average over different dropout masks θ during testing, i.e. perform Bayesian ensembling. Although this procedure can be used to slightly improve the final accuracy, it is usually avoided: Bayesian ensembling essentially requires sampling different copies of the neural network, which makes evaluation K times slower when averaging over K samples. Instead, during the testing phase most dropout-based techniques replace the noise variable θ with its expected value. In this paper we follow the same approach and replace all non-pruned θ_i with their expectations (20) during testing. The derivation of the expectation of the truncated log-normal distribution is presented in Appendix C.

Eθ_i = ( exp(µ_i + σ_i²/2) / (Φ(β_i) − Φ(α_i)) ) · [ Φ( (σ_i² + µ_i − a)/σ_i ) − Φ( (σ_i² + µ_i − b)/σ_i ) ]    (20)

We tried to use Bayesian ensembling with this model and experienced almost no gain in accuracy. This means that the variance of the learned approximate posterior distribution is low and does not provide a rich ensemble.

Throughout the paper we introduced the SBP dropout layer for the case when input objects are represented as one-dimensional vectors x. When defined like that, it induces general sparsity on the input vector x. It works as intended for fully-connected layers, as a single input feature corresponds to a single output neuron of the preceding fully-connected layer and a single input neuron of the following layer. However, it is possible to apply the SBP layer in a more generic setting. Firstly, if the input object is represented as a multidimensional tensor X with shape I_1 × I_2 × ··· × I_d, the noise vector θ of length I = I_1 × I_2 × ··· × I_d can be reshaped into a tensor with the same shape. Then the output tensor Y can be obtained as a component-wise product of the input tensor X and the noise tensor θ. Secondly, the SBP layer can induce any form of structured sparsity on this input tensor X. To do so, one simply needs to use a single random variable θ_i for the group of input features that should be removed simultaneously. For example, consider an input tensor X^{H×W×C} that comes from a convolutional layer, H and W being the size of the image and C being the number of channels. Then, in order to remove redundant filters from the preceding layer (and at the same time redundant channels from the following layer), one needs to share the random variables θ in the following way:

y_{hwc} = x_{hwc} · θ_c,    θ_c ∼ LogN_{[a,b]}(θ_c | µ_c, σ_c²)    (21)

Note that now there is one sample θ ∈ R^C for one object X^{H×W×C} on each training step. If the signal-to-noise ratio becomes lower than 1 for a component θ_c, we can permanently remove the c-th channel of the input tensor, and therefore delete the c-th filter from the preceding layer and the c-th channel from the following layer. All experiments with convolutional architectures use this formulation of SBP. This is a general approach that is not limited to reducing the shape of the input tensor: it is possible to obtain any fixed pattern of group-wise sparsity using this technique.

Similarly, the SBP layer can be applied in a DropConnect fashion. One would just need to multiply the weight tensor W by a noise tensor θ of similar shape. The training procedure remains the same. It is still possible to enforce any structured sparsity pattern for the weight tensor W by sharing the random variables as described above.
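To illustrate how Eqs. (18), (20) and (21) fit together, here is a sketch of a per-channel SBP-style noise layer in PyTorch. This is our own re-implementation sketch, not the released Theano/TensorFlow code, and the class and parameter names are assumptions.

```python
# Multiplicative truncated log-normal noise shared over H and W (Eq. (21)):
# sampled via the inverse CDF (Eq. (18)) at train time, replaced by its
# expectation (Eq. (20)) at test time.
import torch
import torch.nn as nn

class SBPChannelNoise(nn.Module):
    def __init__(self, channels, a, b):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(channels))
        self.log_sigma = nn.Parameter(torch.full((channels,), -3.0))
        self.a, self.b = a, b                              # log-space truncation interval

    def forward(self, x):                                  # x: (M, C, H, W)
        mu = self.mu.view(1, -1, 1, 1)
        sigma = torch.exp(self.log_sigma).view(1, -1, 1, 1)
        std = torch.distributions.Normal(0.0, 1.0)
        alpha, beta = (self.a - mu) / sigma, (self.b - mu) / sigma
        z = std.cdf(beta) - std.cdf(alpha)
        if self.training:
            # one noise sample per object and channel, shared over H and W
            y = torch.rand(x.shape[0], x.shape[1], 1, 1, device=x.device)
            u = (std.cdf(alpha) + z * y).clamp(1e-6, 1 - 1e-6)
            theta = torch.exp(mu + sigma * std.icdf(u))
        else:
            # expected noise replaces sampling at test time
            theta = torch.exp(mu + sigma ** 2 / 2) / z * (
                std.cdf(sigma - alpha) - std.cdf(sigma - beta))
        return x * theta
```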
5 Experiments

We perform an evaluation on different supervised classification tasks and with different neural network architectures, including deep VGG-like architectures with batch normalization layers. For each architecture, we report the number of retained neurons and filters, and the obtained acceleration. Our experiments show that Structured Bayesian Pruning leads to a high level of structured sparsity in the convolutional filters and neurons of DNNs without a significant accuracy drop. We also demonstrate that optimization w.r.t. the full set of variational parameters (µ, σ) improves model quality and allows us to perform sparsification more efficiently, as compared to tuning only one free parameter that corresponds to the noise variance. As a nice bonus, we show that a Structured Bayesian Pruning network does not overfit on randomly labeled data, which is a common weakness of non-Bayesian dropout networks. The source code is available in Theano [7] and Lasagne, and also in TensorFlow [1] (https://github.com/necludov/group-sparsity-sbp).

5.1 Experiment Setup

The truncation parameters a and b are hyperparameters of our model. As our layer is meant for regularization of the model, we would like the layer not to amplify the input signal, so we restrict the noise θ to the interval [0, 1]. This choice corresponds to setting the right truncation threshold b to 0. We find empirically that the left truncation parameter a does not influence the final result much. We use the values a = −20 and b = 0 in all experiments.

We define redundant neurons by the signal-to-noise ratio of the corresponding multiplicative noise θ; see Section 4.4 for more details. By removing all neurons and filters with SNR < 1 we experience no accuracy drop in any of our experiments. SBP dropout layers are put after each convolutional layer to remove its filters, and before each fully-connected layer to remove its input neurons. As one filter of the last convolutional layer usually corresponds to a group of neurons in the following dense layer, this means that we can remove more input neurons in the first dense layer. Note that this implies two consecutive dropout layers between the last convolutional layer and the first fully-connected layer in CNNs, and a dropout layer before the first fully-connected layer in FC networks (see Fig. 2).
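A sketch of the pruning step described above: compute the per-channel SNR (Eq. (19)), keep channels with SNR ≥ 1, and slice the surrounding weight tensors accordingly. The array names and shapes here are hypothetical.

```python
# Prune conv filters whose multiplicative noise has SNR below the threshold.
import numpy as np

def prune_channels(snr, w_prev, w_next, threshold=1.0):
    """snr: (C,); w_prev: (C, C_in, k, k) filters of the preceding layer;
    w_next: (C_out, C, k, k) filters of the following layer."""
    keep = snr >= threshold
    print(f"keeping {int(keep.sum())} of {keep.size} channels")
    return w_prev[keep], w_next[:, keep]
```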
5.2 More Flexible Variational Approximation

Usually during automatic training of dropout rates the mean of the noise distribution remains fixed. In our model it is possible to train both the mean and the variance of the multiplicative noise. By using a more flexible distribution we obtain a tighter variational lower bound and a higher sparsity level. In order to demonstrate this effect, we performed an experiment on the MNIST dataset with a fully connected neural network that contains two hidden layers with 1000 neurons each. The results are presented in Fig. 1.

5.3 LeNet5 and Fully-Connected Net on MNIST

We compare our method with other sparsity-inducing methods on the MNIST dataset using a fully connected architecture, LeNet-500-300, and a convolutional architecture, LeNet-5-Caffe. These networks were trained with Adam without any data augmentation. The LeNet-500-300 network was trained from scratch, and the LeNet-5-Caffe¹ network was pretrained with weight decay. An illustration of the trained SNR for the image features of the LeNet-500-300² network is shown in Fig. 2. The final accuracy, group-wise sparsity levels and speedup for these architectures and different methods are shown in Table 1.

5.4 VGG-like on CIFAR-10

To show that SBP scales to deep architectures, we apply it to a VGG-like network [25] that was adapted for the CIFAR-10 [9] dataset. The network consists of 13 convolutional and two fully-connected layers, trained with pre-activation batch normalization and Binary Dropout. At the start of the training procedure, we use pre-trained weights for initialization. Results with different scalings of the number of units are presented in Table 2. We present results for two architectures with different scaling coefficients k ∈ {1.0, 1.5}. For smaller values of the scaling coefficient, k ∈ {0.25, 0.5}, we obtain a less sparse architecture, since these networks have smaller learning capacity. Besides the results for the standard StructuredBP procedure, we also provide the results for SBP with KL scaling (StructuredBPa). Scaling the KL term of the variational lower bound proportionally to the computational complexity of the layer leads to a higher sparsity level for the first layers, providing more acceleration. Despite the higher error values, we obtain a higher value of the true variational lower bound with KL scaling; hence, we find another local maximum.

¹ A modified version of LeNet5 from [11]. Caffe model specification: https://goo.gl/4yI3dL
² A fully connected neural net with 2 hidden layers that contain 500 and 300 neurons respectively.

5.5 Random Labels

A recent work shows that deep neural networks have so much capacity that they can easily memorize the data even with random labeling [26]. Binary dropout, as well as other standard regularization techniques, does not prevent the networks from overfitting in this scenario. However, it was recently shown that Bayesian regularization may help [16]. Following these works, we conducted similar experiments. We used a LeNet5 network on the MNIST dataset and a VGG-like network on CIFAR-10. Although Binary Dropout does not prevent these networks from overfitting, SBP decides to remove all neurons of the neural network and provides a constant prediction. In other words, in this case SBP chooses the simplest model that achieves the same testing error rate. This is another confirmation that Bayesian regularization is more powerful than other popular regularization techniques.

6 Conclusion

We propose Structured Bayesian Pruning, or SBP, a dropout-like layer that induces multiplicative random noise over the output of the preceding layer. We put a sparsity-inducing prior over the noise variables and tune the noise distribution using stochastic variational inference. The SBP layer can induce an arbitrary structured sparsity pattern over its input and provides adaptive regularization. We apply SBP to cut down the number of neurons and filters in convolutional neural networks and report significant practical acceleration with no modification of the existing software implementations of these architectures.

Acknowledgments

We would like to thank Christos Louizos and Max Welling for valuable discussions. Kirill Neklyudov and Arsenii Ashukha were supported by the HSE International Lab of Deep Learning and Bayesian Methods, which is funded by the Russian Academic Excellence Project '5-100'. Dmitry Molchanov was supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001). Dmitry Vetrov was supported by Russian Science Foundation grant 17-11-01027.
1. What is the focus of the paper regarding deep neural networks? 2. What are the strengths of the proposed technique compared to other approaches? 3. How does the method enhance the Bayesian dropout method? 4. What are the weaknesses or limitations of the paper's contributions? 5. Are there any questions about the presentation, organization, or clarity of the content? 6. Are there any suggestions for improving the method or its applications?
Review
Review Summary: The paper addresses the actual problem of compression and efficient computation of deep NNs. It presents a new pruning technique for deep NNs, which is an enhancement of the Bayesian dropout method. Contrary to standard Bayesian dropout, the proposed method prunes NNs in a structured way. The NN-model is altered by dropout layers with an advanced probabilistic model (with truncated log-uniform prior distribution and truncated log-uniform posterior distribution). For the altered model, the authors propose a new "signal-to-ratio" pruning metric applicable to both individual neurons and more complex structures (e.g., filters). The submission and supplementary files contain a theoretical explanation of the method and derivation of the corresponding formulas. Moreover, an experimental evaluation of the approach is presented, with promising results. Quality: The paper has very good technical quality. It contains a thorough theoretical analysis of the proposed model and its properties. It also presents several experimental results, that evaluate the qualities of the method and compare it with two concurrent approaches. Clarity: The presentation is comprehensible and well-organized, the method is described in detail including derivation of formulas and implementation details. Originality and significance: The paper addresses the actual problem of compression and efficient computation of deep NNs. Its main contribution is the presentation of a new general technique for structured pruning of NNs that is based on Bayesian dropout. Contrary to the original method, 1) it uses more sophisticated and proper noise distributions and it involves a novel pruning metric, 2) the subject of pruning are not weights, but more complex structures (e.g., filters, neurons). 3) The method can be applied to several NN architectures (CNN, fully-connected,...), and it seems that it can be easily added to existing NN implementations. The presented approach proved in the experiments to massively prune the network and to speed up the computation on GPU and CPU, while inducing only small accuracy loss. However, based on the experimental results presented, the benefit over the concurrent "SSL" approach is relatively small. Typo: ...that is has ... (line 117) Comments: - Table 1 and Table 2 contain rows with results for concurrent methods called "SSL" and "SparseVD", without a short description of the main principles of these methods in the text. - It would be useful to assess also the efficiency of the pruning process itself. - The method has several parameters that need to be set. An interesting question is, how does the actual setting of these parameters affect the balance between sparsity and accuracy of the final model.
NIPS
Title Structured Bayesian Pruning via Log-Normal Multiplicative Noise Abstract Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves generalization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In the paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g. removes neurons and/or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is computed in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer. 1 Introduction Deep neural networks are a flexible family of models which provides state-of-the-art results in many machine learning problems [14, 20]. However, this flexibility often results in overfitting. A common solution for this problem is regularization. One of the most popular ways of regularization is Binary Dropout [19] that prevents co-adaptation of neurons by randomly dropping them during training. An equally effective alternative is Gaussian Dropout [19] that multiplies the outputs of the neurons by Gaussian random noise. In recent years several Bayesian generalizations of these techniques have been developed, e.g. Variational Dropout [8] and Variational Spike-and-Slab Neural Networks [13]. These techniques provide theoretical justification of different kinds of Dropout and also allow for automatic tuning of dropout rates, which is an important practical result. Besides overfitting, compression and acceleration of neural networks are other important challenges, especially when memory or computational resources are restricted. Further studies of Variational Dropout show that individual dropout rates for each weight allow to shrink the original network architecture and result in a highly sparse model [16]. General sparsity provides a way of neural network compression, while the time of network evaluation may remain the same, as most modern DNN-oriented software can’t work with sparse matrices efficiently. At the same time, it is possible to achieve acceleration by enforcing structured sparsity in convolutional filters or data tensors. In the simplest case it means removing redundant neurons or convolutional filters instead of separate weights; but more complex patterns can also be considered. This way Group-wise Brain Damage 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. [10] employs group-wise sparsity in convolutional filters, Perforated CNNs [3] drop redundant rows from the intermediate dataframe matrices that are used to compute convolutions, and Structured Sparsity Learning [24] provides a way to remove entire convolutional filters or even layers in residual networks. 
These methods allow to obtain practical acceleration with little to no modifications of the existing software. In this paper, we propose a tool that is able to induce an arbitrary pattern of structured sparsity on neural network parameters or intermediate data tensors. We propose a dropout-like layer with a parametric multiplicative noise and use stochastic variational inference to tune its parameters in a Bayesian way. We introduce a proper analog of sparsity-inducing log-uniform prior distribution [8, 16] that allows us to formulate a correct probabilistic model and avoid the problems that come from using an improper prior. This way we obtain a novel Bayesian method of regularization of neural networks that results in structured sparsity. Our model can be represented as a separate dropout-like layer that allows for a simple and flexible implementation with almost no computational overhead, and can be incorporated into existing neural networks. Our experiments show that our model leads to high group sparsity level and significant acceleration of convolutional neural networks with negligible accuracy drop. We demonstrate the performance of our method on LeNet and VGG-like architectures using MNIST and CIFAR-10 datasets. 2 Related Work Deep neural networks are extremely prone to overfitting, and extensive regularization is crucial. The most popular regularization methods are based on injection of multiplicative noise over layer inputs, parameters or activations [8, 19, 22]. Different kinds of multiplicative noise have been used in practice; the most popular choices are Bernoulli and Gaussian distributions. Another type of regularization of deep neural networks is based on reducing the number of parameters. One approach is to use low-rank approximations, e.g. tensor decompositions [4, 17], and the other approach is to induce sparsity, e.g. by pruning [5] or L1 regularization [24]. Sparsity can also be induced by using the Sparse Bayesian Learning framework with empirical Bayes [21] or with sparsity-inducing priors [12, 15, 16]. High sparsity is one of the key factors for the compression of DNNs [5, 21]. However, in addition to compression it is beneficial to obtain acceleration. Recent papers propose different approaches to acceleration of DNNs, e.g. Spatial Skipped Convolutions [3] and Spatially Adaptive Computation Time [2] that propose different ways to reduce the number of computed convolutions, Binary Networks [18] that achieve speedup by using only 1 bit to store a single weight of a DNN, Low-Rank Expansions [6] that use low-rank filter approximations, and Structured Sparsity Learning [24] that allows to remove separate neurons or filters. As reported in [24] it is possible to obtain acceleration of DNNs by introducing structured sparsity, e.g. by removing whole neurons, filters or layers. However, non-adaptive regularization techniques require tuning of a huge number of hyperparameters that makes it difficult to apply in practice. In this paper we apply the Bayesian learning framework to obtain structured sparsity and focus on acceleration of neural networks. 3 Stochastic Variational Inference Given a probabilistic model p(y |x, ✓) we want to tune parameters ✓ of the model using training dataset D = {(xi, yi)}Ni=1. The prior knowledge about parameters ✓ is defined by prior distribution p(✓). Using the Bayes rule we obtain the posterior distribution p(✓ | D) = p(D | ✓)p(✓)/p(D). 
1. What is the main contribution of the paper regarding dropout variants? 2. What are the strengths and weaknesses of the proposed method compared to prior works such as [1], [2], [3], [4], and [5]? 3. How does the reviewer assess the choice of prior distribution and approximating distribution in the paper? 4. What are some minor comments and suggestions for improvement made by the reviewer?
Review
Review The authors proposes a new dropout variant which is demonstrated in a neural network unit-pruning application, carried out by learning the dropout rates under a sparsity-inducing prior. The methodology follows that of recent Bayesian approaches to deep learning, and the application is similar to that of [1] (which has some fundamental limitations stemming from the improper prior and loose KL approximations). The main novelty of the present paper is the use of a truncated log uniform prior instead of [1]'s log uniform prior, and truncated log normal approximating posterior (instead of [1]'s Gaussian approximating distribution). The authors justify why the choice of the prior and approximating distribution is sensible, and give an analytical KL which does not diverge to infinity unlike [1]'s KL. It is worth noting though that the authors do not place a prior distribution over the weights, but rather place a prior over latent "dropout" variables. Ie the model the authors perform inference in is not a Bayesian neural network (like in [1,2]) but rather a latent variable model (like [3]). Major comments to the authors: * Please clarify in the introduction what the prior is placed over (ie over auxiliary latent variables); it was not clear until page 3. * The literature survey is lacking many related works such as [2, 3, 4] ([3] being most directly related to the authors' latent variable model). * I would expect to see a comparison to [1] by removing units with large alpha parameter values as a baseline. Minor comments to the authors: * The Random Labels experiment is rather neat * Placing dropout after each convolution layer to enforce sparsity over kernel-patch pairs has been proposed before by [5] * Eq. (4) - it is not clear what "max_phi" is * Missing parenthesis on line 147 * Appendix line 8: I assume x = log theta? There also seems to be a mistake in the first line. References: [1] Diederik P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In NIPS. Curran Associates, Inc., 2015. [2] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. ICML, 2016. [3] Shin-ichi Maeda. A Bayesian encourages dropout. arXiv preprint arXiv:1412.7003, 2014. [4] S Wang and C Manning. Fast dropout training. ICML, 2013. [5] Yarin Gal and Zoubin Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate variational inference. ICLR workshop track, 2016.
NIPS
Title Information-theoretic generalization bounds for black-box learning algorithms Abstract We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.

1 Introduction

Large neural networks trained with variants of stochastic gradient descent have excellent generalization capabilities, even in regimes where the number of parameters is much larger than the number of training examples. Zhang et al. [41] showed that classical generalization bounds based on various notions of complexity of the hypothesis set fail to explain this phenomenon, as the same neural network can generalize well for one choice of training data and memorize completely for another one. This observation has spurred a tenacious search for algorithm-dependent and data-dependent generalization bounds that give meaningful results in practical settings for deep learning [17]. One line of attack bounds the generalization error based on the information about the training dataset stored in the weights [39, 4, 22, 6, 33, 14, 23, 27]. The main idea is that when the training and testing performance of a neural network are different, the network weights necessarily capture some information about the training dataset. However, the opposite might not be true: a neural network can store significant portions of the training set in its weights and still generalize well [32, 40, 21]. Furthermore, because of their information-theoretic nature, these generalization bounds become infinite or produce trivial bounds for deterministic algorithms. When such bounds are not infinite, they are notoriously hard to estimate, due to the challenges arising in the estimation of Shannon mutual information between two high-dimensional variables (e.g., the weights of a ResNet and a training dataset). This work addresses the aforementioned challenges. We first improve some of the existing information-theoretic generalization bounds, providing a unified view and derivation of them (Sec. 2). We then derive novel generalization bounds that measure information with predictions, rather than with the output of the training algorithm (Sec. 3). These bounds are applicable to a wide range of methods, including neural networks, Bayesian algorithms, ensembling algorithms, and non-parametric approaches.
In the case of neural networks, the proposed bounds improve over the existing weightbased bounds, partly because they avoid a counter-productive property of weight-based bounds that information stored in unused weights affects generalization bounds, even though it has no effect on generalization. The proposed bounds produce meaningful results for deterministic algorithms and are significantly easier to estimate. For example, in case of classification, computing our most efficient bound involves estimating mutual information between a pair of predictions and a binary variable. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). We apply the proposed bounds to ensembling algorithms, binary classification algorithms with finite VC dimension hypothesis classes, and to stable learning algorithms (Sec. 4). We compute our most efficient bound on realistic classification problems involving neural networks, and show that the bound closely follows the generalization error, even in situations when a neural network with 3M parameters is trained deterministically on 4000 examples, achieving 1% generalization error. 2 Weight-based generalization bounds We start by describing the necessary notation and definitions, after which we present some of the existing weigh-based information-theoretic generalization bounds, slightly improve some of them, and prove relations between them. The purpose of this section is to introduce the relevant existing bounds and prepare grounds for the functional conditional mutual information bounds introduced in Sec. 3, which we consider our main contribution. All proofs are presented in Appendix A. Preliminaries. We use capital letters for random variables, corresponding lowercase letters for their values, and calligraphic letters for their domains. If X is a random variable, X̄ denotes an independent copy of X . For example, if (X,Y ) is a pair of random variables with joint distribution PX,Y , then the joint distribution of (X̄, Ȳ ) will be PX̄,Ȳ = PX̄ ⊗ PȲ = PX ⊗ PY . A random variable X is called σ-subgaussian if E exp(t(X − EX)) ≤ exp(σ2t2/2), ∀t ∈ R. For example, a random variable that takes values in [a, b] almost surely, is (b− a)/2-subgaussian. Given probability measures P and Q defined on the same measurable space, such that P is absolutely continuous with respect to Q, the Kullback–Leibler divergence from P to Q is defined as KL (P ‖ Q) = ∫ log dPdQdP , where dPdQ is the Radon-Nikodym derivative of P with respect to Q. If X and Y are random variables defined on the same probability space, then KL (X ‖ Y ) denotes KL (PX ‖ PY ). The Shannon mutual information between random variables X and Y is I(X;Y ) = KL (PX,Y ‖ PX ⊗ PY ). In this paper, all information-theoretic quantities are measured in nats, instead of bits. Throughout the paper [n] denotes the set {1, 2, . . . , n}. Finally, if A = (a1, . . . , an) is a collection, then A−i , (a1, . . . , ai−1, ai+1, . . . , an). Theorems proved in the subsequent sections will be relying on the following lemma. Lemma 1. Let (Φ,Ψ) be a pair of random variables with joint distribution PΨ,Φ. If g(φ, ψ) is a measurable function such that EΦ,Ψ [g(Φ,Ψ)] exists and g(Φ̄, Ψ̄) is σ-subgaussian, then∣∣EΦ,Ψ [g(Φ,Ψ)]− EΦ,Ψ̄ [g(Φ, Ψ̄)]∣∣ ≤√2σ2I(Φ; Ψ). (1) Furthermore, if g(φ, Ψ̄) is σ-subgaussian for each φ and the expectation below exists, then EΦ,Ψ [( g(Φ,Ψ)− EΨ̄ g(Φ, Ψ̄) )2] ≤ 4σ2(I(Φ; Ψ) + log 3), (2) and P (∣∣g(Φ,Ψ)− EΨ̄ g(Φ, Ψ̄)∣∣ ≥ ) ≤ 4σ2(I(Φ; Ψ) + log 3) 2 , ∀ > 0. 
(3) The first part of this lemma is equivalent to Lemma 1 of Xu and Raginsky [39], which in turn has its roots in Russo and Zou [29]. The second part generalizes Lemma 2 of Hafez-Kolahi et al. [13] by also providing bounds on the expected squared difference. 2.1 Generalization bounds with input-output mutual information Let S = (Z1, Z2, . . . , Zn) ∼ Dn be a dataset of n i.i.d. examples, R ∈ R be a source of randomness (a random variable independent of S) and A : Zn × R → W be a training algorithm. Let W = A(S,R) be the output of the training algorithm applied on the dataset S with randomness R. Given a loss function ` :W ×Z → R, the empirical risk is Lemp(A,S,R) = 1n ∑n i=1 `(W,Zi) and the population risk is L(A,S,R) = EZ′∼D `(W,Z ′), where Z ′ is a test example independent from S and R. The generalization gap, also call generalization error, is L(A,S,R)−Lemp(A,S,R). In this setting, Xu and Raginsky [39] establish the following information-theoretic bound on the absolute value of the expected generalization gap. Theorem 2.1 (Thm. 1 of Xu and Raginsky [39]). If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ √ 2σ2I(W ;S) n . (4) We generalize this result by showing that instead of measuring information with the entire dataset, one can measure information with a subset of size m chosen uniformly at random. For brevity, hereafter we call subsets chosen uniformly at random just “random subsets”. Theorem 2.2. Let U be a random subset of [n] with size m, independent of S and R. If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ Eu∼U √ 2σ2 m I(W ;Su), (5) and ES,R (L(A,S,R)− Lemp(A,S,R))2 ≤ 4σ2 n (I(W ;S) + log 3) . (6) With a simple application of Markov’s inequality one can get tail bounds from the second part of the theorem. Furthermore, by taking square root of both sides of (6) and using Jensen’s inequality on the left side, one can also construct an upper bound for the expected absolute value of generalization gap, ES,R |L(A,S,R)− Lemp(A,S,R)|. These observations apply also to the other generalization gap bounds presented later in this work. Note the bound on the squared generalization gap is written only for the case of m = n. It is possible to derive squared generalization gap bounds of form 4σ 2 m (Eu∼U I(W ;Su) + log 3). Unfortunately, for small m the log 3 constant starts to dominate, resulting in vacuous bounds. Picking a small m decreases the mutual information term in (5), however, it also decreases the denominator. When settingm = n, we get the bound of Xu and Raginsky [39] (Thm. 2.1). Whenm = 1, the bound of (5) becomes 1n ∑n i=1 √ 2σ2I(W ;Zi), matching the result of Bu et al. [6] (Proposition 1). A similar bound, but for a different notion of information, was derived by Alabdulmohsin [2]. Bu et al. [6] prove that the bound with m = 1 is tighter than the bound with m = n. We generalize this result by proving that the bound of (5) is non-descreasing in m. Proposition 1. Let m ∈ [n− 1], U be a random subset of [n] of size m, U ′ be a random subset of size m+ 1, and φ : R→ R be any non-decreasing concave function. Then EU φ ( 1 m I(W ;Su) ) ≤ EU ′ φ ( 1 m+ 1 I(W ;Su′) ) . (7) When φ(x) = √ x, this result proves that the optimal value for m in (5) is 1. Furthermore, when we use Jensen’s inequality to move expectation over U inside the square root in (5), then the resulting bound becomes √ 2σ2 m Eu∼U I(W ;Su) and matches the result of Negrea et al. [22] (Thm. 2.3). 
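To keep these quantities concrete, the sketch below shows, for the discrete case, how one could compute mutual information in nats and plug an estimate of I(W;S) or of the per-example terms I(W;Z_i) into the bound of Thm. 2.1 and the m = 1 case of (5). This is only an illustration: the helper names and the numeric values in the example are assumptions, not taken from the paper.

import math
import numpy as np

def mutual_information_nats(p_joint):
    # I(X;Y) = KL(P_{X,Y} || P_X x P_Y) in nats, for a discrete joint probability table p_joint[i, j].
    p_joint = np.asarray(p_joint, dtype=float)
    p_x = p_joint.sum(axis=1, keepdims=True)
    p_y = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / (p_x * p_y)[mask])))

def xu_raginsky_bound(mi_ws, sigma, n):
    # Thm. 2.1: |E[gen gap]| <= sqrt(2 * sigma^2 * I(W;S) / n), with I(W;S) in nats.
    return math.sqrt(2.0 * sigma ** 2 * mi_ws / n)

def single_example_bound(mi_w_zi, sigma):
    # m = 1 case of (5): (1/n) * sum_i sqrt(2 * sigma^2 * I(W;Z_i)); by Proposition 1 this is
    # never larger than the m = n bound above.
    return sum(math.sqrt(2.0 * sigma ** 2 * mi) for mi in mi_w_zi) / len(mi_w_zi)

# A [0, 1]-valued loss is 1/2-subgaussian; the mutual information values below are made up.
print(mutual_information_nats([[0.4, 0.1], [0.1, 0.4]]))   # ~0.193 nats
print(xu_raginsky_bound(mi_ws=50.0, sigma=0.5, n=1000))    # ~0.158
print(single_example_bound([0.05] * 1000, sigma=0.5))      # ~0.158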
These bounds are also non-decreasing with respect to m (using Proposition 1 with φ(x) = x). Thm. 2.1 can be used to derive generalization bounds that depend on the information between W and a single example Zi conditioned on the remaining examples Z−i = (Z1, . . . , Zi−1, Zi+1, . . . , Zn). Theorem 2.3. If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ 1 n n∑ i=1 √ 2σ2I(W ;Zi | Z−i), (8) and ES,R (L(A,S,R)− Lemp(A,S,R))2 ≤ 4σ2 n ( n∑ i=1 I(W ;Zi | Z−i) + log 3 ) . (9) This theorem is a simple corollary of Thm. 2.2, using the facts that I(W ;Zi) ≤ I(W ;Zi | Z−i) and that I(W ;S) is upper bounded by ∑n i=1 I(W ;Zi | Z−i), which is also known as erasure information [35]. The first part of it improves the result of Raginsky et al. [26] (Thm. 2), as the averaging over i is outside of the square root. While these bounds are worse that the corresponding bounds of Thm. 2.2, it is sometimes easier to manipulate them analytically. The bounds described above measure information with the output W of the training algorithm. In the case of prediction tasks with parametric methods, the parameters W might contain information about the training dataset, but not use it to make predictions. Partly for this reason, the main goal of this paper is to derive generalization bounds that measure information with the prediction function, rather than with the weights. In general, there is no straightforward way of encoding the prediction function into a random variable. However, when the domain Z is finite, we can encode the prediction function as the collection of predictions on all examples of Z . This naturally leads us to the next setting (albeit with a different motivation), first considered by Steinke and Zakynthinou [33], where one first fixes a set of 2n examples, and then randomly selects n of them to form the training set. We use this setting to provide prediction-based generalization bounds in Sec. 3. Before describing these bounds we present the setting of Steinke and Zakynthinou [33] in detail and generalize some of the existing weight-based bounds in that setting. 2.2 Generalization bounds with conditional mutual information Let Z̃ ∈ Zn×2 be a collection of 2n i.i.d samples from D, grouped in n pairs. The random variable S ∼ Uniform({0, 1}n) specifies which example to select from each pair to form the training set Z̃S = (Z̃i,Si) n i=1. LetR be a random variable, independent of Z̃ and S, that captures the stochasticity of training. In this setting Steinke and Zakynthinou [33] defined condition mutual information (CMI) of algorithm A with respect to the data distribution D as CMID(A) = I(A(Z̃S , R);S | Z̃) = Ez̃∼Z̃ I(A(z̃S , R);S), (10) and proved the following upper bound on expected generalization gap. Theorem 2.4 (Thm. 2, Steinke and Zakynthinou [33]). If the loss function `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then the expected generalization gap can be bounded as follows:∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ √ 2 n CMID(A). (11) Haghifam et al. [14] improved this bound in two aspects. First, they provided bounds where expectation over Z̃ is outside of the square root. Second, they considered measuring information with subsets of S, as we did in the previous section. Theorem 2.5 (Thm. 3.1 of Haghifam et al. [14]). Let m ∈ [n] and U ⊆ [n] be a random subset of size m, independent from R, Z̃, and S. If the loss function `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ Ez∼Z̃ √ 2 m Eu∼U I(A(z̃S , R);Su). 
(12) Furthermore, for m = 1 they tighten the bound by showing that one can move the expectation over U outside of the squared root (Haghifam et al. [14], Thm 3.4). We generalize these results by showing that for all m expectation over U can be done outside of the square root. Furthermore, our proof closely follows the proof of Thm. 2.2. Theorem 2.6. Let m ∈ [n] and U ⊆ [n] be a random subset of size m, independent from R, Z̃, and S. If `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃,u∼U √ 2 m I(A(z̃S , R);Su), (13) and EZ̃,S,R ( L(A, Z̃S , R)− Lemp(A, Z̃S , R) )2 ≤ 8 n (Ez̃∼Z̃ I(A(z̃S , R);S) + 2) . (14) The bound of (13) improves over the bound of Thm. 2.5 and matches the special result for m = 1. Rodríguez-Gálvez et al. [28] proved even tighter expected generalization gap bound by replacing I(A(Z̃S , R);Su | Z̃ = z̃) with I(A(Z̃S , R);Su | Z̃u = z̃u). Haghifam et al. [14] showed that if one takes the expectations over Z̃ inside the square root in (12), then the resulting looser upper bounds become non-decreasing overm. Using this result they showed that their special case bound form = 1 is the tightest. We generalize their results by showing that even without taking the expectations inside the squared root, the bounds of Thm. 2.5 are non-decreasing over m. We also show that the same holds for our tighter bounds of (13). Proposition 2. Let m ∈ [n− 1], U be a random subset of [n] of size m, U ′ be a random subset of size m+ 1, z̃ be any fixed value of Z̃, and φ : R→ R be any non-decreasing concave function. Then Eu∼U φ ( 1 m I(A(z̃S , R);Su) ) ≤ Eu′∼U ′ φ ( 1 m+ 1 I(A(z̃S , R);Su′) ) . (15) By setting φ(x) = x, taking square root of both sides of (15), and then taking expectation over z̃, we prove that bounds of (12) are non-decreasing over m. By setting φ(x) = √ x and then taking expectation over z̃, we prove that bounds of (13) are non-decreasing with m. Similarly to the Thm. 2.3 of the previous section, Thm. A.1 presented in Appendix A establishes generalization bounds with information-theoretic stability quantities. 3 Functional conditional mutual information The bounds in Sec. 2 leverage information in the output of the algorithm, W . In this section we focus on supervised learning problems: Z = X × Y . To encompass many types of approaches, we do not assume that the training algorithm has an output W , which is then used to make predictions. Instead, we assume that the learning method implements a function f : Zn×X ×R → K that takes a training set z, a test input x′, an auxiliary argument r capturing the stochasticity of training and predictions, and outputs a prediction f(z, x′, r) on the test example. Note that the prediction domain K can be different from Y . This setting includes non-parametric methods (for which W is the training dataset itself), parametric methods, Bayesian algorithms, and more. For example, in parametric methods, where a hypothesis setH = {hw : X → K | w ∈ W} is defined, f(z, x, r) = hA(z,r)(x). In this supervised setting, the loss function ` : K × Y → R measures the discrepancy between a prediction and a label. As in the previous subsection, we assume that a collection of 2n i.i.d examples Z̃ ∼ Dn×2 is given, grouped in n pairs, and the random variable S ∼ Uniform({0, 1}n) specifies which example to select from each pair to form the training set Z̃S = (Z̃i,Si) n i=1. 
LetR be an auxiliary random variable, independent of Z̃ and S, that provides stochasticity for predictions (e.g., in neural networks R can be used to make the training stochastic). The empirical risk of learning method f trained on dataset Z̃S with randomnessR is defined as Lemp(f, Z̃S , R) = 1n ∑n i=1 `(f(Z̃S , Xi, R), Yi). The population risk is defined as L(f, Z̃S , R) = EZ′∼D `(f(Z̃S , X ′, R), Y ′). Before moving forward we adopt two conventions. First, if z is a collection of examples, then x and y denote the collection of its inputs and labels respectively. Second, if x is a collection of inputs, then f(z, x, r) denotes the collection of predictions on x after training on z with randomness r. We define functional conditional mutual information (f -CMI). Definition 3.1. Let D, f , R, Z̃, S be defined as above and let u ⊆ [n] be a subset of size m. Then pointwise functional conditional mutual information f -CMI(f, z̃, u) is defined as f -CMI(f, z̃, u) = I(f(z̃S , x̃u, R);Su), (16) while functional conditional mutual information f -CMID(f, u) is defined as f -CMID(f, u) = Ez̃∼Z̃ f -CMI(f, z̃, u). (17) When u = [n] we will simply use the notations f -CMI(f, z̃) and f -CMID(f), instead of f -CMI(f, z̃, [n]) and f -CMID(f, [n]), respectively. Theorem 3.1. Let U be a random subset of size m, independent of Z̃, S, and randomness of training algorithm f . If `(ŷ, y) ∈ [0, 1],∀ŷ ∈ K, z ∈ Z , then∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃,u∼U √ 2 m f -CMI(f, z̃, u), (18) and EZ̃,R,S ( L(f, Z̃S , R)− Lemp(f, Z̃S , R) )2 ≤ 8 n (Ez̃∼Z̃ f -CMI(f, z̃) + 2) . (19) For parametric methods, the bound of (18) improves over the bound of (13), as the Markov chain Su — A(z̃S , R) — f(z̃S , x̃u, R) allows to use the data processing inequality I(f(z̃S , x̃u, R);Su) ≤ I(A(z̃S , R);Su). For deterministic algorithms I(A(z̃S);Su) is often equal to H(Su) = m log 2, as most likely each choice of S produces a different W = A(z̃S). In such cases the bound with I(W ;Su) is vacuous. In contrast, the proposed bounds with f -CMI (especially when m = 1) do not have this problem. Even when the algorithm is stochastic, information between W and Su can be much larger than information between predictions and Su, as having access to weights makes it easier to determine Su (e.g., by using gradients). A similar phenomenon has been observed in the context of membership attacks, where having access to weights of a neural network allows constructing more successful membership attacks compared to having access to predictions only [21, 12]. Corollary 1. When m = n, the bound of (18) becomes∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃ √ 2 n f -CMI(f, z̃) ≤ √ 2 n f -CMID(f). (20) For parametric models, this improves over the CMI bound (Thm. 2.4), as by data processing inequality, f -CMID(f) = I(f(Z̃S , X̃, R);S | Z̃) ≤ I(A(Z̃S , R);S | Z̃) = CMID(A). Remark 1. Note that the collection of training and testing predictions f(Z̃S , X̃, R) cannot be replaced with only testing predictions f(Z̃S , X̃neg(S), R). As an example, consider an algorithm that memorizes the training examples and outputs a constant prediction on any other example. This algorithm will have non-zero generalization gap, but f(Z̃S , X̃neg(S), R) will be constant and will have zero information with S conditioned on any random variable. Moreover, if we replace f(Z̃S , X̃, R) with only training predictions f(Z̃S , X̃S , R), the resulting bound can become too loose, as one can deduce S by comparing training set predictions with the labels Ỹ . 
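As a concrete illustration of the setup behind Definition 3.1, the following minimal sketch draws the selection variable S for a fixed supersample z̃ and collects the predictions f(z̃_S, x̃, R) on both elements of every pair. The function train_and_predict is a hypothetical stand-in for whatever learning method f one studies; it is not part of the paper or its released code.

import numpy as np

def draw_fcmi_sample(z_tilde, train_and_predict, rng):
    # z_tilde: length-n list of pairs ((x_i0, y_i0), (x_i1, y_i1)), i.e. a fixed value of Z~.
    # train_and_predict(train_set, inputs) -> predictions; any randomness inside it plays the role of R.
    n = len(z_tilde)
    S = rng.integers(0, 2, size=n)                     # S ~ Uniform({0,1}^n)
    train_set = [z_tilde[i][S[i]] for i in range(n)]   # Z~_S, the training set
    xs = [z_tilde[i][c][0] for i in range(n) for c in (0, 1)]
    preds = np.asarray(train_and_predict(train_set, xs)).reshape(n, 2)
    return S, preds                                    # preds[i] = f(z~_S, x~_i, R), both elements of pair i

# Repeating this draw k times with z_tilde held fixed gives, for every i, k samples of
# (f(z~_S, x~_i, R), S_i), which is exactly what is needed to estimate the pointwise f-CMI terms.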
Corollary 2. When m = 1, the bound of (18) becomes∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ 1n n∑ i=1 Ez̃∼Z̃ √ 2I(f(z̃S , x̃i, R);Si). (21) A great advantage of this bound compared to all other bounds described so far is that the mutual information term is computed between a relatively low-dimensional random variable f(z̃S , x̃i, R) and a binary random variable Si. For example, in the case of binary classification with K = {0, 1}, f(z̃S , x̃i, R) will be a pair of 2 binary variables. This allows us to estimate the bound efficiently and accurately (please refer to Appendix B for more details). Note that estimating other informationtheoretic bounds is significantly harder. The bounds of Xu and Raginsky [39], Negrea et al. [22], and Bu et al. [6] are hard to estimate as they involve estimation of mutual information between a high-dimensional non-discrete variable W and at least one example Zi. Furthermore, this mutual information can be infinite in case of deterministic algorithms or when H(Zi) is infinite. The bounds of Haghifam et al. [14] and Steinke and Zakynthinou [33] are also hard to estimate as they involve estimation of mutual information between W and at least one train-test split variable Si. As in the case of bounds presented in the previous section (Thm. 2.2 and Thm. 2.6), we prove that the bound of Thm. 3.1 is non-decreasing in m. This stays true even when we increase the upper bounds by moving the expectation over U or the expectation over Z̃ or both under the square root. The following proposition allows us to prove all these statements. Proposition 3. Let m ∈ [n− 1], U be a random subset of [n] of size m, U ′ be a random subset of size m+ 1, z̃ be any fixed value of Z̃, and φ : R→ R be any non-decreasing concave function. Then Eu∼U φ ( 1 m I(f(z̃S , x̃u, R);Su) ) ≤ Eu′∼U ′ φ ( 1 m+ 1 I(f(z̃S , x̃u′ , R);Su′) ) . (22) By setting φ(x) = √ x and then taking expectation over z̃ and u, we prove that bounds of Thm. 3.1 are non-decreasing over m. By setting φ(x) = x, taking expectation over z̃, and then taking square root of both sides of (22), we prove that bounds are non-decreasing in m when both expectations are under the square root. Proposition 3 proves that m = 1 is the optimal choice in Thm. 3.1. Notably, the bound that is the easiest to compute is also the tightest! Analogously to Thm. A.1, we provide the following stability-based bounds. Theorem 3.2. If `(ŷ, y) ∈ [0, 1],∀ŷ ∈ K, z ∈ Z , then∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃ [ 1 n n∑ i=1 √ 2I(f(z̃S , x̃i, R);Si | S−i) ] , (23) and EZ̃,R,S ( L(f, Z̃S , R)− Lemp(f, Z̃S , R) )2 ≤ 8 n ( Ez̃∼Z̃ [ n∑ i=1 I(f(z̃S , x̃, R);Si | S−i) ] + 2 ) . Note that unlike (23), in the second part of Thm. 3.2 we measure information with predictions on all 2n pairs and Si conditioned on S−i. It is an open question whether f(z̃S , x̃, R) can be replaced with f(z̃S , x̃i, R) – predictions only on the i-th pair. 4 Applications In this section we describe 3 applications of the f -CMI-based generalization bounds. 4.1 Ensembling algorithms Ensembling algorithms combine predictions of multiple learning algorithms to obtain better performance. Let us consider k learning algorithms, f1, f2, . . . , fk, each with its own independent randomness Ri, i ∈ [k]. Some ensembling algorithms can be viewed as a possibly stochastic function g : Kk → K that takes predictions of the k algorithms and combines them into a single prediction. 
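To illustrate how cheap the mutual information term in (21) is to estimate in the binary-classification case, here is a minimal plug-in sketch: given k independent draws of (S, R) with z̃ fixed (e.g., produced by repeating the sampling sketch after Definition 3.1), it estimates I(f(z̃_S, x̃_i, R); S_i) from the empirical joint distribution of the prediction pair and the bit S_i, and assembles the right-hand side of (21). The helper names are assumptions; the estimator actually used in the paper is the one described in its Appendix B.

import numpy as np

def plugin_mi_nats(pred_pairs, s_bits):
    # pred_pairs: (k, 2) array of discrete predictions on both examples of pair i, over k draws of (S, R).
    # s_bits:     (k,)   array of the corresponding train/test bits S_i.
    # Plug-in estimate of I(f(z~_S, x~_i, R); S_i) in nats.
    k = len(s_bits)
    joint, p_f, p_s = {}, {}, {}
    for pair, si in zip(map(tuple, pred_pairs), s_bits):
        joint[(pair, si)] = joint.get((pair, si), 0.0) + 1.0 / k
        p_f[pair] = p_f.get(pair, 0.0) + 1.0 / k
        p_s[si] = p_s.get(si, 0.0) + 1.0 / k
    return float(sum(p * np.log(p / (p_f[pair] * p_s[si])) for (pair, si), p in joint.items()))

def fcmi_bound_m1(mi_per_pair):
    # Right-hand side of (21): (1/n) * sum_i sqrt(2 * I_i).
    return float(np.mean([np.sqrt(2.0 * mi) for mi in mi_per_pair]))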
Relating the generalization gap of the resulting ensembling algorithm to that of individual fis can be challenging for complicated choices of g. However, it is easy to bound the generalization gap of g(f1, . . . , fk) in terms of f -CMIs of individual predictors. Let z̃ be a fixed value of Z̃ and x be an arbitrary collection of inputs. Denoting Fi = fi(z̃S , x,Ri), i ∈ [k], we have that I(g(F1, . . . , Fk);S) ≤ I(F1, . . . , Fk;S) (data processing inequality) = I(F1;S) + I(F2, . . . , Fk;S)− I(F1;F2, . . . , Fk) + I(F1;F2, . . . , Fk | S) (chain rule) ≤ I(F1;S) + I(F2, . . . , Fk;S) (as MI is nonnegative and F1 ⊥ F2, . . . , Fk | S) ≤ . . . ≤ I(F1;S) + · · ·+ I(Fk;S). (repeating the arguments above to separate all Fi) Unfortunately, the same derivation above does not work if we replace S with Su, where u is a proper subset of [n], as I(F1;F2, . . . , Fk | Su) will not be zero in general. 4.2 Binary classification with finite VC dimension Let us consider the case of binary classification: Y = {0, 1}, where the learning method f : Zn ×X ×R → {0, 1} is implemented using a learning algorithm A : Zn ×R →W that selects a classifier from a hypothesis setH = {hw : X → Y}. IfH has finite VC dimension d [34], then for any algorithm f , the quantity f -CMI(f, z̃) can be bounded the following way. Theorem 4.1. Let Z ,H, f be defined as above, and let d <∞ be the VC dimension ofH. Then for any algorithm f and z̃ ∈ Zn×2, f -CMI(f, z̃) ≤ max {(d+ 1) log 2, d log (2en/d)} . (24) Considering the 0-1 loss function and using this result in Corollary 1, we get an expect generalization gap bound that is O (√ d n log ( n d )) , matching the classical uniform convergence bound [34]. The √ log n factor can be removed in some cases [13]. Both Xu and Raginsky [39] and Steinke and Zakynthinou [33] prove similar informationtheoretic bounds in the case of finite VC dimension classes, but their results holds for specific algorithms only. Even in the simple case of threshold functions: X = [0, 1] and H ={ hw : x 7→ 1{x>w} | w ∈ [0, 1] } , all weight-based bounds described in Sec. 2 are vacuous if one uses a training algorithm that encodes the training set in insignificant bits of W , while still getting zero error on the training set and hence achieving low test error. 4.3 Stable deterministic or stochastic algorithms Theorems 2.3, A.1 and 3.2 provide generalization bounds involving information-theoretic stability measures, such as I(W ;Zi | Z−i), I(A(z̃S , R);S | S−i) and I(f(z̃S , x̃, R);Si | S−i). In this section we build upon the predication-based stability bounds of Thm. 3.2. First, we show that for any collection of examples x, the mutual information I(f(z̃S , x);Si | S−i) can be bounded as follows. Proposition 4. Let Si←c denote S with Si set to c. Then for any z̃ ∈ Zn×2 and x̃ ∈ X k, the mutual information I(f(z̃S , x,R);Si | S−i) is upper bounded by 1 4 KL (f(z̃Si←1 , x,R)|S−i ‖ f(z̃Si←0 , x,R)|S−i) + 1 4 KL (f(z̃Si←0 , x,R)|S−i ‖ f(z̃Si←1 , x,R)|S−i) . To compute the right-hand side of Proposition 4 one needs to know how much on-average the distribution of predictions on x changes after replacing the i-th example in the training dataset. The problem arises when we consider deterministic algorithms. In such cases, the right-hand side is infinite, while the left-hand side I(f(z̃S , x,R);Si | S−i) is always finite and could be small. Therefore, for deterministic algorithms, directly applying the result of Proposition 4 will not give meaningful generalization bounds. 
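For the finite-VC-dimension case of Sec. 4.2, the bound of Thm. 4.1 combined with Corollary 1 is easy to evaluate numerically; a small helper is sketched below (the function names and the example values of d and n are illustrative assumptions).

import math

def fcmi_vc_bound(d, n):
    # Thm. 4.1: f-CMI(f, z~) <= max{(d + 1) log 2, d log(2 e n / d)}   (in nats)
    return max((d + 1) * math.log(2), d * math.log(2 * math.e * n / d))

def gen_gap_bound_vc(d, n):
    # Plugging Thm. 4.1 into Corollary 1: sqrt(2/n * f-CMI bound), i.e. O(sqrt(d/n * log(n/d))).
    return math.sqrt(2.0 / n * fcmi_vc_bound(d, n))

print(gen_gap_bound_vc(d=10, n=10_000))   # ~0.13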
Nevertheless, we show that we can add an optimal amount of noise to predictions, upper bound the generalization gap of the resulting noisy algorithm, and relate that to the generalization gap of the original deterministic algorithm. Let us consider a deterministic algorithm f : Zn × X → Rd. We define the following notions of functional stability. Definition 4.1 (Functional stability). Let S = (Z1, . . . , Zn) ∼ Dn be a collection of n i.i.d. samples, and Z ′ and Ztest be two additional independent samples from D. Let S(i) , (Z1, . . . , Zi−i, Z ′, Zi+1, . . . , Zn) be the collection constructed from S by replacing the i-th example with Z ′. A deterministic algorithm f : Zn ×X → Rd is a) β self-stable if ∀i ∈ [n], ES,Z′ ∥∥∥f(S,Zi)− f(S(i), Zi)∥∥∥2 ≤ β2, (25) b) β1 test-stable if ∀i ∈ [n], ES,Z′,Ztest ∥∥∥f(S,Ztest)− f(S(i), Ztest)∥∥∥2 ≤ β21 , (26) c) β2 train-stable if ∀i, j ∈ [n], i 6= j, ES,Z′ ∥∥∥f(S,Zj)− f(S(i), Zj)∥∥∥2 ≤ β22 . (27) Theorem 4.2. Let Y = Rd, f : Zn × X → Rd be a deterministic algorithm that is β self-stable, and `(ŷ, y) ∈ [0, 1] be a loss function that is γ-Lipschitz in the first coordinate. Then∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ 2 32 d 14√γβ. (28) Furthermore, if f is also β1 train-stable and β2 test-stable, then EZ̃,R,S ( L(f, Z̃S , R)− Lemp(f, Z̃S , R) )2 ≤ 32 n + 12 3 2 √ dγ √ 2β2 + nβ21 + nβ 2 2 . (29) It is expected that β2 is smaller than β and β1. For example, in the case of neural networks interpolating the training data or in the case of empirical risk minimization in the realizable setting, β2 will be zero. It is also expected that β is larger than β1. However, the relation of β2 and nβ21 is not trivial. The notion of pointwise hypothesis stability β′2 defined by Bousquet and Elisseeff [5] (definition 4) is comparable to our notion of self-stability β. The first part of Theorem 11 in [5] describes a generalization bound where the difference between empirical and population losses is of order 1/ √ n + √ β′2, which is comparable with our result of Thm. 4.2 (Θ( √ β)). The proof there also contains a bound on the expected squared difference of empirical and population losses. That bound is of order 1/n+ β′2. In contrast, our result of (29) contains two extra terms related to test-stability and train-stability (the terms nβ21 and nβ 2 2 ). If β dominates nβ 2 1 + nβ 2 2 , then the bound of (29) will match the result of Bousquet and Elisseeff [5]. 5 Experiments As mentioned earlier, the expected generalization gap bound of Corollary 2 is significantly easier to compute compared to existing information-theoretic bounds, and does not give trivial results for deterministic algorithms. To understand how well the bound does in challenging situations, we consider cases when the algorithm generalizes well despite the high complexity of the hypothesis class and relatively small number of training examples. Due to space constraints we omit some experimental details and present them in Appendix B. The code can be found at github.com/hrayrhar/f-CMI. First, we consider the MNIST 4 vs 9 digit classification task [20] using a 4-layer convolutional neural network (CNN) that has approximately 200K parameters. We train the network using for 200 epochs using the ADAM algorithm [18] with 0.001 learning rate, β1 = 0.9, and mini-batches of 128 examples. Importantly, we fix the random seed that controls the initialization of weights and the shuffling of training data, making the training algorithm deterministic. Fig. 
1a plots the expected generalization gap and the f -CMI bound of (21). We see that the bound is not vacuous and is not too far from the expected generalization gap even when considering only 75 training examples. As shown in the Fig. 3a of Appendix B, if we increase the width of all layers 4 times, making the number of parameters approximately 3M, the results remain largely unchanged. Next, we move away from binary classification and consider the CIFAR-10 classication task [19]. To construct a well-generalizing algorithm, we use the ResNet-50 [16] network pretrained on the ImageNet [7], and fine-tune it for 40 epochs using SGD with mini-batches of size 64, 0.01 learning rate, 0.9 momentum, and standard data augmentations. The results presented in Fig. 1b indicate that the f -CMI bound is always approximately 3 times larger than the expected generalization gap. In particular, when n = 20000, the expected generalization gap is 5%, while the bound predicts 16%. Note that the weight-based information-theoretic bounds discussed in Sec. 2 would give either infinite or trivial bounds for the deterministic algorithm described above. Even when we make the training algorithm stochastic by randomizing the seed, the quantities like I(W ;S) still remain infinite, while both the generalization gap and the f -CMI bound do not change significantly (see Fig. 3b of Appendix B). For this reason, we change the training algorithm to Stochastic Gradient Langevin Dynamics (SGLD) [10, 38] and compare the f -CMI-based bound against the specialized bound of Negrea et al. [22] (see eq. (6) of [22]). This bound (referred as SGLD bound here) is derived from a weight-based information-theoretic generalization bound, and depends on the the hyper-parameters of SGLD and on the variance of per-example gradients along the training trajectory. The SGLD algorithm is trained for 40 epochs, with learning rate and inverse temperature schedules described in Appendix B. Fig. 1c plots the expected generalization gap, the expected test error, the f -CMI bound and the SGLD bound. We see that the test accuracy plateaus after 16 epochs. At this time and afterwards, the f -CMI bound closely follows the generalization gap, while the SGLD bound increases to very high values. However, we see that the SGLD bound does better up to epoch 12. The difference between the f -CMI bound and the SGLD bound becomes more striking when we change the dataset to be a subset of CIFAR-10 consisting of 20000 examples, and fine-tune a pretrained ResNet-50 with SGLD. As shown in Fig. 2, even after a single epoch the SGLD bound is approximately 0.45, while the generalization gap is around 0.02. For comparison, the f -CMI is approximately 0.1 after one epoch of training. Interestingly, Fig. 1c shows that the f-CMI bound is large in the early epochs, despite of the extremely small generalization gap. In fact, a similar trend, albeit with lesser extent, is visible in the MNIST 4 vs 9 experiment, where a CNN is trained with a deterministic algorithm (see Fig. 3c of Appendix B). This indicates a possible area of improvement for the f -CMI bound. 6 Related Work This work is closely related to a rich literature of information-theoretic generalization bounds, some of which were discussed earlier [39, 4, 25, 22, 6, 33, 14, 13, 1, 23, 27, 8]. Most of these work derive generalization bounds that depend on a mutual information quantity measured between the output of the training algorithm and some quantity related to training data. 
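To make the quantities plotted in Sec. 5 concrete, the sketch below shows one way the expected generalization gap could be estimated from the same draws of S used for the f-CMI bound: the unselected element of each pair is an independent test sample, so its average loss estimates the population risk. It assumes a masks array of shape (k, n) and a preds array of shape (k, n, 2) collected by repeating the single-draw sketch given after Definition 3.1; the helper names are assumptions, and the actual pipeline is in the paper's released code (github.com/hrayrhar/f-CMI).

import numpy as np

def empirical_gen_gap(z_tilde, masks, preds, loss):
    # Monte Carlo estimate of the expected generalization gap from k draws of S:
    # selected elements give the empirical risk, unselected elements estimate the population risk.
    k, n, _ = preds.shape
    gaps = []
    for t in range(k):
        train_loss = np.mean([loss(preds[t, i, masks[t, i]], z_tilde[i][masks[t, i]][1])
                              for i in range(n)])
        test_loss = np.mean([loss(preds[t, i, 1 - masks[t, i]], z_tilde[i][1 - masks[t, i]][1])
                             for i in range(n)])
        gaps.append(test_loss - train_loss)
    return float(np.mean(gaps))

# Example with the zero-one loss used for classification error:
# gap = empirical_gen_gap(z_tilde, masks, preds, loss=lambda yhat, y: float(yhat != y))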
Different from this major idea, Xu and Raginsky [39] and Russo and Zou [29] discussed the idea of bounding generalization gap with the information between the input and the vector of loss functions computed on training examples. This idea was later extended to the setting of conditional mutual information by Steinke and Zakynthinou [33]. This works are similar to ours in the sense that they move away from measuring information with weights, but they did not develop this line of reasoning enough to arrive to efficient bounds similar to Corollary 2. Additionally, we believe that measuring information with the prediction function allows better interpretation and is easier to work with analytically. Another related line of research are the stability-based bounds [5, 2, 26, 3, 9, 37, 27]. In Sec. 2 and Sec. 3 we improve existing generalization bounds that use information stability. In Sec. 4.3 we describe a technique of applying information stability bounds to deterministic algorithms. The main idea is to add noise to predictions, but only for analysis purposes. A similar idea, but in the context of measuring information content of an individual example, was suggested by Harutyunyan et al. [15]. In fact, our notion of test-stability defined in Sec. 4.3 comes very close to their definition of functional sample information. A similar idea was recently used by Neu et al. [23] in analyzing generalization performance of SGD. More broadly this work is related to PAC-Bayes bounds and to classical generalization bounds. Please refer to the the survey by Jiang* et al. [17] for more information on these bounds. Finally, our work has connections with attribute and membership inference attacks [32, 40, 21, 12]. Some of these works show that having a white-box access to models allows constructing better membership inference attacks, compared to having a black-box access. This is analogous to our observation that prediction-based bounds are better than weight-based bounds. Shokri et al. [32] and Yeom et al. [40] demonstrate that even in the case of black-box access to a well-generalizing model, sometimes it is still possible to construct successful membership attacks. This is in line with our observation that the f -CMI bound can be significantly large, despite of small generalization gap (see epoch 4 of Fig. 1c). This suggests a possible direction of improving the f -CMI-based bounds. Acknowledgments and Disclosure of Funding This work is based on research sponsored by Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation therein. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Laboratory, DARPA or the U.S. Government. HH was partially supported by a USC Annenberg Fellowship.
1. What is the focus of the paper regarding generalization bounds for neural network classifiers?
2. What are the strengths of the proposed approach, particularly in using mutual information?
3. What are the weaknesses of the paper, especially regarding its theoretical nature and lack of practicality?
4. How does the reviewer assess the significance of the improved generalization bounds presented in the paper?
5. Are there any concerns or suggestions provided by the reviewer for improving the paper's content or practical applicability?
Summary Of The Paper Review
Summary Of The Paper
This paper studies information-based generalization bounds and derives several improved versions of existing generalization bounds using mutual information. The paper suggests a function-based conditional mutual information in order to tighten the existing generalization bounds and discusses some numerical results to support the shown generalization bounds.
Review
This work uses an information-theoretic approach to analyze the generalization properties of neural network classifiers. The work uses the mutual information between observed data and model parameters to bound the generalization error and then proves several variants of the existing information-based bounds through the application of conditional mutual information and function-based mutual information measures that focus on the classifier's output instead of the input samples. In general, the paper tries to address an important problem in learning theory on the generalization properties of overparameterized deep neural networks. The paper's analysis seems to improve the existing mutual information-based generalization bounds. However, I am not completely sure how to estimate the bounds from real data, because the paper has no guarantees that the estimation error for the information measures is bounded. In addition, the paper contains many theoretical statements, which makes it hard for a reader to focus on and understand the paper's main contribution. To further explain the comments above, my main concern with this work is how to evaluate the generalization bounds for standard datasets and deep neural network classifiers. The generalization bounds include conditional mutual information terms which are hard to evaluate from real data in practical settings. Also, the upper bound in Corollary 2 includes n mutual information terms where for every term we have only one observed sample. Therefore, it is not clear how one should estimate every mutual information term from only one sample. In addition to the estimation of information measures, the current paper includes too many theoretical statements, which makes it hard to distinguish the paper's main contribution from the existing results in the literature. I highly suggest removing the less important results from the main text and further explaining how the generalization bounds in this work can be more useful than existing information-based generalization bounds. The current draft seems to suggest only that the new bounds rely on conditional information measures, while this new feature may not be that useful in practice because the conditional information measures are prohibitively expensive to estimate.
Title Information-theoretic generalization bounds for black-box learning algorithms Abstract We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing informationtheoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning. N/A We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing informationtheoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning. 1 Introduction Large neural networks trained with variants of stochastic gradient descent have excellent generalization capabilities, even in regimes where the number of parameters is much larger than the number of training examples. Zhang et al. [41] showed that classical generalization bounds based on various notions of complexity of hypothesis set fail to explain this phenomenon, as the same neural network can generalize well for one choice of training data and memorize completely for another one. This observation has spurred a tenacious search for algorithm-dependent and data-dependent generalization bounds that give meaningful results in practical settings for deep learning [17]. One line of attack bounds generalization error based on the information about training dataset stored in the weights [39, 4, 22, 6, 33, 14, 23, 27]. The main idea is that when the training and testing performance of a neural network are different, the network weights necessarily capture some information about the training dataset. However, the opposite might not be true: A neural network can store significant portions of training set in its weights and still generalize well [32, 40, 21]. Furthermore, because of their information-theoretic nature, these generalization bounds become infinite or produce trivial bounds for deterministic algorithms. When such bounds are not infinite, they are notoriously hard to estimate, due to the challenges arising in estimation of Shannon mutual information between two high-dimensional variables (e.g., weights of a ResNet and a training dataset). This work addresses the aforementioned challenges. We first improve some of the existing information-theoretic generalization bounds, providing a unified view and derivation of them (Sec. 2). We then derive novel generalization bounds that measure information with predictions, rather than with the output of the training algorithm (Sec. 3). These bounds are applicable to a wide range of methods, including neural networks, Bayesian algorithms, ensembling algorithms, and non-parametric approaches. 
In the case of neural networks, the proposed bounds improve over the existing weightbased bounds, partly because they avoid a counter-productive property of weight-based bounds that information stored in unused weights affects generalization bounds, even though it has no effect on generalization. The proposed bounds produce meaningful results for deterministic algorithms and are significantly easier to estimate. For example, in case of classification, computing our most efficient bound involves estimating mutual information between a pair of predictions and a binary variable. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). We apply the proposed bounds to ensembling algorithms, binary classification algorithms with finite VC dimension hypothesis classes, and to stable learning algorithms (Sec. 4). We compute our most efficient bound on realistic classification problems involving neural networks, and show that the bound closely follows the generalization error, even in situations when a neural network with 3M parameters is trained deterministically on 4000 examples, achieving 1% generalization error. 2 Weight-based generalization bounds We start by describing the necessary notation and definitions, after which we present some of the existing weigh-based information-theoretic generalization bounds, slightly improve some of them, and prove relations between them. The purpose of this section is to introduce the relevant existing bounds and prepare grounds for the functional conditional mutual information bounds introduced in Sec. 3, which we consider our main contribution. All proofs are presented in Appendix A. Preliminaries. We use capital letters for random variables, corresponding lowercase letters for their values, and calligraphic letters for their domains. If X is a random variable, X̄ denotes an independent copy of X . For example, if (X,Y ) is a pair of random variables with joint distribution PX,Y , then the joint distribution of (X̄, Ȳ ) will be PX̄,Ȳ = PX̄ ⊗ PȲ = PX ⊗ PY . A random variable X is called σ-subgaussian if E exp(t(X − EX)) ≤ exp(σ2t2/2), ∀t ∈ R. For example, a random variable that takes values in [a, b] almost surely, is (b− a)/2-subgaussian. Given probability measures P and Q defined on the same measurable space, such that P is absolutely continuous with respect to Q, the Kullback–Leibler divergence from P to Q is defined as KL (P ‖ Q) = ∫ log dPdQdP , where dPdQ is the Radon-Nikodym derivative of P with respect to Q. If X and Y are random variables defined on the same probability space, then KL (X ‖ Y ) denotes KL (PX ‖ PY ). The Shannon mutual information between random variables X and Y is I(X;Y ) = KL (PX,Y ‖ PX ⊗ PY ). In this paper, all information-theoretic quantities are measured in nats, instead of bits. Throughout the paper [n] denotes the set {1, 2, . . . , n}. Finally, if A = (a1, . . . , an) is a collection, then A−i , (a1, . . . , ai−1, ai+1, . . . , an). Theorems proved in the subsequent sections will be relying on the following lemma. Lemma 1. Let (Φ,Ψ) be a pair of random variables with joint distribution PΨ,Φ. If g(φ, ψ) is a measurable function such that EΦ,Ψ [g(Φ,Ψ)] exists and g(Φ̄, Ψ̄) is σ-subgaussian, then∣∣EΦ,Ψ [g(Φ,Ψ)]− EΦ,Ψ̄ [g(Φ, Ψ̄)]∣∣ ≤√2σ2I(Φ; Ψ). (1) Furthermore, if g(φ, Ψ̄) is σ-subgaussian for each φ and the expectation below exists, then EΦ,Ψ [( g(Φ,Ψ)− EΨ̄ g(Φ, Ψ̄) )2] ≤ 4σ2(I(Φ; Ψ) + log 3), (2) and P (∣∣g(Φ,Ψ)− EΨ̄ g(Φ, Ψ̄)∣∣ ≥ ) ≤ 4σ2(I(Φ; Ψ) + log 3) 2 , ∀ > 0. 
(3) The first part of this lemma is equivalent to Lemma 1 of Xu and Raginsky [39], which in turn has its roots in Russo and Zou [29]. The second part generalizes Lemma 2 of Hafez-Kolahi et al. [13] by also providing bounds on the expected squared difference. 2.1 Generalization bounds with input-output mutual information Let S = (Z1, Z2, . . . , Zn) ∼ Dn be a dataset of n i.i.d. examples, R ∈ R be a source of randomness (a random variable independent of S) and A : Zn × R → W be a training algorithm. Let W = A(S,R) be the output of the training algorithm applied on the dataset S with randomness R. Given a loss function ` :W ×Z → R, the empirical risk is Lemp(A,S,R) = 1n ∑n i=1 `(W,Zi) and the population risk is L(A,S,R) = EZ′∼D `(W,Z ′), where Z ′ is a test example independent from S and R. The generalization gap, also call generalization error, is L(A,S,R)−Lemp(A,S,R). In this setting, Xu and Raginsky [39] establish the following information-theoretic bound on the absolute value of the expected generalization gap. Theorem 2.1 (Thm. 1 of Xu and Raginsky [39]). If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ √ 2σ2I(W ;S) n . (4) We generalize this result by showing that instead of measuring information with the entire dataset, one can measure information with a subset of size m chosen uniformly at random. For brevity, hereafter we call subsets chosen uniformly at random just “random subsets”. Theorem 2.2. Let U be a random subset of [n] with size m, independent of S and R. If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ Eu∼U √ 2σ2 m I(W ;Su), (5) and ES,R (L(A,S,R)− Lemp(A,S,R))2 ≤ 4σ2 n (I(W ;S) + log 3) . (6) With a simple application of Markov’s inequality one can get tail bounds from the second part of the theorem. Furthermore, by taking square root of both sides of (6) and using Jensen’s inequality on the left side, one can also construct an upper bound for the expected absolute value of generalization gap, ES,R |L(A,S,R)− Lemp(A,S,R)|. These observations apply also to the other generalization gap bounds presented later in this work. Note the bound on the squared generalization gap is written only for the case of m = n. It is possible to derive squared generalization gap bounds of form 4σ 2 m (Eu∼U I(W ;Su) + log 3). Unfortunately, for small m the log 3 constant starts to dominate, resulting in vacuous bounds. Picking a small m decreases the mutual information term in (5), however, it also decreases the denominator. When settingm = n, we get the bound of Xu and Raginsky [39] (Thm. 2.1). Whenm = 1, the bound of (5) becomes 1n ∑n i=1 √ 2σ2I(W ;Zi), matching the result of Bu et al. [6] (Proposition 1). A similar bound, but for a different notion of information, was derived by Alabdulmohsin [2]. Bu et al. [6] prove that the bound with m = 1 is tighter than the bound with m = n. We generalize this result by proving that the bound of (5) is non-descreasing in m. Proposition 1. Let m ∈ [n− 1], U be a random subset of [n] of size m, U ′ be a random subset of size m+ 1, and φ : R→ R be any non-decreasing concave function. Then EU φ ( 1 m I(W ;Su) ) ≤ EU ′ φ ( 1 m+ 1 I(W ;Su′) ) . (7) When φ(x) = √ x, this result proves that the optimal value for m in (5) is 1. Furthermore, when we use Jensen’s inequality to move expectation over U inside the square root in (5), then the resulting bound becomes √ 2σ2 m Eu∼U I(W ;Su) and matches the result of Negrea et al. [22] (Thm. 2.3). 
These bounds are also non-decreasing with respect to m (using Proposition 1 with φ(x) = x). Thm. 2.1 can be used to derive generalization bounds that depend on the information between W and a single example Zi conditioned on the remaining examples Z−i = (Z1, . . . , Zi−1, Zi+1, . . . , Zn). Theorem 2.3. If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ 1 n n∑ i=1 √ 2σ2I(W ;Zi | Z−i), (8) and ES,R (L(A,S,R)− Lemp(A,S,R))2 ≤ 4σ2 n ( n∑ i=1 I(W ;Zi | Z−i) + log 3 ) . (9) This theorem is a simple corollary of Thm. 2.2, using the facts that I(W ;Zi) ≤ I(W ;Zi | Z−i) and that I(W ;S) is upper bounded by ∑n i=1 I(W ;Zi | Z−i), which is also known as erasure information [35]. The first part of it improves the result of Raginsky et al. [26] (Thm. 2), as the averaging over i is outside of the square root. While these bounds are worse that the corresponding bounds of Thm. 2.2, it is sometimes easier to manipulate them analytically. The bounds described above measure information with the output W of the training algorithm. In the case of prediction tasks with parametric methods, the parameters W might contain information about the training dataset, but not use it to make predictions. Partly for this reason, the main goal of this paper is to derive generalization bounds that measure information with the prediction function, rather than with the weights. In general, there is no straightforward way of encoding the prediction function into a random variable. However, when the domain Z is finite, we can encode the prediction function as the collection of predictions on all examples of Z . This naturally leads us to the next setting (albeit with a different motivation), first considered by Steinke and Zakynthinou [33], where one first fixes a set of 2n examples, and then randomly selects n of them to form the training set. We use this setting to provide prediction-based generalization bounds in Sec. 3. Before describing these bounds we present the setting of Steinke and Zakynthinou [33] in detail and generalize some of the existing weight-based bounds in that setting. 2.2 Generalization bounds with conditional mutual information Let Z̃ ∈ Zn×2 be a collection of 2n i.i.d samples from D, grouped in n pairs. The random variable S ∼ Uniform({0, 1}n) specifies which example to select from each pair to form the training set Z̃S = (Z̃i,Si) n i=1. LetR be a random variable, independent of Z̃ and S, that captures the stochasticity of training. In this setting Steinke and Zakynthinou [33] defined condition mutual information (CMI) of algorithm A with respect to the data distribution D as CMID(A) = I(A(Z̃S , R);S | Z̃) = Ez̃∼Z̃ I(A(z̃S , R);S), (10) and proved the following upper bound on expected generalization gap. Theorem 2.4 (Thm. 2, Steinke and Zakynthinou [33]). If the loss function `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then the expected generalization gap can be bounded as follows:∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ √ 2 n CMID(A). (11) Haghifam et al. [14] improved this bound in two aspects. First, they provided bounds where expectation over Z̃ is outside of the square root. Second, they considered measuring information with subsets of S, as we did in the previous section. Theorem 2.5 (Thm. 3.1 of Haghifam et al. [14]). Let m ∈ [n] and U ⊆ [n] be a random subset of size m, independent from R, Z̃, and S. If the loss function `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ Ez∼Z̃ √ 2 m Eu∼U I(A(z̃S , R);Su). 
(12) Furthermore, for m = 1 they tighten the bound by showing that one can move the expectation over U outside of the squared root (Haghifam et al. [14], Thm 3.4). We generalize these results by showing that for all m expectation over U can be done outside of the square root. Furthermore, our proof closely follows the proof of Thm. 2.2. Theorem 2.6. Let m ∈ [n] and U ⊆ [n] be a random subset of size m, independent from R, Z̃, and S. If `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃,u∼U √ 2 m I(A(z̃S , R);Su), (13) and EZ̃,S,R ( L(A, Z̃S , R)− Lemp(A, Z̃S , R) )2 ≤ 8 n (Ez̃∼Z̃ I(A(z̃S , R);S) + 2) . (14) The bound of (13) improves over the bound of Thm. 2.5 and matches the special result for m = 1. Rodríguez-Gálvez et al. [28] proved even tighter expected generalization gap bound by replacing I(A(Z̃S , R);Su | Z̃ = z̃) with I(A(Z̃S , R);Su | Z̃u = z̃u). Haghifam et al. [14] showed that if one takes the expectations over Z̃ inside the square root in (12), then the resulting looser upper bounds become non-decreasing overm. Using this result they showed that their special case bound form = 1 is the tightest. We generalize their results by showing that even without taking the expectations inside the squared root, the bounds of Thm. 2.5 are non-decreasing over m. We also show that the same holds for our tighter bounds of (13). Proposition 2. Let m ∈ [n− 1], U be a random subset of [n] of size m, U ′ be a random subset of size m+ 1, z̃ be any fixed value of Z̃, and φ : R→ R be any non-decreasing concave function. Then Eu∼U φ ( 1 m I(A(z̃S , R);Su) ) ≤ Eu′∼U ′ φ ( 1 m+ 1 I(A(z̃S , R);Su′) ) . (15) By setting φ(x) = x, taking square root of both sides of (15), and then taking expectation over z̃, we prove that bounds of (12) are non-decreasing over m. By setting φ(x) = √ x and then taking expectation over z̃, we prove that bounds of (13) are non-decreasing with m. Similarly to the Thm. 2.3 of the previous section, Thm. A.1 presented in Appendix A establishes generalization bounds with information-theoretic stability quantities. 3 Functional conditional mutual information The bounds in Sec. 2 leverage information in the output of the algorithm, W . In this section we focus on supervised learning problems: Z = X × Y . To encompass many types of approaches, we do not assume that the training algorithm has an output W , which is then used to make predictions. Instead, we assume that the learning method implements a function f : Zn×X ×R → K that takes a training set z, a test input x′, an auxiliary argument r capturing the stochasticity of training and predictions, and outputs a prediction f(z, x′, r) on the test example. Note that the prediction domain K can be different from Y . This setting includes non-parametric methods (for which W is the training dataset itself), parametric methods, Bayesian algorithms, and more. For example, in parametric methods, where a hypothesis setH = {hw : X → K | w ∈ W} is defined, f(z, x, r) = hA(z,r)(x). In this supervised setting, the loss function ` : K × Y → R measures the discrepancy between a prediction and a label. As in the previous subsection, we assume that a collection of 2n i.i.d examples Z̃ ∼ Dn×2 is given, grouped in n pairs, and the random variable S ∼ Uniform({0, 1}n) specifies which example to select from each pair to form the training set Z̃S = (Z̃i,Si) n i=1. 
LetR be an auxiliary random variable, independent of Z̃ and S, that provides stochasticity for predictions (e.g., in neural networks R can be used to make the training stochastic). The empirical risk of learning method f trained on dataset Z̃S with randomnessR is defined as Lemp(f, Z̃S , R) = 1n ∑n i=1 `(f(Z̃S , Xi, R), Yi). The population risk is defined as L(f, Z̃S , R) = EZ′∼D `(f(Z̃S , X ′, R), Y ′). Before moving forward we adopt two conventions. First, if z is a collection of examples, then x and y denote the collection of its inputs and labels respectively. Second, if x is a collection of inputs, then f(z, x, r) denotes the collection of predictions on x after training on z with randomness r. We define functional conditional mutual information (f -CMI). Definition 3.1. Let D, f , R, Z̃, S be defined as above and let u ⊆ [n] be a subset of size m. Then pointwise functional conditional mutual information f -CMI(f, z̃, u) is defined as f -CMI(f, z̃, u) = I(f(z̃S , x̃u, R);Su), (16) while functional conditional mutual information f -CMID(f, u) is defined as f -CMID(f, u) = Ez̃∼Z̃ f -CMI(f, z̃, u). (17) When u = [n] we will simply use the notations f -CMI(f, z̃) and f -CMID(f), instead of f -CMI(f, z̃, [n]) and f -CMID(f, [n]), respectively. Theorem 3.1. Let U be a random subset of size m, independent of Z̃, S, and randomness of training algorithm f . If `(ŷ, y) ∈ [0, 1],∀ŷ ∈ K, z ∈ Z , then∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃,u∼U √ 2 m f -CMI(f, z̃, u), (18) and EZ̃,R,S ( L(f, Z̃S , R)− Lemp(f, Z̃S , R) )2 ≤ 8 n (Ez̃∼Z̃ f -CMI(f, z̃) + 2) . (19) For parametric methods, the bound of (18) improves over the bound of (13), as the Markov chain Su — A(z̃S , R) — f(z̃S , x̃u, R) allows to use the data processing inequality I(f(z̃S , x̃u, R);Su) ≤ I(A(z̃S , R);Su). For deterministic algorithms I(A(z̃S);Su) is often equal to H(Su) = m log 2, as most likely each choice of S produces a different W = A(z̃S). In such cases the bound with I(W ;Su) is vacuous. In contrast, the proposed bounds with f -CMI (especially when m = 1) do not have this problem. Even when the algorithm is stochastic, information between W and Su can be much larger than information between predictions and Su, as having access to weights makes it easier to determine Su (e.g., by using gradients). A similar phenomenon has been observed in the context of membership attacks, where having access to weights of a neural network allows constructing more successful membership attacks compared to having access to predictions only [21, 12]. Corollary 1. When m = n, the bound of (18) becomes∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃ √ 2 n f -CMI(f, z̃) ≤ √ 2 n f -CMID(f). (20) For parametric models, this improves over the CMI bound (Thm. 2.4), as by data processing inequality, f -CMID(f) = I(f(Z̃S , X̃, R);S | Z̃) ≤ I(A(Z̃S , R);S | Z̃) = CMID(A). Remark 1. Note that the collection of training and testing predictions f(Z̃S , X̃, R) cannot be replaced with only testing predictions f(Z̃S , X̃neg(S), R). As an example, consider an algorithm that memorizes the training examples and outputs a constant prediction on any other example. This algorithm will have non-zero generalization gap, but f(Z̃S , X̃neg(S), R) will be constant and will have zero information with S conditioned on any random variable. Moreover, if we replace f(Z̃S , X̃, R) with only training predictions f(Z̃S , X̃S , R), the resulting bound can become too loose, as one can deduce S by comparing training set predictions with the labels Ỹ . 
Corollary 2. When m = 1, the bound of (18) becomes
\[ \Bigl|\mathbb{E}_{\tilde{Z},R,S}\bigl[L(f,\tilde{Z}_S,R)-L_{\mathrm{emp}}(f,\tilde{Z}_S,R)\bigr]\Bigr| \;\le\; \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{\tilde{z}\sim\tilde{Z}}\sqrt{2\,I\bigl(f(\tilde{z}_S,\tilde{x}_i,R);S_i\bigr)}. \tag{21} \]

A great advantage of this bound compared to all other bounds described so far is that the mutual information term is computed between a relatively low-dimensional random variable f(z̃S, x̃i, R) and a binary random variable Si. For example, in the case of binary classification with K = {0, 1}, f(z̃S, x̃i, R) will be a pair of binary variables. This allows us to estimate the bound efficiently and accurately (please refer to Appendix B for more details). Note that estimating the other information-theoretic bounds is significantly harder. The bounds of Xu and Raginsky [39], Negrea et al. [22], and Bu et al. [6] are hard to estimate as they involve estimation of mutual information between a high-dimensional non-discrete variable W and at least one example Zi. Furthermore, this mutual information can be infinite in the case of deterministic algorithms or when H(Zi) is infinite. The bounds of Haghifam et al. [14] and Steinke and Zakynthinou [33] are also hard to estimate as they involve estimation of mutual information between W and at least one train-test split variable Si.

As in the case of the bounds presented in the previous section (Thm. 2.2 and Thm. 2.6), we prove that the bound of Thm. 3.1 is non-decreasing in m. This stays true even when we increase the upper bounds by moving the expectation over U, the expectation over Z̃, or both under the square root. The following proposition allows us to prove all these statements.

Proposition 3. Let m ∈ [n − 1], U be a random subset of [n] of size m, U′ be a random subset of size m + 1, z̃ be any fixed value of Z̃, and φ : R → R be any non-decreasing concave function. Then
\[ \mathbb{E}_{u\sim U}\,\varphi\Bigl(\tfrac{1}{m}\,I\bigl(f(\tilde{z}_S,\tilde{x}_u,R);S_u\bigr)\Bigr) \;\le\; \mathbb{E}_{u'\sim U'}\,\varphi\Bigl(\tfrac{1}{m+1}\,I\bigl(f(\tilde{z}_S,\tilde{x}_{u'},R);S_{u'}\bigr)\Bigr). \tag{22} \]

By setting φ(x) = √x and then taking the expectation over z̃ and u, we prove that the bounds of Thm. 3.1 are non-decreasing over m. By setting φ(x) = x, taking the expectation over z̃, and then taking the square root of both sides of (22), we prove that the bounds are non-decreasing in m when both expectations are under the square root. Proposition 3 proves that m = 1 is the optimal choice in Thm. 3.1. Notably, the bound that is the easiest to compute is also the tightest!

Analogously to Thm. A.1, we provide the following stability-based bounds.

Theorem 3.2. If ℓ(ŷ, y) ∈ [0, 1] for all ŷ ∈ K and y ∈ Y, then
\[ \Bigl|\mathbb{E}_{\tilde{Z},R,S}\bigl[L(f,\tilde{Z}_S,R)-L_{\mathrm{emp}}(f,\tilde{Z}_S,R)\bigr]\Bigr| \;\le\; \mathbb{E}_{\tilde{z}\sim\tilde{Z}}\Biggl[\frac{1}{n}\sum_{i=1}^{n}\sqrt{2\,I\bigl(f(\tilde{z}_S,\tilde{x}_i,R);S_i\mid S_{-i}\bigr)}\Biggr], \tag{23} \]
and
\[ \mathbb{E}_{\tilde{Z},R,S}\bigl(L(f,\tilde{Z}_S,R)-L_{\mathrm{emp}}(f,\tilde{Z}_S,R)\bigr)^2 \;\le\; \frac{8}{n}\Biggl(\mathbb{E}_{\tilde{z}\sim\tilde{Z}}\Biggl[\sum_{i=1}^{n} I\bigl(f(\tilde{z}_S,\tilde{x},R);S_i\mid S_{-i}\bigr)\Biggr]+2\Biggr). \]

Note that, unlike in (23), in the second part of Thm. 3.2 we measure information between the predictions on all 2n pairs and Si, conditioned on S−i. It is an open question whether f(z̃S, x̃, R) can be replaced with f(z̃S, x̃i, R) – predictions only on the i-th pair.

4 Applications

In this section we describe three applications of the f-CMI-based generalization bounds.

4.1 Ensembling algorithms

Ensembling algorithms combine the predictions of multiple learning algorithms to obtain better performance. Let us consider k learning algorithms, f1, f2, . . . , fk, each with its own independent randomness Ri, i ∈ [k]. Some ensembling algorithms can be viewed as a possibly stochastic function g : K^k → K that takes the predictions of the k algorithms and combines them into a single prediction.
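As a concrete illustration, the sketch below shows one such combiner g (majority voting) together with how per-model f-CMI estimates could be combined into a single bound for the ensemble. The additive combination it relies on is justified by the derivation in the next paragraph; the per-model estimates are assumed to be obtained separately (for instance, with a plug-in estimator as sketched in Sec. 3).

```python
# Illustrative sketch only: a simple combiner g and the additive combination of
# per-model f-CMI estimates. `fcmi_estimates` are assumed per-model estimates of
# f-CMI_D(f_i), obtained separately.
from collections import Counter
import numpy as np

def majority_vote(predictions):
    """g : K^k -> K; ties are broken towards the smallest label."""
    counts = Counter(predictions)
    top = max(counts.values())
    return min(label for label, c in counts.items() if c == top)

def ensemble_gap_bound(fcmi_estimates, n):
    """Corollary 1 combined with f-CMI_D(g(f_1,...,f_k)) <= sum_i f-CMI_D(f_i)."""
    return float(np.sqrt(2.0 / n * np.sum(fcmi_estimates)))
```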
Relating the generalization gap of the resulting ensembling algorithm to that of the individual fi's can be challenging for complicated choices of g. However, it is easy to bound the generalization gap of g(f1, . . . , fk) in terms of the f-CMIs of the individual predictors. Let z̃ be a fixed value of Z̃ and x be an arbitrary collection of inputs. Denoting Fi = fi(z̃S, x, Ri), i ∈ [k], we have that
\[
\begin{aligned}
I\bigl(g(F_1,\ldots,F_k);S\bigr) &\le I(F_1,\ldots,F_k;S) && \text{(data processing inequality)}\\
&= I(F_1;S) + I(F_2,\ldots,F_k;S) - I(F_1;F_2,\ldots,F_k) + I(F_1;F_2,\ldots,F_k \mid S) && \text{(chain rule)}\\
&\le I(F_1;S) + I(F_2,\ldots,F_k;S) && \text{(MI is nonnegative and } F_1 \perp F_2,\ldots,F_k \mid S\text{)}\\
&\le \ldots \le I(F_1;S) + \cdots + I(F_k;S). && \text{(repeating the argument to separate all } F_i\text{)}
\end{aligned}
\]
Unfortunately, the same derivation does not work if we replace S with Su, where u is a proper subset of [n], as I(F1; F2, . . . , Fk | Su) will not be zero in general.

4.2 Binary classification with finite VC dimension

Let us consider the case of binary classification: Y = {0, 1}, where the learning method f : Z^n × X × R → {0, 1} is implemented using a learning algorithm A : Z^n × R → W that selects a classifier from a hypothesis set H = {h_w : X → Y}. If H has finite VC dimension d [34], then for any algorithm f, the quantity f-CMI(f, z̃) can be bounded in the following way.

Theorem 4.1. Let Z, H, f be defined as above, and let d < ∞ be the VC dimension of H. Then for any algorithm f and z̃ ∈ Z^{n×2},
\[ f\text{-CMI}(f,\tilde{z}) \;\le\; \max\bigl\{(d+1)\log 2,\; d\log(2en/d)\bigr\}. \tag{24} \]

Considering the 0-1 loss function and using this result in Corollary 1, we get an expected generalization gap bound that is \(O\bigl(\sqrt{\tfrac{d}{n}\log\tfrac{n}{d}}\bigr)\), matching the classical uniform convergence bound [34]. The \(\sqrt{\log n}\) factor can be removed in some cases [13]. Both Xu and Raginsky [39] and Steinke and Zakynthinou [33] prove similar information-theoretic bounds in the case of finite VC dimension classes, but their results hold for specific algorithms only. Even in the simple case of threshold functions, X = [0, 1] and H = {h_w : x ↦ 1_{x > w} | w ∈ [0, 1]}, all weight-based bounds described in Sec. 2 are vacuous if one uses a training algorithm that encodes the training set in insignificant bits of W, while still getting zero error on the training set and hence achieving low test error.

4.3 Stable deterministic or stochastic algorithms

Theorems 2.3, A.1 and 3.2 provide generalization bounds involving information-theoretic stability measures, such as I(W; Zi | Z−i), I(A(z̃S, R); S | S−i) and I(f(z̃S, x̃, R); Si | S−i). In this section we build upon the prediction-based stability bounds of Thm. 3.2. First, we show that for any collection of examples x, the mutual information I(f(z̃S, x, R); Si | S−i) can be bounded as follows.

Proposition 4. Let S_{i←c} denote S with Si set to c. Then for any z̃ ∈ Z^{n×2} and x ∈ X^k, the mutual information I(f(z̃S, x, R); Si | S−i) is upper bounded by
\[ \tfrac{1}{4}\,\mathrm{KL}\Bigl(f(\tilde{z}_{S_{i\leftarrow 1}},x,R)\,\big|\,S_{-i} \;\Big\|\; f(\tilde{z}_{S_{i\leftarrow 0}},x,R)\,\big|\,S_{-i}\Bigr) + \tfrac{1}{4}\,\mathrm{KL}\Bigl(f(\tilde{z}_{S_{i\leftarrow 0}},x,R)\,\big|\,S_{-i} \;\Big\|\; f(\tilde{z}_{S_{i\leftarrow 1}},x,R)\,\big|\,S_{-i}\Bigr). \]

To compute the right-hand side of Proposition 4 one needs to know how much, on average, the distribution of predictions on x changes after replacing the i-th example in the training dataset. The problem arises when we consider deterministic algorithms. In such cases, the right-hand side is infinite, while the left-hand side I(f(z̃S, x, R); Si | S−i) is always finite and could be small. Therefore, for deterministic algorithms, directly applying the result of Proposition 4 will not give meaningful generalization bounds.
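Before describing the fix, here is a heuristic calculation (assuming Gaussian smoothing noise; it is not part of the formal proof) of why adding noise to the predictions makes the right-hand side finite, and of where the rate of Theorem 4.2 below comes from.

```latex
% Heuristic sketch: add Gaussian noise to the predictions of a deterministic f.
\[
  f_\sigma(z, x, \xi) = f(z, x) + \xi, \qquad \xi \sim \mathcal{N}(0, \sigma^2 I_d).
\]
% For a fixed value of S_{-i} the two conditional distributions in Proposition 4 are
% Gaussians with the same covariance, so the symmetrized KL term becomes
\[
  \tfrac14 \mathrm{KL}\!\left(\mathcal{N}(\mu_1, \sigma^2 I_d) \,\|\, \mathcal{N}(\mu_0, \sigma^2 I_d)\right)
  + \tfrac14 \mathrm{KL}\!\left(\mathcal{N}(\mu_0, \sigma^2 I_d) \,\|\, \mathcal{N}(\mu_1, \sigma^2 I_d)\right)
  = \frac{\|\mu_1 - \mu_0\|^2}{4\sigma^2},
  \qquad \mu_c = f(\tilde{z}_{S_{i \leftarrow c}}, x).
\]
% Averaging \|\mu_1 - \mu_0\|^2 over S_{-i} and \tilde{z} gives stability quantities of
% the kind defined in Definition 4.1 below, so the bound of Thm. 3.2 contributes a term
% of order \beta / \sigma, while \gamma-Lipschitzness of the loss makes the smoothing
% itself cost at most of order \gamma \sigma \sqrt{d}. Balancing the two,
\[
  \min_{\sigma > 0} \left\{ \frac{\beta}{\sigma} + \gamma \sigma \sqrt{d} \right\}
  = 2\, d^{1/4} \sqrt{\gamma \beta}
  \quad \text{at } \sigma = \sqrt{\beta / (\gamma \sqrt{d})},
\]
% which recovers the d^{1/4}\sqrt{\gamma\beta} rate of Theorem 4.2 up to constants.
```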
Nevertheless, we show that we can add an optimal amount of noise to the predictions, upper bound the generalization gap of the resulting noisy algorithm, and relate that to the generalization gap of the original deterministic algorithm. Let us consider a deterministic algorithm f : Z^n × X → R^d. We define the following notions of functional stability.

Definition 4.1 (Functional stability). Let S = (Z1, . . . , Zn) ∼ D^n be a collection of n i.i.d. samples, and let Z′ and Z_test be two additional independent samples from D. Let S^{(i)} ≜ (Z1, . . . , Z_{i−1}, Z′, Z_{i+1}, . . . , Zn) be the collection constructed from S by replacing the i-th example with Z′. A deterministic algorithm f : Z^n × X → R^d is

a) β self-stable if
\[ \forall i\in[n],\quad \mathbb{E}_{S,Z'}\bigl\|f(S,Z_i)-f(S^{(i)},Z_i)\bigr\|^2 \le \beta^2, \tag{25} \]
b) β1 test-stable if
\[ \forall i\in[n],\quad \mathbb{E}_{S,Z',Z_{\mathrm{test}}}\bigl\|f(S,Z_{\mathrm{test}})-f(S^{(i)},Z_{\mathrm{test}})\bigr\|^2 \le \beta_1^2, \tag{26} \]
c) β2 train-stable if
\[ \forall i,j\in[n],\ i\neq j,\quad \mathbb{E}_{S,Z'}\bigl\|f(S,Z_j)-f(S^{(i)},Z_j)\bigr\|^2 \le \beta_2^2. \tag{27} \]

Theorem 4.2. Let Y = R^d, let f : Z^n × X → R^d be a deterministic algorithm that is β self-stable, and let ℓ(ŷ, y) ∈ [0, 1] be a loss function that is γ-Lipschitz in the first coordinate. Then
\[ \Bigl|\mathbb{E}_{\tilde{Z},R,S}\bigl[L(f,\tilde{Z}_S,R)-L_{\mathrm{emp}}(f,\tilde{Z}_S,R)\bigr]\Bigr| \;\le\; 2^{3/2}\, d^{1/4}\sqrt{\gamma\beta}. \tag{28} \]
Furthermore, if f is also β1 test-stable and β2 train-stable, then
\[ \mathbb{E}_{\tilde{Z},R,S}\bigl(L(f,\tilde{Z}_S,R)-L_{\mathrm{emp}}(f,\tilde{Z}_S,R)\bigr)^2 \;\le\; \frac{32}{n} + 12^{3/2}\sqrt{d}\,\gamma\sqrt{2\beta^2 + n\beta_1^2 + n\beta_2^2}. \tag{29} \]

It is expected that β2 is smaller than β and β1. For example, in the case of neural networks interpolating the training data, or in the case of empirical risk minimization in the realizable setting, β2 will be zero. It is also expected that β is larger than β1. However, the relation between β^2 and nβ_1^2 is not trivial. The notion of pointwise hypothesis stability β′_2 defined by Bousquet and Elisseeff [5] (Definition 4) is comparable to our notion of self-stability β. The first part of Theorem 11 in [5] describes a generalization bound where the difference between empirical and population losses is of order 1/√n + √(β′_2), which is comparable with our result of Thm. 4.2 (Θ(√β)). The proof there also contains a bound on the expected squared difference of empirical and population losses. That bound is of order 1/n + β′_2. In contrast, our result of (29) contains two extra terms related to test-stability and train-stability (the terms nβ_1^2 and nβ_2^2). If β^2 dominates nβ_1^2 + nβ_2^2, then the bound of (29) will match the result of Bousquet and Elisseeff [5].

5 Experiments

As mentioned earlier, the expected generalization gap bound of Corollary 2 is significantly easier to compute than the existing information-theoretic bounds, and does not give trivial results for deterministic algorithms. To understand how well the bound does in challenging situations, we consider cases where the algorithm generalizes well despite the high complexity of the hypothesis class and a relatively small number of training examples. Due to space constraints we omit some experimental details and present them in Appendix B. The code can be found at github.com/hrayrhar/f-CMI.

First, we consider the MNIST 4 vs 9 digit classification task [20] using a 4-layer convolutional neural network (CNN) that has approximately 200K parameters. We train the network for 200 epochs using the ADAM algorithm [18] with learning rate 0.001, β1 = 0.9, and mini-batches of 128 examples. Importantly, we fix the random seed that controls the initialization of weights and the shuffling of training data, making the training algorithm deterministic.
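The following is a minimal sketch (assuming a PyTorch setup; the exact calls used in our code may differ and are described in Appendix B) of how such determinism can be enforced.

```python
# Illustrative sketch: fixing every source of randomness so that training is a
# deterministic function of the selected training set.
import random
import numpy as np
import torch

def make_deterministic(seed: int = 0) -> torch.Generator:
    random.seed(seed)                           # Python-level shuffling
    np.random.seed(seed)                        # numpy-based sampling / augmentation
    torch.manual_seed(seed)                     # weight initialization, dropout, etc.
    torch.backends.cudnn.deterministic = True   # deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False
    g = torch.Generator()
    g.manual_seed(seed)                         # pass as DataLoader(..., generator=g)
    return g
```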
Fig. 1a plots the expected generalization gap and the f-CMI bound of (21). We see that the bound is not vacuous and is not too far from the expected generalization gap, even when considering only 75 training examples. As shown in Fig. 3a of Appendix B, if we increase the width of all layers 4 times, making the number of parameters approximately 3M, the results remain largely unchanged.

Next, we move away from binary classification and consider the CIFAR-10 classification task [19]. To construct a well-generalizing algorithm, we use the ResNet-50 [16] network pretrained on ImageNet [7], and fine-tune it for 40 epochs using SGD with mini-batches of size 64, learning rate 0.01, momentum 0.9, and standard data augmentations. The results presented in Fig. 1b indicate that the f-CMI bound is always approximately 3 times larger than the expected generalization gap. In particular, when n = 20000, the expected generalization gap is 5%, while the bound predicts 16%.

Note that the weight-based information-theoretic bounds discussed in Sec. 2 would give either infinite or trivial bounds for the deterministic algorithm described above. Even when we make the training algorithm stochastic by randomizing the seed, quantities like I(W; S) still remain infinite, while both the generalization gap and the f-CMI bound do not change significantly (see Fig. 3b of Appendix B). For this reason, we change the training algorithm to Stochastic Gradient Langevin Dynamics (SGLD) [10, 38] and compare the f-CMI-based bound against the specialized bound of Negrea et al. [22] (see eq. (6) of [22]). This bound (referred to as the SGLD bound here) is derived from a weight-based information-theoretic generalization bound, and depends on the hyper-parameters of SGLD and on the variance of per-example gradients along the training trajectory. The SGLD algorithm is trained for 40 epochs, with learning rate and inverse temperature schedules described in Appendix B.

Fig. 1c plots the expected generalization gap, the expected test error, the f-CMI bound and the SGLD bound. We see that the test accuracy plateaus after 16 epochs. At this time and afterwards, the f-CMI bound closely follows the generalization gap, while the SGLD bound increases to very high values. However, we see that the SGLD bound does better up to epoch 12. The difference between the f-CMI bound and the SGLD bound becomes more striking when we change the dataset to be a subset of CIFAR-10 consisting of 20000 examples, and fine-tune a pretrained ResNet-50 with SGLD. As shown in Fig. 2, even after a single epoch the SGLD bound is approximately 0.45, while the generalization gap is around 0.02. For comparison, the f-CMI bound is approximately 0.1 after one epoch of training.

Interestingly, Fig. 1c shows that the f-CMI bound is large in the early epochs, despite the extremely small generalization gap. In fact, a similar trend, albeit to a lesser extent, is visible in the MNIST 4 vs 9 experiment, where a CNN is trained with a deterministic algorithm (see Fig. 3c of Appendix B). This indicates a possible area of improvement for the f-CMI bound.

6 Related Work

This work is closely related to a rich literature on information-theoretic generalization bounds, some of which were discussed earlier [39, 4, 25, 22, 6, 33, 14, 13, 1, 23, 27, 8]. Most of these works derive generalization bounds that depend on a mutual information quantity measured between the output of the training algorithm and some quantity related to the training data.
Different from this major idea, Xu and Raginsky [39] and Russo and Zou [29] discussed the idea of bounding the generalization gap with the information between the input and the vector of loss functions computed on the training examples. This idea was later extended to the setting of conditional mutual information by Steinke and Zakynthinou [33]. These works are similar to ours in the sense that they move away from measuring information with weights, but they did not develop this line of reasoning far enough to arrive at efficient bounds similar to Corollary 2. Additionally, we believe that measuring information with the prediction function allows better interpretation and is easier to work with analytically.

Another related line of research is the stability-based bounds [5, 2, 26, 3, 9, 37, 27]. In Sec. 2 and Sec. 3 we improve existing generalization bounds that use information stability. In Sec. 4.3 we describe a technique for applying information stability bounds to deterministic algorithms. The main idea is to add noise to the predictions, but only for analysis purposes. A similar idea, but in the context of measuring the information content of an individual example, was suggested by Harutyunyan et al. [15]. In fact, our notion of test-stability defined in Sec. 4.3 comes very close to their definition of functional sample information. A similar idea was recently used by Neu et al. [23] in analyzing the generalization performance of SGD. More broadly, this work is related to PAC-Bayes bounds and to classical generalization bounds. Please refer to the survey by Jiang* et al. [17] for more information on these bounds.

Finally, our work has connections with attribute and membership inference attacks [32, 40, 21, 12]. Some of these works show that having white-box access to models allows constructing better membership inference attacks compared to having only black-box access. This is analogous to our observation that prediction-based bounds are better than weight-based bounds. Shokri et al. [32] and Yeom et al. [40] demonstrate that even in the case of black-box access to a well-generalizing model, it is sometimes still possible to construct successful membership attacks. This is in line with our observation that the f-CMI bound can be significantly large despite a small generalization gap (see epoch 4 of Fig. 1c). This suggests a possible direction for improving the f-CMI-based bounds.

Acknowledgments and Disclosure of Funding

This work is based on research sponsored by the Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation therein. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory, DARPA, or the U.S. Government. HH was partially supported by a USC Annenberg Fellowship.
1. What is the focus and contribution of the paper regarding information-theoretic bounds on the generalization gap in supervised learning? 2. What are the strengths of the proposed approach, particularly in introducing a new "transductive" variant of the conditional mutual information bound? 3. What are the weaknesses or concerns regarding the paper's motivation, specifically in its relation to deep learning? 4. Can you provide additional details about the quantitative improvements achieved by moving the expectation outside the square root in the inequality? 5. How does the improvement in the paper compare to previous weaker derivations? 6. Can you clarify the requirement for a proper learning algorithm in theorem 4.1? 7. Can you discuss the applications and advantages of the proposed approach concerning ensemble methods?
Summary Of The Paper Review
Summary Of The Paper The paper continues a line of work which develops information-theoretic bounds on the generalization gap (i.e., the difference between empirical and population loss) in supervised learning. Its main contributions are: the authors introduce a natural "transductive" variant of the conditional mutual information bound (Section 3) and show that it is stronger than all previous bounds (e.g., one can use it to prove that any proper learner for a VC class generalizes). The authors quantitatively improve some of the previous bounds on input-output mutual information (Theorems 2.2, 2.3), and on conditional mutual information (Theorem 2.6), by moving the expectation outside the square root in the RHS of the inequality. Review Overall I liked this paper: I think the transductive variant of the recently well-studied conditional mutual information is natural and simple, and the authors show that this variant is in fact stronger. Moreover, they also derive quantitative improvements over the previous bounds. Some specific comments and questions to the authors: I was not very convinced by the motivation using deep learning. The main reason is that the bounds studied in this paper imply that the empirical and population losses are close; perhaps the greatest mystery DNNs demonstrate is that they generalize, yet they do not necessarily satisfy this property. I would appreciate it if the authors could elaborate more on their quantitative improvement over previous bounds (by moving the expectation outside the square root). Are there concrete, natural examples where it gives better bounds? How complicated is this improvement compared to the previous (weaker) derivations? In Theorem 4.1, please add that the learning algorithm is proper. Otherwise the statement seems false. Can you please elaborate on the application with respect to ensemble methods? In which ways is it better than previous bounds?
1. What is the main contribution of the paper regarding information-theoretic bounds for learning algorithms? 2. How does the paper extend previous works such as Xu & Raginsky (2017) and Steinke & Zakynthinou (2020)? 3. Why do the authors choose to use functional CMI instead of traditional mutual information measures? 4. Can you explain the difference between using weights and predictions in deriving information-theoretic bounds? 5. What are some implications of the paper's results in terms of ensemble learning and VC classes? 6. Do you think the experimental validation effectively supports the theoretical analysis?
Summary Of The Paper Review
Summary Of The Paper The paper derives information-theoretic bounds for learning algorithms. The bounds are based on the predictions of the learned model, rather than the weights (output), which makes them tighter by the data-processing inequality and can yield non-vacuous bounds in cases where weight-based mutual information bounds are vacuous. The authors validate the analysis with a few experiments. Review The paper derives information-theoretic bounds for learning algorithms. The bounds are based on the predictions of the learned model, rather than the weights (output), which makes them tighter by the data-processing inequality and can yield non-vacuous bounds in cases where weight-based mutual information bounds are vacuous. The new bounds build upon the earlier works of Xu & Raginsky (2017) and the more recent work of Steinke & Zakynthinou (2020) using the notion of the conditional mutual information (CMI). First, the authors present a theorem that generalizes a bound of Xu & Raginsky (2017) (Theorem 2.2 in the paper) in which the mutual information is measured between the weights W and the full dataset S. Theorem 2.2 uses a subset of S. However, I don't see how this theorem is different from Theorem 2.1 since taking a proper subset of a training set (i.e. for any fixed choice of u in the theorem), the distribution of S_u is the same as the distribution of a training sample of size m drawn iid. Would you please clarify why Eq 5 is not equivalent to Eq 4? The authors then discuss the choice of the subset size m. They argue that having a smaller m yields a tighter bound. However, there is an early work that makes the same argument and is not discussed in the paper. In Alabdulmohsin (2015), m=1 is used to derive non-vacuous bounds and it is shown that those bounds were tight (see for instance [1] and the subsequent works summarized in [2]). Note that the difference between using the weights and using the function itself (decision boundary) was discussed there as well using data-processing. Since this is an earlier work that makes the same argument, it should be discussed. The main contribution of this work is Theorem 3.1, in which the authors introduce the notion of functional CMI. It is analogous to the notion of CMI introduced by Steinke & Zakynthinou (2020) but uses the predictions rather than the weights (output). One interesting difference is on the connections between mutual information and the VC dimension. Earlier works (e.g. Alabdulmohsin (2018) and Steinke & Zakynthinou (2020)) prove that an ERM exists in a VC class that has a small mutual information between the weights and the data. Using functional CMI, the authors show that all functions in a VC class have a finite f-CMI, which is a neat result. The difference here is that by looking at the predictions, they rule out cases in which the data is encoded in the weights without impacting the decision boundary significantly. The authors then discuss some implications. For example, they look into the case of ensemble learning and show that the f-CMI of an ensemble can be bounded using the f-CMI of the base models. However, this argument is not really useful because it does not explain why an ensemble method tends to generalize better. The bound for the ensemble is larger than the bound of any base classifier. Finally, the authors validate the analysis with a few experiments. 
The authors argue that the f-CMI bound "closely tracks" the generalization gap but this is misleading because the difference is one order of magnitude and the fact it is decreasing with increasing sample size does not make it "tracking" the generalization gap. After all, all useful generalization bounds decrease with increasing sample size. I suggest that the authors rephrase that claim (perhaps by only arguing that the bound is non-vacuous). Also, in Line 319, the statement is false because weight-based mutual information bounds need not be vacuous (e.g. when m=1 as discussed earlier). Yes, they can be vacuous but need not be. One minor comment. In Lines 128 and 129, I think there is a typo. Shouldn't "n" be "2n"? Overall, the paper is well-written and brings together several recent works. The only comment I have is that there is a related line of work missing in the discussion. As mentioned above, Alabdulmohsin (2015) was an early work that proposed measuring the mutual information with a subset of size m=1 and showed that it was tight (in a certain sense). Based on that, a bound similar to Theorem 2.1 was derived for bounded losses and an equivalence relation was established with the VC dimension, which is similar to the works discussed in Section 4.2, in addition to concentration bounds. These are summarized in [2]. They should be included in the related works for completeness. [1] Alabdulmohsin, "Algorithmic stability and uniform generalization." NeurIPS (2015). [2] Alabdulmohsin, "Towards a Unified Theory of Learning and Information." Entropy 22.4 (2020): 438. ======== Post-rebuttal: I have read the authors' response and I intend to keep my score as is. I find this paper a worthwhile contribution overall.
NIPS
Title Information-theoretic generalization bounds for black-box learning algorithms Abstract We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning. 1 Introduction Large neural networks trained with variants of stochastic gradient descent have excellent generalization capabilities, even in regimes where the number of parameters is much larger than the number of training examples. Zhang et al. [41] showed that classical generalization bounds based on various notions of complexity of the hypothesis set fail to explain this phenomenon, as the same neural network can generalize well for one choice of training data and memorize completely for another one. This observation has spurred a tenacious search for algorithm-dependent and data-dependent generalization bounds that give meaningful results in practical settings for deep learning [17]. One line of attack bounds the generalization error based on the information about the training dataset stored in the weights [39, 4, 22, 6, 33, 14, 23, 27]. The main idea is that when the training and testing performance of a neural network are different, the network weights necessarily capture some information about the training dataset. However, the opposite might not be true: a neural network can store significant portions of the training set in its weights and still generalize well [32, 40, 21]. Furthermore, because of their information-theoretic nature, these generalization bounds become infinite or produce trivial bounds for deterministic algorithms. When such bounds are not infinite, they are notoriously hard to estimate, due to the challenges arising in the estimation of Shannon mutual information between two high-dimensional variables (e.g., the weights of a ResNet and a training dataset). This work addresses the aforementioned challenges. We first improve some of the existing information-theoretic generalization bounds, providing a unified view and derivation of them (Sec. 2). We then derive novel generalization bounds that measure information with predictions, rather than with the output of the training algorithm (Sec. 3). These bounds are applicable to a wide range of methods, including neural networks, Bayesian algorithms, ensembling algorithms, and non-parametric approaches. 
In the case of neural networks, the proposed bounds improve over the existing weightbased bounds, partly because they avoid a counter-productive property of weight-based bounds that information stored in unused weights affects generalization bounds, even though it has no effect on generalization. The proposed bounds produce meaningful results for deterministic algorithms and are significantly easier to estimate. For example, in case of classification, computing our most efficient bound involves estimating mutual information between a pair of predictions and a binary variable. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). We apply the proposed bounds to ensembling algorithms, binary classification algorithms with finite VC dimension hypothesis classes, and to stable learning algorithms (Sec. 4). We compute our most efficient bound on realistic classification problems involving neural networks, and show that the bound closely follows the generalization error, even in situations when a neural network with 3M parameters is trained deterministically on 4000 examples, achieving 1% generalization error. 2 Weight-based generalization bounds We start by describing the necessary notation and definitions, after which we present some of the existing weigh-based information-theoretic generalization bounds, slightly improve some of them, and prove relations between them. The purpose of this section is to introduce the relevant existing bounds and prepare grounds for the functional conditional mutual information bounds introduced in Sec. 3, which we consider our main contribution. All proofs are presented in Appendix A. Preliminaries. We use capital letters for random variables, corresponding lowercase letters for their values, and calligraphic letters for their domains. If X is a random variable, X̄ denotes an independent copy of X . For example, if (X,Y ) is a pair of random variables with joint distribution PX,Y , then the joint distribution of (X̄, Ȳ ) will be PX̄,Ȳ = PX̄ ⊗ PȲ = PX ⊗ PY . A random variable X is called σ-subgaussian if E exp(t(X − EX)) ≤ exp(σ2t2/2), ∀t ∈ R. For example, a random variable that takes values in [a, b] almost surely, is (b− a)/2-subgaussian. Given probability measures P and Q defined on the same measurable space, such that P is absolutely continuous with respect to Q, the Kullback–Leibler divergence from P to Q is defined as KL (P ‖ Q) = ∫ log dPdQdP , where dPdQ is the Radon-Nikodym derivative of P with respect to Q. If X and Y are random variables defined on the same probability space, then KL (X ‖ Y ) denotes KL (PX ‖ PY ). The Shannon mutual information between random variables X and Y is I(X;Y ) = KL (PX,Y ‖ PX ⊗ PY ). In this paper, all information-theoretic quantities are measured in nats, instead of bits. Throughout the paper [n] denotes the set {1, 2, . . . , n}. Finally, if A = (a1, . . . , an) is a collection, then A−i , (a1, . . . , ai−1, ai+1, . . . , an). Theorems proved in the subsequent sections will be relying on the following lemma. Lemma 1. Let (Φ,Ψ) be a pair of random variables with joint distribution PΨ,Φ. If g(φ, ψ) is a measurable function such that EΦ,Ψ [g(Φ,Ψ)] exists and g(Φ̄, Ψ̄) is σ-subgaussian, then∣∣EΦ,Ψ [g(Φ,Ψ)]− EΦ,Ψ̄ [g(Φ, Ψ̄)]∣∣ ≤√2σ2I(Φ; Ψ). (1) Furthermore, if g(φ, Ψ̄) is σ-subgaussian for each φ and the expectation below exists, then EΦ,Ψ [( g(Φ,Ψ)− EΨ̄ g(Φ, Ψ̄) )2] ≤ 4σ2(I(Φ; Ψ) + log 3), (2) and P (∣∣g(Φ,Ψ)− EΨ̄ g(Φ, Ψ̄)∣∣ ≥ ) ≤ 4σ2(I(Φ; Ψ) + log 3) 2 , ∀ > 0. 
(3) The first part of this lemma is equivalent to Lemma 1 of Xu and Raginsky [39], which in turn has its roots in Russo and Zou [29]. The second part generalizes Lemma 2 of Hafez-Kolahi et al. [13] by also providing bounds on the expected squared difference. 2.1 Generalization bounds with input-output mutual information Let S = (Z1, Z2, . . . , Zn) ∼ Dn be a dataset of n i.i.d. examples, R ∈ R be a source of randomness (a random variable independent of S) and A : Zn × R → W be a training algorithm. Let W = A(S,R) be the output of the training algorithm applied on the dataset S with randomness R. Given a loss function ` :W ×Z → R, the empirical risk is Lemp(A,S,R) = 1n ∑n i=1 `(W,Zi) and the population risk is L(A,S,R) = EZ′∼D `(W,Z ′), where Z ′ is a test example independent from S and R. The generalization gap, also call generalization error, is L(A,S,R)−Lemp(A,S,R). In this setting, Xu and Raginsky [39] establish the following information-theoretic bound on the absolute value of the expected generalization gap. Theorem 2.1 (Thm. 1 of Xu and Raginsky [39]). If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ √ 2σ2I(W ;S) n . (4) We generalize this result by showing that instead of measuring information with the entire dataset, one can measure information with a subset of size m chosen uniformly at random. For brevity, hereafter we call subsets chosen uniformly at random just “random subsets”. Theorem 2.2. Let U be a random subset of [n] with size m, independent of S and R. If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ Eu∼U √ 2σ2 m I(W ;Su), (5) and ES,R (L(A,S,R)− Lemp(A,S,R))2 ≤ 4σ2 n (I(W ;S) + log 3) . (6) With a simple application of Markov’s inequality one can get tail bounds from the second part of the theorem. Furthermore, by taking square root of both sides of (6) and using Jensen’s inequality on the left side, one can also construct an upper bound for the expected absolute value of generalization gap, ES,R |L(A,S,R)− Lemp(A,S,R)|. These observations apply also to the other generalization gap bounds presented later in this work. Note the bound on the squared generalization gap is written only for the case of m = n. It is possible to derive squared generalization gap bounds of form 4σ 2 m (Eu∼U I(W ;Su) + log 3). Unfortunately, for small m the log 3 constant starts to dominate, resulting in vacuous bounds. Picking a small m decreases the mutual information term in (5), however, it also decreases the denominator. When settingm = n, we get the bound of Xu and Raginsky [39] (Thm. 2.1). Whenm = 1, the bound of (5) becomes 1n ∑n i=1 √ 2σ2I(W ;Zi), matching the result of Bu et al. [6] (Proposition 1). A similar bound, but for a different notion of information, was derived by Alabdulmohsin [2]. Bu et al. [6] prove that the bound with m = 1 is tighter than the bound with m = n. We generalize this result by proving that the bound of (5) is non-descreasing in m. Proposition 1. Let m ∈ [n− 1], U be a random subset of [n] of size m, U ′ be a random subset of size m+ 1, and φ : R→ R be any non-decreasing concave function. Then EU φ ( 1 m I(W ;Su) ) ≤ EU ′ φ ( 1 m+ 1 I(W ;Su′) ) . (7) When φ(x) = √ x, this result proves that the optimal value for m in (5) is 1. Furthermore, when we use Jensen’s inequality to move expectation over U inside the square root in (5), then the resulting bound becomes √ 2σ2 m Eu∼U I(W ;Su) and matches the result of Negrea et al. [22] (Thm. 2.3). 
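To make the role of the subset size m concrete, the following minimal sketch evaluates the m = n and m = 1 bounds exactly for a purely synthetic "training algorithm" whose output is simply the first training example. The toy distribution, the value of n, and all helper names are illustrative assumptions and are not taken from the paper or its code.

```python
import math
from itertools import product

# Toy setting (illustrative assumption): n binary examples Z_1, ..., Z_n drawn
# uniformly from {0, 1}, and a synthetic "algorithm" whose output is W = Z_1.
# A loss taking values in [0, 1] is sigma-subgaussian with sigma = 1/2.
n = 8
sigma = 0.5

def exact_mi(joint):
    """Exact mutual information (in nats) from a list of ((x, y), prob) entries."""
    pxy, px, py = {}, {}, {}
    for (x, y), p in joint:
        pxy[(x, y)] = pxy.get((x, y), 0.0) + p
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log(p / (px[x] * py[y])) for (x, y), p in pxy.items() if p > 0)

datasets = list(product([0, 1], repeat=n))   # all 2^n equally likely datasets
prob = 1.0 / len(datasets)

i_w_s = exact_mi([((s[0], s), prob) for s in datasets])                 # I(W; S)
i_w_zi = [exact_mi([((s[0], s[i]), prob) for s in datasets]) for i in range(n)]

bound_m_n = math.sqrt(2 * sigma**2 * i_w_s / n)                         # bound (4), i.e. m = n
bound_m_1 = sum(math.sqrt(2 * sigma**2 * i) for i in i_w_zi) / n        # bound (5) with m = 1

print(f"I(W;S) = {i_w_s:.3f} nats, I(W;Z_1) = {i_w_zi[0]:.3f} nats")
print(f"m = n bound: {bound_m_n:.3f}   m = 1 bound: {bound_m_1:.3f}")
```

For this toy algorithm only Z_1 carries information about W, so the m = 1 bound is smaller than the m = n bound by a factor of roughly the square root of n, consistent with Proposition 1.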
These bounds are also non-decreasing with respect to m (using Proposition 1 with φ(x) = x). Thm. 2.1 can be used to derive generalization bounds that depend on the information between W and a single example Zi conditioned on the remaining examples Z−i = (Z1, . . . , Zi−1, Zi+1, . . . , Zn). Theorem 2.3. If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ 1 n n∑ i=1 √ 2σ2I(W ;Zi | Z−i), (8) and ES,R (L(A,S,R)− Lemp(A,S,R))2 ≤ 4σ2 n ( n∑ i=1 I(W ;Zi | Z−i) + log 3 ) . (9) This theorem is a simple corollary of Thm. 2.2, using the facts that I(W ;Zi) ≤ I(W ;Zi | Z−i) and that I(W ;S) is upper bounded by ∑n i=1 I(W ;Zi | Z−i), which is also known as erasure information [35]. The first part of it improves the result of Raginsky et al. [26] (Thm. 2), as the averaging over i is outside of the square root. While these bounds are worse that the corresponding bounds of Thm. 2.2, it is sometimes easier to manipulate them analytically. The bounds described above measure information with the output W of the training algorithm. In the case of prediction tasks with parametric methods, the parameters W might contain information about the training dataset, but not use it to make predictions. Partly for this reason, the main goal of this paper is to derive generalization bounds that measure information with the prediction function, rather than with the weights. In general, there is no straightforward way of encoding the prediction function into a random variable. However, when the domain Z is finite, we can encode the prediction function as the collection of predictions on all examples of Z . This naturally leads us to the next setting (albeit with a different motivation), first considered by Steinke and Zakynthinou [33], where one first fixes a set of 2n examples, and then randomly selects n of them to form the training set. We use this setting to provide prediction-based generalization bounds in Sec. 3. Before describing these bounds we present the setting of Steinke and Zakynthinou [33] in detail and generalize some of the existing weight-based bounds in that setting. 2.2 Generalization bounds with conditional mutual information Let Z̃ ∈ Zn×2 be a collection of 2n i.i.d samples from D, grouped in n pairs. The random variable S ∼ Uniform({0, 1}n) specifies which example to select from each pair to form the training set Z̃S = (Z̃i,Si) n i=1. LetR be a random variable, independent of Z̃ and S, that captures the stochasticity of training. In this setting Steinke and Zakynthinou [33] defined condition mutual information (CMI) of algorithm A with respect to the data distribution D as CMID(A) = I(A(Z̃S , R);S | Z̃) = Ez̃∼Z̃ I(A(z̃S , R);S), (10) and proved the following upper bound on expected generalization gap. Theorem 2.4 (Thm. 2, Steinke and Zakynthinou [33]). If the loss function `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then the expected generalization gap can be bounded as follows:∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ √ 2 n CMID(A). (11) Haghifam et al. [14] improved this bound in two aspects. First, they provided bounds where expectation over Z̃ is outside of the square root. Second, they considered measuring information with subsets of S, as we did in the previous section. Theorem 2.5 (Thm. 3.1 of Haghifam et al. [14]). Let m ∈ [n] and U ⊆ [n] be a random subset of size m, independent from R, Z̃, and S. If the loss function `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ Ez∼Z̃ √ 2 m Eu∼U I(A(z̃S , R);Su). 
(12) Furthermore, for m = 1 they tighten the bound by showing that one can move the expectation over U outside of the squared root (Haghifam et al. [14], Thm 3.4). We generalize these results by showing that for all m expectation over U can be done outside of the square root. Furthermore, our proof closely follows the proof of Thm. 2.2. Theorem 2.6. Let m ∈ [n] and U ⊆ [n] be a random subset of size m, independent from R, Z̃, and S. If `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃,u∼U √ 2 m I(A(z̃S , R);Su), (13) and EZ̃,S,R ( L(A, Z̃S , R)− Lemp(A, Z̃S , R) )2 ≤ 8 n (Ez̃∼Z̃ I(A(z̃S , R);S) + 2) . (14) The bound of (13) improves over the bound of Thm. 2.5 and matches the special result for m = 1. Rodríguez-Gálvez et al. [28] proved even tighter expected generalization gap bound by replacing I(A(Z̃S , R);Su | Z̃ = z̃) with I(A(Z̃S , R);Su | Z̃u = z̃u). Haghifam et al. [14] showed that if one takes the expectations over Z̃ inside the square root in (12), then the resulting looser upper bounds become non-decreasing overm. Using this result they showed that their special case bound form = 1 is the tightest. We generalize their results by showing that even without taking the expectations inside the squared root, the bounds of Thm. 2.5 are non-decreasing over m. We also show that the same holds for our tighter bounds of (13). Proposition 2. Let m ∈ [n− 1], U be a random subset of [n] of size m, U ′ be a random subset of size m+ 1, z̃ be any fixed value of Z̃, and φ : R→ R be any non-decreasing concave function. Then Eu∼U φ ( 1 m I(A(z̃S , R);Su) ) ≤ Eu′∼U ′ φ ( 1 m+ 1 I(A(z̃S , R);Su′) ) . (15) By setting φ(x) = x, taking square root of both sides of (15), and then taking expectation over z̃, we prove that bounds of (12) are non-decreasing over m. By setting φ(x) = √ x and then taking expectation over z̃, we prove that bounds of (13) are non-decreasing with m. Similarly to the Thm. 2.3 of the previous section, Thm. A.1 presented in Appendix A establishes generalization bounds with information-theoretic stability quantities. 3 Functional conditional mutual information The bounds in Sec. 2 leverage information in the output of the algorithm, W . In this section we focus on supervised learning problems: Z = X × Y . To encompass many types of approaches, we do not assume that the training algorithm has an output W , which is then used to make predictions. Instead, we assume that the learning method implements a function f : Zn×X ×R → K that takes a training set z, a test input x′, an auxiliary argument r capturing the stochasticity of training and predictions, and outputs a prediction f(z, x′, r) on the test example. Note that the prediction domain K can be different from Y . This setting includes non-parametric methods (for which W is the training dataset itself), parametric methods, Bayesian algorithms, and more. For example, in parametric methods, where a hypothesis setH = {hw : X → K | w ∈ W} is defined, f(z, x, r) = hA(z,r)(x). In this supervised setting, the loss function ` : K × Y → R measures the discrepancy between a prediction and a label. As in the previous subsection, we assume that a collection of 2n i.i.d examples Z̃ ∼ Dn×2 is given, grouped in n pairs, and the random variable S ∼ Uniform({0, 1}n) specifies which example to select from each pair to form the training set Z̃S = (Z̃i,Si) n i=1. 
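The following short sketch illustrates this supersample construction; the array shapes, the stand-in data distribution, and the variable names are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for D: inputs in R^d with binary labels.
n, d = 5, 3
x_tilde = rng.normal(size=(n, 2, d))        # inputs of the 2n supersample, grouped in n pairs
y_tilde = rng.integers(0, 2, size=(n, 2))   # corresponding labels

# S ~ Uniform({0,1}^n): S_i selects which element of the i-th pair is used for training.
s = rng.integers(0, 2, size=n)
rows = np.arange(n)

x_train, y_train = x_tilde[rows, s], y_tilde[rows, s]            # training half, Z~_S
x_ghost, y_ghost = x_tilde[rows, 1 - s], y_tilde[rows, 1 - s]    # held-out half, Z~_{neg(S)}

print("S =", s)
print("train inputs:", x_train.shape, " held-out inputs:", x_ghost.shape)
```

Conditioned on the realized supersample, the quantities defined next measure how much the predictions on these 2n inputs reveal about the selector S.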
LetR be an auxiliary random variable, independent of Z̃ and S, that provides stochasticity for predictions (e.g., in neural networks R can be used to make the training stochastic). The empirical risk of learning method f trained on dataset Z̃S with randomnessR is defined as Lemp(f, Z̃S , R) = 1n ∑n i=1 `(f(Z̃S , Xi, R), Yi). The population risk is defined as L(f, Z̃S , R) = EZ′∼D `(f(Z̃S , X ′, R), Y ′). Before moving forward we adopt two conventions. First, if z is a collection of examples, then x and y denote the collection of its inputs and labels respectively. Second, if x is a collection of inputs, then f(z, x, r) denotes the collection of predictions on x after training on z with randomness r. We define functional conditional mutual information (f -CMI). Definition 3.1. Let D, f , R, Z̃, S be defined as above and let u ⊆ [n] be a subset of size m. Then pointwise functional conditional mutual information f -CMI(f, z̃, u) is defined as f -CMI(f, z̃, u) = I(f(z̃S , x̃u, R);Su), (16) while functional conditional mutual information f -CMID(f, u) is defined as f -CMID(f, u) = Ez̃∼Z̃ f -CMI(f, z̃, u). (17) When u = [n] we will simply use the notations f -CMI(f, z̃) and f -CMID(f), instead of f -CMI(f, z̃, [n]) and f -CMID(f, [n]), respectively. Theorem 3.1. Let U be a random subset of size m, independent of Z̃, S, and randomness of training algorithm f . If `(ŷ, y) ∈ [0, 1],∀ŷ ∈ K, z ∈ Z , then∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃,u∼U √ 2 m f -CMI(f, z̃, u), (18) and EZ̃,R,S ( L(f, Z̃S , R)− Lemp(f, Z̃S , R) )2 ≤ 8 n (Ez̃∼Z̃ f -CMI(f, z̃) + 2) . (19) For parametric methods, the bound of (18) improves over the bound of (13), as the Markov chain Su — A(z̃S , R) — f(z̃S , x̃u, R) allows to use the data processing inequality I(f(z̃S , x̃u, R);Su) ≤ I(A(z̃S , R);Su). For deterministic algorithms I(A(z̃S);Su) is often equal to H(Su) = m log 2, as most likely each choice of S produces a different W = A(z̃S). In such cases the bound with I(W ;Su) is vacuous. In contrast, the proposed bounds with f -CMI (especially when m = 1) do not have this problem. Even when the algorithm is stochastic, information between W and Su can be much larger than information between predictions and Su, as having access to weights makes it easier to determine Su (e.g., by using gradients). A similar phenomenon has been observed in the context of membership attacks, where having access to weights of a neural network allows constructing more successful membership attacks compared to having access to predictions only [21, 12]. Corollary 1. When m = n, the bound of (18) becomes∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃ √ 2 n f -CMI(f, z̃) ≤ √ 2 n f -CMID(f). (20) For parametric models, this improves over the CMI bound (Thm. 2.4), as by data processing inequality, f -CMID(f) = I(f(Z̃S , X̃, R);S | Z̃) ≤ I(A(Z̃S , R);S | Z̃) = CMID(A). Remark 1. Note that the collection of training and testing predictions f(Z̃S , X̃, R) cannot be replaced with only testing predictions f(Z̃S , X̃neg(S), R). As an example, consider an algorithm that memorizes the training examples and outputs a constant prediction on any other example. This algorithm will have non-zero generalization gap, but f(Z̃S , X̃neg(S), R) will be constant and will have zero information with S conditioned on any random variable. Moreover, if we replace f(Z̃S , X̃, R) with only training predictions f(Z̃S , X̃S , R), the resulting bound can become too loose, as one can deduce S by comparing training set predictions with the labels Ỹ . 
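The memorization example of Remark 1 can be simulated directly. The sketch below is an illustration only; the memorizing learner, the random-label data, and all names are assumptions. It exhibits a learner whose predictions on the held-out half of the supersample are constant, and hence carry no information about S, even though the gap between its training error and held-out error is close to one half.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, trials = 100, 5, 50

gaps, ghost_prediction_vectors = [], set()
for _ in range(trials):
    # Supersample with labels independent of the inputs (so nothing can generalize).
    x = rng.normal(size=(n, 2, d))
    y = rng.integers(0, 2, size=(n, 2))
    s = rng.integers(0, 2, size=n)
    rows = np.arange(n)
    x_tr, y_tr = x[rows, s], y[rows, s]
    x_gh, y_gh = x[rows, 1 - s], y[rows, 1 - s]

    def predict(q):
        # Memorizer: return the stored label on a training input, otherwise the constant 0.
        hit = np.all(np.isclose(x_tr, q), axis=1)
        return int(y_tr[hit][0]) if hit.any() else 0

    train_err = np.mean([predict(q) != t for q, t in zip(x_tr, y_tr)])   # 0 by construction
    ghost_err = np.mean([predict(q) != t for q, t in zip(x_gh, y_gh)])   # about 1/2
    gaps.append(ghost_err - train_err)
    ghost_prediction_vectors.add(tuple(predict(q) for q in x_gh))

print("average held-out minus training error:", round(float(np.mean(gaps)), 3))
print("distinct prediction vectors on the held-out half:", len(ghost_prediction_vectors))
```

Because the held-out predictions are identical for every draw of S, their mutual information with S is zero, which is why Theorem 3.1 evaluates predictions on the full supersample rather than on the held-out half alone.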
Corollary 2. When m = 1, the bound of (18) becomes∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ 1n n∑ i=1 Ez̃∼Z̃ √ 2I(f(z̃S , x̃i, R);Si). (21) A great advantage of this bound compared to all other bounds described so far is that the mutual information term is computed between a relatively low-dimensional random variable f(z̃S , x̃i, R) and a binary random variable Si. For example, in the case of binary classification with K = {0, 1}, f(z̃S , x̃i, R) will be a pair of 2 binary variables. This allows us to estimate the bound efficiently and accurately (please refer to Appendix B for more details). Note that estimating other informationtheoretic bounds is significantly harder. The bounds of Xu and Raginsky [39], Negrea et al. [22], and Bu et al. [6] are hard to estimate as they involve estimation of mutual information between a high-dimensional non-discrete variable W and at least one example Zi. Furthermore, this mutual information can be infinite in case of deterministic algorithms or when H(Zi) is infinite. The bounds of Haghifam et al. [14] and Steinke and Zakynthinou [33] are also hard to estimate as they involve estimation of mutual information between W and at least one train-test split variable Si. As in the case of bounds presented in the previous section (Thm. 2.2 and Thm. 2.6), we prove that the bound of Thm. 3.1 is non-decreasing in m. This stays true even when we increase the upper bounds by moving the expectation over U or the expectation over Z̃ or both under the square root. The following proposition allows us to prove all these statements. Proposition 3. Let m ∈ [n− 1], U be a random subset of [n] of size m, U ′ be a random subset of size m+ 1, z̃ be any fixed value of Z̃, and φ : R→ R be any non-decreasing concave function. Then Eu∼U φ ( 1 m I(f(z̃S , x̃u, R);Su) ) ≤ Eu′∼U ′ φ ( 1 m+ 1 I(f(z̃S , x̃u′ , R);Su′) ) . (22) By setting φ(x) = √ x and then taking expectation over z̃ and u, we prove that bounds of Thm. 3.1 are non-decreasing over m. By setting φ(x) = x, taking expectation over z̃, and then taking square root of both sides of (22), we prove that bounds are non-decreasing in m when both expectations are under the square root. Proposition 3 proves that m = 1 is the optimal choice in Thm. 3.1. Notably, the bound that is the easiest to compute is also the tightest! Analogously to Thm. A.1, we provide the following stability-based bounds. Theorem 3.2. If `(ŷ, y) ∈ [0, 1],∀ŷ ∈ K, z ∈ Z , then∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ Ez̃∼Z̃ [ 1 n n∑ i=1 √ 2I(f(z̃S , x̃i, R);Si | S−i) ] , (23) and EZ̃,R,S ( L(f, Z̃S , R)− Lemp(f, Z̃S , R) )2 ≤ 8 n ( Ez̃∼Z̃ [ n∑ i=1 I(f(z̃S , x̃, R);Si | S−i) ] + 2 ) . Note that unlike (23), in the second part of Thm. 3.2 we measure information with predictions on all 2n pairs and Si conditioned on S−i. It is an open question whether f(z̃S , x̃, R) can be replaced with f(z̃S , x̃i, R) – predictions only on the i-th pair. 4 Applications In this section we describe 3 applications of the f -CMI-based generalization bounds. 4.1 Ensembling algorithms Ensembling algorithms combine predictions of multiple learning algorithms to obtain better performance. Let us consider k learning algorithms, f1, f2, . . . , fk, each with its own independent randomness Ri, i ∈ [k]. Some ensembling algorithms can be viewed as a possibly stochastic function g : Kk → K that takes predictions of the k algorithms and combines them into a single prediction. 
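Returning to Corollary 2, the sketch below shows one way its m = 1 bound could be estimated for a classifier with a finite prediction set: fix the supersample, repeatedly redraw the selector S (and the training randomness), record for each index i the pair of predictions on the two examples of the i-th pair together with S_i, and plug the empirical joint distribution into the mutual information formula. This is a minimal illustration rather than the estimator of Appendix B; `train_and_predict` is a placeholder the user must supply, and no bias correction is attempted.

```python
import math
import numpy as np
from collections import Counter

def plugin_mi(samples):
    """Plug-in estimate (in nats) of I(A; B) from samples of a discrete pair (a, b)."""
    n = len(samples)
    pab = Counter(samples)
    pa = Counter(a for a, _ in samples)
    pb = Counter(b for _, b in samples)
    return sum((c / n) * math.log(c * n / (pa[a] * pb[b])) for (a, b), c in pab.items())

def fcmi_bound_m1(x_tilde, y_tilde, train_and_predict, k_draws, seed=0):
    """Monte Carlo sketch of the m = 1 bound of Corollary 2 for one fixed supersample.

    x_tilde, y_tilde: arrays of shape (n, 2, ...) and (n, 2) holding the supersample.
    train_and_predict: user-supplied placeholder mapping (x_train, y_train, x_query)
        to discrete class predictions on x_query; it encapsulates f and its randomness R.
    """
    rng = np.random.default_rng(seed)
    n = x_tilde.shape[0]
    rows = np.arange(n)
    per_index_samples = [[] for _ in range(n)]

    for _ in range(k_draws):
        s = rng.integers(0, 2, size=n)                      # fresh selector S
        queries = x_tilde.reshape(2 * n, *x_tilde.shape[2:])
        preds = np.asarray(train_and_predict(x_tilde[rows, s], y_tilde[rows, s], queries))
        preds = preds.reshape(n, 2)                         # predictions on the i-th pair
        for i in range(n):
            per_index_samples[i].append(((int(preds[i, 0]), int(preds[i, 1])), int(s[i])))

    mi_hat = [plugin_mi(samples) for samples in per_index_samples]
    return sum(math.sqrt(2.0 * mi) for mi in mi_hat) / n
```

Averaging the returned value over several independently drawn supersamples approximates the outer expectation over the supersample in (21); in the deterministic-training experiments of Sec. 5, `train_and_predict` has no internal randomness, so the variability across draws comes from S alone.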
Relating the generalization gap of the resulting ensembling algorithm to that of individual fis can be challenging for complicated choices of g. However, it is easy to bound the generalization gap of g(f1, . . . , fk) in terms of f -CMIs of individual predictors. Let z̃ be a fixed value of Z̃ and x be an arbitrary collection of inputs. Denoting Fi = fi(z̃S , x,Ri), i ∈ [k], we have that I(g(F1, . . . , Fk);S) ≤ I(F1, . . . , Fk;S) (data processing inequality) = I(F1;S) + I(F2, . . . , Fk;S)− I(F1;F2, . . . , Fk) + I(F1;F2, . . . , Fk | S) (chain rule) ≤ I(F1;S) + I(F2, . . . , Fk;S) (as MI is nonnegative and F1 ⊥ F2, . . . , Fk | S) ≤ . . . ≤ I(F1;S) + · · ·+ I(Fk;S). (repeating the arguments above to separate all Fi) Unfortunately, the same derivation above does not work if we replace S with Su, where u is a proper subset of [n], as I(F1;F2, . . . , Fk | Su) will not be zero in general. 4.2 Binary classification with finite VC dimension Let us consider the case of binary classification: Y = {0, 1}, where the learning method f : Zn ×X ×R → {0, 1} is implemented using a learning algorithm A : Zn ×R →W that selects a classifier from a hypothesis setH = {hw : X → Y}. IfH has finite VC dimension d [34], then for any algorithm f , the quantity f -CMI(f, z̃) can be bounded the following way. Theorem 4.1. Let Z ,H, f be defined as above, and let d <∞ be the VC dimension ofH. Then for any algorithm f and z̃ ∈ Zn×2, f -CMI(f, z̃) ≤ max {(d+ 1) log 2, d log (2en/d)} . (24) Considering the 0-1 loss function and using this result in Corollary 1, we get an expect generalization gap bound that is O (√ d n log ( n d )) , matching the classical uniform convergence bound [34]. The √ log n factor can be removed in some cases [13]. Both Xu and Raginsky [39] and Steinke and Zakynthinou [33] prove similar informationtheoretic bounds in the case of finite VC dimension classes, but their results holds for specific algorithms only. Even in the simple case of threshold functions: X = [0, 1] and H ={ hw : x 7→ 1{x>w} | w ∈ [0, 1] } , all weight-based bounds described in Sec. 2 are vacuous if one uses a training algorithm that encodes the training set in insignificant bits of W , while still getting zero error on the training set and hence achieving low test error. 4.3 Stable deterministic or stochastic algorithms Theorems 2.3, A.1 and 3.2 provide generalization bounds involving information-theoretic stability measures, such as I(W ;Zi | Z−i), I(A(z̃S , R);S | S−i) and I(f(z̃S , x̃, R);Si | S−i). In this section we build upon the predication-based stability bounds of Thm. 3.2. First, we show that for any collection of examples x, the mutual information I(f(z̃S , x);Si | S−i) can be bounded as follows. Proposition 4. Let Si←c denote S with Si set to c. Then for any z̃ ∈ Zn×2 and x̃ ∈ X k, the mutual information I(f(z̃S , x,R);Si | S−i) is upper bounded by 1 4 KL (f(z̃Si←1 , x,R)|S−i ‖ f(z̃Si←0 , x,R)|S−i) + 1 4 KL (f(z̃Si←0 , x,R)|S−i ‖ f(z̃Si←1 , x,R)|S−i) . To compute the right-hand side of Proposition 4 one needs to know how much on-average the distribution of predictions on x changes after replacing the i-th example in the training dataset. The problem arises when we consider deterministic algorithms. In such cases, the right-hand side is infinite, while the left-hand side I(f(z̃S , x,R);Si | S−i) is always finite and could be small. Therefore, for deterministic algorithms, directly applying the result of Proposition 4 will not give meaningful generalization bounds. 
Nevertheless, we show that we can add an optimal amount of noise to predictions, upper bound the generalization gap of the resulting noisy algorithm, and relate that to the generalization gap of the original deterministic algorithm. Let us consider a deterministic algorithm f : Zn × X → Rd. We define the following notions of functional stability. Definition 4.1 (Functional stability). Let S = (Z1, . . . , Zn) ∼ Dn be a collection of n i.i.d. samples, and Z ′ and Ztest be two additional independent samples from D. Let S(i) , (Z1, . . . , Zi−1, Z ′, Zi+1, . . . , Zn) be the collection constructed from S by replacing the i-th example with Z ′. A deterministic algorithm f : Zn ×X → Rd is a) β self-stable if ∀i ∈ [n], ES,Z′ ∥∥∥f(S,Zi)− f(S(i), Zi)∥∥∥2 ≤ β2, (25) b) β1 test-stable if ∀i ∈ [n], ES,Z′,Ztest ∥∥∥f(S,Ztest)− f(S(i), Ztest)∥∥∥2 ≤ β21 , (26) c) β2 train-stable if ∀i, j ∈ [n], i ≠ j, ES,Z′ ∥∥∥f(S,Zj)− f(S(i), Zj)∥∥∥2 ≤ β22 . (27) Theorem 4.2. Let Y = Rd, f : Zn × X → Rd be a deterministic algorithm that is β self-stable, and `(ŷ, y) ∈ [0, 1] be a loss function that is γ-Lipschitz in the first coordinate. Then∣∣∣EZ̃,R,S [L(f, Z̃S , R)− Lemp(f, Z̃S , R)]∣∣∣ ≤ 2 32 d 14√γβ. (28) Furthermore, if f is also β1 test-stable and β2 train-stable, then EZ̃,R,S ( L(f, Z̃S , R)− Lemp(f, Z̃S , R) )2 ≤ 32 n + 12 3 2 √ dγ √ 2β2 + nβ21 + nβ 2 2 . (29) It is expected that β2 is smaller than β and β1. For example, in the case of neural networks interpolating the training data or in the case of empirical risk minimization in the realizable setting, β2 will be zero. It is also expected that β is larger than β1. However, the relation of β2 and nβ21 is not trivial. The notion of pointwise hypothesis stability β′2 defined by Bousquet and Elisseeff [5] (Definition 4) is comparable to our notion of self-stability β. The first part of Theorem 11 in [5] describes a generalization bound where the difference between empirical and population losses is of order 1/ √ n + √ β′2, which is comparable with our result of Thm. 4.2 (Θ( √ β)). The proof there also contains a bound on the expected squared difference of empirical and population losses. That bound is of order 1/n+ β′2. In contrast, our result of (29) contains two extra terms related to test-stability and train-stability (the terms nβ21 and nβ 2 2 ). If β dominates nβ 2 1 + nβ 2 2 , then the bound of (29) will match the result of Bousquet and Elisseeff [5]. 5 Experiments As mentioned earlier, the expected generalization gap bound of Corollary 2 is significantly easier to compute compared to existing information-theoretic bounds, and does not give trivial results for deterministic algorithms. To understand how well the bound does in challenging situations, we consider cases when the algorithm generalizes well despite the high complexity of the hypothesis class and the relatively small number of training examples. Due to space constraints we omit some experimental details and present them in Appendix B. The code can be found at github.com/hrayrhar/f-CMI. First, we consider the MNIST 4 vs 9 digit classification task [20] using a 4-layer convolutional neural network (CNN) that has approximately 200K parameters. We train the network for 200 epochs using the Adam algorithm [18] with 0.001 learning rate, β1 = 0.9, and mini-batches of 128 examples. Importantly, we fix the random seed that controls the initialization of weights and the shuffling of training data, making the training algorithm deterministic. Fig. 
1a plots the expected generalization gap and the f -CMI bound of (21). We see that the bound is not vacuous and is not too far from the expected generalization gap even when considering only 75 training examples. As shown in Fig. 3a of Appendix B, if we increase the width of all layers 4 times, making the number of parameters approximately 3M, the results remain largely unchanged. Next, we move away from binary classification and consider the CIFAR-10 classification task [19]. To construct a well-generalizing algorithm, we use the ResNet-50 [16] network pretrained on ImageNet [7], and fine-tune it for 40 epochs using SGD with mini-batches of size 64, 0.01 learning rate, 0.9 momentum, and standard data augmentations. The results presented in Fig. 1b indicate that the f -CMI bound is always approximately 3 times larger than the expected generalization gap. In particular, when n = 20000, the expected generalization gap is 5%, while the bound predicts 16%. Note that the weight-based information-theoretic bounds discussed in Sec. 2 would give either infinite or trivial bounds for the deterministic algorithm described above. Even when we make the training algorithm stochastic by randomizing the seed, quantities like I(W ;S) still remain infinite, while both the generalization gap and the f -CMI bound do not change significantly (see Fig. 3b of Appendix B). For this reason, we change the training algorithm to Stochastic Gradient Langevin Dynamics (SGLD) [10, 38] and compare the f -CMI-based bound against the specialized bound of Negrea et al. [22] (see eq. (6) of [22]). This bound (referred to as the SGLD bound here) is derived from a weight-based information-theoretic generalization bound, and depends on the hyper-parameters of SGLD and on the variance of per-example gradients along the training trajectory. The SGLD algorithm is trained for 40 epochs, with learning rate and inverse temperature schedules described in Appendix B. Fig. 1c plots the expected generalization gap, the expected test error, the f -CMI bound and the SGLD bound. We see that the test accuracy plateaus after 16 epochs. At this time and afterwards, the f -CMI bound closely follows the generalization gap, while the SGLD bound increases to very high values. However, we see that the SGLD bound does better up to epoch 12. The difference between the f -CMI bound and the SGLD bound becomes more striking when we change the dataset to be a subset of CIFAR-10 consisting of 20000 examples, and fine-tune a pretrained ResNet-50 with SGLD. As shown in Fig. 2, even after a single epoch the SGLD bound is approximately 0.45, while the generalization gap is around 0.02. For comparison, the f -CMI bound is approximately 0.1 after one epoch of training. Interestingly, Fig. 1c shows that the f -CMI bound is large in the early epochs, despite the extremely small generalization gap. In fact, a similar trend, albeit to a lesser extent, is visible in the MNIST 4 vs 9 experiment, where a CNN is trained with a deterministic algorithm (see Fig. 3c of Appendix B). This indicates a possible area of improvement for the f -CMI bound. 6 Related Work This work is closely related to a rich literature of information-theoretic generalization bounds, some of which were discussed earlier [39, 4, 25, 22, 6, 33, 14, 13, 1, 23, 27, 8]. Most of these works derive generalization bounds that depend on a mutual information quantity measured between the output of the training algorithm and some quantity related to the training data. 
In contrast to this main idea, Xu and Raginsky [39] and Russo and Zou [29] discussed the idea of bounding the generalization gap with the information between the input and the vector of loss functions computed on training examples. This idea was later extended to the setting of conditional mutual information by Steinke and Zakynthinou [33]. These works are similar to ours in the sense that they move away from measuring information with weights, but they did not develop this line of reasoning far enough to arrive at efficient bounds similar to Corollary 2. Additionally, we believe that measuring information with the prediction function allows for better interpretation and is easier to work with analytically. Another related line of research is the stability-based bounds [5, 2, 26, 3, 9, 37, 27]. In Sec. 2 and Sec. 3 we improve existing generalization bounds that use information stability. In Sec. 4.3 we describe a technique for applying information stability bounds to deterministic algorithms. The main idea is to add noise to predictions, but only for analysis purposes. A similar idea, but in the context of measuring the information content of an individual example, was suggested by Harutyunyan et al. [15]. In fact, our notion of test-stability defined in Sec. 4.3 comes very close to their definition of functional sample information. A similar idea was recently used by Neu et al. [23] in analyzing the generalization performance of SGD. More broadly, this work is related to PAC-Bayes bounds and to classical generalization bounds. Please refer to the survey by Jiang* et al. [17] for more information on these bounds. Finally, our work has connections with attribute and membership inference attacks [32, 40, 21, 12]. Some of these works show that having white-box access to models allows constructing better membership inference attacks, compared to having black-box access. This is analogous to our observation that prediction-based bounds are better than weight-based bounds. Shokri et al. [32] and Yeom et al. [40] demonstrate that even in the case of black-box access to a well-generalizing model, it is sometimes still possible to construct successful membership attacks. This is in line with our observation that the f -CMI bound can be significantly large despite a small generalization gap (see epoch 4 of Fig. 1c). This suggests a possible direction for improving the f -CMI-based bounds. Acknowledgments and Disclosure of Funding This work is based on research sponsored by Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation therein. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory, DARPA or the U.S. Government. HH was partially supported by a USC Annenberg Fellowship.
1. What is the focus of the paper regarding generalization error bounds? 2. What are the strengths of the proposed improved bound, particularly in comparison to prior works? 3. Do you have any questions or concerns about the proof or the experimental results presented in the paper? 4. How does the reviewer assess the novelty and significance of the proposed bound in the context of existing literature? 5. Are there any limitations or potential drawbacks of the proposed approach that could be further explored or addressed?
Summary Of The Paper Review
Summary Of The Paper The authors build upon recent literature on information-theoretic bounds on generalization error, and specifically the conditional mutual information bound by Steinke and Zakynthinou [28]. They propose an improved bound that is tighter than [28] and easier to compute in practice. It is achieved by replacing the weight-parameter vector in the mutual information expression of [28] with a prediction function evaluated on a subset of in-sample (training) data and a subset of out-of-sample data. This enables the authors to give performance guarantees on deterministic training algorithms. Several toy examples and experimental results are presented to illustrate the advantages of the improved generalization bound. Review The proposed bound involving mutual information between the sample index variables and the prediction function involving the first n samples is interesting and has practical advantages. However, the proof seems to follow prior works and builds upon Lemma 1. Can the authors explain what aspects are non-trivial in the derivation of the new bound? Can the authors comment on how the result in Thm. 4.2 compares with other generalization results on deterministic algorithms in the literature? In the experimental section, I was wondering why the authors did not compare their bound with bounds other than Negrea et al.? There are other state-of-the-art bounds (such as the work of Haghifam et al.) that are better than Negrea et al. for SGLD. Are there difficulties in comparing with these works?
NIPS
Title Information-theoretic generalization bounds for black-box learning algorithms Abstract We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning. 1 Introduction Large neural networks trained with variants of stochastic gradient descent have excellent generalization capabilities, even in regimes where the number of parameters is much larger than the number of training examples. Zhang et al. [41] showed that classical generalization bounds based on various notions of complexity of the hypothesis set fail to explain this phenomenon, as the same neural network can generalize well for one choice of training data and memorize completely for another one. This observation has spurred a tenacious search for algorithm-dependent and data-dependent generalization bounds that give meaningful results in practical settings for deep learning [17]. One line of attack bounds the generalization error based on the information about the training dataset stored in the weights [39, 4, 22, 6, 33, 14, 23, 27]. The main idea is that when the training and testing performance of a neural network are different, the network weights necessarily capture some information about the training dataset. However, the opposite might not be true: a neural network can store significant portions of the training set in its weights and still generalize well [32, 40, 21]. Furthermore, because of their information-theoretic nature, these generalization bounds become infinite or produce trivial bounds for deterministic algorithms. When such bounds are not infinite, they are notoriously hard to estimate, due to the challenges arising in the estimation of Shannon mutual information between two high-dimensional variables (e.g., the weights of a ResNet and a training dataset). This work addresses the aforementioned challenges. We first improve some of the existing information-theoretic generalization bounds, providing a unified view and derivation of them (Sec. 2). We then derive novel generalization bounds that measure information with predictions, rather than with the output of the training algorithm (Sec. 3). These bounds are applicable to a wide range of methods, including neural networks, Bayesian algorithms, ensembling algorithms, and non-parametric approaches. 
In the case of neural networks, the proposed bounds improve over the existing weightbased bounds, partly because they avoid a counter-productive property of weight-based bounds that information stored in unused weights affects generalization bounds, even though it has no effect on generalization. The proposed bounds produce meaningful results for deterministic algorithms and are significantly easier to estimate. For example, in case of classification, computing our most efficient bound involves estimating mutual information between a pair of predictions and a binary variable. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). We apply the proposed bounds to ensembling algorithms, binary classification algorithms with finite VC dimension hypothesis classes, and to stable learning algorithms (Sec. 4). We compute our most efficient bound on realistic classification problems involving neural networks, and show that the bound closely follows the generalization error, even in situations when a neural network with 3M parameters is trained deterministically on 4000 examples, achieving 1% generalization error. 2 Weight-based generalization bounds We start by describing the necessary notation and definitions, after which we present some of the existing weigh-based information-theoretic generalization bounds, slightly improve some of them, and prove relations between them. The purpose of this section is to introduce the relevant existing bounds and prepare grounds for the functional conditional mutual information bounds introduced in Sec. 3, which we consider our main contribution. All proofs are presented in Appendix A. Preliminaries. We use capital letters for random variables, corresponding lowercase letters for their values, and calligraphic letters for their domains. If X is a random variable, X̄ denotes an independent copy of X . For example, if (X,Y ) is a pair of random variables with joint distribution PX,Y , then the joint distribution of (X̄, Ȳ ) will be PX̄,Ȳ = PX̄ ⊗ PȲ = PX ⊗ PY . A random variable X is called σ-subgaussian if E exp(t(X − EX)) ≤ exp(σ2t2/2), ∀t ∈ R. For example, a random variable that takes values in [a, b] almost surely, is (b− a)/2-subgaussian. Given probability measures P and Q defined on the same measurable space, such that P is absolutely continuous with respect to Q, the Kullback–Leibler divergence from P to Q is defined as KL (P ‖ Q) = ∫ log dPdQdP , where dPdQ is the Radon-Nikodym derivative of P with respect to Q. If X and Y are random variables defined on the same probability space, then KL (X ‖ Y ) denotes KL (PX ‖ PY ). The Shannon mutual information between random variables X and Y is I(X;Y ) = KL (PX,Y ‖ PX ⊗ PY ). In this paper, all information-theoretic quantities are measured in nats, instead of bits. Throughout the paper [n] denotes the set {1, 2, . . . , n}. Finally, if A = (a1, . . . , an) is a collection, then A−i , (a1, . . . , ai−1, ai+1, . . . , an). Theorems proved in the subsequent sections will be relying on the following lemma. Lemma 1. Let (Φ,Ψ) be a pair of random variables with joint distribution PΨ,Φ. If g(φ, ψ) is a measurable function such that EΦ,Ψ [g(Φ,Ψ)] exists and g(Φ̄, Ψ̄) is σ-subgaussian, then∣∣EΦ,Ψ [g(Φ,Ψ)]− EΦ,Ψ̄ [g(Φ, Ψ̄)]∣∣ ≤√2σ2I(Φ; Ψ). (1) Furthermore, if g(φ, Ψ̄) is σ-subgaussian for each φ and the expectation below exists, then EΦ,Ψ [( g(Φ,Ψ)− EΨ̄ g(Φ, Ψ̄) )2] ≤ 4σ2(I(Φ; Ψ) + log 3), (2) and P (∣∣g(Φ,Ψ)− EΨ̄ g(Φ, Ψ̄)∣∣ ≥ ) ≤ 4σ2(I(Φ; Ψ) + log 3) 2 , ∀ > 0. 
(3) The first part of this lemma is equivalent to Lemma 1 of Xu and Raginsky [39], which in turn has its roots in Russo and Zou [29]. The second part generalizes Lemma 2 of Hafez-Kolahi et al. [13] by also providing bounds on the expected squared difference. 2.1 Generalization bounds with input-output mutual information Let S = (Z1, Z2, . . . , Zn) ∼ Dn be a dataset of n i.i.d. examples, R ∈ R be a source of randomness (a random variable independent of S) and A : Zn × R → W be a training algorithm. Let W = A(S,R) be the output of the training algorithm applied on the dataset S with randomness R. Given a loss function ` :W ×Z → R, the empirical risk is Lemp(A,S,R) = 1n ∑n i=1 `(W,Zi) and the population risk is L(A,S,R) = EZ′∼D `(W,Z ′), where Z ′ is a test example independent from S and R. The generalization gap, also call generalization error, is L(A,S,R)−Lemp(A,S,R). In this setting, Xu and Raginsky [39] establish the following information-theoretic bound on the absolute value of the expected generalization gap. Theorem 2.1 (Thm. 1 of Xu and Raginsky [39]). If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ √ 2σ2I(W ;S) n . (4) We generalize this result by showing that instead of measuring information with the entire dataset, one can measure information with a subset of size m chosen uniformly at random. For brevity, hereafter we call subsets chosen uniformly at random just “random subsets”. Theorem 2.2. Let U be a random subset of [n] with size m, independent of S and R. If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ Eu∼U √ 2σ2 m I(W ;Su), (5) and ES,R (L(A,S,R)− Lemp(A,S,R))2 ≤ 4σ2 n (I(W ;S) + log 3) . (6) With a simple application of Markov’s inequality one can get tail bounds from the second part of the theorem. Furthermore, by taking square root of both sides of (6) and using Jensen’s inequality on the left side, one can also construct an upper bound for the expected absolute value of generalization gap, ES,R |L(A,S,R)− Lemp(A,S,R)|. These observations apply also to the other generalization gap bounds presented later in this work. Note the bound on the squared generalization gap is written only for the case of m = n. It is possible to derive squared generalization gap bounds of form 4σ 2 m (Eu∼U I(W ;Su) + log 3). Unfortunately, for small m the log 3 constant starts to dominate, resulting in vacuous bounds. Picking a small m decreases the mutual information term in (5), however, it also decreases the denominator. When settingm = n, we get the bound of Xu and Raginsky [39] (Thm. 2.1). Whenm = 1, the bound of (5) becomes 1n ∑n i=1 √ 2σ2I(W ;Zi), matching the result of Bu et al. [6] (Proposition 1). A similar bound, but for a different notion of information, was derived by Alabdulmohsin [2]. Bu et al. [6] prove that the bound with m = 1 is tighter than the bound with m = n. We generalize this result by proving that the bound of (5) is non-descreasing in m. Proposition 1. Let m ∈ [n− 1], U be a random subset of [n] of size m, U ′ be a random subset of size m+ 1, and φ : R→ R be any non-decreasing concave function. Then EU φ ( 1 m I(W ;Su) ) ≤ EU ′ φ ( 1 m+ 1 I(W ;Su′) ) . (7) When φ(x) = √ x, this result proves that the optimal value for m in (5) is 1. Furthermore, when we use Jensen’s inequality to move expectation over U inside the square root in (5), then the resulting bound becomes √ 2σ2 m Eu∼U I(W ;Su) and matches the result of Negrea et al. [22] (Thm. 2.3). 
These bounds are also non-decreasing with respect to m (using Proposition 1 with φ(x) = x). Thm. 2.1 can be used to derive generalization bounds that depend on the information between W and a single example Zi conditioned on the remaining examples Z−i = (Z1, . . . , Zi−1, Zi+1, . . . , Zn). Theorem 2.3. If `(w,Z ′), where Z ′ ∼ D, is σ-subgaussian for all w ∈ W , then |ES,R [L(A,S,R)− Lemp(A,S,R)]| ≤ 1 n n∑ i=1 √ 2σ2I(W ;Zi | Z−i), (8) and ES,R (L(A,S,R)− Lemp(A,S,R))2 ≤ 4σ2 n ( n∑ i=1 I(W ;Zi | Z−i) + log 3 ) . (9) This theorem is a simple corollary of Thm. 2.2, using the facts that I(W ;Zi) ≤ I(W ;Zi | Z−i) and that I(W ;S) is upper bounded by ∑n i=1 I(W ;Zi | Z−i), which is also known as erasure information [35]. The first part of it improves the result of Raginsky et al. [26] (Thm. 2), as the averaging over i is outside of the square root. While these bounds are worse that the corresponding bounds of Thm. 2.2, it is sometimes easier to manipulate them analytically. The bounds described above measure information with the output W of the training algorithm. In the case of prediction tasks with parametric methods, the parameters W might contain information about the training dataset, but not use it to make predictions. Partly for this reason, the main goal of this paper is to derive generalization bounds that measure information with the prediction function, rather than with the weights. In general, there is no straightforward way of encoding the prediction function into a random variable. However, when the domain Z is finite, we can encode the prediction function as the collection of predictions on all examples of Z . This naturally leads us to the next setting (albeit with a different motivation), first considered by Steinke and Zakynthinou [33], where one first fixes a set of 2n examples, and then randomly selects n of them to form the training set. We use this setting to provide prediction-based generalization bounds in Sec. 3. Before describing these bounds we present the setting of Steinke and Zakynthinou [33] in detail and generalize some of the existing weight-based bounds in that setting. 2.2 Generalization bounds with conditional mutual information Let Z̃ ∈ Zn×2 be a collection of 2n i.i.d samples from D, grouped in n pairs. The random variable S ∼ Uniform({0, 1}n) specifies which example to select from each pair to form the training set Z̃S = (Z̃i,Si) n i=1. LetR be a random variable, independent of Z̃ and S, that captures the stochasticity of training. In this setting Steinke and Zakynthinou [33] defined condition mutual information (CMI) of algorithm A with respect to the data distribution D as CMID(A) = I(A(Z̃S , R);S | Z̃) = Ez̃∼Z̃ I(A(z̃S , R);S), (10) and proved the following upper bound on expected generalization gap. Theorem 2.4 (Thm. 2, Steinke and Zakynthinou [33]). If the loss function `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then the expected generalization gap can be bounded as follows:∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ √ 2 n CMID(A). (11) Haghifam et al. [14] improved this bound in two aspects. First, they provided bounds where expectation over Z̃ is outside of the square root. Second, they considered measuring information with subsets of S, as we did in the previous section. Theorem 2.5 (Thm. 3.1 of Haghifam et al. [14]). Let m ∈ [n] and U ⊆ [n] be a random subset of size m, independent from R, Z̃, and S. If the loss function `(w, z) ∈ [0, 1],∀w ∈ W, z ∈ Z , then∣∣∣EZ̃,S,R [L(A, Z̃S , R)− Lemp(A, Z̃S , R)]∣∣∣ ≤ Ez∼Z̃ √ 2 m Eu∼U I(A(z̃S , R);Su). 
Furthermore, for $m = 1$ they tighten the bound by showing that one can move the expectation over $U$ outside of the square root (Haghifam et al. [14], Thm. 3.4). We generalize these results by showing that for all $m$ the expectation over $U$ can be taken outside of the square root. Furthermore, our proof closely follows the proof of Thm. 2.2.

Theorem 2.6. Let $m \in [n]$ and $U \subseteq [n]$ be a random subset of size $m$, independent of $R$, $\tilde{Z}$, and $S$. If $\ell(w,z) \in [0,1]$ for all $w \in \mathcal{W}$, $z \in \mathcal{Z}$, then
\[
\left|\mathbb{E}_{\tilde{Z},S,R}\left[L(A, \tilde{Z}_S, R) - L_{\mathrm{emp}}(A, \tilde{Z}_S, R)\right]\right| \le \mathbb{E}_{\tilde{z} \sim \tilde{Z},\, u \sim U}\sqrt{\frac{2}{m}\, I(A(\tilde{z}_S, R); S_u)}, \tag{13}
\]
and
\[
\mathbb{E}_{\tilde{Z},S,R}\left(L(A, \tilde{Z}_S, R) - L_{\mathrm{emp}}(A, \tilde{Z}_S, R)\right)^2 \le \frac{8}{n}\left(\mathbb{E}_{\tilde{z} \sim \tilde{Z}}\, I(A(\tilde{z}_S, R); S) + 2\right). \tag{14}
\]

The bound of (13) improves over the bound of Thm. 2.5 and matches the special result for $m = 1$. Rodríguez-Gálvez et al. [28] proved an even tighter expected generalization gap bound by replacing $I(A(\tilde{Z}_S, R); S_u \mid \tilde{Z} = \tilde{z})$ with $I(A(\tilde{Z}_S, R); S_u \mid \tilde{Z}_u = \tilde{z}_u)$.

Haghifam et al. [14] showed that if one takes the expectation over $\tilde{Z}$ inside the square root in (12), then the resulting looser upper bounds become non-decreasing in $m$. Using this result, they showed that their special-case bound for $m = 1$ is the tightest. We generalize their results by showing that even without taking the expectation inside the square root, the bounds of Thm. 2.5 are non-decreasing in $m$. We also show that the same holds for our tighter bounds of (13).

Proposition 2. Let $m \in [n-1]$, $U$ be a random subset of $[n]$ of size $m$, $U'$ be a random subset of size $m+1$, $\tilde{z}$ be any fixed value of $\tilde{Z}$, and $\phi : \mathbb{R} \to \mathbb{R}$ be any non-decreasing concave function. Then
\[
\mathbb{E}_{u \sim U}\, \phi\!\left(\frac{1}{m}\, I(A(\tilde{z}_S, R); S_u)\right) \le \mathbb{E}_{u' \sim U'}\, \phi\!\left(\frac{1}{m+1}\, I(A(\tilde{z}_S, R); S_{u'})\right). \tag{15}
\]

By setting $\phi(x) = x$, taking the square root of both sides of (15), and then taking the expectation over $\tilde{z}$, we prove that the bounds of (12) are non-decreasing in $m$. By setting $\phi(x) = \sqrt{x}$ and then taking the expectation over $\tilde{z}$, we prove that the bounds of (13) are non-decreasing in $m$. Similarly to Thm. 2.3 of the previous section, Thm. A.1, presented in Appendix A, establishes generalization bounds with information-theoretic stability quantities.

3 Functional conditional mutual information

The bounds in Sec. 2 leverage the information in the output of the algorithm, $W$. In this section we focus on supervised learning problems: $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$. To encompass many types of approaches, we do not assume that the training algorithm has an output $W$ which is then used to make predictions. Instead, we assume that the learning method implements a function $f : \mathcal{Z}^n \times \mathcal{X} \times \mathcal{R} \to \mathcal{K}$ that takes a training set $z$, a test input $x'$, and an auxiliary argument $r$ capturing the stochasticity of training and predictions, and outputs a prediction $f(z, x', r)$ on the test example. Note that the prediction domain $\mathcal{K}$ can be different from $\mathcal{Y}$. This setting includes non-parametric methods (for which $W$ is the training dataset itself), parametric methods, Bayesian algorithms, and more. For example, in parametric methods, where a hypothesis set $\mathcal{H} = \{h_w : \mathcal{X} \to \mathcal{K} \mid w \in \mathcal{W}\}$ is defined, $f(z, x, r) = h_{A(z,r)}(x)$.

In this supervised setting, the loss function $\ell : \mathcal{K} \times \mathcal{Y} \to \mathbb{R}$ measures the discrepancy between a prediction and a label. As in the previous subsection, we assume that a collection of $2n$ i.i.d. examples $\tilde{Z} \sim \mathcal{D}^{n \times 2}$ is given, grouped in $n$ pairs, and the random variable $S \sim \mathrm{Uniform}(\{0,1\}^n)$ specifies which example to select from each pair to form the training set $\tilde{Z}_S = (\tilde{Z}_{i,S_i})_{i=1}^n$.
Let $R$ be an auxiliary random variable, independent of $\tilde{Z}$ and $S$, that provides stochasticity for predictions (e.g., in neural networks $R$ can be used to make the training stochastic). The empirical risk of the learning method $f$ trained on dataset $\tilde{Z}_S$ with randomness $R$ is defined as $L_{\mathrm{emp}}(f, \tilde{Z}_S, R) = \frac{1}{n}\sum_{i=1}^n \ell(f(\tilde{Z}_S, X_i, R), Y_i)$. The population risk is defined as $L(f, \tilde{Z}_S, R) = \mathbb{E}_{Z' \sim \mathcal{D}}\, \ell(f(\tilde{Z}_S, X', R), Y')$.

Before moving forward we adopt two conventions. First, if $z$ is a collection of examples, then $x$ and $y$ denote the collections of its inputs and labels, respectively. Second, if $x$ is a collection of inputs, then $f(z, x, r)$ denotes the collection of predictions on $x$ after training on $z$ with randomness $r$. We now define functional conditional mutual information ($f$-CMI).

Definition 3.1. Let $\mathcal{D}$, $f$, $R$, $\tilde{Z}$, $S$ be defined as above and let $u \subseteq [n]$ be a subset of size $m$. Then the pointwise functional conditional mutual information $f\text{-CMI}(f, \tilde{z}, u)$ is defined as
\[
f\text{-CMI}(f, \tilde{z}, u) = I(f(\tilde{z}_S, \tilde{x}_u, R); S_u), \tag{16}
\]
while the functional conditional mutual information $f\text{-CMI}_{\mathcal{D}}(f, u)$ is defined as
\[
f\text{-CMI}_{\mathcal{D}}(f, u) = \mathbb{E}_{\tilde{z} \sim \tilde{Z}}\, f\text{-CMI}(f, \tilde{z}, u). \tag{17}
\]
When $u = [n]$ we simply use the notations $f\text{-CMI}(f, \tilde{z})$ and $f\text{-CMI}_{\mathcal{D}}(f)$, instead of $f\text{-CMI}(f, \tilde{z}, [n])$ and $f\text{-CMI}_{\mathcal{D}}(f, [n])$, respectively.

Theorem 3.1. Let $U$ be a random subset of $[n]$ of size $m$, independent of $\tilde{Z}$, $S$, and the randomness of the training algorithm $f$. If $\ell(\hat{y}, y) \in [0,1]$ for all $\hat{y} \in \mathcal{K}$, $y \in \mathcal{Y}$, then
\[
\left|\mathbb{E}_{\tilde{Z},R,S}\left[L(f, \tilde{Z}_S, R) - L_{\mathrm{emp}}(f, \tilde{Z}_S, R)\right]\right| \le \mathbb{E}_{\tilde{z} \sim \tilde{Z},\, u \sim U}\sqrt{\frac{2}{m}\, f\text{-CMI}(f, \tilde{z}, u)}, \tag{18}
\]
and
\[
\mathbb{E}_{\tilde{Z},R,S}\left(L(f, \tilde{Z}_S, R) - L_{\mathrm{emp}}(f, \tilde{Z}_S, R)\right)^2 \le \frac{8}{n}\left(\mathbb{E}_{\tilde{z} \sim \tilde{Z}}\, f\text{-CMI}(f, \tilde{z}) + 2\right). \tag{19}
\]

For parametric methods, the bound of (18) improves over the bound of (13), as the Markov chain $S_u - A(\tilde{z}_S, R) - f(\tilde{z}_S, \tilde{x}_u, R)$ allows us to use the data processing inequality $I(f(\tilde{z}_S, \tilde{x}_u, R); S_u) \le I(A(\tilde{z}_S, R); S_u)$. For deterministic algorithms, $I(A(\tilde{z}_S); S_u)$ is often equal to $H(S_u) = m\log 2$, as most likely each choice of $S$ produces a different $W = A(\tilde{z}_S)$. In such cases the bound with $I(W; S_u)$ is vacuous. In contrast, the proposed bounds with $f$-CMI (especially when $m = 1$) do not have this problem. Even when the algorithm is stochastic, the information between $W$ and $S_u$ can be much larger than the information between predictions and $S_u$, as having access to weights makes it easier to determine $S_u$ (e.g., by using gradients). A similar phenomenon has been observed in the context of membership attacks, where having access to the weights of a neural network allows constructing more successful membership attacks compared to having access to predictions only [21, 12].

Corollary 1. When $m = n$, the bound of (18) becomes
\[
\left|\mathbb{E}_{\tilde{Z},R,S}\left[L(f, \tilde{Z}_S, R) - L_{\mathrm{emp}}(f, \tilde{Z}_S, R)\right]\right| \le \mathbb{E}_{\tilde{z} \sim \tilde{Z}}\sqrt{\frac{2}{n}\, f\text{-CMI}(f, \tilde{z})} \le \sqrt{\frac{2}{n}\, f\text{-CMI}_{\mathcal{D}}(f)}. \tag{20}
\]

For parametric models, this improves over the CMI bound (Thm. 2.4), as by the data processing inequality, $f\text{-CMI}_{\mathcal{D}}(f) = I(f(\tilde{Z}_S, \tilde{X}, R); S \mid \tilde{Z}) \le I(A(\tilde{Z}_S, R); S \mid \tilde{Z}) = \mathrm{CMI}_{\mathcal{D}}(A)$.

Remark 1. Note that the collection of training and testing predictions $f(\tilde{Z}_S, \tilde{X}, R)$ cannot be replaced with only the testing predictions $f(\tilde{Z}_S, \tilde{X}_{\mathrm{neg}(S)}, R)$. As an example, consider an algorithm that memorizes the training examples and outputs a constant prediction on any other example. This algorithm will have a non-zero generalization gap, but $f(\tilde{Z}_S, \tilde{X}_{\mathrm{neg}(S)}, R)$ will be constant and will have zero information with $S$ conditioned on any random variable. Moreover, if we replace $f(\tilde{Z}_S, \tilde{X}, R)$ with only the training predictions $f(\tilde{Z}_S, \tilde{X}_S, R)$, the resulting bound can become too loose, as one can deduce $S$ by comparing the training-set predictions with the labels $\tilde{Y}$.
Corollary 2. When $m = 1$, the bound of (18) becomes
\[
\left|\mathbb{E}_{\tilde{Z},R,S}\left[L(f, \tilde{Z}_S, R) - L_{\mathrm{emp}}(f, \tilde{Z}_S, R)\right]\right| \le \frac{1}{n}\sum_{i=1}^n \mathbb{E}_{\tilde{z} \sim \tilde{Z}}\sqrt{2\, I(f(\tilde{z}_S, \tilde{x}_i, R); S_i)}. \tag{21}
\]

A great advantage of this bound compared to all other bounds described so far is that the mutual information term is computed between a relatively low-dimensional random variable $f(\tilde{z}_S, \tilde{x}_i, R)$ and a binary random variable $S_i$. For example, in the case of binary classification with $\mathcal{K} = \{0,1\}$, $f(\tilde{z}_S, \tilde{x}_i, R)$ is a pair of two binary variables. This allows us to estimate the bound efficiently and accurately (please refer to Appendix B for more details; a small illustrative sketch is also given at the end of this section). Note that estimating the other information-theoretic bounds is significantly harder. The bounds of Xu and Raginsky [39], Negrea et al. [22], and Bu et al. [6] are hard to estimate as they involve estimating the mutual information between a high-dimensional non-discrete variable $W$ and at least one example $Z_i$. Furthermore, this mutual information can be infinite in the case of deterministic algorithms or when $H(Z_i)$ is infinite. The bounds of Haghifam et al. [14] and Steinke and Zakynthinou [33] are also hard to estimate as they involve estimating the mutual information between $W$ and at least one train-test split variable $S_i$.

As in the case of the bounds presented in the previous section (Thm. 2.2 and Thm. 2.6), we prove that the bound of Thm. 3.1 is non-decreasing in $m$. This stays true even when we loosen the upper bounds by moving the expectation over $U$, or the expectation over $\tilde{Z}$, or both, under the square root. The following proposition allows us to prove all these statements.

Proposition 3. Let $m \in [n-1]$, $U$ be a random subset of $[n]$ of size $m$, $U'$ be a random subset of size $m+1$, $\tilde{z}$ be any fixed value of $\tilde{Z}$, and $\phi : \mathbb{R} \to \mathbb{R}$ be any non-decreasing concave function. Then
\[
\mathbb{E}_{u \sim U}\, \phi\!\left(\frac{1}{m}\, I(f(\tilde{z}_S, \tilde{x}_u, R); S_u)\right) \le \mathbb{E}_{u' \sim U'}\, \phi\!\left(\frac{1}{m+1}\, I(f(\tilde{z}_S, \tilde{x}_{u'}, R); S_{u'})\right). \tag{22}
\]

By setting $\phi(x) = \sqrt{x}$ and then taking the expectation over $\tilde{z}$ and $u$, we prove that the bounds of Thm. 3.1 are non-decreasing in $m$. By setting $\phi(x) = x$, taking the expectation over $\tilde{z}$, and then taking the square root of both sides of (22), we prove that the bounds are non-decreasing in $m$ when both expectations are under the square root. Proposition 3 proves that $m = 1$ is the optimal choice in Thm. 3.1. Notably, the bound that is the easiest to compute is also the tightest!

Analogously to Thm. A.1, we provide the following stability-based bounds.

Theorem 3.2. If $\ell(\hat{y}, y) \in [0,1]$ for all $\hat{y} \in \mathcal{K}$, $y \in \mathcal{Y}$, then
\[
\left|\mathbb{E}_{\tilde{Z},R,S}\left[L(f, \tilde{Z}_S, R) - L_{\mathrm{emp}}(f, \tilde{Z}_S, R)\right]\right| \le \mathbb{E}_{\tilde{z} \sim \tilde{Z}}\left[\frac{1}{n}\sum_{i=1}^n \sqrt{2\, I(f(\tilde{z}_S, \tilde{x}_i, R); S_i \mid S_{-i})}\right], \tag{23}
\]
and
\[
\mathbb{E}_{\tilde{Z},R,S}\left(L(f, \tilde{Z}_S, R) - L_{\mathrm{emp}}(f, \tilde{Z}_S, R)\right)^2 \le \frac{8}{n}\left(\mathbb{E}_{\tilde{z} \sim \tilde{Z}}\left[\sum_{i=1}^n I(f(\tilde{z}_S, \tilde{x}, R); S_i \mid S_{-i})\right] + 2\right).
\]

Note that, unlike (23), in the second part of Thm. 3.2 we measure information between predictions on all $2n$ examples and $S_i$ conditioned on $S_{-i}$. It is an open question whether $f(\tilde{z}_S, \tilde{x}, R)$ can be replaced with $f(\tilde{z}_S, \tilde{x}_i, R)$, i.e., predictions only on the $i$-th pair.
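To illustrate how the per-example terms of Corollary 2 can be estimated in the binary-classification case, the following sketch uses a simple plug-in (histogram) estimator of the mutual information between the pair of predictions on the $i$-th pair and the binary split variable $S_i$. This is an illustrative sketch only; the estimator actually used in Appendix B may differ, and the function names here are ours.

```python
import numpy as np

def plugin_mutual_information(pred_pairs, s_i):
    """Plug-in (histogram) estimate of I(F; S_i) from paired samples.

    pred_pairs : int array of shape (k, 2); for each of k independent draws of
                 (S, R), the predictions on the two examples of the i-th pair,
                 encoded as class indices (e.g. 0/1 for binary classification).
    s_i        : int array of shape (k,); the corresponding draws of S_i in {0, 1}.
    """
    n_classes = int(pred_pairs.max()) + 1
    # Encode each prediction pair as a single discrete symbol.
    f_symbol = pred_pairs[:, 0] * n_classes + pred_pairs[:, 1]

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log(p)))

    # I(F; S_i) = H(F) + H(S_i) - H(F, S_i), all estimated with empirical frequencies.
    joint = f_symbol * 2 + s_i
    return entropy(f_symbol) + entropy(s_i) - entropy(joint)

def corollary2_term(pred_pairs, s_i):
    """The i-th term sqrt(2 * I(f(z_S, x_i, R); S_i)) of the bound (21), for a fixed z-tilde."""
    return float(np.sqrt(2.0 * max(plugin_mutual_information(pred_pairs, s_i), 0.0)))
```

Averaging these terms over $i$ and over independent draws of $\tilde{z}$ gives a Monte-Carlo estimate of the right-hand side of (21).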
4 Applications

In this section we describe three applications of the $f$-CMI-based generalization bounds.

4.1 Ensembling algorithms

Ensembling algorithms combine the predictions of multiple learning algorithms to obtain better performance. Let us consider $k$ learning algorithms, $f_1, f_2, \ldots, f_k$, each with its own independent randomness $R_i$, $i \in [k]$. Some ensembling algorithms can be viewed as a possibly stochastic function $g : \mathcal{K}^k \to \mathcal{K}$ that takes the predictions of the $k$ algorithms and combines them into a single prediction. Relating the generalization gap of the resulting ensembling algorithm to that of the individual $f_i$ can be challenging for complicated choices of $g$. However, it is easy to bound the generalization gap of $g(f_1, \ldots, f_k)$ in terms of the $f$-CMIs of the individual predictors. Let $\tilde{z}$ be a fixed value of $\tilde{Z}$ and $x$ be an arbitrary collection of inputs. Denoting $F_i = f_i(\tilde{z}_S, x, R_i)$, $i \in [k]$, we have that
\begin{align*}
I(g(F_1, \ldots, F_k); S) &\le I(F_1, \ldots, F_k; S) && \text{(data processing inequality)}\\
&= I(F_1; S) + I(F_2, \ldots, F_k; S) - I(F_1; F_2, \ldots, F_k) + I(F_1; F_2, \ldots, F_k \mid S) && \text{(chain rule)}\\
&\le I(F_1; S) + I(F_2, \ldots, F_k; S) && \text{(as MI is nonnegative and } F_1 \perp F_2, \ldots, F_k \mid S\text{)}\\
&\le \cdots \le I(F_1; S) + \cdots + I(F_k; S). && \text{(repeating the argument to separate all } F_i\text{)}
\end{align*}
Unfortunately, the same derivation does not work if we replace $S$ with $S_u$, where $u$ is a proper subset of $[n]$, as $I(F_1; F_2, \ldots, F_k \mid S_u)$ will not be zero in general.

4.2 Binary classification with finite VC dimension

Let us consider the case of binary classification: $\mathcal{Y} = \{0,1\}$, where the learning method $f : \mathcal{Z}^n \times \mathcal{X} \times \mathcal{R} \to \{0,1\}$ is implemented using a learning algorithm $A : \mathcal{Z}^n \times \mathcal{R} \to \mathcal{W}$ that selects a classifier from a hypothesis set $\mathcal{H} = \{h_w : \mathcal{X} \to \mathcal{Y}\}$. If $\mathcal{H}$ has finite VC dimension $d$ [34], then for any algorithm $f$ the quantity $f\text{-CMI}(f, \tilde{z})$ can be bounded as follows.

Theorem 4.1. Let $\mathcal{Z}$, $\mathcal{H}$, $f$ be defined as above, and let $d < \infty$ be the VC dimension of $\mathcal{H}$. Then for any algorithm $f$ and $\tilde{z} \in \mathcal{Z}^{n \times 2}$,
\[
f\text{-CMI}(f, \tilde{z}) \le \max\left\{(d+1)\log 2,\; d\log\left(2en/d\right)\right\}. \tag{24}
\]

Considering the 0-1 loss function and using this result in Corollary 1, we get an expected generalization gap bound of order $O\!\left(\sqrt{\frac{d}{n}\log\left(\frac{n}{d}\right)}\right)$, matching the classical uniform convergence bound [34]. The $\sqrt{\log n}$ factor can be removed in some cases [13]. Both Xu and Raginsky [39] and Steinke and Zakynthinou [33] prove similar information-theoretic bounds in the case of finite VC dimension classes, but their results hold for specific algorithms only. Even in the simple case of threshold functions, $\mathcal{X} = [0,1]$ and $\mathcal{H} = \{h_w : x \mapsto 1_{\{x > w\}} \mid w \in [0,1]\}$, all weight-based bounds described in Sec. 2 are vacuous if one uses a training algorithm that encodes the training set in the insignificant bits of $W$, while still getting zero error on the training set and hence achieving low test error.

4.3 Stable deterministic or stochastic algorithms

Theorems 2.3, A.1 and 3.2 provide generalization bounds involving information-theoretic stability measures, such as $I(W; Z_i \mid Z_{-i})$, $I(A(\tilde{z}_S, R); S \mid S_{-i})$ and $I(f(\tilde{z}_S, \tilde{x}, R); S_i \mid S_{-i})$. In this section we build upon the prediction-based stability bounds of Thm. 3.2. First, we show that for any collection of examples $x$, the mutual information $I(f(\tilde{z}_S, x, R); S_i \mid S_{-i})$ can be bounded as follows.

Proposition 4. Let $S^{i \leftarrow c}$ denote $S$ with $S_i$ set to $c$. Then for any $\tilde{z} \in \mathcal{Z}^{n \times 2}$ and any collection of inputs $x \in \mathcal{X}^k$, the mutual information $I(f(\tilde{z}_S, x, R); S_i \mid S_{-i})$ is upper bounded by
\[
\frac{1}{4}\, \mathrm{KL}\!\left(f(\tilde{z}_{S^{i \leftarrow 1}}, x, R) \,\big|\, S_{-i} \;\middle\|\; f(\tilde{z}_{S^{i \leftarrow 0}}, x, R) \,\big|\, S_{-i}\right) + \frac{1}{4}\, \mathrm{KL}\!\left(f(\tilde{z}_{S^{i \leftarrow 0}}, x, R) \,\big|\, S_{-i} \;\middle\|\; f(\tilde{z}_{S^{i \leftarrow 1}}, x, R) \,\big|\, S_{-i}\right).
\]

To compute the right-hand side of Proposition 4 one needs to know how much, on average, the distribution of predictions on $x$ changes after replacing the $i$-th example in the training dataset. A problem arises when we consider deterministic algorithms. In such cases the right-hand side is infinite, while the left-hand side $I(f(\tilde{z}_S, x, R); S_i \mid S_{-i})$ is always finite and can be small. Therefore, for deterministic algorithms, directly applying the result of Proposition 4 will not give meaningful generalization bounds.
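To see why injecting noise resolves this issue, consider the following illustrative computation (ours, with the noise level left as a free parameter; the optimal choice is made in the proof of the theorem below). Suppose the deterministic predictions are smoothed with Gaussian noise, i.e., the randomized predictor is $f_\sigma(z, x, \xi) = f(z, x) + \xi$ with $\xi \sim \mathcal{N}(0, \sigma^2 I_d)$. Conditioned on $S_{-i}$, the two distributions appearing in Proposition 4 are Gaussians with the same covariance, so each KL term equals the squared distance between the mean predictions divided by $2\sigma^2$, and Proposition 4 gives
\[
I\big(f_\sigma(\tilde{z}_S, x, \xi); S_i \mid S_{-i}\big) \;\le\; \frac{1}{4\sigma^2}\, \mathbb{E}_{S_{-i}}\left\|f(\tilde{z}_{S^{i\leftarrow 1}}, x) - f(\tilde{z}_{S^{i\leftarrow 0}}, x)\right\|^2.
\]
The right-hand side is finite and is governed by how much the predictions move when a single training example is swapped, which is exactly the kind of quantity formalized in Definition 4.1 below; the price of the smoothing is an additional loss term that grows with $\sigma$ (for a Lipschitz loss), which is why an optimal noise level has to be chosen.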
Following this idea, we show that we can add an optimal amount of noise to predictions, upper bound the generalization gap of the resulting noisy algorithm, and relate that to the generalization gap of the original deterministic algorithm. Let us consider a deterministic algorithm $f : \mathcal{Z}^n \times \mathcal{X} \to \mathbb{R}^d$. We define the following notions of functional stability.

Definition 4.1 (Functional stability). Let $S = (Z_1, \ldots, Z_n) \sim \mathcal{D}^n$ be a collection of $n$ i.i.d. samples, and let $Z'$ and $Z_{\mathrm{test}}$ be two additional independent samples from $\mathcal{D}$. Let $S^{(i)} \triangleq (Z_1, \ldots, Z_{i-1}, Z', Z_{i+1}, \ldots, Z_n)$ be the collection constructed from $S$ by replacing the $i$-th example with $Z'$. A deterministic algorithm $f : \mathcal{Z}^n \times \mathcal{X} \to \mathbb{R}^d$ is
a) $\beta$ self-stable if
\[
\forall i \in [n], \quad \mathbb{E}_{S, Z'}\left\|f(S, Z_i) - f(S^{(i)}, Z_i)\right\|^2 \le \beta^2, \tag{25}
\]
b) $\beta_1$ test-stable if
\[
\forall i \in [n], \quad \mathbb{E}_{S, Z', Z_{\mathrm{test}}}\left\|f(S, Z_{\mathrm{test}}) - f(S^{(i)}, Z_{\mathrm{test}})\right\|^2 \le \beta_1^2, \tag{26}
\]
c) $\beta_2$ train-stable if
\[
\forall i, j \in [n],\ i \ne j, \quad \mathbb{E}_{S, Z'}\left\|f(S, Z_j) - f(S^{(i)}, Z_j)\right\|^2 \le \beta_2^2. \tag{27}
\]

Theorem 4.2. Let $\mathcal{Y} = \mathbb{R}^d$, $f : \mathcal{Z}^n \times \mathcal{X} \to \mathbb{R}^d$ be a deterministic algorithm that is $\beta$ self-stable, and let $\ell(\hat{y}, y) \in [0,1]$ be a loss function that is $\gamma$-Lipschitz in its first coordinate. Then
\[
\left|\mathbb{E}_{\tilde{Z},R,S}\left[L(f, \tilde{Z}_S, R) - L_{\mathrm{emp}}(f, \tilde{Z}_S, R)\right]\right| \le 2^{3/2}\, d^{1/4} \sqrt{\gamma\beta}. \tag{28}
\]
Furthermore, if $f$ is also $\beta_1$ test-stable and $\beta_2$ train-stable, then
\[
\mathbb{E}_{\tilde{Z},R,S}\left(L(f, \tilde{Z}_S, R) - L_{\mathrm{emp}}(f, \tilde{Z}_S, R)\right)^2 \le \frac{32}{n} + 12^{3/2}\, \sqrt{d}\, \gamma \sqrt{2\beta^2 + n\beta_1^2 + n\beta_2^2}. \tag{29}
\]

It is expected that $\beta_2$ is smaller than $\beta$ and $\beta_1$. For example, in the case of neural networks interpolating the training data, or in the case of empirical risk minimization in the realizable setting, $\beta_2$ will be zero. It is also expected that $\beta$ is larger than $\beta_1$. However, the relation between $\beta$ and $n\beta_1^2$ is not trivial. The notion of pointwise hypothesis stability $\beta_2'$ defined by Bousquet and Elisseeff [5] (Definition 4) is comparable to our notion of self-stability $\beta$. The first part of Theorem 11 in [5] describes a generalization bound where the difference between empirical and population losses is of order $1/\sqrt{n} + \sqrt{\beta_2'}$, which is comparable with our result of Thm. 4.2 ($\Theta(\sqrt{\beta})$). The proof there also contains a bound on the expected squared difference between empirical and population losses; that bound is of order $1/n + \beta_2'$. In contrast, our result of (29) contains two extra terms related to test-stability and train-stability (the terms $n\beta_1^2$ and $n\beta_2^2$). If $\beta$ dominates $n\beta_1^2 + n\beta_2^2$, then the bound of (29) matches the result of Bousquet and Elisseeff [5].

5 Experiments

As mentioned earlier, the expected generalization gap bound of Corollary 2 is significantly easier to compute than existing information-theoretic bounds, and it does not give trivial results for deterministic algorithms. To understand how well the bound does in challenging situations, we consider cases where the algorithm generalizes well despite the high complexity of the hypothesis class and the relatively small number of training examples. Due to space constraints we omit some experimental details and present them in Appendix B. The code can be found at github.com/hrayrhar/f-CMI.

First, we consider the MNIST 4 vs 9 digit classification task [20] using a 4-layer convolutional neural network (CNN) that has approximately 200K parameters. We train the network for 200 epochs using the Adam algorithm [18] with learning rate 0.001, $\beta_1 = 0.9$, and mini-batches of 128 examples. Importantly, we fix the random seed that controls the initialization of weights and the shuffling of training data, making the training algorithm deterministic.
Fig. 1a plots the expected generalization gap and the $f$-CMI bound of (21). We see that the bound is not vacuous and is not too far from the expected generalization gap even when considering only 75 training examples. As shown in Fig. 3a of Appendix B, if we increase the width of all layers 4 times, making the number of parameters approximately 3M, the results remain largely unchanged.

Next, we move away from binary classification and consider the CIFAR-10 classification task [19]. To construct a well-generalizing algorithm, we use a ResNet-50 [16] pretrained on ImageNet [7] and fine-tune it for 40 epochs using SGD with mini-batches of size 64, learning rate 0.01, momentum 0.9, and standard data augmentations. The results presented in Fig. 1b indicate that the $f$-CMI bound is always approximately 3 times larger than the expected generalization gap. In particular, when $n = 20000$, the expected generalization gap is 5%, while the bound predicts 16%.

Note that the weight-based information-theoretic bounds discussed in Sec. 2 would give either infinite or trivial bounds for the deterministic algorithm described above. Even when we make the training algorithm stochastic by randomizing the seed, quantities like $I(W;S)$ still remain infinite, while both the generalization gap and the $f$-CMI bound do not change significantly (see Fig. 3b of Appendix B). For this reason, we change the training algorithm to Stochastic Gradient Langevin Dynamics (SGLD) [10, 38] and compare the $f$-CMI-based bound against the specialized bound of Negrea et al. [22] (see eq. (6) of [22]). This bound (referred to as the SGLD bound here) is derived from a weight-based information-theoretic generalization bound and depends on the hyper-parameters of SGLD and on the variance of per-example gradients along the training trajectory. The SGLD algorithm is trained for 40 epochs, with learning rate and inverse temperature schedules described in Appendix B. Fig. 1c plots the expected generalization gap, the expected test error, the $f$-CMI bound and the SGLD bound. We see that the test accuracy plateaus after 16 epochs. At this time and afterwards, the $f$-CMI bound closely follows the generalization gap, while the SGLD bound increases to very high values. However, the SGLD bound does better up to epoch 12.

The difference between the $f$-CMI bound and the SGLD bound becomes more striking when we change the dataset to a subset of CIFAR-10 consisting of 20000 examples and fine-tune a pretrained ResNet-50 with SGLD. As shown in Fig. 2, even after a single epoch the SGLD bound is approximately 0.45, while the generalization gap is around 0.02. For comparison, the $f$-CMI bound is approximately 0.1 after one epoch of training.

Interestingly, Fig. 1c shows that the $f$-CMI bound is large in the early epochs, despite the extremely small generalization gap. In fact, a similar trend, albeit to a lesser extent, is visible in the MNIST 4 vs 9 experiment, where a CNN is trained with a deterministic algorithm (see Fig. 3c of Appendix B). This indicates a possible area of improvement for the $f$-CMI bound.
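For readers who want to reproduce the shape of these estimates without consulting the released code, the following is a minimal sketch of the outer Monte-Carlo loop behind the reported $f$-CMI curves. It assumes hypothetical `train` and `predict` helpers (not the actual interface of the released code), inlines the same histogram estimator sketched at the end of Section 3, and uses an illustrative number of draws of $S$.

```python
import numpy as np

def _plugin_mi(pred_pairs, s_i):
    """Histogram estimate of I(F; S_i); same estimator as sketched at the end of Sec. 3."""
    n_classes = int(pred_pairs.max()) + 1
    f_symbol = pred_pairs[:, 0] * n_classes + pred_pairs[:, 1]

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log(p)))

    return entropy(f_symbol) + entropy(s_i) - entropy(f_symbol * 2 + s_i)

def estimate_fcmi_bound(z_x, z_y, train, predict, n_draws=5, seed=0):
    """Monte-Carlo estimate of the bound (21) for one fixed draw of z-tilde.

    z_x, z_y : arrays of shape (n, 2, ...) holding the inputs / labels of the n pairs.
    train    : hypothetical helper; train(x, y) fits a model deterministically.
    predict  : hypothetical helper; predict(model, x) returns class indices.
    n_draws  : number of independent draws of S (illustrative; more draws reduce noise).
    """
    rng = np.random.default_rng(seed)
    n = z_x.shape[0]
    preds = np.zeros((n_draws, n, 2), dtype=int)  # predictions on both examples of each pair
    masks = np.zeros((n_draws, n), dtype=int)     # the sampled S vectors

    for t in range(n_draws):
        s = rng.integers(0, 2, size=n)            # S ~ Uniform({0,1}^n)
        model = train(z_x[np.arange(n), s], z_y[np.arange(n), s])  # train on z-tilde_S
        for j in range(2):                        # predict on both columns of z-tilde
            preds[t, :, j] = predict(model, z_x[:, j])
        masks[t] = s

    terms = [np.sqrt(2.0 * max(_plugin_mi(preds[:, i, :], masks[:, i]), 0.0)) for i in range(n)]
    return float(np.mean(terms))
```

Averaging the returned value over several independent draws of $\tilde{z}$ approximates the bound of (21); the expected generalization gap itself can be estimated from the same trained models by comparing errors on the selected and non-selected columns of $\tilde{z}$.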
6 Related Work

This work is closely related to a rich literature on information-theoretic generalization bounds, some of which were discussed earlier [39, 4, 25, 22, 6, 33, 14, 13, 1, 23, 27, 8]. Most of these works derive generalization bounds that depend on a mutual information quantity measured between the output of the training algorithm and some quantity related to the training data. Departing from this main idea, Xu and Raginsky [39] and Russo and Zou [29] discussed the idea of bounding the generalization gap with the information between the input and the vector of losses computed on the training examples. This idea was later extended to the setting of conditional mutual information by Steinke and Zakynthinou [33]. These works are similar to ours in the sense that they move away from measuring information with weights, but they did not develop this line of reasoning far enough to arrive at efficient bounds similar to Corollary 2. Additionally, we believe that measuring information with the prediction function allows better interpretation and is easier to work with analytically.

Another related line of research is stability-based bounds [5, 2, 26, 3, 9, 37, 27]. In Sec. 2 and Sec. 3 we improve existing generalization bounds that use information stability. In Sec. 4.3 we describe a technique for applying information stability bounds to deterministic algorithms. The main idea is to add noise to predictions, but only for analysis purposes. A similar idea, in the context of measuring the information content of an individual example, was suggested by Harutyunyan et al. [15]. In fact, our notion of test-stability defined in Sec. 4.3 comes very close to their definition of functional sample information. A similar idea was recently used by Neu et al. [23] in analyzing the generalization performance of SGD. More broadly, this work is related to PAC-Bayes bounds and to classical generalization bounds. Please refer to the survey by Jiang* et al. [17] for more information on these bounds.

Finally, our work has connections with attribute and membership inference attacks [32, 40, 21, 12]. Some of these works show that having white-box access to models allows constructing better membership inference attacks compared to having black-box access. This is analogous to our observation that prediction-based bounds are better than weight-based bounds. Shokri et al. [32] and Yeom et al. [40] demonstrate that even in the case of black-box access to a well-generalizing model, it is sometimes still possible to construct successful membership attacks. This is in line with our observation that the $f$-CMI bound can be significantly large despite a small generalization gap (see epoch 4 of Fig. 1c). This suggests a possible direction for improving the $f$-CMI-based bounds.

Acknowledgments and Disclosure of Funding

This work is based on research sponsored by the Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation therein. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory, DARPA, or the U.S. Government. HH was partially supported by a USC Annenberg Fellowship.
1. What are the main contributions and strengths of the paper regarding information-theoretic generalization bounds?
2. What are the weaknesses and limitations of the paper, particularly regarding its novelty and comparison with existing literature?
3. How does the paper's approach differ from or improve upon previous works in the field, such as those by Xu and Raginsky, Negrea et al., and Bu et al.?
4. How do the experimental results support or not support the effectiveness of the proposed generalization bounds?
5. What are some missing references that could contribute to the two directions mentioned in the review, and how would including them impact the paper's claims?
6. Could you explain the phenomenon where the generalization bound in this paper decreases with the number of epochs while the true generalization gap captured by the bound in Negrea et al. [19] increases?
7. How does the second term inside the square root in Equation (29) depend on n, and what does it suggest about the generalization gap's behavior with respect to sample size?
8. How do the high-probability bounds provided in the paper compare with those in other references such as [34], and how do they relate to the "monitor technique"?
9. Could you elaborate on why manipulating the bounds analytically might be easier sometimes, despite their potential inferiority compared to the corresponding bounds of Thm. 2.2, as stated in Line 112-113?
Summary Of The Paper

This paper proposes new information-theoretic generalization bounds. These bounds extend some existing results and seem easier to compute. The authors also demonstrate their bounds through some applications and experiments.

Review

Strengths:
- Using the information-theoretic framework, introduced by Xu and Raginsky, for studying the generalization error is interesting.
- The new generalization bounds in Section 3 seem easier to compute compared with existing mutual-information-based generalization bounds.
- The experimental results seem to suggest that the new bounds in this paper are promising.

Weaknesses: My main concern with this paper is its novelty and its unclear comparison with the existing literature.
- The comparison with existing literature is unclear. For example, in the abstract, the authors claim that their bounds improve over the existing information-theoretic bounds since (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. In the second paragraph of the paper, the authors echo this claim and provide some references. However, it seems that many existing papers have already contributed to the above two challenges. In fact, the generalization bounds in [4,28] are applicable to deterministic algorithms, and the bounds in [11,19,20] seem easy to compute as well. There are some missing references that also contribute to these two directions. Hence, I recommend the authors revise their introduction and clarify their contributions.
- The authors provide a generalization bound in Theorem 2.2 and state that this bound matches the result of Bu et al. [4] when m = 1. The authors then prove that this bound is non-decreasing in m. Hence, if I understand correctly, Bu et al. have already derived the tightest case. Then what is the purpose of introducing other, weaker bounds? In the same vein, bounds with similar forms have already appeared before. For example, Raginsky presented a similar result in a tutorial at NASIT 2019 (see the recorded video at https://www.itsoc.org/conferences/schools/nasit2019, 1:25:05). Negrea et al. introduced a similar bound. Although their bound is slightly weaker than Thm. 2.2 in this paper, it does not seem that the improvement is significant. Similar comments also apply to some bounds in Section 2.2.
- Section 3. The authors consider a different framework where they assume that the learning method implements a function f : Z^n \times X \times R \to K. This framework has been considered in the following missing reference for studying excess risk. Hence, it seems that the authors just applied the existing bounds introduced in [28] to this new framework.
  Xu, A. and Raginsky, M., 2020. Minimum excess risk in Bayesian learning. arXiv preprint arXiv:2012.14868.
- My second concern is the experiments in this paper. The authors compare their bound with the one in Negrea et al. [19] in Fig. 1(b). For the first 12 epochs, the bound in Negrea et al. [19] outperforms the bound in this paper. Given that the SGLD algorithm converges in 16 epochs, it seems that the new bound introduced in this paper is often weaker than the bound in Negrea et al. [19] before the algorithm converges. The bound in Negrea et al. [19] blows up after the algorithm converges, and that is the region where the bound in this paper is sharper. However, a follow-up work [11] has already proposed a new generalization bound which seems to address this issue and becomes much tighter.
  This bound is extended to the setting of SGLD in the following missing reference. Hence, I wonder how the generalization bound in this paper compares with these new bounds.
  Rodríguez-Gálvez, B., Bassi, G., Thobaben, R. and Skoglund, M., 2021. On random subset generalization error bounds and the stochastic gradient Langevin dynamics algorithm. In 2020 IEEE Information Theory Workshop (ITW), pp. 1-5. IEEE.
- The generalization bound in this paper is decreasing w.r.t. the number of epochs in Fig. 1. However, it seems that the true generalization gap is increasing, which is captured by the bound in Negrea et al [19]. How can you interpret this phenomenon?

Other comments
- The generalization bound in Eq. (29): it is unclear to me why the second term has an n inside the square root. How can this dependence be explained? Does it suggest that the generalization gap increases with the sample size? Also, how does this bound compare with other existing stability bounds?
- The authors provided some high-probability bounds (e.g., Eq. 3 and Line 88) in the paper. I wonder how they compare with the one in [34]. It seems that using the "monitor technique" can lead to a tighter bound compared with the bound obtained from Markov's inequality. Also, how do these high-probability bounds compare with the ones in the following reference?
  Esposito, A.R., Gastpar, M. and Issa, I., 2021. Generalization error bounds via Rényi-, f-Divergences and Maximal Leakage. IEEE Transactions on Information Theory.
- Line 112-113. The authors wrote "While these bounds are worse than the corresponding bounds of Thm. 2.2, it is sometimes easier to manipulate them analytically." This is unclear to me. Can the authors elaborate more?