Dataset schema:
- paper_id: string (length 19 to 21)
- paper_title: string (length 8 to 170)
- paper_abstract: string (length 8 to 5.01k)
- paper_acceptance: string (18 classes)
- meta_review: string (length 29 to 10k)
- label: string (3 classes)
- review_ids: sequence
- review_writers: sequence
- review_contents: sequence
- review_ratings: sequence
- review_confidences: sequence
- review_reply_tos: sequence
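The six review_* columns are parallel sequences indexed together: position i in review_ids, review_writers, review_contents, review_ratings, review_confidences, and review_reply_tos describes the same comment; ratings and confidences of -1 mark unscored discussion posts (author responses and follow-up comments), and review_reply_tos points either at another review id (a nested reply) or at the paper_id (a top-level post). Below is a minimal sketch of working with this structure, assuming each row is available as a plain Python dict; the helper name `build_thread` is hypothetical, not part of any released loader.

```python
def build_thread(row):
    """Zip the parallel review_* sequences into per-comment dicts, then
    nest each comment under its parent using review_reply_tos."""
    comments = {}
    for cid, writer, content, rating, conf in zip(
        row["review_ids"], row["review_writers"], row["review_contents"],
        row["review_ratings"], row["review_confidences"],
    ):
        comments[cid] = {
            "id": cid,
            "writer": writer,
            "content": content,
            "rating": rating,        # -1 for unscored discussion posts
            "confidence": conf,      # -1 likewise
            "replies": [],
        }
    roots = []
    for cid, parent in zip(row["review_ids"], row["review_reply_tos"]):
        if parent in comments:       # reply to another comment in the thread
            comments[parent]["replies"].append(comments[cid])
        else:                        # reply directly to the submission
            roots.append(comments[cid])
    return roots
```

For the first record below, for instance, five comments attach directly to the submission, while the first two entries are nested replies to other comments.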
nips_2022_Tocn9vYMU-o
Approaching Quartic Convergence Rates for Quasi-Stochastic Approximation with Application to Gradient-Free Optimization
Stochastic approximation is a foundation for many algorithms found in machine learning and optimization. It is in general slow to converge: the mean square error vanishes as $O(n^{-1})$. A deterministic counterpart known as quasi-stochastic approximation is a viable alternative in many applications, including gradient-free optimization and reinforcement learning. It was assumed in prior research that the optimal achievable convergence rate is $O(n^{-2})$. It is shown in this paper that through design it is possible to obtain far faster convergence, of order $O(n^{-4+\delta})$, with $\delta>0$ arbitrary. Two techniques are introduced for the first time to achieve this rate of convergence. The theory is also specialized within the context of gradient-free optimization, and tested on standard benchmarks. The main results are based on a combination of a novel application of results from number theory and techniques adapted from stochastic approximation theory.
Accept
All reviewers are happy with the new ideas and strength of results in this paper.
train
[ "AVmvurJWfpI", "FfuKFiWBYEP", "Iy8DMA08YbH4", "-SCDLbEIQJK", "lw3TYGTQ7A", "b3xB4J-tZDo", "C0nROcIqnU" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " You may be reassured to learn that we discovered an error in our statement on dependencies: the exponential bound in Baker’s Theorem concerns $K$ (the number of frequencies) and NOT $d$. Origin of error: we took $K=d$ to simplify notation. We apologize for the confusion this caused!\n\nTo avoid assuming $K=d$ we will make a change in notation: First, denote by $\\xi_t^0$ the vector of $K$ sinusoids of differing frequencies, and then $\\xi_t = G_0(\\xi_t^0 )$ with $G_0$ analytic. Example: $K=1$ and $G_0$ polynomial to obtain\n$\\xi_t = [\\cos(\\omega t), \\cos(2\\omega t), \\dots, \\cos(d\\omega t) ]$.\n\nTo the best of our knowledge, and experts we have consulted, the QMC community does not consider bounds for QSA, only Monte-Carlo. We will make clear in the revision that dimension dependent bounds for QSA is an entirely open field for research. \n\nThe paper is a challenge to review because of the unusual mix of mathematical techniques. The disturbance decomposition begins with an old trick from the SA literature, but the multiple applications of Poisson's equation is entirely novel (and essential). The geometry surrounding Poisson's equation is entirely new. And, as remarked in the rebuttal letter, we have not seen application of Baker's theorem in prior research in any ML related domain. To help the reader we plan to move essential elements of the theory to the body, including:\n1. The representation of the QSA ODE (see rebuttal).\n2. The geometry surrounding Poisson’s equation from the supplementary material (currently hidden in Corollary B.5).\n3. Clarity regarding the role of Baker's Theorem to obtain bounds (as explained above Theorem B.6, this is used to obtain a lower bound on $| \\omega_{\\alpha,b} |$ that is essential to bound solutions to Poisson's equation).\n\nWe can make space by moving discussion on Monte-Carlo to the supplementary material. ", " I appreciate the response and agreed that traditional QMC may not yield the accelerated rate obtained in this paper. It would still be very helpful to emphasize and elaborate on the hidden dependence on $d$ in the rate. I am leaning towards increasing my rating, however, I would like to acknowledge again that I did not understand a majority of the proofs. ", " We appreciate the feedback from all of the reviewers and will take all of their input into account in a revision. \nSince the submission we realized that more theory should be on display in the body of the paper. In particular, the following exact representation of the QSA ODE\n$$\n\t\t\\frac{d}{dt} \\Theta_t = a_t \\Big[ \\bar f (\\Theta_t) - a_t \\bar\\Upsilon_t + a_t^2 W_t^0 + a_t \\frac{d}{dt} W^1 _t + \\frac{d^2}{dt^2} W^2_t \\Big]\n\t\t\\qquad {(\\star)}\n$$\nin which {$\\bar\\Upsilon_t , W^i _t : i=0,1,2 $} are smooth functions of $(t, \\Theta_t)$. The proof of Theorem 2.2 follows, as does Prop. 2.3 (derivatives are attenuated upon averaging). This representation provides other insights regarding transient behavior, and will help us respond to comments from the reviewers. \n \nWe were pleased to read the reviewers' appreciation of both exposition and content (in particular, the near quartic convergence rate). The analysis is also novel: Baker's theorem has been with us for 50 years without application to learning; we can find no other approach to obtain the sharp bounds for rates of convergence. The multiple applications of Poisson's equation to obtain ($\\star$) is entirely novel.\n\n$\\textbf{Responses to rev. 
SkSg}$: \n\n$\\textit{Impact of dimension.}$ Agreed: the QMC literature obtains a rate of convergence of order $O(\\log(n)^d n^{-1})$. We dramatically beat this in terms of $n$-dependency [rate of order $O(n^{-2+\\delta})$], but we do not have theory to predict the impact of dimension. In a revision we will emphasize this gap in current theory. However, comparison with traditional QMC is not straightforward. To the best of our knowledge, the QMC community has not addressed nonlinear dynamical systems similar to QSA. Moreover, if we move to exploration based on traditional QMC then we lose the crucial geometry used to obtain our main results.\n\n$\\textit{Application to SGD.}$ There are many dimensions to this question. For SGD with truly noisy observations, we must face the Cramer-Rao lower bound (of Polyak and Ruppert), which means that the MSE converges at rate $n^{-1}$ rather than $n^{-4+\\delta}$. In other formulations of SGD such as block gradient descent, the covariance of the noise is state dependent and vanishes at the stationary point. In such cases we may see significant benefit from representations similar to ($\\star$). See replies to rev. DYiM regarding stochastic approximation for further thoughts.\n\n$\\textbf{Responses to rev. DYiM:}$ \n\n$\\textit{Knowledge of the noise.}$ Absolutely---we agree entirely that application of QSA is not universal. We argue that there is a perfect fit with gradient-free optimization, and more limited instances of reinforcement learning (when \"noise\" is largely designed by the user for exploration). The efficiency and reliability in applications to optimization are remarkable in all of our experiments so far, especially when compared with standard methods such as SPSA (cf. figure 4).\n\n$\\textit{Implications for stochastic recursions.}$ We also had thoughts on this point, and since the submission we have constructed a representation similar to ($\\star$) for stochastic recursions in discrete time. We find that the nuisance term $\\bar\\Upsilon$ appears when there is multiplicative noise (such as in TD-learning). It has strongly negative implications, especially for fixed-gain algorithms. This is beyond the scope of this submission, but we will add remarks.\nBeyond this, we wonder if there is any hope in finding stochastic probing signals that exhibit similar geometry to what is found in QSA. The orthogonality of functions of the probing signal and solutions to Poisson's equation was crucial to eliminate $\\bar\\Upsilon$. We are currently considering classes of reversible Markov chains, but have not yet had success. \n\nPlease see our response to rev. SkSg for more on this topic. In particular, due to the central limit theorem we can only expect an $n^{-1}$ rate of convergence for the MSE, rather than the $n^{-4+\\delta}$ obtained here. \n\n$\\textit{Broader classes of probing signals.}$ We agree that we must go beyond mixtures of sinusoids in applications to gradient-free optimization. The general theory in this paper allows for any smooth function of sinusoids with irrationally related frequencies, so we can approximate triangle or sawtooth waves, or more chaotic signals (though we must respect physics in online optimization). We do not yet have theory to predict an optimal choice for probing. \n\n$\\textbf{Responses to rev. 
dkZC:}$ We will update the references to include those recommended, and also past ML articles on zeroth order optimization (mainly to help put this work in context with articles familiar to the NeurIPS audience).\n\n$\\textbf{Responses to rev. JEVp:}$ We will create a section dedicated to literature review, notation and organization. In particular, the notation on line 18 does refers to convergence in distribution, so a statement will be added to avoid confusion.", " This paper proposes to solve the root finding problem of the form $\\bar f(\\theta^* ) = 0$ where $\\bar f$ is the expectation of some smooth but random function. Instead of the usual ODE method that converges at at rate of $O(1/t)$ in the MSE, the authors propose to use QSA ODE instead to obtain a convergence rate of $O(1/t^2)$, and *close to* $O(1/t^4)$ if combined with Polyak-Rupert (PR) Averaging. The QSA ODE method is similar to the ODE method but with sinusoidal perturbations rather than iid uniform perturbations. The theoretical results provide insights to new approaches in Quasi Monte Carlo sampling. Finally, simulation studies using Euler approximation corroborate with the theoretical results. I find this submission to be very interesting, unfortunately, I would like to first acknowledge that after carefully reading the paper, I realized my expertise in certain technical aspects is lower than I had anticipated, especially when it comes to complex analysis, the invocation of Baker's Theorem, and assumptions on the probing signal. I only looked through the proofs at a high level and was unable to understand every detail. My evaluation is thus also at a very high level based on my knowledge in other aspects such as discrete time stochastic optimization and QMC. I look forward to learning more from the authors and the other reviewers. \n\nI particularly enjoy the new QMC results developed that give rise to Theorem 2.1 (continuous time) and B.9 (discrete time). One concern I have is that the authors claim (line 87-89) the latter result gives a $O(1/n)$ instead of the usual $O(log(n)^d/n)$ rate for QMC in the root MSE. To me, removing a dependence on the dimensionality $d$ is a huge improvement, as QMC is long known to work better than MC only in the low dimensional setting. However, as the authors pointed out, the $O(1/n)$ actually contains a multiplicative constant $K_f$ for which known bounds grow at least exponentially in $d$. It is thus unclear to me whether this is in fact a better rate than classical QMC rates. If yes, it would be great if the authors could elaborate more on precisely what settings this would be the case. \n\nAlthough PR Averaging has long been studied in stochastic optimization, I have not seen it being used in conjunction with Quasi-Monte Carlo perturbations. The near-quartic convergence obtained from this is therefore impressive, and it would be interesting to see if similar techniques (such as using sinusoidal noise) can be incorporated to, say, SGD, to obtain even faster rates. \n\nOverall, I think the submission is well polished and organized. The experiments setups are justified and plots are nicely presented, with details clearly stated in the appendix. Please see section above. I find the discussions on limitations to be fairly adequate. This is particularly reflected in Section 2.1, where the authors discuss bounds on the constant $K_f$.\n", " This manuscript provides a fast convergence rate analysis for nonlinear quasi-stochastic approximation by the Polyak-Ruppert averaging technique. 
The main ingredient for achieving acceleration is to carefully design the sine-type probing signals to break the $O(n^{-2})$ bound. The application of this idea to gradient-free optimization is also discussed. Some experiments are conducted on simple test functions and they seem to verify the theory. Strengths:\n(i) The paper is well-written. The main message of the paper is very clear.\n(ii) The theoretical result looks sound and strong. The overview of the proof makes sense to me (although I did not read all the proofs in the appendix in great detail).\n\nWeaknesses:\n(i) The specific form of the probing signal is very important to achieve acceleration. However, the noise design is not realistic in machine learning applications. For example, in standard stochastic approximation or SGD for training machine learning models, we do not really have knowledge of the noise. Some discussions/clarifications would be better.\n\n(ii) The whole paper considers an ODE formulation; it is unclear whether there is a corresponding stochastic algorithm that has a fast convergence rate as well. 1. Is it possible to relax the requirement of the probing signal (e.g., to a broader class of signals) and still achieve acceleration?\n\n2. How to translate the Quasi-Stochastic Approximation ODE to a stochastic algorithm?\n\n N/A", " This paper deals with quasi-stochastic approximation - i.e. when the random noise of standard stochastic approximation schemes is replaced by exponential signals - and provides novel and faster convergence rates. The authors introduced acceleration techniques to derive such theoretical results and highlight them in the context of gradient-free optimization. Numerical experiments support the theoretical findings.\n\n\n First, I would like to thank the authors for their work: It is a well-written and clear paper with relevant motivations (clear introduction), well-explained related work and a fine overall structure. Although I am not an expert in this area of research, I found the message of the article to be clear and the theoretical results to be well presented and discussed.\n\n- Originality: The idea of forward-backward filtering is original and there are solid contributions. This paper presents a new framework for quasi-stochastic approximation which extends several convergence results known so far in restricted settings. The related work is adequately cited and clearly describes how the current paper differs from it.\n\n- Quality: Technically strong, highly general results, advanced techniques.\n\n- Clarity: The paper is very well written with a fine overall structure making it easy for the reader to grasp all the main ideas. The authors may consider adding and discussing the following works which are strongly related to the current contribution:\n\n * \\[A] "Optimal rate of convergence for quasi-stochastic approximation". (Bernstein, Andrey and Chen, Yue and Colombino, Marcello and Dall'Anese, Emiliano and Mehta, Prashant and Meyn, Sean)\n * \\[B] "Accelerating optimization and reinforcement learning with quasi stochastic approximation". 
(Chen, Shuhang and Devraj, Adithya and Bernstein, Andrey and Meyn, Sean)\n\n- Significance: The results are quite important as they extend and relax several assumptions made so far in the quasi-stochastic approximation literature.\n\n**References**\n\n- \\[A]: @article{bernstein2019optimal,\n title={Optimal rate of convergence for quasi-stochastic approximation},\n author={Bernstein, Andrey and Chen, Yue and Colombino, Marcello and Dall'Anese, Emiliano and Mehta, Prashant and Meyn, Sean},\n journal={arXiv preprint arXiv:1903.07228},\n year={2019}}\n\n- \\[B]: @inproceedings{chen2021accelerating,\n title={Accelerating optimization and reinforcement learning with quasi stochastic approximation},\n author={Chen, Shuhang and Devraj, Adithya and Bernstein, Andrey and Meyn, Sean},\n booktitle={2021 American Control Conference (ACC)},\n pages={1965--1972},\n year={2021},\n organization={IEEE}} This is a theoretical work with no foreseeable negative societal impacts.", " In this paper, the authors focus on the theoretical analysis and the application of quasi-stochastic approximation to gradient-free optimization. The authors utilized some acceleration techniques to achieve a faster rate of convergence of order $\\mathcal{O}(n^{\\delta - 4})$ where $\\delta > 0$ is any arbitrary positive number. Then, the authors specialize these theoretical results to the setting of the gradient-free optimization problem. The authors also conduct several numerical experiments and these empirical results are consistent with the theorems. The authors applied results from stochastic approximation theory and the most recent discoveries from number theory to prove the theorems. All proofs are given in the supplementary material. This paper has the following strengths in terms of originality, quality, clarity and significance:\n\nOriginality and Significance: this paper provides the first analysis and proof of near-quartic convergence rates of quasi-stochastic approximation with application to gradient-free optimization. This is the main and independent contribution of this paper.\n\nQuality: the authors provide detailed and correct theoretical analysis in the proof section and reasonable explanations in the numerical experiments section. These theoretical and empirical results are consistent with each other.\n\nClarity: the paper is organized with a clear and clean structure and the language is well-organized so that the readers can easily understand the content of this submission.\n\nFrom my perspective, there is no major weakness for this submission. One minor weakness is that the authors should put the literature review, notation and organization parts of the Introduction into another section so that the structure of the submission could be clearer. One question:\n\nIn line 18, is that convergence notation the notation of convergence in distribution? If it is, the authors should explicitly explain this notation and give an introduction to the concept of convergence in distribution, so that the readers could easily understand this. The authors have presented the limitations of this paper in the final conclusion section." ]
[ -1, -1, -1, 7, 7, 7, 5 ]
[ -1, -1, -1, 2, 4, 3, 4 ]
[ "FfuKFiWBYEP", "Iy8DMA08YbH4", "nips_2022_Tocn9vYMU-o", "nips_2022_Tocn9vYMU-o", "nips_2022_Tocn9vYMU-o", "nips_2022_Tocn9vYMU-o", "nips_2022_Tocn9vYMU-o" ]
nips_2022_KnCS9390Va
Delving into Out-of-Distribution Detection with Vision-Language Representations
Recognizing out-of-distribution (OOD) samples is critical for machine learning systems deployed in the open world. The vast majority of OOD detection methods are driven by a single modality (e.g., either vision or language), leaving the rich information in multi-modal representations untapped. Inspired by the recent success of vision-language pre-training, this paper enriches the landscape of OOD detection from a single-modal to a multi-modal regime. Particularly, we propose Maximum Concept Matching (MCM), a simple yet effective zero-shot OOD detection method based on aligning visual features with textual concepts. We contribute in-depth analysis and theoretical insights to understand the effectiveness of MCM. Extensive experiments demonstrate that MCM achieves superior performance on a wide variety of real-world tasks. MCM with vision-language features outperforms a common baseline with pure visual features on a hard OOD task with semantically similar classes by 56.60% (FPR95). Code is available at https://github.com/deeplearning-wisc/MCM.
Accept
This paper presents an interesting and novel attempt at using vision-language multi-modal models for OOD tasks. The approach is sufficiently validated with experiments on diverse OOD datasets. Besides, the theoretical explanations of softmax scaling are quite insightful. All reviewers give positive scores. During the discussion phase, the authors have accordingly added more ablation studies and comparisons. Although some reviewers raised concerns about the possibly limited novelty of using softmax scaling for the CLIP-like model, the authors give sufficient discussion and provide theoretical justifications, which will inspire the community. The meta-reviewers thus recommend accepting it.
val
[ "ctwF8MuUcJ0", "stRrQs0_uKR", "NyVmZhF_e3W", "Mhc4BckSlxA", "iBJNBtirTrH", "8p6jBZZNpQ4", "o1V_lBZpmzW", "0bt8oU7Sw5i", "6QyAptC8YIa", "MNAJ_LD2K7D" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " >**Q1: Comparisons with recent works**.\n\nThanks for the suggestion! For the Energy score, please refer to Appendix F.1 for a detailed discussion where we investigate the effectiveness of Energy score based on CLIP. For GradNorm, as suggested, we provide the results as follows. For reference, we also paste the results reported in the original paper (Table 1) [1] based on ResNetv2-101 trained on ImageNet (numbers are FPR95/AUROC).\n\n| Model | iNaturalist | SUN | Places | Texture | Average |\n| ----------------------- | ----------- | ----------- | ----------- | ----------- | ----------- |\n| GradNorm (ResNetv2-101) | 50.03/90.33 | 46.48/89.03 | 60.86/84.82 | 61.42/81.07 | 54.70/86.31 |\n| GradNorm (CLIP-B) | 68.35/79.53 | 40.74/91.11 | 49.64/87.31 | 48.37/87.51 | 51.77/86.37 |\n| MSP (CLIP-B) | 40.89/88.63 | 65.81/81.24 | 67.90/80.14 | 64.96/78.16 | 59.89/82.04 |\n| MCM (CLIP-B) | 32.08/94.41 | 39.21/92.28 | 44.88/89.83 | 58.05/85.96 | 43.55/90.62 |\n\nFrom the above results, we have several observations: **(1)** given the same feature backbone (CLIP-B), when linear probed on ImageNet-1k, GradNorm indeed improves the average performance compared to the classic MSP score (59.89\\% vs. 51.77\\% in FPR95); **(2)** GradNorm (CLIP-B) achieves comparable and even better performance compared to GradNorm (ResNetv2-101 trained from scratch on ImageNet) due to better feature representations as a result of large-scale pre-training. For example, the average FPR95 is improved from 54.70\\% to 51.77\\%; **(3)** Finally, MCM (CLIP-B) still outperform GradNorm by a large margin (43.55\\% vs. 54.70\\% in FPR95) across most OOD test sets, which is encouraging as MCM is zero-shot and training free. \n\n[1] Huang et al., On the Importance of Gradients for Detecting Distributional Shifts in the Wild, NIPS 2021\n\n>**Q2: Do we have to design new prompts for new datasets**\n\nWe agree that prompt design can be an important factor. While we observe prompt ensembling can further improve the performance, it is not a hard requirement.\n\nOne interesting finding in our experiments is that, thanks to the powerful pre-trained model, instead of designing different prompts for different datasets, the default simple prompt \"*This is a photo of __*\" suffices to achieve promising OOD detection results across different architectures and OOD benchmarks.\n\nAnother reason we use the fixed prompt template is for the consideration of fairness. Also, instead of designing prompts for each new dataset, recent advances such as prompt learning [2] might further improve the performance, which we leave as feature work.\n\n[2] Zhou et al., Conditional Prompt Learning for Vision-Language Models, CVPR 2022\n\n>**Q3: In Table 2, why do Fort et al. (ViT-B) and MSP (CLIP-B) have the same performance on all metrics**\n\nThanks very much for catching the typo! We have updated the table in the manuscript with verified results for Fort et al. (ViT-B). Code will also be released to facilitate reproducibility.\n\nWe also post the updated rows here for reference. Numbers are FPR95/AUROC.\n\n| Model | iNaturalist | SUN | Places | Texture | Average |\n| ------------------- | ----------- | ----------- | ----------- | ----------- | ----------- |\n| Fort et al. 
(ViT-B) | 15.07/96.64 | 54.12/86.37 | 57.99/85.24 | 53.32/84.77 | 45.12/88.25 |\n| MSP (CLIP-B) | 40.89/88.63 | 65.81/81.24 | 67.90/80.14 | 64.96/78.16 | 59.89/82.04 |", " We sincerely thank the reviewer for the positive feedback and insightful comments!\n>**Q1: Comparing MCM with Mahalanobis score**\n\nConceptually, the Mahalanobis score can be viewed as concept matching based on visual class prototypes. However, there are several key differences:\n\n**(1)** Distributional assumption: Mahalanobis assumes that features follow a class-wise multivariate Gaussian distribution, which can be very hard to satisfy for neural network-based features; while MCM based on cross-modal matching does not place any distributional assumption and thus is more generalizable.\n\n**(2)** Sample efficiency: estimating the covariance matrix for the Mahalanobis score can be challenging for high-dimensional feature spaces. For example, for error $\\epsilon \\in (0,1)$, under mild conditions, we require a sample size of $O(\\epsilon^{-2}d\\log d)$ to obtain a good estimate (Remark 5.6.2 in [1]), where $d$ is the dimension of the features. This can be challenging in practice. In contrast, MCM directly alleviates the need for estimating the covariance matrix.\n\n**(3)** Visual prototypes vs. textual prototypes: compared to textual labels such as “cat”, cat images usually contain background and potentially spurious attributes (Appendix C), and as a consequence, when the average of visual features is used as the prototype for the class “cat” (as in the Mahalanobis score), it may inherit spurious information. The performance of the Mahalanobis score and MCM for spurious OOD samples is reported in Section 4.2 (Table 3).\n\n[1] Vershynin, Roman. High-dimensional probability: An introduction with applications in data science. Cambridge university press, 2018.\n\n>**Q2: Comparing MCM with prior works on multi-modal OOD detection**\n\nTo the best of our knowledge, two recent works [2,3] also utilize multi-modal information for OOD detection. \n\nIn **L253 - 275** (page 7), we have compared MCM with two recent works utilizing multi-modal information for OOD detection, with extended implementation details in **Section E (appendix)**. There exist fundamental limitations for both works. For example, Fort et al. [2] assume prior knowledge of test OOD samples, while ZO-CLIP [3] cannot guarantee the generated labels are non-overlapping with ID labels. In contrast, MCM is simpler, more efficient (no need to train a decoder), and shows promising results compared to ZO-CLIP [3] across a wide range of benchmarks. On the other hand, compared to Fort et al., MCM does not rely on any prior information and therefore can be generally applied.\n\n> **Q3: Ablation study on softmax scaling**\n\nWe really appreciate the suggestion! Please refer to our response to Q3 of Reviewer rjjS. The ablation study is also added in Appendix H. \n\n> **Q4: Can MCM be applicable to non-contrastive vision-language pre-training models** \n\nNice question! Our MCM score is built on top of the aligned cross-modal features, and it is designed to scale aligned cross-modal features to magnify the difference between ID and OOD. Therefore, the MCM score should also apply to models pre-trained with multi-modal alignment prediction objectives, including recent works such as RegionCLIP [4], GroupViT [5], etc. 
We will extend the experiments (with more recent pre-trained models) in the revised version.\n\n\n\n[2] Fort et al., Exploring the Limits of Out-of-Distribution Detection, NIPS 2021\n\n[3] Esmaeilpour et al., Zero-Shot Out-of-Distribution Detection Based on the Pre-trained Model CLIP, AAAI 2022\n\n[4] Zhong et al., RegionCLIP: Region-based Language-Image Pretraining, CVPR 2022.\n\n[5] Xu et al., GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022\n", " > **Q4: Performance with fine-tuning CLIP models**\n\nIn Table 2, we compared MCM with a range of recent methods that require fine-tuning, such as Fort et al. (based on ViT), MOS (based on BiT), and MSP (which fine-tunes the ViT model in CLIP, the same backbone as ours). Compared to the baselines, we show that MCM remains very competitive without fine-tuning on ImageNet-1K. \n\nDuring our exploration, we did consider fine-tuning the entire backbone. However, we find that **(1)** simply fine-tuning both text and image encoders with the CLIP loss does not lead to consistent improvement for OOD detection, as fine-tuning the large feature backbone without special optimization strategies can distort aligned cross-modal features learned during pre-training; **(2)** only fine-tuning the image encoder also does not yield consistent improvements compared to linear-probing. \n\nOur findings also echo a conclusion in a recent paper [1] on OOD generalization that shows fine-tuning the feature backbone leads to worse accuracy than linear-probing when the pre-trained features are good, and the distribution shift is large.\n\n>**Q5: Comparison with Mahalanobis score. Why are text features as the prototypes supposed to be better than visual features as the prototypes?**\n\nThanks for the comment! Please see the first answer below in response to Reviewer ZoJQ.\n\n[1] Kumar et al., Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution, ICLR 2022", " > **Q1: The role of softmax scaling**\n\nGreat question! We provided an expansive explanation in **L113-L152, Page 4**. Despite its simple form, our insight behind using softmax scaling for CLIP-like models is new, and *contrary* to the findings based on the cross-entropy loss. In fact, since MSP was introduced as a baseline, it took the community a few years of research efforts to realize that logit-based scores without softmax scaling are more effective for models trained with cross-entropy loss. In light of this, we are very deliberate and careful in bringing softmax scaling to the picture for CLIP-like models.\n\nIn particular, applying softmax to cross-modal cosine similarity is *fundamentally different from the softmax score derived from a model trained with cross-entropy loss*, which inherently maximizes the posterior $p(y|\\mathbf{x})$ for the ground-truth label, and minimizes the probability for other labels. Unlike in CLIP-like models, logit scores displaying uniformity would be heavily penalized by the CE loss. \n\nAs a result, the logit score corresponding to the ground-truth label can already be significantly higher than those of other labels. Applying softmax on the logit scores can exacerbate overconfident predictions, and reduce the separability between ID and OOD data. On the contrary, for CLIP-like models, applying softmax helps sharpen the uniform-like inner product scores, and increases the separability between ID and OOD data. 
We provide theoretical justifications in **Thm 3.1** that softmax scaling indeed improves the separability between ID and OOD data for CLIP-based OOD detection, with a full proof in the appendix.\n\nWith that being said, we agree with the reviewer that the simplicity and effectiveness of our method make it a proper baseline for future works on multi-modal OOD detection. Moreover, we hope our theoretical insights are valuable to the community.\n\n> **Q2: Merits of MCM**\n\nFor OOD detection, most prior works focus on a single modality & task-specific models, leaving rich opportunities for multi-modal features untapped. Our paper aims to provide a timely investigation and highlight the compelling advantages of OOD detection with aligned multi-modal features. We hope MCM can serve as a springboard and a simple baseline for future works on OOD detection in this direction. *The highlighted merits are mostly positioned w.r.t. existing OOD detection literature*, though we agree that the merits are inseparable from the CLIP model. We will clarify this in the revised version.\n\n> **Q3: OOD detection performance without softmax scaling**\n\nThanks very much for the suggestions! We have added an ablation study in Appendix H. The results *without* softmax scaling (i.e. using the maximum cosine similarity) are shown below based on CLIP-B/16 (numbers are FPR95/AUROC).\n\n| ID/OOD | iNaturalist | SUN | Places | Texture | Average |\n| ------------- | ----------- | ----------- | ----------- | ----------- | ----------- |\n| ImageNet-10 | 41.15/95.74 | 33.31/96.45 | 30.6/96.6 | 34.57/96.44 | 34.91/96.31 |\n| ImageNet-20 | 14.88/97.91 | 14.74/97.86 | 13.49/97.84 | 13.42/97.39 | 14.13/97.75 |\n| ImageNet-1k | 56.59/90.27 | 64.28/87.60 | 57.23/87.76 | 84.2/73.5 | 65.57/84.78 |\n| Stanford-Cars | 0.00/100.00 | 0.03/99.99 | 0.29/99.94 | 0.00/100.0 | 0.08/99.98 |\n| Food-101 | 0.65/99.83 | 0.21/99.92 | 0.52/99.86 | 9.31/97.15 | 2.67/99.19 |\n| Oxford-Pet | 0.00/99.99 | 0.04/99.98 | 0.19/99.95 | 0.27/99.92 | 0.12/99.96 |\n\nFor easy tasks such as Food-101, Stanford-Cars, and Oxford-Pet as ID, the performance of the maximum cosine similarity score is similar to MCM (Tables 1 and 2). However, for more challenging tasks such as ImageNet-20 and ImageNet-1k, MCM significantly outperforms the score without softmax scaling. For example, the average FPR95 is improved by 11.33\\% on ImageNet-20 and 22.02\\% on ImageNet-1k, which highlights the necessity of a proper scaling function for CLIP-based OOD detection.\n\n", " > **Q1: Alternative scoring functions** \n\nWe thank the reviewer for the positive feedback and insightful comments! In this work, we use the softmax function with temperature to improve the separability between ID and OOD due to its simplicity and theoretical guarantees (**Thm 3.1**) for CLIP-like models. 
As suggested, we report the results for the large-scale benchmark ImageNet-1K below (numbers are FPR95/AUROC): \n\n| Score | iNaturalist | SUN | Places | Texture | Average |\n| --------------------------------------- | ----------- | ----------- | ----------- | ----------- | ----------- |\n| MCM | 32.08/94.41 | 39.21/92.28 | 44.88/89.83 | 58.05/85.96 | 43.55/90.62 |\n| Entropy after softmax (best $\\tau$) | 44.63/83.76 | 50.52/81.53 | 52.75/79.00 | 64.15/75.94 | 53.01/80.06 |\n| Variance of cosine similarity | 85.45/65.41 | 70.00/80.32 | 75.29/76.14 | 79.79/71.60 | 77.63/73.37 |\n| Exp(max - second max cosine similarity) | 88.90/71.48 | 90.25/67.86 | 89.39/69.61 | 90.25/67.86 | 89.47/69.87 |\n\nFor the Entropy score, we select $\\tau \\in$ \\{0.005, 0.01, 0.1, 1, 10\\} and report the performance with the best $\\tau$. We observe that on the large-scale benchmark ImageNet-1k (ID), MCM still gives **the most promising** results compared to the other three alternative scores. We have included the ablation study comparing MCM with all the scores suggested in Appendix F.2. \n\n> **Q2: ResNet-CLIP vs. ViT-CLIP**\n\nGreat suggestion! We would like to discuss the cause of the strong OOD detection performance for CLIP-like models with the MCM score from the following two aspects:\n\n**(1)** It is true that CLIP-like pre-training models boost the performance of OOD detection. Large-scale pre-training is beneficial for transferable representation learning [1]. Therefore, the task of OOD detection benefits from a powerful pre-trained model. As suggested, to illustrate the effectiveness of the MCM score on ResNet-based CLIP, we use RN50x4 (178.3M), which has a similar number of parameters to CLIP-B/16 (149.6M). The OOD detection performance of MCM for the large-scale benchmark ImageNet-1k (ID) is shown below (numbers are FPR95/AUROC):\n\n| Model | iNaturalist | SUN | Places | Texture | Average |\n| --------- | ----------- | ----------- | ----------- | ----------- | ----------- |\n| CLIP-B/16 | 32.08/94.41 | 39.21/92.28 | 44.88/89.83 | 58.05/85.96 | 43.55/90.62 |\n| RN50x4 | 44.51/91.51 | 35.11/92.84 | 43.74/89.60 | 57.73/85.93 | 45.27/89.97 |\n\nIt can be seen that MCM still shows promising results with ResNet-based CLIP models, and the performance is comparable between RN50x4 and CLIP-B/16 (89.97 vs. 90.62 in AUROC). We have added this ablation to Appendix G.\n\n**(2)** We would like to highlight the importance of a proper detection score. Given the most powerful CLIP model (CLIP-L/14), directly using cosine similarity *without softmax scaling* for OOD detection only yields an average FPR95 of 60.92% for ImageNet-1k (ID), significantly worse than MCM (37.19%). In addition, a recent work (ZO-CLIP) [2] also uses CLIP for OOD detection. Given the same encoder, we enhance ZO-CLIP with a stronger decoder and a filter module (Appendix E). MCM still consistently outperforms ZO-CLIP (Figure 3) across the 4 benchmark OOD test sets, which also suggests that a powerful feature representation alone is insufficient for strong OOD detection performance.\n\n> **Q3: The temperature of CLIP during pre-training.** \n\nThanks very much for pointing this out. We have revised the description in the updated manuscript.\n\n[1] Radford et al., Learning Transferable Visual Models From Natural Language Supervision, ICML 2021\n\n[2] Esmaeilpour et al., Zero-Shot Out-of-Distribution Detection Based on the Pre-trained Model CLIP, AAAI 2022", " We thank all the reviewers for their time, insightful suggestions, and valuable comments. 
We are glad that **ALL** reviewers appreciate our work and find our method simple and effective, one of the first works that leverage vision-language multi-modal models for OOD (rjjS), with comprehensive experiments on small-scale, large-scale, and hard OOD detection tasks where MCM demonstrates promising results (rjjS, ZoJQ, 8qRe). We are also encouraged that reviewers find our theoretical explanations of softmax scaling reasonable (Ekfu) and insightful (8qRe); the paper is well-organized, clear, and concise (ZoJQ, 8qRe).\n\nWe respond to each reviewer's comments in detail below. We have also revised the manuscript according to reviewers' suggestions, and we believe this makes our paper much stronger. The main changes we made include:\n\n- In Appendix F.2, we add an ablation study comparing MCM with other scaling functions on cosine similarities (as Reviewer Ekfu suggested).\n- In Appendix G, we add an ablation study on the performance of MCM with ResNet-based CLIP models.\n- In Appendix H, we add an ablation study on the performance without softmax scaling for all tasks.\n\n*In the revised manuscript, we have marked the revisions in blue.*", " This paper presents an out-of-distribution (OOD) detection method for image classification models based on the CLIP model. The authors propose Maximum Concept Matching, which detects OOD by aligning visual features with textual concepts. Experiments show that this simple yet effective method can outperform many existing methods. Since the method does not need any training, it can be easily applied to different datasets with minimum effort. Strengths:\n- The method is simple and can work well without bells and whistles.\n - There're four compelling advantages: generalizable to many tasks; OOD-agnostic; training-free; scalable.\n- The authors justify their design with theoretical proof.\n\nWeaknesses:\n- Although the definition of MCM in eq. 1 is intuitive, there could be many ways to detect uniformity of the cosine similarities. For example, entropy after softmax scaling, the variance of the cosine similarities, exp(max - second_max). More options should be compared in the experiments.\n- It would be great if the authors could try more different CLIP models like ResNet CLIP, and some community open-sourced CLIP-like models. Would it be possible that it is actually the vision transformer or the training data of CLIP that is the root cause of the strong OOD detection performance?\n- Wrong fact: the authors claim that CLIP uses a temperature of 0.07. However, the temperature of CLIP is initialized as 0.07 and learned during training.\n\nFinal rating:\nThe authors' rebuttal addresses my questions very well. I will keep my original rating. No limitations are discussed by the authors. ", " This paper proposed a new out-of-distribution (OOD) detection method based on pretrained vision-language models. Unlike previous works, this work relies on pretrained models like CLIP to infer whether a given image is in or out of the target distribution by matching the visual features to a set of concept embeddings. To mitigate the score overlap between in-domain and out-of-domain data, the authors proposed a simple but effective softmax scaling to recalibrate the matching scores between images and texts. According to the experimental results, the proposed method called Maximum Concept Matching (MCM) achieves superior performance to previous works on a variety of benchmarks and settings. 
It does not need any finetuning but can be directly applied to different datasets and label spaces in a zero-shot manner, thanks to the zero-shot capacity learned by vision-language models. [Strengths]\n\n1. This paper introduced a simple yet effective OOD detection method called MCM based on multi-modal pre-trained models. This is one of the first works that leverage vision-language multi-modal models for OOD, in contrast to previous single-modal methods.\n\n2. Based on the vision-language pretrained model like CLIP, the authors further proposed a simple softmax scaling to re-calibrate the matching scores to magnify the difference between in-distribution and out-of-distribution samples. \n\n3. Comprehensive experiments demonstrate that the proposed MCM outperforms previous work proposed by Fort et al. and MSP. The superiority is universal on both small-scale datasets and large-scale ones like ImageNet-1K. The authors further demonstrated that MCM can benefit hard OOD detection very well. \n\n[Weaknesses]\n\n1. The technical contribution in this paper is limited. First of all, using softmax on the classification logits is natural and straightforward, as shown in MSP and other works. Although the authors gave a proof that softmax scaling can improve OOD detection when using CLIP-based models, I think this change is incremental and should be considered as a baseline method.\n\n2. The proposed MCM is one of the first works that leveraged vision-language pretrained models. However, I do think the claims made in lines 155-168 are exaggerated. Most of the claimed merits for MCM are due to the vision-language pretrained CLIP, such as generalization ability, being training-free, and scalability.\n\n3. In Fig. 4, the authors showed the effect of softmax scaling and the corresponding FPR at different settings. For completeness, the authors should report the results on all used datasets for the proposed method without softmax scaling. \n\n4. The authors merely reported the performance on zero-shot settings; it would be better to also report the performance after finetuning. It is not clear whether the OOD detection performance will be further improved or not after finetuning. \n\n5. It is not clear in the paper why the vision-language model achieves much better zero-shot OOD detection than the single-modal method. The authors compared with Maha in lines 289-307. However, why are text features as the prototypes supposed to be better than visual features as the prototypes? Questions:\n\n1. What is the performance after finetuning CLIP models for OOD detection? It would be better to show the performance after finetuning on downstream datasets.\n\n2. It is still not clear to me why a vision-language pretrained model with simple softmax scaling can outperform previous work significantly. There is not much explanation in the paper about this.\n\n3. The authors should report the performance without softmax scaling for all datasets and evaluations. The role of softmax scaling is not clear to me.\n\nSuggestions:\n\n1. I do not think the claimed contributions in lines 155-168 are from the proposed MCM method. It mainly inherits from the off-the-shelf CLIP models. All methods that use vision and language encoders in CLIP are supposed to possess such properties.\n\n2. 
The authors should have more discussion about the contribution of proposing softmax scaling, which I think is an incremental and normal technique.\n\n The authors did not discuss the limitations and societal impact in their submission.", " The authors proposed Maximum Concept Matching (MCM), a distance-based zero-shot OOD detection method based on aligning visual features with textual concepts, and provided theoretical justifications that softmax scaling used by MCM improves the separability between ID and OOD data for CLIP-based OOD detection. The experiments demonstrate that MCM achieves superior performance on a wide variety of real-world tasks. MCM with vision-language (multi-modal) features outperforms Mahalanobis with visual features only (single-modal) on a hard OOD task, with an improvement of 56.60% on FPR95. MCM offers several advantages: OOD-agnostic (no information required from OOD data), training-free (no downstream fine-tuning required), and generalizable (one model supports many tasks). Strengths\n\nOriginality: the authors adopted softmax scaling, which improves the separability between ID and OOD data.\n\nQuality: MCM achieves superior performance on a wide variety of real-world tasks, compared with the state-of-the-art OOD detection methods. MCM with vision-language (multi-modal) features outperforms Mahalanobis with visual features only (single-modal) on a hard OOD task, with an improvement of 56.60% on FPR95.\n\nClarity: the manuscript is well-organized and the text is clear and concise.\n\nSignificance: The use of the powerful CLIP representation and softmax scaling enables zero-shot OOD detection. More importantly, MCM is OOD-agnostic and generalizes to many tasks, making it suitable for real-world applications, where we don't know OOD labels a priori.\n\n\nWeaknesses\n\nOriginality: the authors proposed MCM inspired by concept matching, which has been adopted in previous OOD detection work (Mahalanobis, which defines class prototypes based on pure visual embeddings). And multi-modal OOD detection has already been practiced in previous OOD detection works. Can MCM be applicable to non-contrastive vision-language pre-training models? An ablation study on softmax scaling should be given.", " The paper presents an OOD detection approach termed MCM based on vision-language pretrained models, by formulating OOD detection as a distance-based matching problem. Specifically, they take advantage of CLIP for distinguishing OOD data from known classes by cross-modal matching, which establishes a powerful cross-modal subspace for metric learning. The authors give some theoretical and empirical evidence to explain the superiority of MCM over cross-entropy-based models. Strengths:\n* The paper is well written and mostly clear.\n* The idea of the maximum matching score is simple but effective and shows promising results compared with some baselines.\n* The theoretical explanation of softmax scaling seems reasonable and insightful.\n\nWeaknesses:\n* Missing comparisons with recent works. The compared works in Table 2 all adopt a Transformer trained on large datasets; a large portion of CNN-based methods is missing, e.g., GradNorm [R1], Energy [R2]. Note that when replacing ResNet with CLIP, the simple baseline MSP could be boosted from 63.69/87.59 (reported in [R1]) to 26.65/95.25 on FPR95/AUROC on iNaturalist. I am worried that previous CNN-based methods may also benefit from a strong backbone significantly. 
The paper lacks a fair comparison with recent CNN-based SOTA methods (MSP and Mahalanobis are somewhat old) using a similar backbone.\n\n* Prompt ensemble. For the same datasets, I guess different prompts may produce unstable performance and thus ensemble works well. I wonder whether the authors have to design new prompts for new datasets. \n\n[R1] On the Importance of Gradients for Detecting Distributional Shifts in the Wild, NIPS 2021.\n\n[R2] Energy-based out-of-distribution detection, NIPS 2020.\n 1. In Table 2, why do Fort et al. (ViT-B) and MSP (CLIP-B) have the same performance on all metrics? Also, some values should not be bold.\n2. What about the variance of the experiments? The paper did not describe limitations." ]
[ -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "MNAJ_LD2K7D", "6QyAptC8YIa", "Mhc4BckSlxA", "0bt8oU7Sw5i", "o1V_lBZpmzW", "nips_2022_KnCS9390Va", "nips_2022_KnCS9390Va", "nips_2022_KnCS9390Va", "nips_2022_KnCS9390Va", "nips_2022_KnCS9390Va" ]
nips_2022_wsnMW0c_Au
Non-convex online learning via algorithmic equivalence
We study an algorithmic equivalence technique between non-convex gradient descent and convex mirror descent. We start by looking at a harder problem of regret minimization in online non-convex optimization. We show that under certain geometric and smoothness conditions, online gradient descent applied to non-convex functions is an approximation of online mirror descent applied to convex functions under reparameterization. In continuous time, the gradient flow with this reparameterization was shown to be \emph{exactly} equivalent to continuous-time mirror descent by Amid and Warmuth, but theory for the analogous discrete time algorithms is left as an open problem. We prove an $O(T^{\frac{2}{3}})$ regret bound for non-convex online gradient descent in this setting, answering this open problem. Our analysis is based on a new and simple algorithmic equivalence method.
Accept
The main result of the paper is establishing an approximate equivalence between online gradient descent (OGD) on non-convex losses and online mirror descent (OMD) on convex reparametrizations of the losses. In a previous result by Amid and Warmuth, which applies to the continuous-time setting, we have exact conditions where the gradient flow with this reparameterization is exactly equivalent to continuous-time mirror descent. However, the current paper focuses on providing discrete time algorithms. The main theoretical contribution of the paper is to provide sufficient general conditions for the approximate equivalence to go through, leading to an O(T^{2/3}) regret bound for OGD on non-convex losses. The paper was discussed to a good extent between the authors and the reviewers, and among the reviewers. The reviewers (and the AC) agreed that the paper provides an analysis based on a simple, elegant, and novel algorithmic equivalence method. Most of the concerns of the reviewers were addressed during the discussion phase. However, the reviewers still felt that the paper lacks relevant and important applications to which the approximate equivalence (and reparametrization) would apply and lead to non-trivial results. All in all, the reviewers agreed that the paper lies on the acceptance border (with an inclination towards acceptance).
train
[ "5i15t3JQNhs", "ucoGOk5DtfJ", "rOMGs9EYVox", "Ruic9DVpJcu", "XVEPq5fmkbG", "zXFYYqwu2s", "8HafODwSscP", "UQCsHcl-9jr", "mCI7V19I9bBJ", "zqw03J3uZb", "NQJPBsy6Ylp", "a_thCunR3HB", "iHnW-aoBPbm", "LKJiHm-8mdK", "A-6bJ3iWq_h" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have checked the comments of Reviewer x2pL and Reviewer NwoT. As they bring up some issues that I didn't raise, e.g., the issue of G raised by Reviewer NwoT, I tend to keep the score as is. \n\nIn my opinion, the title of the paper sounds grandiose, but the result (a bit) falls short of it. ", " I would like to thank the authors for their clear answers to the weaknesses and questions I posed.\n\nI agree with their answers, but I still believe that weakness 1 is a limitation of the work, and it is also unfortunate that the answer to weakness 2 leads to an $O(T^{22/23})$ rate. Together these imply that the generality of the proposed framework does come at the cost of significantly deteriorated rates. I understand the authors' point that having any sublinear rate at all is a step forward, but it still seems that some key aspect is missing from the approach if it does not give good rates for any natural example. I will therefore keep my original score.", " Dear reviewer,\n\nWe see your point, it would have been great to analyze a case in which GD converges to global optimality in nonconvex optimization that is completely new. \n\n1. We do accomplish this partially, since we can handle constrained optimization with projections, which were not analyzed in the previous line of work, which analyzed unconstrained optimization. \n\n2. We agree that coming up with a new case completely would be better, we still see value in precise analysis of the finite time regret bounds: this leads to better and more precise understanding of the limitations of the technique. \n\n", " This paper provides a candidate framework for online nonconvex optimization, but if this candidate cannot help us solve any specific problem of researchers' interest, then its relevance is in question. That's why I asked for at least one relevant application so the relevance can be justified. \n\nThere should be already other candidate frameworks for online nonconvex optimization. For example, geodesic convexity is also one approach to handle nonconvex optimization. The paper \"No-regret online learning over Riemannian manifolds\" by Wang et al. provides regret guarantees and applications. ", " Dear reviewer, \nWe respectfully disagree about the non-convex regret being clear before us - getting a finite time regret bound was explicitly posed as an open question in the last NeurIPS by experts in the field, and as far as we know, we are the first to get finite time regret bounds! \n(which as evident from other questions of the reviewers, require some calculations in various cases and not immediate). ", " I just saw your response to another reviewer that you worked out the EG example in Appendix B. The proof looks fine. There is a typo in Line 429: It is $\\epsilon$ that is chosen as $1/23$. \n\nThe idea in this paper is interesting and simple. Unfortunately, this paper does not provide any relevant application in which the regret of the online nonconvex optimization problem was unclear and can be analyzed by the framework of this paper. So I cannot give a high score. ", " Thanks. Sorry I had an illusion when imagining the shape of the set. ", " Perhaps the notation is confusing. More precisely, we mean $S =$ {$u | \\forall i, u_i \\geq \\sqrt{\\epsilon}$}. This set is convex as for $u, w \\in S$, for all $\\lambda \\in [0,1]$, $\\lambda u + (1-\\lambda)w \\in S$ because if $u_i \\geq \\sqrt{\\epsilon}$ and $w_i \\geq \\sqrt{\\epsilon}$ then $\\lambda u_i + (1-\\lambda) w_i \\geq \\sqrt{\\epsilon}$. 
Hope this helps!", " Perhaps I misunderstand something: Isn't the set $\\set{ u: u \\geq \\sqrt{\\epsilon} }$ nonconvex? ", " We thank the reviewer for the valuable feedback.\n\nWeaknesses:\n\n1. The main benefit is to use the equivalence to identify whether a certain non-convex OGD problem converges without knowing its OMD counterpart. A preliminary result is shown in Theorem 7. Also, OGD uses Euclidean projection, which is sometimes easier to implement than Bregman projection used by OMD. \n\n2. Yes we agree that our results heavily depend on the reparameterization viewpoint. The novelty of our paper is the use of algorithmic equivalence as a technique so we feel this is appropriate, even though our result does not hold for general nonconvex losses (which is NP Hard).\n\nThank you for catching the typo on line 215, we have fixed this. As for line 200-202, strong convexity assures us that the minimizer must be close to $x_t$ so we restrict to a sufficiently small ball around $x_t$ in order to use Taylor expansion arguments downstream. The notation is indeed a bit messy, but we are not sure how this argument should be expanded for improved clarity.", " We thank the reviewer for the detailed comments, and will address them in order below.\n\nWeaknesses:\n\n1. Our algorithm is designed to cover the general case, while the algorithms in [3],[4] are more customized. For example, the algorithms in [3],[4] do normalization instead of projection. We can still get sub-linear regret bounds for the same setting, see the answer to (2).\n\n2. Good point! Thank you for noticing the potential issue on constants which was neglected. Our analysis can still get sub-linear regret bounds by smoothing the domain: we use a $T^{-\\epsilon}$ smoothing instead in lines 163-164. Basically $G$ is dominated by the 3-nd derivative of $R$ which is roughly $T^{2\\epsilon}$, and the regret can be bounded by $O(T^{1-\\epsilon}+TG^5 \\eta^{1/2}+1/\\eta)$. Balancing the parameters by setting $\\eta=T^{-20\\epsilon/3-2/3}$ and then $\\epsilon=1/23$ gives an $O(T^{22/23})$ regret bound. Though our general analysis can't recover optimal regret bounds for each case, it does preserve a sub-linear regret. We have added a proof to this in appendix in the latest version.\n\n3. Theorem 7 is a general proof of existence and doesn't focus on the exact regret bound, therefore we don't require \"constructing\" these constants explicitly. But for certain cases like example 1 we do need to pay more attention.\n\n\nQuestion:\n\n1. If we use a $T^{-\\epsilon}$ smoothing in lines 163-164, $G$ is dominated by the 3-nd derivative of $R$ which is roughly $T^{2\\epsilon}$. \n2. Indeed we omitted the use of Assumption 1, which gives an additional formula $(q'_i(u_i)^2)R''_i(x_i)=1$. Inserting this and formula 1 into formula 2, we get rid of the dependence on $R'_i(x_i)$ in formula 2: $\\tilde{R_i}''(u_i)-q_i''(u_i)\\tilde{R_i}'(u_i)/q_i'(u_i)-1=0$. Because $q_i'(u_i)$ is positive we reach the final ODE. We have updated these intermediate steps in the latest version.\n\nRemarks: We have added $1$-strongly convex to Assumption $2$. We assumed this for simplicity, as a strongly convex regularizer can be scaled by a constant to be $1$-strongly convex. All three examples we use are $1$-strongly convex though, which can be confirmed by taking Hessians. The Hessians are all diagonal so a simple minimization will show the result.\n\nMinor Comments: We have fixed the typos in the most recent revision. Thank you for pointing these out. 
For line 108, the $K$ is transformed via $q^{-1}$ into $K'$.", " We thank the reviewer for the detailed comments, and will address them in order below.\n\nQuestions:\n1. Smoothing of the simplex may appear to make $K'$ nonconvex around the origin, but it should be noted that $K'$ is not actually the ball, but the intersection of the ball and $\{u : u \ge \sqrt{\epsilon}\}$ for an $\epsilon$ smoothing of the simplex. As an intersection of convex sets, this is convex. As such, quadratic reparameterization over the ball should still work. As noted by reviewer NwoT, there are other challenges involving the Lipschitz constants here, but sublinear regret is still attainable. We leave improving the rates to future work. \n\n2. Theorem 7 is a preliminary attempt to characterize general conditions under which convergence of a non-convex OGD is possible. Unfortunately, we do not have any concrete applications of Theorem 7 beyond the settings of Theorem 3.\n\nLimitations: As noted above, we can still get sublinear regret for the reparameterization of exponentiated gradient, and similar smoothing and balancing of parameters should also suffice for a log-barrier.\n\nTypos: We have fixed the typos in the most recent revision. Thank you for pointing these out.", " This paper shows that under several assumptions, online mirror descent for convex losses can be written as a perturbed version of online gradient descent for possibly non-convex losses with an $O( T^{2/3} )$ regret, up to a reparameterization. By a similar understanding, this paper also discusses when online gradient descent for non-convex losses is provably no-regret by the reparameterization argument. **Strengths**\n1. The argument is very simple yet reasonable and actually novel, as far as I know. \n\n**Weaknesses**\n1. The first example in Section 3.1 seems to be wrong. That example considers online exponentiated gradient descent on the probability simplex, arguably the most representative instance of online mirror descent. As relative entropy is not Lipschitz on the probability simplex, violating Assumption 2, the smoothed simplex is proposed as a fix. However, then points close enough to the origin should be removed from the set $\mathcal{K}'$, meaning that $\mathcal{K}'$ is non-convex and the theory in the paper does not apply. \n2. The other two examples in Section 3.1, online mirror descent with the log barrier and the tempered Bregman divergence, are much less representative compared to exponentiated gradient descent. \n3. Section 5, which provides conditions when online gradient descent on non-convex losses may be viewed as online mirror descent on convex losses and hence is no-regret, should be the most useful part. Unfortunately, no relevant example is provided. \n4. There are some typos. \n - Ln. 95: $\Vert x \Vert_p$ instead of $\Vert x \Vert p$\n - Ln. 105: $\nabla^2 R$ instead of $\nabla^2_R$\n - The equality following Ln. 205: $\nabla f_t ( x_t )$ instead of $\nabla f_t$\n - The article \"the\" is missing in several places. 1. Please check the correctness of the first example in Section 3.1. \n2. Is there any relevant application of Theorem 7 in Section 5? Assumption 2 basically assumes that everything should be Lipschitz. This assumption is violated for exponentiated gradient descent and online mirror descent with self-concordant barriers, for example. 
The authors also noticed that the matrix version of exponentiated gradient \"is difficult to fit into this framework.\" The self-concordance of the regularizers in these examples may be a useful property to replace the Lipschitz assumption. ", " The paper studies an approximate equivalence between online gradient\ndescent (OGD) on non-convex losses and online mirror descent (OMD) on\nconvex reparametrizations of the losses. Special cases of this\nequivalence were already known from [3] and [4], but the present paper\nattempts to set up a general theory. Its main results are:\n- Theorem 3, which gives sufficient general conditions for the\n approximate equivalence to go through, leading to an $O(T^{2/3})$ regret\n bound for OGD on the non-convex losses.\n- Section 3.1, which provides example instances of OMD on convex losses\n which can be approximated this way by OGD on non-convex losses.\n- Theorem 7 goes in the other direction: it gives sufficient conditions\n for OGD on non-convex losses to be approximated by OMD on convex\n losses.\n\n Strengths:\n* The paper is very well written, especially the introduction.\n* The approximate equivalence between non-convex and convex online\n optimization is an exciting new direction, for which few results are\n yet available. It would be great to have a general theory that\n describes when such equivalences are possible.\n\nWeaknesses:\n* It is not clear that the general theory covers any interesting cases:\n 1. It does not recover the previous results from [3] and [4].\n 2. I do not consider the examples provided in Section 3.1 to be very\n convincing. In the first two examples, it would seem that $G$ could be\n as large as $T$, so then Theorem 3 would be vacuous. For the last\n example, I am not sure how the constants play out.\n 3. Theorem 7 does not quantify how the strong convexity parameter $c>0$\n of the regularizer $R$ that is constructed depends on the properties of\n $q$ and $\tilde{f}$. It could therefore be arbitrarily small, which would\n make the resulting guarantees vacuous.\n\nIn spite of these weaknesses, the overall proof approach does seem like\nthe natural way forward, by quantifying the approximation error in the\nequivalence. If my questions below can be answered satisfactorily, I am\ntherefore still mildly in favor of acceptance, even if there are no\nstrong examples.\n\nRemarks:\n* Theorem 3 assumes that $R$ is 1-strongly convex. This should be stated\n explicitly in its conditions. I am also concerned that this is not\n verified in the examples from Section 3.1, so do they really satisfy\n this condition?\n* In the Proof of Theorem 7: I don't understand how you get from the\n displayed equations above \"solving the above formulas\" to the ODE\n below.\n\nMinor Comments:\n* Line 80: equivalence between OMD and FTRL requires a constant step size.\n* \"Euclidean\" should be capitalized throughout the paper\n* Line 95: $p$ should be a subscript in the $p$-norm\n* The square on the Hessian appears to overlap with the nabla symbol\n throughout the paper.\n* Line 108: Shouldn't $q^{-1}$ be $q$? (Maybe I was just confused here.)\n* Assumption 1: Should the Hessian be interpreted as the Hessian of $R$ or\n the Hessian of $R(q(.))$?\n* Algorithm 1: $f_t$ -> $f_t(x_t)$\n* Line 147: \"cab\" -> \"can\"\n* Line 205: $f_t$ -> $f_t(x_t)$\n * Could you indicate how the $G$ that appears in Theorem 3 scales with $T$,\n and verify that $R$ is 1-strongly convex for all three examples in\n Section 3.1?\n* Could you answer the remark above about the proof of Theorem 7? 
\n\n Section 3.2 provides a very good discussion of some of the limitations\nof the approach.\n\nThe limitations in footnotes 1 and 2 seem sufficiently important to put\nthem somewhere in the main text.\n", " This paper shows that for a certain class of non-convex problems, online gradient descent has a O(T^{2/3}) regret. Previous works by Amid and Warmuth show that gradient flow for minimizing a reparametrized function is equivalent to mirror flow for minimizing the original function. This work takes a step forward by showing in discrete-time, online gradient descent on a reparametrized objective has a sublinear regret.\n\nThe high-level idea is that the discrete-time online mirror descent has a O(\\sqrt{T}) for the same class of problems. By carefully bounding the discrepancy between online mirror descent for the original problem and online gradient descent (OGD) for the non-convex re-parametrization, a sublinear regret of OGD can be obtained.\n\n\n While the corresponding continuous-time result has been established in the previous work, the discrete-time setting seems to be left open. This work fills the gap.\n\nIn my opinion, this paper is pretty much borderline for the following reasons:\n\n1. While I appreciate the theoretical result, I think it will be more helpful if the authors can discuss the advantage of conducting discrete-time online learning on the reprametrized functions, since the regret gets worse, i.e., one could simply use OMD for the convex problems.\n\n2. Following the first point, I think the title should be modified and be more specific, since the technique seems only applicable to a certain class of problems and critically relies on the particular connection between OGD and OMD that has been established in the prior work by Amid and Warmuth.\n\n\n\n\nMinor:\n1. I don't quite understand Line 200-202. Perhaps a more detailed derivation would be helpful?\n2. Line 215: \"both both\".\n N/A N/A" ]
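A note from the editor, to make the central object of the exchanges above concrete: the discussion revolves around the quadratic reparameterization $x = u^2$ that links exponentiated gradient (mirror descent with the entropic regularizer) on the simplex to plain gradient descent on $u$. The following is a minimal numerical sketch of that approximate equivalence, assuming a fixed linear loss $f(x) = \langle c, x \rangle$; the step size and the renormalization step are illustrative choices, not the paper's algorithm.

```python
# Sketch: exponentiated gradient vs. GD on u with x = u^2 (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d, steps, eta = 5, 300, 0.1
c = rng.normal(size=d)              # fixed linear loss f(x) = <c, x>

x_eg = np.full(d, 1.0 / d)          # EG iterate on the simplex
u = np.sqrt(np.full(d, 1.0 / d))    # GD iterate; the simplex point is u**2

for _ in range(steps):
    x_eg = x_eg * np.exp(-eta * c)  # mirror step with the entropic mirror map
    x_eg /= x_eg.sum()
    u = u - eta * u * c             # grad_u f(u^2) = 2u * c, up to a constant
    u /= np.linalg.norm(u)          # keep sum(u^2) = 1

print(np.round(x_eg, 3))            # both converge to the min-cost vertex
print(np.round(u**2, 3))
```

For a linear loss both trajectories concentrate on the vertex of the simplex with the smallest coefficient of $c$, which is the qualitative agreement the OGD/OMD equivalence discussion relies on.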
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "zqw03J3uZb", "NQJPBsy6Ylp", "Ruic9DVpJcu", "XVEPq5fmkbG", "zXFYYqwu2s", "8HafODwSscP", "UQCsHcl-9jr", "mCI7V19I9bBJ", "a_thCunR3HB", "A-6bJ3iWq_h", "LKJiHm-8mdK", "iHnW-aoBPbm", "nips_2022_wsnMW0c_Au", "nips_2022_wsnMW0c_Au", "nips_2022_wsnMW0c_Au" ]
nips_2022_ePZsWeGJXyp
VICRegL: Self-Supervised Learning of Local Visual Features
Most recent self-supervised methods for learning image representations focus on either producing a global feature with invariance properties, or producing a set of local features. The former works best for classification tasks while the latter is best for detection and segmentation tasks. This paper explores the fundamental trade-off between learning local and global features. A new method called VICRegL is proposed that learns good global and local features simultaneously, yielding excellent performance on detection and segmentation tasks while maintaining good performance on classification tasks. Concretely, two identical branches of a standard convolutional net architecture are fed two differently distorted versions of the same image. The VICReg criterion is applied to pairs of global feature vectors. Simultaneously, the VICReg criterion is applied to pairs of local feature vectors occurring before the last pooling layer. Two local feature vectors are attracted to each other if their l2-distance is below a threshold or if their relative locations are consistent with a known geometric transformation between the two input images. We demonstrate strong performance on linear classification and segmentation transfer tasks. Code and pretrained models are publicly available at: https://github.com/facebookresearch/VICRegL
Accept
This paper proposes to extend the existing VICReg objective to local features in order to obtain good performance on both image-level and dense prediction tasks. Specifically, while the global features are obtained by average pooling on the output feature maps, the local pairs are determined by both the feature distance and the spatial-location distance. The technical novelty seems somewhat incremental, since it is a fairly simple modification of the existing global objective into a local objective for dense representation learning. However, extensive experiments on several benchmarks, including ablations and visualizations, clearly demonstrate the effectiveness of the proposed self-supervised representation learning for both classification and segmentation (+detection) tasks. In particular, the authors faithfully addressed most concerns and questions raised by the reviewers, and the overall quality of the paper seems to be significantly improved. Therefore, I would recommend accepting this paper.
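An editorial aside before the reviews below: the VICReg criterion that the meta-review references combines an invariance term between the two branches with variance and covariance regularizers on each branch. Here is a hedged PyTorch-style sketch of that criterion as we read it from the abstract and meta-review - not the released code; the coefficients stand in for the weights $\lambda, \mu, \nu$ of Eq. 1 discussed in the reviews, with default-like values that should be treated as assumptions.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, lam=25.0, mu=25.0, nu=1.0, eps=1e-4):
    n, d = z_a.shape
    inv = F.mse_loss(z_a, z_b)                 # invariance between branches
    def var_term(z):                           # hinge on per-dimension std
        std = torch.sqrt(z.var(dim=0) + eps)
        return torch.mean(F.relu(1.0 - std))
    def cov_term(z):                           # penalize off-diag covariance
        z = z - z.mean(dim=0)
        cov = (z.T @ z) / (n - 1)
        off = cov - torch.diag(torch.diag(cov))
        return off.pow(2).sum() / d
    return (lam * inv
            + mu * (var_term(z_a) + var_term(z_b))
            + nu * (cov_term(z_a) + cov_term(z_b)))

# usage sketch on dummy embeddings
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = vicreg_loss(z1, z2)
```

In VICRegL, per the abstract, the same criterion is applied a second time to matched pairs of local feature vectors taken from the maps before the last pooling layer.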
train
[ "ErcZ8BkBZID", "bWhBiQfpSQW", "gECQ5SqHGBk", "7x3eOY9YjrE", "oiNqX_FOF8V", "hfsIEFMuKz3", "0Cj_bsag4ON", "QYU__mHEN8H", "t0aTYzLHKA", "Tfdd3NDhUgU", "4kP0SFotBe6" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " 7) *L127, L135, 181: Is there a motivation / intuition on why it is better to keep only the top- pairs? Together with the l2-distance loss, this could reinforce the intuition that the l2-distance loss act as a regularizer/booster as it would be applied only to feature vectors that are already close to each other, which one can assume to be ground-truth positives.*\n\nAs explained in point 7) of our answer to reviewer yFUm and point 2) of our answer to reviewer hQQN, the general idea is to eliminate the mismatched pairs of feature vectors:\n- That are too far away in the image for the location-based matching, and that therefore probably do not represent the same objects.\n- But most importantly that are probably mismatched for the feature-based matching, especially at the beginning of the training when the network matches feature vectors representing different objects or textures.\nWe will clarify this point in the revision.", " 1) *The main contribution seems incremental with respect to VICREG.*\n\nOur paper introduces a notion of locality to self-supervised learning (SSL), which can be combined to any SSL criterion other than VICReg, such as SimCLR. The choice of VICReg for the main method is due to its simplicity and its non-contrastive nature, which we believe is more appropriate when matching local descriptors within the same image, as explained in point 2) of our answer to reviewer yFUm. The second main contribution of our paper is to demonstrate empirically the fundamental trade-off there is between learning local and global features, highlighted by Figure 2 of the paper. We will make sure to clarify our contributions in the revision.\n\n2) *L49: \"contrasting feature vectors corresponding to far away locations can have a negative effect, as these vectors could have been pooled from locations that represent the same object in the image.\" This is not clear. Does contrasting here mean pushing the feature vector together? In this case, wouldn't one want to push features of the same object together? i.e. this would be a positive outcome?*\n\nHere contrasting the feature vectors means pushing them away from each other. We will clarify the sentence.\n\n3) *The l2-distance (Eq 3) is not intuitive at first. It pushes together local features close in the embedding space even though they might not be ground-truth matches. Is there any intuition on why this is beneficial (as supported by the experiments)? One intuition is that it might act as regularizer or an implicit clustering. Another is that the global criterion already pushes positive matches coarsely together and this loss refines the vector. Is there any observations or intuition on that interesting phenomenon?*\n\nThe goal of the feature-based matching (FBM) of Eq. (3) is to capture long-range interactions not captured by the location-based matching (LBM) of Eq.(2). Indeed, LBM will only match vectors that are spatially close in the image, but will ignore potential matches of similar objects far away in the image, FBM is not spatially constrained and can therefore match feature vectors pooled from far away locations in the image. This is illustrated in Figure 3, in the top image, where we can see a feature-based match between two areas both representing grass textures in their respective view, but are far away in the seed image. At the beginning of the training, it is true that FBM matches can be considered as bad matches, because the encoder has not learned good features yet. 
However, LBM matches are ground-truth matches, which allows the system to not fall into degenerate solutions. The global loss also helps avoid these solutions. At the end of the training, FBM matches are of good quality and capture long-range interactions, as illustrated in Figure 3 of the paper. We will provide additional visualizations in the appendix of the revision.\n\n4) *L107: It would be nice to introduce the notation of $\lambda, \mu, \nu$ in Eq 1 for completeness.*\n\nWe will add a sentence mentioning these parameters.\n\n5) *L118: $h$ is called a 'local projector' whereas it is the equivalent of the VICREG expander but at the scale of the feature map. Since L120 uses the terminology 'global expander', it would seem more consistent to use the word 'local expander' rather than projector. This would also be more consistent with VICREG, on which the paper relies.*\n\nWe called it a projector as it projects the feature vectors into a space of smaller dimensionality. However, we agree that the terminology is confusing, especially because the main reason we do that is GPU memory constraints. We will change to the term \"local expander\".\n\n6) *L127, L135, 181: 'only the top-$\gamma$ pairs are kept'. It is not clear whether for each feature position, only the top-$\gamma$ are kept to derive the loss, or if the top-$\gamma$ among all pairs in the batch are kept (assuming top-$\gamma$ means the pairs with the smallest distance). This would help the reader if the definition were introduced at L127.*\n\nFrom two views of an example, the corresponding two feature maps are produced by the encoder, and for each feature vector in the first feature map, the best feature vector in the other feature map is computed, which results in N matches if there are N feature vectors. Among these N matches, the top-$\gamma$ best matches are kept. We will clarify the explanation in the text.\n", " 5) *In the ablation of selected pairs, when all possible matches are selected, I believe both spatial and embedding distance losses will be the same, and the network would try to match all local features though coming from different objects. Though not optimal, it seems the performance is improved compared to the baselines. Could you please provide intuition on this scenario and the related improvement?*\n\nSelecting all the matches in Table 4 does not mean that all $N^2$ pairs are selected; it means that for all $N$ feature vectors in view 1, we search for the NN in view 2 and match the two. Therefore different matching criteria give different matchings, which is the case here, with the location-based criterion, which only captures short-range interactions, and the feature-based criterion, which is able to capture long-range interactions. We will clarify this point in the revision.\n\n6) *Is the network trained from scratch with global and local losses, or is a pretraining with the global loss done first?*\n\nThe network is trained from scratch with both the global and local losses, but pretraining with only the global loss and fine-tuning with the local loss is an interesting idea that should be explored by future work.\n\n7) *How does the method compare to [1]?*\n\nWe provide a comparison of our VICRegL model with a ConvNeXt-B backbone against MAE with a ViT-B backbone [1]. The ConvNeXt-B and ViT-B models both have the same number of parameters. 
We report results on linear and fine-tuning classification on ImageNet, linear and fine-tuning segmentation on Pascal VOC, and fine-tuning segmentation on ADE20k with a UperNet head.\n\n| Method | ImageNet Linear Frozen (%) | ImageNet Linear FT (%) | VOC Linear Frozen (mIoU) | VOC Linear FT (mIoU) | ADE20k UperNet FT (mIoU) |\n| --- | --- | --- | --- | --- | --- |\n| MAE | 68.0 | 83.6 | 59.9 | 82.5 | 48.1 |\n| VICRegL 0.75 | 76.3 | 83.5 | 70.4 | 82.5 | 48.2 |\n\nVICRegL offers similar results in the fine-tuning regime, but strongly improves over MAE in the frozen regime. We will incorporate MAE as a competing method in Table 2 of our revision.\n\n\n8) *It is natural to fine-tune the network when supervision is available; however, the gap in the performance boost narrows after fine-tuning. Maybe a semi-supervised task and evaluation would highlight the importance of the method and a more meaningful scenario.*\n\nWe provide below results on the semi-supervised evaluation benchmark proposed in DenseCL (Table 3). We report APb on detection and APm on segmentation on COCO:\n\n| Method | Detection (APb) | Segmentation (APm) |\n| --- | --- | --- |\n| MoCov2 | 23.8 | 20.9 |\n| VICReg | 24.0 | 20.9 |\n| DenseCL | 24.8 | 21.8 |\n| VICRegL 0.75 | 25.7 | 22.6 |\n\nVICRegL significantly outperforms the other methods on these benchmarks. These results will be included in the revision.\n\n9) *It would be interesting to see any sort of analytical study to support the hypothesis of a relation between batch size and the dimension of the network’s output. This would be an important contribution in itself.*\n\nWe provide in point 8) of our answer to reviewer yFUm a possible explanation for the fact that ResNet-50 combined with multi-crop does not give strong performance; however, the relation between batch size and the dimensionality of the network's output is still unclear. We have conducted experiments that empirically show a boost in downstream performance when the size of the batch is equal to or close to the dimensionality of the output. However, a more theoretical study is required.", " 1) *The main concern is that although the paper claims to benefit detection and segmentation tasks, only segmentation is evaluated. Evaluations on detection are studied in most of the baseline papers but this is lacking in the paper. It may require a more comprehensive experiment to validate the effectiveness.*\n\nWe agree that detection results are missing and are actually important in order to validate our approach. We therefore provide below experimental results on detection tasks. We follow DenseCL and fine-tune a Faster R-CNN detector (C4-backbone) on the Pascal VOC trainval07+12 set with the standard 2x schedule and test on the VOC test2007 set (Fine-tune in the Table). Additionally, we provide results using our linear frozen benchmark, which we adapt to produce bounding boxes instead of segmentation heads (Linear frozen in the Table). 
We report our results in the following table:\n\n| Method | Fine-tune (AP) | Linear Frozen (AP) |\n| --- | --- | --- |\n| MoCov2 | 57.0 | 41.3 |\n| VICReg | 57.4 | 41.9 |\n| DenseCL | 58.7 | 43.0 |\n| VICRegL 0.75 | 59.5 | 45.6 |\n\nHere again, VICRegL offers better performance than other methods in the fine-tuning regime, but really shines in the frozen regime, with an improvement of +3.7 AP over VICReg and of +2.6 AP over DenseCL. We provide results on the same benchmark with the ConvNeXt-B backbone, in comparison with other competitive methods using the ViT-B backbone.\n\n| Method | Fine-tune (AP) | Linear Frozen (AP) |\n| --- | --- | --- |\n| MoCo-v3 | 69.8 | 54.1 |\n| VICReg | 71.3 | 55.4 |\n| DINO | 74.5 | 58.4 |\n| VICRegL 0.75 | 75.4 | 59.0 |\n\nAgain, VICRegL outperforms concurrent methods by a significant margin. These results will be added to our revision.\n\n\n2) *The authors use the ‘multicrop’ technique but do not ablate this factor in the experiments. Also, it is not clear if other baselines (especially VICReg) also enjoy the benefit of multicrop. If they do not, then the comparison may not be fair. The authors may want to clarify this.*\n\nWe provide an ablation on the use of multi-crop in point 8) of our answer to reviewer yFUm. We confirm that all methods compared in Table 1 **do not use** multi-crop (including VICReg and our VICRegL), and that all methods in Table 2 **use** multi-crop.\n\n3) *As the method uses feature maps and losses based on them, it would be interesting to see its impact on training speed and memory.*\n\nWe provide a comparison between VICReg and VICRegL in terms of running time and memory use, using a ResNet-50 run on 32 GPUs with a batch size of 2048.\n\n| Method | Runtime (hours, 100 epochs) | Peak GPU memory (GB) |\n| --- | --- | --- |\n| VICReg | 11 | 11.3 |\n| VICRegL | 13 | 15.7 |\n\nThe computation of multiple covariance matrices for each feature vector indeed induces a moderate additional computational burden, both in terms of time and memory usage.\n\n\n4) *The method chooses location representation correspondences based on nearest location and embedding similarity. Any thought or analysis on noisy or spurious correspondences? For example, although nearest neighbors, the patches may come from different objects/semantics/textures. Similarly, based on embedding similarity, the network may choose unreliable pairs, especially in the early stage of training.*\n\nThis is a good point, and this is the reason why we introduced the top-k best matches selection, which prevents bad pairs from being matched. Our ablation in Table 4 shows how selecting only the top-k helps the performance. Additionally, as explained in point 7) of our answer to reviewer yFUm, there are with high probability always good local features to match, as the views have a low probability of not overlapping; this is why always matching the top-k pairs, compared to introducing a threshold value at which a pair is considered a match, does not degrade the performance. We will clarify this point in the revision.", " 7) *The number of local pairs is fixed in the method. What happens, however, if the two views don't or barely overlap?*\n\nFor the feature-based matching this is not an issue, as the purpose of this matching is to capture long-range interactions not captured by location-based matching. 
For the location-based matching, given the parameters we use to generate the views (each view covers between 8% and 100% of the image, chosen uniformly), the probability that the views do not overlap is small, and even in that case matching the closest points between the views does not degrade the final performance. Indeed, we have also tried to use a variable number of matches and a threshold value used to compute the matches, which did not improve the performance compared to using a fixed number of matches. We will discuss this point in the revision.\n\n8) *The paper mentions that multi-crop was used with ConvNeXt architectures. Therefore, providing an ablation with and without multi-crop in this setting would be good. Also, it would be good to investigate more the reason for multi-crop failing with ResNet-50, e.g., test smaller batch sizes as hypothesized in Section 3.2.*\n\nWe provide here a comparison between using multi-crop or not with the ConvNeXt architecture. We pre-train a ConvNeXt-S for 100 epochs on ImageNet with VICReg and VICRegL and evaluate on linear frozen segmentation on Pascal VOC.\n\n| Method | Multi-crop (mIoU) | No multi-crop (mIoU) |\n| --- | --- | --- |\n| VICReg | 60.1 | 58.7 |\n| VICRegL $\alpha$=0.75 | 67.5 | 64.8 |\n\nThe improvement between VICReg and VICRegL is significant whether multi-crop is used or not. In Table 2 of our paper, all competing methods use multi-crop.\n\nRegarding the poor performance of the ResNet-50 backbone combined with multi-crop (MC), we have conducted additional experiments, and found that ResNet-50 with MC was strongly overfitting. The ConvNeXt architecture uses stochastic depth residual connections (SDRC) for preventing overfitting, which helps a lot for handling MC. In the MC regime, we found that incorporating SDRC in ResNet-50 helps a lot, especially when using a small batch size, which experimentally verifies our hypothesis of Section 3.2, and that ConvNeXt without SDRC results in strong overfitting.\n\n\n\n9) *Minor: Error in L104 - It should not be C in both input and output*\n\nWe will fix this typo by replacing \"C\" by \"3\" in L105.\n\n10) *Questions*\n\n* *How does the method compare to [N1]?*\n\nSee the table and explanation provided in point 3) of our answer to reviewer yFUm.\n\n* *Is the VICReg necessary, or would the approach also work with other objectives?*\n\nSee our comparison provided in point 2) of our answer to reviewer yFUm.\n\n\n* *How are the results for prior work obtained in Table 1, and what explains the inconsistency with DenseCL?*\n\nSee our explanation in point 3) of our answer to reviewer yFUm.\n\n* *How does the work relate to and differ from the various related prior works (including [N1-N3])?*\n\nSee our explanation in points 3), 4) and 5) of our answer to reviewer yFUm", " 4) *Several related works on learning both global and local representations are summarized in Section 2, but it is unclear how the proposed method relates to and differs from these works.*\n\nRegarding global methods, VICRegL introduces an additional notion of locality and builds on the VICReg method that only learns at a global level. VICRegL therefore falls into the category of non-contrastive methods that maintain the informational content of the learnt features. Regarding local methods, we propose to rewrite the related work section as follows:\n\n\"\"\"\nLocal features. 
In opposition to global methods, local ones focus on learning a set of local features that describe small parts of the image, and are therefore better suited for segmentation tasks. Indeed, these methods commonly only evaluate on segmentation benchmarks. A contrastive loss function can be applied directly: (1) at the pixel level [Xie et al., 2021], which forces consistency between pixels at similar locations; (2) at the feature map level [Wang et al., 2021], which forces consistency between groups of pixels; (3) at the image region level [Xiao et al., 2021], which forces consistency between large regions that overlap in different views of an image. Similar to [Wang et al., 2021], our method VICRegL operates at the feature map level but with a more advanced matching criterion that takes into account the distance in pixel space between the objects. Copy-pasting a patch on a random background [Yang et al., 2021, Wang et al., 2022] has also been shown to be effective for learning to localize an object without relying on spurious correlations with the background. Aggregating multiple images corresponding to several object instances into a single image can also help the localization task [Yang et al., 2022]. These approaches rely on careful, handcrafted constructions of new images with modified backgrounds or with aggregation of semantic content from several other images, which is not satisfactory, while our method simply relies on the classical augmentations commonly used in self-supervised learning. The best current approaches consist in using the information from unsupervised segmentation masks, which can be computed as a pre-processing step [Hénaff et al., 2021] or computed online [Hénaff et al., 2022]. The feature vectors coming from the same region in the mask are pooled together and the resulting vectors are contrasted between each other with a contrastive loss function. These approaches explicitly construct semantic segmentation masks using k-means for every input image, which is computationally not efficient, and is a strong inductive bias in the architecture. Our method does not rely on these masks and therefore learns less specialized features.\n\"\"\"\n\n\n5) *The idea of using nearest-neighbors in feature space for contrastive learning has been explored in prior works (e.g., [N2, N3]). While it is used in a different context here, these works should be discussed.*\n\nWe agree that these approaches should be discussed and we will mention them in the paper. However, the nearest neighbors of [N2] and [N3] are very different from ours. Indeed, these methods search for the closest example from a target global representation of a full image. In our case, one image is decomposed into a set of local descriptors, and we search for the nearest descriptor within this set, using location-based and feature-based matching.\n\n\n\n6) *The paper argues for learning \"hierarchical features\" (L39), however, it is unclear how the proposed method should learn more hierarchical features than existing approaches based on CNN architectures.*\n\nWe agree that the word \"hierarchical\" is not really appropriate here. What we meant is that learning local and global representations can be seen as learning at two different scales. One can imagine learning at more than two scales, by enforcing a local criterion at every layer in an encoder. This future architecture is really what we meant by \"hierarchical\". 
We will clarify this in the revision.", " 1) *The approach to building positive local pairs via spatial distance is very similar to [N1], which is neither discussed in the paper nor compared to in experiments.*\n\nWe were not aware of this paper and the proposed ReSIM method. We will make sure to mention it both in the text and the experimental results of our paper. Our method VICRegL differs from ReSIM on a number of points:\n\n* Contrary to ReSIM, which focuses on location-based matching that only captures short-range interactions, VICRegL combines both a location-based matching and a feature-based matching, which allows for capturing both short-range and long-range interactions within an image.\n\n* ReSIM matches the overlapping region of two views of a seed image by averaging the feature vectors within some sliding window. VICRegL directly compares the feature vectors themselves and can match areas coming from non-overlapping regions of the views.\n\n* ReSIM uses a contrastive criterion. VICRegL uses a non-contrastive criterion that empirically performs better, and does not push away similar but unmatched local features. This is a key component in handling long-range interactions where similar objects or textures are matched from far away locations in the image.\n\n* ReSIM focuses on learning local image features and is evaluated on vision tasks that best exploit them, such as object detection and instance segmentation. We argue instead that they should be combined with more global features useful in tasks such as image categorization. We show that there is a fundamental tradeoff between the two types of features and focus on optimizing this tradeoff.\n\nFinally, we incorporate an experimental comparison with ReSIM in point 3) of our answer to reviewer yFUm, and show that VICRegL significantly outperforms ReSIM on several downstream tasks.\n\n\n\n2) *The method is demonstrated only with VICReg as the contrastive criterion. It is unclear if the approach to incorporate local objectives with the global one would also work with other \"contrastive\" objectives. In principle, it should also work, and demonstrating it with more standard contrastive objectives (e.g., SimCLR) would strengthen the paper.*\n\nWe use VICReg for its *non-contrastive* nature, which avoids pushing apart features that come from different regions of the image but are associated with the same object. We observed in practice that VICReg gives a stronger improvement on segmentation downstream tasks when combined with our local criterion. But our method can be used with other criteria as well, and we provide here a table where we compare the performance on linear segmentation on Pascal VOC for VICReg, VICRegL, SimCLR, and SimCLR combined with our local criterion (SimCLR-L), pretrained with a ResNet-100 backbone for 100 epochs:\n\n| Method | Linear Segmentation (mIoU) |\n| ---- | ---- |\n| VICReg | 47.8 |\n| VICRegL | 55.9 |\n| SimCLR | 45.9 |\n| SimCLR-L | 51.3 |\n\nVICReg already performs better than SimCLR (+1.9 mIoU), and the improvement of VICRegL over VICReg is +8.1 mIoU, compared to only +5.4 mIoU for SimCLR-L over SimCLR. VICRegL also performs better than SimCLR-L, with an improvement of +4.6 mIoU. We will include these results in the revision.\n\n\n3) *Inconsistencies with results reported in prior work: DenseCL by Wang et al. reports 69.4 mIoU on PascalVOC with fine-tuning. 
It is unclear how the difference from the reported 66.8 in Table 1 comes about.*\n\nThe performance of 69.4 reported in [1] is the fine-tuning performance of an **FCN head** (CNN head with multiple layers). Table 1 of our paper reports the fine-tuning performance of a **linear head**, which explains the difference. We report below a comparison including DenseCL and ReSIM with both a linear and an FCN head in the frozen and the fine-tuning regime:\n\n| Method | Linear Frozen (mIoU) | FCN Frozen (mIoU) | Linear FT (mIoU) | FCN FT (mIoU) |\n| --- | --- | --- | --- | --- |\n| MoCov2 | 35.6 | 41.3 | 64.8 | 67.5 |\n| DenseCL | 45.3 | 50.4 | 66.8 | 69.4 |\n| ReSIM | 51.9 | 52.9 | 67.1 | 69.7 |\n| VICRegL $\alpha$=0.75 | 55.9 | 56.0 | 67.6 | 70.3 |\n\nVICRegL significantly outperforms the other methods across all the benchmarks. We will clarify this point and incorporate the FCN results in our revision.\n", " We thank the reviewers for their comments about the paper. We have incorporated additional experimental results in this rebuttal, mainly results on new benchmarks (object detection, semi-supervised segmentation, fine-tuning with large heads) for evaluation and comparison with various other methods (VICReg, ReSIM, DenseCL, MAE), as well as an additional ablation of multi-crop, and a study on the running time and memory usage of our method. These new results will be incorporated in the paper. To avoid ambiguities, we use in this rebuttal \"feature-based matching\" instead of the term \"l2-distance based matching\" used in the original submission, to clearly distinguish this form of matching, based on distance between features in some high-dimensional embedding space, from \"location-based matching\", based on the image distance between pixels. The metric used in both cases is the l2 (Euclidean) distance. We will use this terminology in the revision as well.", " The paper proposes VICRegL, a method that extends the existing (global) VICReg objective with local contrastive objectives where the loss is applied at the level of feature pixels (between spatial locations in the feature map before global pooling in CNNs).\nTwo approaches to building pairs of feature pixels for contrastive learning are proposed: 1) based on the l2 distance in the learned embedding space, and 2) based on the known spatial location between the two generated views.\nIn experiments, the method is evaluated on (global) image classification tasks on ImageNet and (local) semantic segmentation tasks on Pascal VOC, Cityscapes, and ADE20K. \nAblations demonstrate a trade-off between global and local performance. A good balance between global and local SSL objectives can obtain competitive image classification performance while achieving SotA performance on segmentation tasks. Strengths:\n- Study of SSL pre-training that performs well on various downstream tasks (local and global) is valuable\n- Good performance in transfer to both classification and segmentation\n- Technical presentation is mostly clear\n- A good set of ablation experiments is provided, demonstrating:\n\t- the trade-off between local and global feature learning\n\t- The two ways to construct local pairs (l2 in embedding space and spatial distance)\n\t- The influence of the number of local pairs to use in the loss\n\n\nWeaknesses:\n- The approach to building positive local pairs via spatial distance is very similar to [N1], which is neither discussed in the paper nor compared to in experiments. 
\n- The method is demonstrated only with VICReg as the contrastive criterion. It is unclear if the approach to incorporate local objectives with the global one would also work with other \"contrastive\" objectives. In principle, it should also work, and demonstrating it with more standard contrastive objectives (e.g., SimCLR) would strengthen the paper. \n- Inconsistencies with results reported in prior work: DenseCL by Wang et al. reports 69.4 mIoU on PascalVOC with fine-tuning. It is unclear how the difference from the reported 66.8 in Table 1 comes about. \n- Several related works on learning both global and local representations are summarized in Section 2, but it is unclear how the proposed method relates to and differs from these works.\n- The idea of using nearest-neighbors in feature space for contrastive learning has been explored in prior works (e.g., [N2, N3]). While it is used in a different context here, these works should be discussed. \n- The paper argues for learning \"hierarchical features\" (L39), however, it is unclear how the proposed method should learn more hierarchical features than existing approaches based on CNN architectures.\n- The number of local pairs is fixed in the method. What happens, however, if the two views don't or barely overlap?\n- The paper mentions that multi-crop was used with ConvNeXt architectures. Therefore, providing an ablation with and without multi-crop in this setting would be good. Also, it would be good to investigate more the reason for multi-crop failing with ResNet-50, e.g., test smaller batch sizes as hypothesized in Section 3.2. \n- Minor: Error in L104 - It should not be C in both input and output\n\n\n[N1] Xiao et al.: \"Region Similarity Representation Learning\", ICCV2021\n\n[N2] Dwibedi et al.: \"With a little help from my friends: Nearest-neighbor contrastive learning of visual representations.\"\n\n[N3] Koohpayegani et al.: \"Mean Shift for Self-Supervised Learning\" I would appreciate it if the authors could address the weaknesses listed above, especially the first four points, i.e., \n- How does the method compare to [N1]?\n- Is the VICReg necessary, or would the approach also work with other objectives?\n- How are the results for prior work obtained in Table 1, and what explains the inconsistency with DenseCL? \n- How does the work relate to and differ from the various related prior works (including [N1-N3])? Limitations were addressed well in the paper. The point about whether feature pixels learn local information is interesting and could be explored through experiments, e.g., by clustering the feature pixels of an image and visualizing these clusters. \n", " This paper introduces locality constraints building upon the existing VICReg method. It does so by not destroying the feature maps and adding VICReg loss terms on the feature maps. Two simple methods of correspondence generation are proposed: a) nearest neighbour in spatial location, and b) embedding similarity of local features. The paper claims to learn visual representations that improve performance on localized tasks such as detection and segmentation while maintaining consistently good performance on the global classification task. The paper uses two backbones for evaluations and is shown to perform better than existing methods, especially on linear evaluations. The paper has a good contribution, but a few evaluations and analyses seem lacking. Strengths:\n1. The idea is elegant: it uses the existing loss, but additionally on local features. \n2. 
It is shown to significantly improve segmentation performance (though the difference in fine-tuning performance is not huge), which is convincing.\n3. Ablation studies give insight into the design choices, with visualizations of matches.\n4. Overall, the paper is easy to follow, and the paper has a good contribution; however, there are a few concerns, as follows.\n\n\nWeakness:\n1. The main concern is that although the paper claims to benefit detection and segmentation tasks, only segmentation is evaluated. Evaluations on detection are studied in most of the baseline papers but this is lacking in the paper. It may require a more comprehensive experiment to validate the effectiveness.\n2. The authors use the ‘multicrop’ technique but do not ablate this factor in the experiments. Also, it is not clear if other baselines (especially VICReg) also enjoy the benefit of multicrop. If they do not, then the comparison may not be fair. The authors may want to clarify this.\n3. As the method uses feature maps and losses based on them, it would be interesting to see its impact on training speed and memory.\n\nPlease see the other queries and comments below.\n\n 1. The method chooses location representation correspondences based on nearest location and embedding similarity. Any thought or analysis on noisy or spurious correspondences? For example, although nearest neighbours, the patches may come from different objects/semantics/textures. Similarly, based on embedding similarity, the network may choose unreliable pairs, especially in the early stage of training. \n\n2. In the ablation of selected pairs, when all possible matches are selected, I believe both spatial and embedding distance losses will be the same, and the network would try to match all local features though coming from different objects. Though not optimal, it seems the performance is improved compared to the baselines. Could you please provide intuition on this scenario and the related improvement? \n3. Is the network trained from scratch with global and local losses, or is a pretraining with the global loss done first?\n4. How does the method compare to [1]?\n\n[1] He, Kaiming, et al. \"Masked autoencoders are scalable vision learners.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\nThe authors have given satisfactory answers in the rebuttal. I would like to increase my rating to borderline accept given the authors' promise to include all discussions and a complete result for detection on PASCAL and MS-COCO with the standard evaluation protocol. 1. [Comment/Suggestion] It is natural to fine-tune the network when supervision is available; however, the gap in the performance boost narrows after fine-tuning. Maybe a semi-supervised task and evaluation would highlight the importance of the method and a more meaningful scenario. \n2. It would be interesting to see any sort of analytical study to support the hypothesis of a relation between batch size and the dimension of the network’s output. This would be an important contribution in itself.", " The paper describes a learning method to learn jointly global and local descriptors in a self-supervised way.\n\nThe global descriptors are learned following VICREG [1], a learning method that tackles the issue of mode collapse during representation learning. Two versions of the same image are derived using random transformations (crop, color jittering) and fed to a siamese network. Each branch produces a 1D feature vector representing the image. 
The branches are optimized separately with the VICREG constraints: the feature vectors must be close in the embedding space (invariance loss), the standard deviation of the embedding dimensions must be close to one (variance loss), and the dimensions of the embeddings should not be correlated (covariance loss).\n\nThe local descriptor learning is the main contribution of the paper. The feature maps provide local descriptors (a feature map of dimension H x W x C can be seen as a set of HW local features of dimension C). These descriptors are optimized so that: 1) descriptors from the same location in the image are pushed together (spatial matching loss); 2) descriptors close in the embedding space are also pushed close together (even when they do not relate to the same image location, i.e., they are not ground-truth matches).\n\nThe several components of the final loss are balanced with metaparameters investigated in the paper. The main findings are: 1) the bigger the weight given to the global loss, the worse the local features are for object detection and semantic segmentation tasks; 2) the global criterion is relevant to improve the local features, suggesting that they benefit from a global understanding of the image.\n\nThe paper evaluates the global descriptors on classification and the local descriptors on object detection and semantic segmentation. It compares against relevant instances of the state of the art, and experiments show that the introduced optimization is indeed relevant for learning local features in a self-supervised way.\n\n[1] Vicreg: Variance-invariance-covariance regularization for self-supervised learning Strengths:\n- The problem tackled is relevant to the community: the joint learning of global and local features\n- The originality of the l2 distance loss: it pushes feature vectors that are close in the embedding space even though they are not positive matches.\n- Relevant state of the art\n- Relevant experiments (comparisons, analysis and ablation)\n- Very well-written paper: the organisation and presentation of the ideas is easy to follow, and the description and analysis of the experimental results are clear and valuable to the reader.\n\nWeaknesses:\n- The main contribution seems incremental with respect to VICREG. L49: \"contrasting feature vectors corresponding to far away locations can have a negative effect, as these vectors could have been pooled from locations that represent the same object in the image.\"\nThis is not clear. Does contrasting here mean pushing the feature vectors together? In this case, wouldn't one want to push features of the same object together? i.e. this would be a positive outcome?\n\nThe l2-distance (Eq 3) is not intuitive at first. It pushes together local features close in the embedding space even though they might not be ground-truth matches. Is there any intuition on why this is beneficial (as supported by the experiments)?\nOne intuition is that it might act as a regularizer or an implicit clustering.\nAnother is that the global criterion already pushes positive matches coarsely together and this loss refines the vectors. Are there any observations or intuition on that interesting phenomenon? - L107: It would be nice to introduce the notation of $\lambda, \mu, \nu$ in Eq 1 for completeness.\n\n- L118: $h_{\phi}^l$ is called a 'local projector' whereas it is the equivalent of the VICREG expander but at the scale of the feature map. 
Since L120 uses the terminology 'global expander', it would seem more consistent to use the word 'local expander' rather than projector. This would also be more consistent with VICREG, on which the paper relies.\n\n- L127, L135, 181: 'only the top-$\gamma$ pairs are kept'. It is not clear whether for each feature position, only the top-$\gamma$ are kept to derive the loss, or if the top-$\gamma$ among all pairs in the batch are kept (assuming top-$\gamma$ means the pairs with the smallest distance). This would help the reader if the definition were introduced at L127.\n\n- L127, L135, 181: Is there a motivation / intuition on why it is better to keep only the top-$\gamma$ pairs?\nTogether with the l2-distance loss, this could reinforce the intuition that the l2-distance loss acts as a regularizer/booster, as it would be applied only to feature vectors that are already close to each other, which one can assume to be ground-truth positives.\n\n- L306: a feature maps -> a feature map" ]
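As a companion to the matching discussion that runs through the reviews and responses above, here is a minimal NumPy sketch of the top-$\gamma$ selection the authors describe: for every local feature in the first view, find its nearest neighbour in the second view, then keep only the $\gamma$ closest of those N matches. All names here are our own illustrative assumptions, not the authors' implementation; for the location-based variant, the same routine would be run on the (known) pixel coordinates of each feature instead of the features themselves.

```python
import numpy as np

def top_gamma_matches(feats1, feats2, gamma):
    # feats*: (N, C) local feature vectors flattened from an H x W map.
    d = np.linalg.norm(feats1[:, None, :] - feats2[None, :, :], axis=-1)
    nn = d.argmin(axis=1)                 # best partner per view-1 vector
    nn_dist = d[np.arange(len(feats1)), nn]
    keep = np.argsort(nn_dist)[:gamma]    # keep only the gamma closest pairs
    return list(zip(keep.tolist(), nn[keep].tolist()))

# usage sketch: a 7x7 feature map with C=16 channels per view
feats1 = np.random.randn(49, 16)
feats2 = np.random.randn(49, 16)
pairs = top_gamma_matches(feats1, feats2, gamma=20)
```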
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "bWhBiQfpSQW", "4kP0SFotBe6", "7x3eOY9YjrE", "Tfdd3NDhUgU", "hfsIEFMuKz3", "0Cj_bsag4ON", "t0aTYzLHKA", "nips_2022_ePZsWeGJXyp", "nips_2022_ePZsWeGJXyp", "nips_2022_ePZsWeGJXyp", "nips_2022_ePZsWeGJXyp" ]
nips_2022_C0VKVmhlKgb
Bayesian Clustering of Neural Spiking Activity Using a Mixture of Dynamic Poisson Factor Analyzers
Modern neural recording techniques allow neuroscientists to observe the spiking activity of many neurons simultaneously. Although previous work has illustrated how activity within and between known populations of neurons can be summarized by low-dimensional latent vectors, in many cases what determines a unique population may be unclear. Neurons differ in their anatomical location, but also in their cell types and response properties. Moreover, multiple distinct populations may not be well described by a single low-dimensional, linear representation. To tackle these challenges, we develop a clustering method based on a mixture of dynamic Poisson factor analyzers (DPFA) model, with the number of clusters treated as an unknown parameter. To perform inference in the DPFA model, we propose a novel Markov chain Monte Carlo (MCMC) algorithm to efficiently sample from its posterior distribution. Validating our proposed MCMC algorithm with simulations, we find that it can accurately recover the true clustering and latent states and is insensitive to the initial cluster assignments. We then apply the proposed mixture of DPFA model to multi-region experimental recordings, where we find that the proposed method can identify novel, reliable clusters of neurons based on their activity, and may, thus, be a useful tool for neural data analysis.
Accept
The authors present a mixture of dynamic Poisson factor analyzers (sometimes called Poisson linear dynamical systems) model. The model itself is not especially novel (it seems closely related to Poisson switching linear dynamical systems) but the authors make up for it with a well-developed approach to Bayesian inference. They allow for unknown numbers of states with a mixture of finite mixtures model, and they handle the nonconjugacy of the Poisson-Gaussian model with a Metropolis-corrected Polya-gamma augmentation scheme. The empirical results do not show a clear improvement on Neuropixels recordings from the Allen Brain Observatory, but again the assessment is thorough. Overall, I think the paper conveys valuable findings and ideas, and lays a nice foundation for future work. I encourage the authors to incorporate and address the reviewers' feedback when preparing the final manuscript, and to consider expanding the discussion of how the Mix-DPFA model relates to Poisson SLDS.
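To make the model under discussion concrete for readers of this record, the following is our generative reading of a mixture of dynamic Poisson factor analyzers, assembled from the abstract and meta-review above. The notation, the simple AR(1) latent dynamics, and all parameter values are assumptions on our part, not the authors' specification (which treats the number of clusters as unknown and infers everything by MCMC).

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_clusters, p, T = 30, 3, 2, 100

z = rng.integers(n_clusters, size=n_neurons)        # cluster label per neuron
A = 0.95 * np.eye(p)                                # assumed latent dynamics
X = np.zeros((n_clusters, T, p))                    # per-cluster latent states
for j in range(n_clusters):
    for t in range(1, T):
        X[j, t] = A @ X[j, t - 1] + 0.1 * rng.normal(size=p)

C = rng.normal(scale=0.5, size=(n_neurons, p))      # loadings per neuron
d = rng.normal(loc=1.0, scale=0.3, size=n_neurons)  # baseline log firing rates
rates = np.exp(np.einsum("ip,itp->it", C, X[z]) + d[:, None])
Y = rng.poisson(rates)                              # spike-count matrix (N x T)
```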
train
[ "8Bkd2ncM--", "_gGDPGRfbtZ", "chPO2_UMwbG", "UGjE7cFR8pm", "kMY-PLTAdMDu", "TvaeALXzDw7", "rkFm9kksFip", "oyBrKEbhwIE", "bLslKcgePFE", "LubRU5yvC5" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for these additional comments.\n\nA comparison with switching models, looking at decoding by cluster, and examining the latent states in more detail would all be interesting directions for future work.\n\nFor low firing rates - we *include* 72% of neurons in the analysis. We've now tried to clarify in the text - it is not difficult or problematic to include low firing rate neurons, but we exclude them here for clarity.", " Thanks for these additional questions.\n\nWe do not currently include a comparison with cell types or a comparison between clusters and stimulus responses. However, these, along with more detailed look at the latent states, would be interesting directions for future work.\n\nFrom what we understand about the admixture assumption, it is most applicable in cases where each datapoint consists of multiple parts (e.g. genetic data). We have not seen it previously used in computational neuroscience, but, since the admixture assumption is potentially more flexible than the mixture model, maybe there is some useful application. Are there any specific references that the reviewer has in mind?\n\n", " Thank you for clarifying your contributions relative to MixPLDS, especially highlighting the identifiability. That is helpful context. If you have not revised your manuscript yet to make this contribution clearer in the manuscript, I would suggest doing that.\n\n> Beyond that, it depends on the specific data being fit. The clusters are interesting, largely because they represent a functional grouping that disagrees with the usual groupings based on anatomy or cell type. \n\nI agree that the clustering could be interesting way to explore optimal functional groupings, and I see the direct comparison to anatomy. However, I don't see a comparison to cell type--if that is in the manuscript, can you please point me to it?\n\n> The distinct low-d latent states that describe each cluster could potentially have a more direct relationship to circuit function or perception/behavior than a single global low-d description of the entire population.\n\nI agree that the clusters _could_ have a more direct relationship to circuit function, which is one of the hopes of mixture/admixture models used in the brain. The scientific utility of this manuscript would be stronger if there was clear evidence that this was the case, which I feel is missing in the current manuscript.\n\n> the main purpose is to do clustering. Our model can extract more structure (i.e. clustering structure), with improvement in terms of model fitting as the “byproduct” (at least not worse than current methods).\n\nThe clustering is interesting, but without evidence linking it to circuit function or perception/behavior (as the authors list above), this just seems a bit speculative without clear and strong predictive improvements.\n\nThe neuropixels data has associated visual stimuli. Would it be feasible to link the cluster responses to parts of the visual stimuli (maybe CCA between each cluster's neurons and the stimuli pattern)? That would provide a clearer biological link and provide me with stronger evidence on the utility of the method.\n\nI am remain interested on the author's view on a clustering vs admixture assumption. Why is clustering the correct assumption?", " I thank the authors for their answers and clarifications. \n\nThe answers about the clarity of the paper, the choice of prior, the use of cross-validation for selecting p, and the definition of \"anatomical cluster model\" are satisfactory. 
However, my major comments are still mostly unresolved and I am thus not willing to increase my score. I think the direction pursued by the authors of the paper is very interesting, but the model still needs more validation. \n \nAbout the change of state (related to tasks): I understand that the authors focus on clustering neurons, but many neurons' firing rates will be modulated by these states, and the animals do perform tasks in the recordings considered in this paper. I do think that it would be necessary to consider models such as the switching-LDS, or at least to evaluate whether the clustering is consistent over trials of the recordings and how the different states impact the clustering. This is related to my comment about robustness to spike sorting artifacts. If you split a neuron in half, are both halves clustered together? \nMoreover, does using these clusters give you higher accuracy on simple decoding tasks? What information do the clusters contain about the trials? \n\nAbout the fact that \"Low firing rates tend to cause confusions in clustering\":\n72% of the neurons in the recording is a lot, and discarding neurons based on firing rates is a huge assumption about the data. There is a priori no reason for a neuron to need a high firing rate to be relevant. Moreover, have you checked if some neurons have an overall low firing rate but a high firing rate for short periods of time (in which case they can contain important information about movement, for example)?\n\nAbout the \"speed and accuracy\" of the model. I understand that the model is (approximately) linear in the length of the recording, which is great! Would it allow performing online clustering? (faster than time)\n", " Thanks to the reviewer for their positive comments that the proposed model “is novel and very well-thought”. We have now tried to clarify several technical details.\n\n> However, it is a bit hard to grasp, and I feel like the section 2 could be a little more clear.\n\n**H** contains all the priors for the parameters theta of cluster j. The details of the prior H for all DPFA parameters can be found in each subsection of A.1 MCMC updates. H is included in (Fig 1C) to indicate the prior; however, the other parameters that describe the evolution of the latent states are excluded for clarity.\n\n> I would have also liked to gain more intuition on why you chose the different priors.\n\nFor the choice of k in the prior for the number of clusters, we now point out that for long recording times the likelihood will dominate the prior, so that the number of clusters is largely independent of the choice of prior.\n\n> You propose to use cross-validation for choosing p... \n\nWhen choosing p, we use the held-out log-likelihood (cross-validation). When we use a very large p (larger than the true p), we will overfit the data, so the held-out log-likelihood will begin to drop as p increases beyond the ground truth p.\n\n> What happens when the animal changes state, receives a stimulus, or starts doing a task?\n\nFor modeling state changes, such as the start of a task, it's right that the linear model may not be the best description of the state. Models such as the switching-LDS (SLDS, Fox, 2009 & Murphy, 2012) may be able to describe these effects, but here we focus on clustering neurons. \n\n> It would be great to add a visualization of the latent states throughout trials.\n\nWhen sampling the cluster labels, unlike with deterministic algorithms, it's somewhat difficult to visualize, since every sample changes the cluster memberships. 
However, this is something we are aiming to follow up with for future work, where the relationship between latent states and experimental variables can be explored in more detail.\n\n> Could you show results and comparisons, in both speed and accuracy for these three approaches?\n\nComparing the speed and accuracy of the approximations of the marginal likelihood for cluster membership (Laplace approximation, Gamma approximation, and 2nd-order polynomial approximation) is a nice idea, but somewhat beyond the scope of the paper at the moment. At least in our simulations, when evaluating the marginal likelihood by the 2nd-order polynomial [Keeley et al., 2019], it doesn't recover the ground truth cluster, no matter how we choose the approximation intervals (e.g. 1 interval for each neuron as suggested in their paper, or different intervals for both different neurons and time points). Both the Laplace approximation and the Gamma approximation used in our paper can recover the ground truth cluster. For more detail about the Gamma approximation, we refer readers to (El-Sayyad, 1973 & Chan and Vasconcelos, 2009).\n\n> The main flaw of the model, to me, is that \"Low firing rates tend to cause confusions in clustering\".\n\nThanks to the reviewer for pointing out the issue with low firing rate neurons. We now clarify to say that “Low firing rates and short recording lengths tend to cause confusions in clustering” because of weak information. We can handle the low firing rate case without a lot of confusion if we fit the model with a long recording length T. We view the “confusion” as a strength – our model can accurately reflect the uncertainty in cluster membership for neurons with little information. Just giving a single point estimate of clustering doesn't make a lot of sense when the observation is not rich.\n\nHere we only include neurons with firing rates >1Hz, and have now added explicitly that this was 72% of the neurons in the recording. The experimental results are shown here mostly as a proof-of-concept that this approach gives potentially interesting clusters, but we hope to follow up with a more extensive analysis that will evaluate the effects of firing rates, recording length, bin size, spike sorting errors, task, and brain state more comprehensively.\n\n> What exactly is the \"anatomical cluster model\" mentioned in the result section?\n\nWe have now modified the text to clarify that with this model the clusters are defined by the anatomical regions. The goal is to compare learned clusters with a typical experimental definition of subpopulations just based on anatomy. For each region, we use cross-validation to select p separately. However, we didn't do clustering for each region: neurons in each region are fitted by a single population DPFA. \n\n> Finally, MCMC models can be computationally expensive,...\n\nAs far as computation goes, cross-validation is not very cumbersome, since we select p by checking the trace of the held-out log-likelihood for short chains. The method is (approximately) linear in the length of the recording.\n\n\n\n", " Thanks to the reviewer for their positive comments that “this approach seems fairly original and significant”. Here, we clarified some specific points…\n\n> I would appreciate additional details and discussion on the efficacy and reliability of individual approximations...\n\nAs suggested, we have now added some additional detail and discussion on the approximations here. Using Poisson-gamma conjugacy (i.e. 
approximating the lognormal by a Gamma) when computing the marginal likelihood for the cluster labels works well in our case. We find that when comparing it with the Laplace approximation, the clustering results are essentially the same and recover the ground truth in simulation. We refer readers to El-Sayyad, 1973 & Chan and Vasconcelos, 2009 for more detailed results about efficacy and reliability.\n\n> The model itself is a reasonable model, but does not seem especially novel or significant given the existence of MixPLDS.\n\nIn comparing our model (mixDPFA) to mixPLDS, it's true that the model setting is quite similar. The main contribution is the inference. We try to emphasize the inference limitations of mixPLDS in the introduction: “1) it requires we predetermine the number of clusters, and 2) the clustering results are often sensitive to the initial cluster assignment.” The main contribution of this paper is providing solutions to these two inference problems. We also directly evaluate the accuracy of the Laplace approximation for the latent state by comparing the results to sampling of the exact posterior with Pólya-Gamma augmentation (Fig 5 and Table 1). Additionally, here we have explicitly addressed model identifiability. For clustering, the constraints on $\sum_{t=1}^{T}\mu_t^{(j)} = 0$ and $\sum_{t=1}^{T}\mathbf{x}^{(j)}_t = \mathbf{0}$ are essential, since inappropriate constraints will change the clustering results significantly.\n\n> One of the areas that would benefit from improvement is clarity on how this relates to a clear scientific challenge.\n\nWe have also tried to emphasize the scientific questions, as pointed out by the reviewer. The main scientific questions are “How is population neural activity structured in clusters? And what low-d latent descriptions are there for these clusters?” Beyond that, it depends on the specific data being fit. The clusters are interesting, largely because they represent a functional grouping that disagrees with the usual groupings based on anatomy or cell type. The distinct low-d latent states that describe each cluster could potentially have a more direct relationship to circuit function or perception/behavior than a single global low-d description of the entire population.\n\n> The results on real data, though, are one of the biggest weaknesses.\n\nIt's true that the improvement in held-out log likelihood is somewhat unremarkable. However, our model uses many fewer parameters, and the main purpose is to do clustering. Our model can extract more structure (i.e. clustering structure), with improvement in terms of model fitting as the “byproduct” (at least not worse than current methods).\n\n> In what real-world datasets could this method significantly outperform the baselines?\n\nIn the introduction, we mention how this approach builds on mixtures of (Gaussian) factor analyzers (MFA), and the situations where mixDPFA outperforms other models for neural data would be somewhat similar. Namely, when the data is “globally nonlinear” and well described by combining a number of local factor analyzers. 
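(As a reference point for the Poisson–Gamma approximation discussed above — stated with generic shape/rate symbols $a, b$ rather than the notation of the paper, purely for illustration — the standard conjugacy identity being exploited is

$$\int_0^{\infty} \frac{\lambda^{y} e^{-\lambda}}{y!} \cdot \frac{b^{a}}{\Gamma(a)}\, \lambda^{a-1} e^{-b\lambda} \, d\lambda \;=\; \frac{\Gamma(y+a)}{\Gamma(a)\, y!} \left(\frac{b}{b+1}\right)^{a} \left(\frac{1}{b+1}\right)^{y},$$

i.e., a negative binomial. This is why, once the lognormal is approximated by a Gamma, the marginal likelihood needed for cluster assignment is available in closed form; see the El-Sayyad and Chan & Vasconcelos references above for how the Gamma approximation itself is constructed.)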
As large-scale neuroscience experiments record from many brain regions at once, this approach may be useful for describing disparate brain regions.\n\n\n", " Thanks to the reviewer for their positive comments that the “paper is technically strong and is an important tool with potential broad use in neuroscience.” We have taken the reviewer's suggestion and now refer to our model as “mixDPFA” throughout the text and have fixed the issue with low resolution figures. We have also clarified many specific points…\n\n> Could you expand on how the Laplace approximation is being used in 122-124?\n\nThe Laplace approximation is being used to generate samples for the MCMC approach, and we have added a sentence around line 122 to make this more explicit.\n\n> An equation showing the P(x|..) ...\n\nThe equation for the full conditional distribution P(x|...) is in appendix A.1 (line 487).\n\n> Also it appears there is a typo in the third line of the Laplace approximation...\n\nThe typo in the third equation for the Laplace approximation in the appendix is fixed, thanks.\n\n> Can you also clarify the difference between how the marginal likelihood in (3), ...\n\nWe modified the text in the “update z_i” section of the appendix to clarify the point about non-conjugacy.\n\n> A few more sentences providing background on the MFM approach might also help clarify this section.\n\nThe motivation and background for MFM are discussed in section 2.2 and lines 257-259.\n\n> What exactly is the form of H?\n\n**H** contains all the priors for the cluster parameters (j), which will be complicated when we write things out. The details of the prior H for all DPFA parameters can be found in each subsection of A.1 MCMC updates.\n\n> One limitation that I believe is more significant than the authors state is the selection of p being constant across subpopulations.\n\nWe also now expand the discussion about “the selection of p being constant across subpopulations.” It's true that “groups of neurons across brain regions very clearly do not have the same dimensionality”. The assumption could certainly limit the accuracy of the model, and we make it more clear that this is an important future direction for the mixDPFA approach.\n", " The authors propose a model to identify distinct functional clusters of neurons in neural population activity without an a priori determined number of clusters. Each cluster is characterized by a latent LDS and mapped through an exponential nonlinearity to Poisson rates. The prior over the number of clusters and cluster assignment is computed efficiently, borrowing from recent work on a mixture of finite mixture (MFM) model. The authors use a bespoke procedure for using MCMC to evaluate posteriors for their model, which involves a Laplace approximation to estimate the population states, and a Poisson-Gamma conjugacy to marginalize over the latent loadings $c_i$ to evaluate the probability of a neuron belonging to a given cluster. The authors show results on simulated data where the true number of clusters is known, and then evaluate the model on neuropixel recordings. They show on the neuropixel data that the clusters identified using their method outperform anatomical clustering or a single cluster on held-out data. This work is an important step in identifying functional groups of neurons in large multi-region neural population recordings. This paper is technically strong and is an important tool with potential broad use in neuroscience. 
The addition of a prior over cluster assignment was a non-trivial ML contribution and a significant step in understanding increasingly common multi-region datasets with large numbers of neurons. However, there are many pieces to the inference procedure and the appendix has a lot going on. The authors reference many existing approaches and it can be hard to parse the specific approach used in this work.\n\n Some more detailed clarification on the inference would be helpful. Could you expand on how the Laplace approximation is being used in 122-124? More organization in the introductory part of the inference section would, I think, go a long way. An equation showing the P(x|..) being approximated by Laplace would be helpful. Also, it appears there is a typo in the third line of the Laplace approximation in the appendix (floating closed parentheses).\n\nCan you also clarify the difference between how the marginal likelihood in (3), evaluated using a conjugacy using Poisson and inverse gamma distributions, relates to the nonconjugacy in the \"update $z_i$\" section of the appendix? Some reference to equation (3) from the main paper in this section of the appendix would also be helpful for clarification.\n\nA few more sentences providing background on the MFM approach might also help clarify this section. What exactly is the form of H? \n\nI think giving the specific model here a name would be useful. This would help the authors more easily reference their specific approach in the paper and later allow future researchers to succinctly refer to the work. \n\nThe resolution of the figures is low. Zooming in to see detail is difficult, particularly in 2D. One limitation that I believe is more significant than the authors state is the selection of p being constant across subpopulations. The authors briefly mention ways around this issue in the discussion. I think it is important to more directly point this out as a limitation -- groups of neurons across brain regions very clearly do not have the same dimensionality (low-level visual areas and higher-level decision-making areas, e.g.). This model is only able to isolate subpopulations with this specific constrained feature. \n\nNo potential negative social impact of this work.
This approach seems fairly original and significant, although I would appreciate additional details and discussion on the efficacy and reliability of individual approximations (e.g., how close is the gamma-poisson distribution to the lognormal?).\n\nThe model itself is a reasonable model, but does not seem especially novel or significant given the existence of MixPLDS. There are changes and advances, but they seem relatively minor. It would be helpful if the authors could highlight their particular advances more clearly, and explicitly discuss how this improves the model.\n\nOne of the areas that would benefit from improvement is clarity on how this relates to a clear scientific challenge. How the neurons cluster is potentially interesting, but I believe that the manuscript would be greatly improved by focusing on a clear scientific question that this methodology can help answer.\n\nThe results on real data, though, are one of the biggest weaknesses. In the real data, the proposed method does not fit the data better based on log likelihood than a clustering based on known anatomy, and is barely better than not using a clustering approach at all. It is difficult for me to motivate using this approach based on those results, especially as other competing approaches such as deep RNNs (e.g., LFADS) do tend to fit the data better.\n What scientific question will this methodology help answer?\n\nIn what real-world datasets could this method significantly outperform the baselines?\n\nAssuming discrete clustering of neurons is a strong assumption. To my knowledge, most neurons participate in multiple subnetworks of the brain. Why are the authors considering an explicit clustering of neurons rather than an admixture, which may be more biologically realistic? No concerns.
I understand that it corresponds to h, but in this case shouldn't there be an additional line in Equation 1 for mu and H? \n\nI would have also liked to gain more intuition on why you chose the different priors. For example, I understand the use of a Geometric distribution as the prior on k, and that the mean of this prior should increase when we want to get more clusters, but an experiment showing the different results and recommended parameters with different datasets, probes, tasks, or numbers of brain regions recorded would help a lot. \n\nYou propose to use cross-validation for choosing p. I assume you optimize over the log-likelihood. Won't the log-likelihood necessarily increase as you use higher values for p? What criterion is used? \n\nThe latent states are chosen to evolve linearly over time with Gaussian noise. This makes sense when the animal is not doing any tasks. What happens when the animal changes state, receives a stimulus, or starts doing a task? It would be great to add a visualization of the latent states throughout trials. \n\nIn section 2.3, you mention that \"Though we may evaluate it by a Laplace approximation, but iterating over all potential clusters for each neuron is computationally intensive. To make faster clustering, we approximate the marginal likelihood by utilizing a Poisson-Gamma conjugacy.\" and \"Another possible idea is to approximate the log-likelihood by second-order polynomials, with coefficients determined by Chebyshev polynomial approximation [Keeley et al., 2019]. However, we find that this approximation doesn't work well in practice when spike counts have a wide range.\". Could you show results and comparisons, in both speed and accuracy, for these three approaches? \n\nThe main flaw of the model, to me, is that \"Low firing rates tend to cause confusions in clustering\". Low-firing rate neurons can encode crucial information in the brain, and artificially removing neurons with rates < 1Hz in the experiments does not seem to have any biological justification; it is simply a way to get nicer results, as the firing rates will be more uniform. Some neurons can also be strongly associated with a state of the animal and have an overall low firing rate, as they fire strongly only during specific tasks. \nCould this be due to the choice of prior on lambda (and more specifically on delta/mu?)? The prior on delta is not specified (unless I missed it), but this could maybe help and solve this issue? \nMoreover, the results with low-firing rate units are not even included in the paper. It is essential to include them (at least in the appendix) to have a sense of how they contaminate the clustering results. \nWhat proportion of the neurons are discarded when keeping only neurons with FR>1Hz?\n\nThe low firing rate neurons can also be due to artifacts of spike sorting. This is especially true in Neuropixels datasets, where neurons can be split and thus have their firing rate cut in half (or more). There are many quality metrics developed for the output of spike sorting (Hill et al. (2011), Harris et al. (2001), Schmitzer-Torbert and Redish, J Neurophy (2004), Chung et al. (2017)...). I think using such metrics to discard \"bad\" units would be much more appropriate (as is done in Reproducibility of in-vivo electrophysiological measurements in mice, International Brain Laboratory (2022), for example).\nIt would also be interesting to look at how the clusters are impacted by spike sorting artifacts. If you split a neuron in half, are the two halves clustered together? 
How are the clusters impacted by electrode drift? You need to make sure the clustering algorithm is robust to these sources of noise. \n\nWhat exactly is the \"anatomical cluster model\" mentioned in the results section? Are you running your clustering model on each region independently, using different values for p? (p=[1,11,5,3]). How do you choose the number of clusters per region? Do you use cross-validation on each region? If you cluster neurons from each region independently, are the neurons clustered together also clustered together when clustering all neurons together? This would be a very interesting experiment to show in the paper.\n\nFinally, MCMC models can be computationally expensive, and your model requires (5-fold) cross-validation. How does it scale with the number of neurons and the length of the recording? I don't see any potential negative societal impact of this work." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "UGjE7cFR8pm", "chPO2_UMwbG", "TvaeALXzDw7", "kMY-PLTAdMDu", "LubRU5yvC5", "bLslKcgePFE", "oyBrKEbhwIE", "nips_2022_C0VKVmhlKgb", "nips_2022_C0VKVmhlKgb", "nips_2022_C0VKVmhlKgb" ]
nips_2022_hX5Ia-ION8Y
MCVD - Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation
Video prediction is a challenging task. The quality of video frames from current state-of-the-art (SOTA) generative models tends to be poor and generalization beyond the training data is difficult. Furthermore, existing prediction frameworks are typically not capable of simultaneously handling other video-related tasks such as unconditional generation or interpolation. In this work, we devise a general-purpose framework called Masked Conditional Video Diffusion (MCVD) for all of these video synthesis tasks using a probabilistic conditional score-based denoising diffusion model, conditioned on past and/or future frames. We train the model in a manner where we randomly and independently mask all the past frames or all the future frames. This novel but straightforward setup allows us to train a single model that is capable of executing a broad range of video tasks, specifically: future/past prediction -- when only future/past frames are masked; unconditional generation -- when both past and future frames are masked; and interpolation -- when neither past nor future frames are masked. Our experiments show that this approach can generate high-quality frames for diverse types of videos. Our MCVD models are built from simple non-recurrent 2D-convolutional architectures, conditioning on blocks of frames and generating blocks of frames. We generate videos of arbitrary lengths autoregressively in a block-wise manner. Our approach yields SOTA results across standard video prediction and interpolation benchmarks, with computation times for training models measured in 1-12 days using $\le$ 4 GPUs. Project page: \url{https://mask-cond-video-diffusion.github.io} Code: \url{https://mask-cond-video-diffusion.github.io/}
Accept
The paper proposes the use of a diffusion model for masked video modeling and shows promising results in video generation and completion. All of the reviewers agree that the paper is a good fit for publication at NeurIPS. I appreciate that the authors engaged with the reviewers and improved the paper!
val
[ "9PlTjdV9Mgo", "BBzbHKYzo_d", "OQSuyrQ2kZW", "4y67BdnXS0A", "9Dg0poRrsnr", "iAI4Si7xRQH", "4tqXOs_c6L0", "OVsqmx3puN1", "S_GmF8NtovW", "h48DDvR1pKF", "N_74gltQjN4", "cNMs3nbxeO2", "xz2yaLp8Np", "HBAa01cW2Ow" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As asked by the reviewers, we updated the paper with the revisions and highlighted the changes.\n\nNote that we do not yet have the IS and FID results because we were delayed due to problems with the chainer and cupy packages dependencies and we had to contact the authors for further help. We will post the results once we obtain them if this is allowed by the openreview system, otherwise they will be included in the camera-ready version.\n", " I've read the response from the author and most of my questions are solved.\n\nThe proposed method seems to have marginal contributions over the previous works. However, the presented materials are straightforward to understand and the proposed framework for integrated video generation-related tasks is novel and promising for future research direction.\nI hope that the final version of the paper includes more analysis and discussion, such as more evaluation metrics, characteristics of used datasets, and different conditional frames for training, than the current version because the experiments are conducted on a relatively small scale.\n\nThank you.", " Could you provide a revised manuscript with all the changes made highlighted in a different color?\n", " I look forward to reading the revised manuscript.\n\nIn particular, a comprehensive table comparing FID&IS results to all prior art on the associated benchmark datasets.\n\nI also expect it to be made clear, when working on 64x64 vs 128x128 video generation.\n\nThe comparison to \"Deep Video Generation, Prediction and Completion of Human Action Sequences - CVPR 2018\" was very insightful.\n\nRegarding the impact statement.. it is inaccurate to say that you do not alter video when interpolation does just that.\nplease consider rephrasing the second sentence to something like:\n\n__\"Our formulation focuses on capturing the distributions of real video sequences.\"__\n\n\nIf the authors address all my other concerns (and those raised by other reviewers) in their updated manuscript, i am willing to further reflect that in my review score.", " We thank all the reviewers for taking the time and effort to provide their detailed feedback on our submission! We have taken note of every comment, and all of them have contributed to making our submission even better. We are happy to note all the positive comments from the reviewers:\n\n> - This paper is well written, clear\n> - Paper is well written and is easy to follow\n> - Proposed method is novel\n> - Combines all three (prediction, generation, interpolation) within one architecture, unlike prior/concurrent works\n> - Validates the versatility of the proposed architecture on several tasks --- unconditional generation, prediction, and interpolation\n> - Shows great and promising results on video tasks\n> - Compared to many other models in conditional and unconditional setups\n> - The proposed approach is more computational efficient than concurrent works in the literature\n> - More efficient blockwise generation mechanism and the ability to interpolate between frames\n> - Shows better performance even though it was trained on fewer frames at a time\n> - Demonstrates its capability on generating high quality videos\n\nWe believe the primary criticism can be attributed to our paper not clearly emphasizing a few key contributions. We hope to clarify these contributions and address the reviewers' concerns below. We had already provided many comparisons to many other papers for multiple datasets. 
Nevertheless, we will now also provide the FID and IS metrics (since some papers only report on the FID and IS, and not FVD) as requested by reviewer LFuX.\n\nWe recommend the reviewers also check our appendix in the supplementary material. We provided a detailed report in Table 8 on the computational resources used for each of the datasets: number of parameters, CPU memory used, batch size, GPU, GPU memory used, number of training steps, and GPU hours. We also provided an additional ablation study in Table 9, as well as some qualitative examples.\n\nWe also refer the reviewers to our qualitative results presented in an HTML file in the supplementary material. It contains several videos generated using our model, showcasing its effectiveness in all three tasks of Video Prediction, Generation, and Interpolation.", " > How do smaller or larger numbers of frames affect the performance of the proposed algorithm?\n\nThat's a great question! We shall mention the following in the revised manuscript:\n\nWe had conducted preliminary experiments with a larger number of frames. Since the models with a larger number of frames were bigger, we could only run them for a shorter time with a smaller batch size than the smaller models. In general, we found that larger models did not substantially improve the results. We attribute this to the fact that using more frames means that the model should be given more capacity, but we could not increase it due to our computational budget constraints. We emphasize that our method works very well with fewer computational resources.\n\n> What is the diversity of the generated videos? Most works report fidelity metrics, such as FVD, FID, and IS.\n\nThat's a great question! It has been shown in [1, 2] that FID is a good measure of BOTH fidelity and diversity, while IS is a good measure of fidelity but not diversity. It has been shown that the properties of FID transfer to FVD [3], since they are both computed using the same principles. Hence, we considered FVD as an effective indicator of diversity.\n\n[1] Large Scale GAN Training for High Fidelity Natural Image Synthesis\n[2] An empirical study on evaluation metrics of generative adversarial networks \n[3] Towards Accurate Generative Models of Video: A New Metric & Challenges\n\n> The diversity metric introduced in [1] would be good. [1] Diverse Video Generation using a Gaussian Process Trigger, ICLR 21\n\nThank you for the suggestion! As mentioned above, FVD itself is a good measure of diversity and is a standard metric reported by several prior works. Hence, we use FVD itself as the diversity metric, and show that our videos are indeed quite diverse. This can be qualitatively checked in the video samples in the appendix.\n\n> Time coverage of snippets.\n> Several works typically count the number of frames, not the time span measured in seconds. It would be good to report the time span of the video snippets for generation, prediction and interpolation.\n\nThat's an interesting point. The time span of each video depends on the dataset; different datasets have been recorded at different frame rates. However, an advantage of our proposed method is that its effectiveness does not rely on frame rate; it can interpolate frames to even increase the frame rate. So the number we chose to report is the number of frames and not the time span.\n\n---\n\nWe hope we have sufficiently answered all of the reviewer's comments point by point, and are happy to engage further on more questions! 
Considering this, we hope the reviewer increases their rating of our paper, and champions it for publication at this conference.", " \nWe thank the reviewer for taking the time and effort to provide their detailed feedback on our submission! We are happy to note all the positive comments from the reviewers:\n\n> - This paper is well written, clear\n> - Paper is well written and is easy to follow\n> - Proposed method is novel\n> - Combines all three (prediction, generation, interpolation) within one architecture, unlike prior/concurrent works\n> - Validates the versatility of the proposed architecture on several tasks --- unconditional generation, prediction, and interpolation\n> - Shows great and promising results on video tasks\n> - Compared to many other models in conditional and unconditional setups\n> - The proposed approach is more computationally efficient than concurrent works in the literature\n> - More efficient blockwise generation mechanism and the ability to interpolate between frames\n> - Shows better performance even though it was trained on fewer frames at a time\n> - Demonstrates its capability of generating high quality videos\n\n### Response:\n\nWe believe the primary criticism can be attributed to our paper not clearly emphasizing a few key contributions. We hope to clarify these contributions and address the reviewers' concerns below.\n\n> The scale of the datasets used in the experiments is small compared to the other generation methods these days.\n> The experiments only contain small-scale datasets. How about applying the proposed algorithm to a larger scale dataset, such as Kinetics, Something-something v2? Is there any bottleneck of the proposed algorithm except the amount of computation?\n\nThat's a great question! We would be more than happy to conduct experiments on much larger scale datasets. However, we chose to start small, investigate the effectiveness of our method, and progressively grow bigger. You can observe this in the order of datasets we chose:\n1) SMMNIST (64x64) : black and white, simple digits moving in constrained spaces\n2) KTH (64x64) : grayscale, single humans with specific actions\n3) BAIR (64x64) : color, robots moving in a constrained environment, with multiple objects\n4) CityScapes (128x128) : color, higher resolution, a car moving in complex scenes\n5) UCF101 (64x64) : color, large variety of actions (101!)\n\nAs well as in the tasks we chose:\n1. Video Prediction,\n2. Video Prediction + Generation,\n3. Video Prediction + Generation + Interpolation.\n\nHence, the next logical step is indeed to scale this up! We also want to remind the reviewers that for all the above experiments, we only used 4 GPUs at a time, and queuing for these GPUs can take many days, if not sometimes weeks! We are surely moving towards trying our method on larger datasets. However, in the meantime, we would like to let the research community take advantage of the significant effort we've made so far in showing the merits of our proposed method on all the above datasets and tasks.\n\n> The disadvantage of non-recurrent and sliding window sampling is not explained. The word ‘non-recurrent’ only appears in the abstract.\n> An algorithm with non-recurrent, sliding-window sampling may be weak in this case (scaling to larger datasets) compared to recurrent counterparts.\n\nThat's a great question! 
We shall include the following explanation in the revised manuscript:\n\nPrevious video prediction models were typically either 1) non-recurrent: predicting all $n$ frames simultaneously with no way of adding more frames (most GAN-based methods), or 2) recurrent in nature: predicting 1 frame at a time in an autoregressive fashion. Generating a block of frames at a time offers a middle ground: it is faster than generating 1 frame at a time while still allowing for generating as many frames as needed, at the cost of being slower than generating all frames at once and using more memory and compute per iteration than single-frame generation. Our model occupies exactly this sweet spot in between, in that it is block-autoregressive: generating $k<n$ frames at a time recurrently to finally obtain $n$ frames.\n\n> the proposed model seems to show better performance on lower $k$ as explained in lines 257~260.\n\nThat's exactly right! The proposed method generates better quality videos, even though it was trained on a smaller number of frames than other methods.", " > There is limited analysis and ablation studies.\n> it is not clear how the design choices affect its results\n\nThanks for pointing this out! We do have ablation studies; we shall make them much clearer in the revised manuscript, specifically:\n\nTable 3 in the paper provides an ablation study varying the number of past frames conditioned on, as well as the length of the final video, and whether we use a Prediction-specific model or a model trained on Prediction+Generation (past-mask). It can be seen that our model works across different choices of past frames and generates better quality for shorter videos. This is expected from models of this kind. Moreover, it can be seen that the model trained on the two tasks of Prediction and Generation (i.e., the models with past-mask) performs better than the model trained only on Prediction!\n\nIn addition, the appendix contains an ablation study in Table 9 on the different design choices: concat vs concat past-future-mask vs spatin vs spatin future-mask vs spatin past-future-mask. It can be seen that concat is, in general, better than spatin. It can also be seen that the past-future-mask variant, which is a general model capable of all three tasks, performs better at the individual tasks than the models trained only on the individual task. This was demonstrated in Table 3 as well. This shows that the model gains very helpful insights while generalizing to all three tasks, which it does not while training only on the individual task.\n\n> why the masking should be random for inpainting?\n\nThat's a great question! Allow us to clarify:\n\nTo be clear, the masking is only random during training. The masking is not random at inference time for the individual task of Interpolation. \n\nIn general, we found that training using random masking is better than without. See Table 3 for this ablation on Prediction (as mentioned in the above answer). See below for results on pure Interpolation without masking during training, and Table 7 with masking during training.\n\n> In the absence of any ablations, I can argue that a fixed set of conditioned frames is a much better defined problem and may result in even higher quality predictions. \n\nOur experiments showed that models trained only on Interpolation with a fixed set of conditioned frames (SMMNIST psnr: 18.3847, ssim: 0.8023) performed worse than models trained with randomly masked conditioned frames (SMMNIST psnr: 20.944, ssim: 0.854); a small illustrative sketch of this random masking scheme is given after this paragraph. 
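For concreteness, here is a minimal sketch of the random past/future masking and the block-autoregressive sampling described above. This is an illustration only: the function names, tensor shapes, and the toy noise schedule below are placeholder assumptions for exposition, not our released implementation.

```python
import torch

def training_step(denoiser, past, current, future, p_mask=0.5):
    """One mask-conditional denoising step (shapes: B x frames x C x H x W)."""
    B = current.shape[0]
    # Independently decide, per example, whether to zero out ALL past frames
    # and whether to zero out ALL future frames:
    #   both masked    -> unconditional generation
    #   future masked  -> future prediction; past masked -> past prediction
    #   neither masked -> interpolation
    drop_past = (torch.rand(B, 1, 1, 1, 1) < p_mask).float()
    drop_future = (torch.rand(B, 1, 1, 1, 1) < p_mask).float()
    cond = torch.cat([past * (1.0 - drop_past),
                      future * (1.0 - drop_future)], dim=1)

    t = torch.rand(B)                    # noise level in (0, 1); toy schedule
    sigma = t.view(B, 1, 1, 1, 1)
    noise = torch.randn_like(current)
    noisy = current + sigma * noise      # corrupt the middle block of frames
    pred = denoiser(noisy, cond, t)      # network predicts the added noise
    return ((pred - noise) ** 2).mean()  # simple denoising objective

@torch.no_grad()
def sample_long_video(sample_block, init_past, n_blocks):
    """Block-autoregressive generation: k frames per call, recurrently.

    `sample_block(past)` is assumed to run the full reverse diffusion for one
    block of k frames conditioned on `past`, with the future input masked.
    """
    past, blocks = init_past, []
    for _ in range(n_blocks):
        block = sample_block(past)       # (B, k, C, H, W)
        blocks.append(block)
        past = block                     # slide the conditioning window
    return torch.cat(blocks, dim=1)      # (B, n_blocks * k, C, H, W)
```

The point of the sketch is that a single denoiser sees all four conditioning patterns during training, so one set of weights can serve prediction, generation, and interpolation at test time.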
We shall add this to the main paper.\n\nAs pointed out above, we do have ablation studies in both Table 3 in the main paper as well as Table 9 in the appendix that tackle this very question. They show that a model trained with a fixed set of conditioned frames performs worse than the model trained with randomly masked conditional frames. We shall make this clearer in the revised manuscript.\n\n> Another angle is computational issues, which are mostly ignored in the paper. One major issue in video prediction literature is the huge number of parameters required for getting good results. How is the situation in a score based model?\n\nThanks for the great question! We humbly submit that we, in fact, emphasize that we were working with limited resources and that all of our great results took much less time, memory, and GPU hours, and fewer GPUs, than many prior/concurrent works. We provide a thorough report of the computational resources used for each of the datasets (number of parameters, CPU memory used, batch size, GPU, GPU memory used, number of training steps, GPU hours) in Table 8 in the appendix.\n\n> Does a bigger model substantially improve the result or not?\n\nThat's a great question! We had conducted preliminary experiments with larger models, but due to our limited computational budget constraints, we could only run them for a shorter time with a smaller batch size than the smaller models. In general, we found that bigger models did not substantially improve the results. We emphasize that our method works very well with fewer computational resources.\n\n> Limited qualitative results. Given the nature of videos, they are hard to compare on printed paper. I highly recommend the authors host predicted videos on an anonymous website\n\nWe hoped you would ask this! We point the reviewer to the HTML page in the supplementary material, which is an anonymized website we created to showcase our great results! Simply download and extract the supplementary material and open the HTML page. It should show you many videos from all our datasets and all the tasks.\n\n---\n\nWe hope we have sufficiently answered all of the reviewer's comments point by point, and are happy to engage further on more questions! We highly encourage the reviewer to go through the supplementary material, since many of their questions are already answered there. Considering this, we hope the reviewer increases their rating of our paper, and champions it for publication at this conference.", " We thank the reviewer for taking the time and effort to provide their detailed feedback on our submission! 
We are happy to note all the positive comments from the reviewers:\n\n> - This paper is well written, clear\n> - Paper is well written and is easy to follow\n> - Proposed method is novel\n> - Combines all three (prediction, generation, interpolation) within one architecture, unlike prior/concurrent works\n> - Validates the versatility of the proposed architecture on several tasks --- unconditional generation, prediction, and interpolation\n> - Shows great and promising results on video tasks\n> - Compared to many other models in conditional and unconditional setups\n> - The proposed approach is more computationally efficient than concurrent works in the literature\n> - More efficient blockwise generation mechanism and the ability to interpolate between frames\n> - Shows better performance even though it was trained on fewer frames at a time\n> - Demonstrates its capability of generating high quality videos\n\n### Response:\n\nWe believe the primary criticism can be attributed to our paper not clearly emphasizing a few key contributions. We hope to clarify these contributions and address the reviewers' concerns below.\n\n> Limited datasets. Although the authors tested the proposed method on multiple datasets, I find the choice of datasets to be narrow. In both prediction and unconditional generation, the datasets are limited to smaller-size datasets (BAIR and UCF) and there is no experiment on larger datasets such as RoboNet or Kinetics.\n\nThat's a great question! We would be more than happy to conduct experiments on much larger scale datasets. However, we chose to start small, investigate the effectiveness of our method, and progressively grow bigger. You can observe this in the order of datasets we chose:\n1) SMMNIST (64x64) : black and white, simple digits moving in constrained spaces\n2) KTH (64x64) : grayscale, single humans with specific actions\n3) BAIR (64x64) : color, robots moving in a constrained environment, with multiple objects\n4) CityScapes (128x128) : color, higher resolution, a car moving in complex scenes\n5) UCF101 (64x64) : color, large variety of actions (101!)\n\nAs well as in the tasks we chose:\n1. Video Prediction,\n2. Video Prediction + Generation,\n3. Video Prediction + Generation + Interpolation.\n\nHence, the next logical step is indeed to scale this up! We also want to remind the reviewers that for all the above experiments, we only used 4 GPUs at a time, and queuing for these GPUs can take many days, if not sometimes weeks! We are surely moving towards trying our method on larger datasets. However, in the meantime, we would like to let the research community take advantage of the significant effort we've made so far in showing the merits of our proposed method on all the above datasets and tasks.\n\n> This makes me wonder if the model is heavily overfitting on the training data and there is not much generalization.\n\nThanks for the great question! We shall make this clearer in the revised manuscript: \n\nAll of our metrics are computed on test data that was NEVER seen by the model during training. We do not report any metrics on the training data; we have strong results on never-before-seen data. So our model is definitely not overfitting, and is definitely generalizing quite well!\n\n> The FitVid paper (Babaeizadeh et al.) has an interesting discussion on how overfitted methods can generate realistic high quality videos but are not capable of generating anything new.\n\nThat is an interesting point made in the FitVid paper. 
However, it is not applicable in our case since our model is not overfitting to the training data in the first place.\n", " > novelty is limited when compared to prior art or GAN-based approaches (e.g. Deep Video Generation, Prediction and Completion of Human Action Sequences - CVPR 2018)\n\nWe respectfully emphasize that our work is not similar to the mentioned paper (see below for the differences). Our approach is more general compared to the cited work (it does not rely on specialized skeleton generation as an intermediate representation) and provides new capabilities in that sense (it works on arbitrary video, not just humans). Moreover, the mentioned paper requires additional optimization while ours does not.\n\nOur novelty is in using score-based diffusion models to solve all three video tasks (prediction, generation, interpolation) simultaneously. We ask the reviewer not to ignore the fact that diffusion models are very different from GANs in many respects.\n\n| Deep Video Generation, Prediction and Completion of Human Action Sequences - CVPR 2018 | Ours |\n| -------- | -------- |\n| Uses a GAN. | Uses conditional score-based denoising diffusion models. |\n| Shows results only on 1 dataset: Human3.6M. | Shows results on 5 datasets: SMMNIST, KTH, BAIR, CityScapes, UCF101. |\n| Subtracts all the backgrounds, and focuses on the foreground human figure only. | Generates full images of complex scenes. |\n| First generates human pose sequences, then transfers them onto human bodies in pixel space. | Does not have an intermediate stage of human pose sequences, and thus is not constrained to human-only videos. |\n| Performs prediction and interpolation using randomized optimization on the latent space. | Does not need any further optimization; the same single model is capable of all three tasks simultaneously (as mentioned by the reviewer). |\n\n\n\n> Could you improve the UCF-101 benchmark by making the experimental procedure clearer\n\nThank you for the great suggestion! We followed the same procedure as mentioned in Appendix A.2 of the DIGAN paper. We shall mention this explicitly in the revised manuscript.\n\n\n\n> provide FID and IS results on this dataset, appropriately presented to allow for easy comparison to prior work such as progressiveVGAN, LDVD-GAN/TGAN-F, TGAN-ODE, etc.\n\nAs mentioned earlier, we are working on computing FID and IS. We aim to have these results by the end of the discussion period. We will include them in a response and in the revised manuscript (along with further comparisons).\n\n> Two generic sentences on broader impact is unacceptable\n\nThank you for bringing this to our notice. We plan to mention the following as part of the broader impact:\n\n*High-quality video generation is potentially a powerful technology that could be used by malicious actors for applications such as creating fake video content. Our formulation here, however, focuses on capturing the distributions of real video sequences and not on modifying or altering video. High-quality video prediction could one day find use in applications such as autonomous vehicles, where the cost of errors could be high. Interestingly, diffusion methods have shown great promise for covering the modes of real probability distributions compared to other generative modelling techniques. 
In this context, diffusion-based techniques for generative modelling may be a promising avenue for future research where the ability to capture modes properly is safety critical.*\n\n*Another potential point of impact is the amount of computational resources being spent on these applications, given the high-fidelity and voluminous nature of video data. We emphasize the use of limited resources in achieving better or comparable results. Our submission provides evidence for more efficient computation involving fewer GPU hours spent in training time.*\n\nIf the reviewer feels that other key issues should be discussed, we are happy to include additional statements and discuss any other issues as appropriate.\n\n---\n\nWe hope we have sufficiently answered all of the reviewer's comments point by point and are happy to engage further on more questions! Considering this, we hope the reviewer increases their rating of our paper.", " We thank the reviewer for taking the time and effort to provide their detailed feedback on our submission! We are happy to note all the positive comments from the reviewers:\n\n> - This paper is well written, clear\n> - Paper is well written and is easy to follow\n> - Proposed method is novel\n> - Combines all three (prediction, generation, interpolation) within one architecture, unlike prior/concurrent works\n> - Validates the versatility of the proposed architecture on several tasks --- unconditional generation, prediction, and interpolation\n> - Shows great and promising results on video tasks\n> - Compared to many other models in conditional and unconditional setups\n> - The proposed approach is more computationally efficient than concurrent works in the literature\n> - More efficient blockwise generation mechanism and the ability to interpolate between frames\n> - Shows better performance even though it was trained on fewer frames at a time\n> - Demonstrates its capability of generating high quality videos\n\n### Response:\n\nWe believe the primary criticism can be attributed to our paper not clearly emphasizing a few key contributions. We hope to clarify these contributions and address the reviewers' concerns below.\n\n> misses comparisons to prior video GAN based work\n\nWe address specific points below.\n\n> prior work utilises FID and IS as metrics\n\nWe initially did consider FID and IS for comparison; however, since FID and IS are image-based metrics while FVD is video-based, we opted to measure FVD. Note that FVD is essentially the FID computation carried out with a feature extractor that takes video input: an I3D network trained on the large video dataset Kinetics-400. We refer to the FVD paper [1] for more details on FVD.\n\nNevertheless, we are working on computing FID and IS. We aim to have these results by the end of the discussion period. We will include them in a response and in the revised manuscript (along with further comparisons to other work which only computed FID/IS).\n\n[1] FVD: A new metric for video generation; workshop paper at ICLR 2019 https://openreview.net/pdf?id=rylgEULtdN\n\n> limited to 64x64 generation, many GAN based approaches are more performant and efficient at this resolution\n\nWe actually use 128x128 for Cityscapes and 64x64 for all other datasets.\n\nWe ask the reviewer not to ignore that score-based denoising diffusion models are very different from GANs. 
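(Returning briefly to the metric question above, for reference: FID and FVD report the same quantity — the Fréchet (2-Wasserstein) distance between Gaussian fits of the features of real and generated data,

$$d^2 = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_g - 2\left(\Sigma_r \Sigma_g\right)^{1/2}\right),$$

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ denote the feature means and covariances of the real and generated samples. The only difference between the two metrics is the feature network: a 2D Inception network over images for FID versus the I3D network over clips for FVD. This is the standard formula, stated here only for reference.)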
GANs have their own advantages and disadvantages; for example, GANs are very hard to train and have trouble with generating diverse data compared to denoising diffusion models [1, 2, 3, 4]. Even if GAN-based approaches are more efficient at this resolution, we don't see that as a weakness of our submission.\n\n[1] Tackling the generative learning trilemma with denoising diffusion gans, https://arxiv.org/abs/2112.07804\n[2] Pacgan: The power of two samples in generative adversarial networks\n[3] Towards principled methods for training generative adversarial networks \n[4] A closer look at the optimization landscapes of generative adversarial networks\n\n\n> The 64x64 generation should be made explicitly clear in any results/discussions\n\nThank you for the great suggestion! We currently mention the resolution in the appendix and the processing pipeline in the code, both of which are provided in the supplementary. However, we agree that it should be stated in the main paper and we will add this information into the manuscript. We actually use 128x128 for Cityscapes and 64x64 for all other datasets.", " This work tackles the tasks of video generation, prediction and interpolation using a score-based diffusion modeling approach. The main innovation is in using diffusion models to block-wise autoregressively generate/predict/interpolate video both forward and backward in time efficiently (i.e. using <4 GPUs in <12 days). Strengths:\n1. This paper is well written, clear and provides a straight-forward extension of the diffusion modelling approach to video.\n2. The proposed approach is more computationally efficient than concurrent works in the literature (i.e. using <4 GPUs in <12 days).\n3. Technical contributions include a more efficient blockwise generation mechanism and the ability to interpolate between frames. Concurrent works use similar architectures and are capable of video generation and prediction. This work combines all of these within one architecture.\n\nWeaknesses:\n1. This work misses comparisons to prior video GAN based work. Taking the UCF-101 experiments as an example, although this work motivates the case for focusing on FVD as a metric, prior work utilises FID and IS as metrics. Not evaluating on these metrics makes it difficult to compare this work against prior art. For historical comparison, I would strongly encourage the authors to extend or append their results to a table similar to the one found in other concurrent/prior works (e.g. Table 1 from the [Video Diffusion Models](https://arxiv.org/abs/2204.03458) or [DIGAN](https://openreview.net/forum?id=Czsdv-S4-w9) papers).\n2. This work is limited to 64x64 generation. Many GAN based approaches are more performant and efficient at this resolution. The 64x64 generation should be made explicitly clear in any results/discussions, especially when comparing results to other works. [TGAN-F/LDVD-GAN](https://www.sciencedirect.com/science/article/abs/pii/S0893608020303397) discussed how results from these metrics are dependent on resolution and the pre-processing pipeline.\n3. Although an interesting direction for diffusion models, the novelty of this work in terms of new capabilities is limited when compared to prior art or GAN-based approaches (e.g. [Deep Video Generation, Prediction and Completion of Human Action Sequences - CVPR 2018](https://arxiv.org/abs/1711.08682)). BAIR, KTH, UCF-101 are some of the standard video generation benchmarks. 
Although the results tables appear to be quite exhaustive for BAIR/KTH, UCF-101 is lacking in comparison... which matters since it is the standard benchmark for unconditional video generation. Could you improve the UCF-101 benchmark by making the experimental procedure clearer (see Appendix A.2 in the [DIGAN](https://openreview.net/pdf?id=Czsdv-S4-w9) paper), and provide FID and IS results on this dataset, appropriately presented to allow for easy comparison to prior work such as progressiveVGAN, LDVD-GAN/TGAN-F, TGAN-ODE, etc.\n\nThe introductory paragraph touching on vehicle safety is an interesting way to introduce the problem, but it isn't touched upon beyond the first paragraph. In the end, how does your work impact this?\n More effort and thought has to be put into the broader impact statement. Two generic sentences on broader impact are unacceptable for this kind of work.", " This paper introduces a modified U-Net architecture for taking video frame features and a conditional diffusion process for frame-level image generation. The modified U-Net architecture is equipped with conditioning components and a non-recurrent architecture that fuses frame-level features. The space-time fusing process is conducted in two steps: channel-wise concatenation of frame-level features, and space-time adaptive normalization (SPATIN) inspired by SPADE. This non-recurrent, space-time-conditioned U-Net architecture is evaluated on several video generation, prediction, and interpolation datasets. Although the scale of the datasets is restricted, the versatility of the algorithm is validated. ### Strengths\n- The authors validate the versatility of the proposed neural diffusion architecture for video generation on several tasks --- unconditional generation, prediction, and interpolation.\n\n### Weaknesses\n- The datasets used in the experiments are small-scale compared to those used by other generation methods these days.\n- The disadvantage of non-recurrent, sliding-window sampling is not explained. The word 'non-recurrent' only appears in the abstract.\n ### The effect of $k$ on the proposed algorithm\nThe experiments only show the maximum number of frames during training as 4 or 5 for video prediction/generation, and 10 for interpolation. Compared to other models trained with $k=10, 15$, I guess the proposed model shows better performance at lower $k$, as explained in lines 257~260. How would a smaller or larger number of frames affect the performance of the proposed algorithm?\n\n### Diversity issues in video generation/prediction\nWhat is the diversity of the generated videos? Most video generation works only report fidelity metrics, such as FVD, FID, and IS. However, the fidelity metrics do not show the diversity of the generated videos. The diversity metric introduced in [1] would be good.\n\n[1] Diverse Video Generation using a Gaussian Process Trigger, ICLR 21\n\n### The scale of the dataset\nThe experiments only contain small-scale datasets. How about applying the proposed algorithm to a larger-scale dataset, such as Kinetics or Something-Something v2? Is there any bottleneck of the proposed algorithm except the amount of computation? An algorithm with non-recurrent, sliding-window sampling may be weak in this case compared to recurrent counterparts. \n\n### Time coverage of snippets\nSeveral works typically count the number of frames, not the time span measured in seconds. 
It would be good to report the time span of the video snippets for generation, prediction, and interpolation.\n I've mentioned it in the question part.", " The authors proposed a DDPM-based model for video prediction, called MCVD, based on randomly masking frames for conditioning. The random masking allows the model to be conditioned on past or future frames, enabling the model to perform a variety of tasks such as infilling, prediction, and generation. The authors compared the performance of the proposed method in conditional (on BAIR and Cityscapes), inpainting (on SMMNIST, KTH and BAIR), and unconditional setups (on BAIR and UCF-101), demonstrating its capability of generating high-quality videos.\n\n Strengths And Weaknesses:\n===== Strengths\n- The paper is well written and is easy to follow. \n- The authors compared the proposed method to many other models in conditional and unconditional setups.\n- The proposed method is novel (however, there are multiple similar parallel works).\n \n===== Weaknesses:\n- Limited datasets. Although the authors tested the proposed method on multiple datasets, I find the choice of datasets to be narrow. In both prediction and unconditional generation, the datasets are limited to smaller-sized datasets (BAIR and UCF) and there is no experiment on larger datasets such as RoboNet or Kinetics. This makes me wonder if the model is heavily overfitting on the training data and there is not much generalization. The FitVid paper (Babaeizadeh et al.) has an interesting discussion on how overfitted methods can generate realistic high-quality videos but are not capable of generating anything new. \n\n- Analysis. There are limited analyses and ablation studies. While the paper shows great and promising results on video tasks, it is not clear how the design choices affect its results, e.g., why should the masking be random for inpainting? In the absence of any ablations, I can argue that a fixed set of conditioned frames is a much better-defined problem and may result in even higher-quality predictions. Table 3 has some ablations for masking strategies, but the paper could use more targeted experiments along the same lines. Another angle is computational issues, which are mostly ignored in the paper. One major issue in the video prediction literature is the huge number of parameters required for getting good results. How is the situation in a score-based model? Does a bigger model substantially improve the result or not? Are these models already overfitting? Such analysis could substantially improve the quality of the paper.\n\n- Limited qualitative results. Given the nature of videos, they are hard to compare on printed paper. I highly recommend the authors host predicted videos on an anonymous website and provide comparative videos with other baselines. Please check weaknesses. \n Yes" ]
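To make the masked-conditioning scheme discussed in this record concrete, here is a minimal, hypothetical PyTorch sketch (our own illustration, not the authors' released code; all names and hyperparameters are assumptions) of how randomly dropping past/future frame blocks can select among prediction (future dropped), interpolation (both kept), and unconditional generation (both dropped) during training:

```python
import torch

def sample_conditioning_mask(batch_size: int, num_frames: int,
                             p_drop_past: float = 0.5,
                             p_drop_future: float = 0.5,
                             num_past: int = 2, num_future: int = 2):
    """Randomly decide, per example, whether the past/future frame blocks
    are visible as conditioning (1) or masked out (0).

    Returns a (batch_size, num_frames) float mask. The middle frames are
    always masked: they are the frames the diffusion model must generate.
    """
    mask = torch.zeros(batch_size, num_frames)
    keep_past = (torch.rand(batch_size) > p_drop_past).float()
    keep_future = (torch.rand(batch_size) > p_drop_future).float()
    mask[:, :num_past] = keep_past.unsqueeze(1)                    # past block
    mask[:, num_frames - num_future:] = keep_future.unsqueeze(1)   # future block
    return mask

# Usage: zero out masked frames and keep the mask around (e.g., as an extra
# input channel), so the denoiser can tell conditioning frames from targets.
video = torch.randn(4, 8, 3, 64, 64)              # (B, T, C, H, W)
mask = sample_conditioning_mask(4, 8)             # (B, T)
cond = video * mask[:, :, None, None, None]       # masked conditioning frames
```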
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "nips_2022_hX5Ia-ION8Y", "iAI4Si7xRQH", "9Dg0poRrsnr", "h48DDvR1pKF", "nips_2022_hX5Ia-ION8Y", "4tqXOs_c6L0", "xz2yaLp8Np", "S_GmF8NtovW", "HBAa01cW2Ow", "N_74gltQjN4", "cNMs3nbxeO2", "nips_2022_hX5Ia-ION8Y", "nips_2022_hX5Ia-ION8Y", "nips_2022_hX5Ia-ION8Y" ]
nips_2022_IFXTZERXdM7
Solving Quantitative Reasoning Problems with Language Models
Language models have achieved remarkable performance on a wide range of tasks that require natural language understanding. Nevertheless, state-of-the-art models have generally struggled with tasks that require quantitative reasoning, such as solving mathematics, science, and engineering questions at the college level. To help close this gap, we introduce Minerva, a large language model pretrained on general natural language data and further trained on technical content. The model achieves strong performance in a variety of evaluations, including state-of-the-art performance on the MATH dataset. We also evaluate our model on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other sciences that require quantitative reasoning, and find that the model can correctly answer nearly a quarter of them.
Accept
This is very strong, solid work that extracts equations from scientific papers and shows strong performance in mathematical reasoning.
train
[ "-HyjmSgHTY", "2DXWcAn_QKM", "KMX4HyO8xtp", "umVMDRMKIN-", "GDyStNZjsS0", "OUN7kBHG2mD", "IhNzpc20v2t", "Q6oP9-nQ8hE", "JMO2HywlVNq", "uxZSxiVFd4r", "YS4dVxZ35XK", "fWyUVKIcadp", "4pHn3WfOaXM" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors answered most of my queries and there is no change in my rating.", " Thank you for the detailed response and answering all of my questions.", " We thank the reviewer for their review and feedback.\nRegarding the overall contribution, we refer the reviewer to the general comment to all reviewers, and also to the response to reviewer iAwH highlighting the importance and implications of our work. \n\nWe also address the reviewer’s specific questions below:\n\n> “Are the \"BaseModel\" models used publicly available?”:\n\n\nThe “BaseModel” is not publicly available, and as mentioned in footnote 1 on page 2, details about the BaseModel are omitted for the purpose of anonymization. Once anonymization is no longer an issue, the name of the BaseModel will be revealed and we will add the appropriate citations describing the BaseModel. \n\n> “Table 4 [...] What is the distribution for the 62B model, where the 8B model solved correctly and the 62B model did not? Is there a further breakdown of incorrect reasoning?”\n\n\nWe manually inspected a set of problems that the 8B model solved correctly and the 62B model did not, and we observed that similar to the reverse setup, the main failure mode is incorrect calculation and reasoning. Due to the similarity, we did not do a systematic study on a sufficient number of examples to report statistics in this case. \n\n> “[ …] dataset is the main novelty of their paper (line 45), there isn't sufficient discussion of it in the paper and appendix.”:\n\n\nWe are unfortunately unable to release the full dataset and exact scripts that generated the dataset, but we have provided some details with the general idea of the sources and processing that was used. Since our submission actually emphasizes the capabilities of the models trained on our dataset, and not the dataset itself, we have reworded this sentence in the paper to say: “The main novelty of this paper is the use of a large training dataset that juxtaposes natural language with correctly-formatted mathematical content in LaTeX format.”.\n\n> “Will this approach scale to more complex math and science problems? Are further gains expected with larger models?”\n\n\nMATH contains 5 difficulty levels, and we do find that more difficult problems are less likely to be solved by Minerva, as one would expect, and we do expect larger models to show better performance. In the updated manuscript, we have added our results from a 540B parameter model, which achieves SOTA performance on all the benchmarks studied in this submission.\n\n> “How are diagrams (as mentioned in the intro) processed?”:\n\n\nWe use all diagrams which are scripted using tikz, asymptote, and other rendering frameworks, directly as written, while we remove diagrams which are in image format, such as jpeg, pdf or similar. We have updated the wording in the intro to make this explicit. Thank you for pointing this out! \n", " Thank you for your valuable feedback; we are happy that you find the majority voting interesting and appreciate the strength of the model in answering questions from MATH. We have addressed all your concerns by adding new results, updating the paper and providing answers to your questions. 
In light of that, we hope you will consider recommending our paper for acceptance.\n\nFirst, let us address the concerns about our training dataset, specifically:\n\n> The main problem with the paper, in my view, is that the training data is the main contribution of the paper, but I cannot tell what it is like.\n\nIn the appendix, Section A, we describe in detail how our dataset is created and what documents it comprises. Specifically, Section A.1 explains the selection and processing of the arXiv paper dataset, and Section A.2 describes the selection and processing of the mathematical web pages dataset. \n\n> At one point, the authors mention \"equations and diagrams\" juxtaposed with natural language, but how would a language model process diagrams, or how would they be converted to text?\n\nThank you for pointing this out; this was poorly worded, as there are no diagrams in the dataset (unless they are already specified by text, for example as TikZ or Asymptote). We have rewritten this sentence as follows: \n\n\"The main novelty of this paper is the use of a large training dataset that juxtaposes natural language with correctly-formatted (in LaTeX) mathematical content.\" \n\nIf you have other questions about the dataset that aren't answered by the appendix sections, we would be happy to address them.\n\n**Regarding the comment about the OCWCourses dataset**\n\n> The benchmark introduced may be too small to be meaningful, and I don't think the paper makes it clear how representative it is.\n\nAll of the problems in this dataset are available as a .tex file in the supplementary material; please see the file ocw_courses_problems.tex. Regarding the concern about size, we are in the process of expanding this dataset and will release a larger dataset comprising these problems once completed.\n\n> Furthermore, I worry, when it includes questions from \"'solid-state chemistry', 'information and entropy', 'differential equations', and 'special relativity'\", that quantitative reasoning is not what we are testing, but specific scientific knowledge that makes use of quantitative reasoning.\n\nThis is true; as these questions were chosen to be representative of undergraduate STEM problems, some of them require quantitative reasoning and others rely more on knowledge. \n\n**Regarding EQUATE**\n\nThank you for bringing this dataset to our attention. We have evaluated our model on this dataset as well; the details are in the appendix, Section G.\n\n**Responses to minor questions**\n\n> The claim in L175-177 seems unfounded to me. Why wouldn't you be sure you haven't found a local minimum with the fine-tuned model by training on the wrong dataset, whereas you haven't with the base model?\n\nWe have removed this claim and kept only the description of the empirical result.\n\n> In L57-58, I cannot parse \"Prompting language models... unseen problems\". 
Do you mean to say \"apply\" instead of \"output\"?\n\nWe have reworded this sentence to be clearer:\nPrompting language models using scratchpad (https://arxiv.org/abs/2112.00114) or chain-of-thought (https://arxiv.org/abs/2201.11903) solutions can lead them to generate step-by-step solutions to unseen problems.\n\n> I cannot tell whether the pre-trained model, on which Minerva is based, is a contribution of this paper or another paper?\n\nThis pre-trained model is a contribution of another paper; we will cite this in the deanonymized, camera-ready version of the paper.\n\n> I don't understand the need for the OCW benchmark introduced, in addition to MATH, GSM8k, and others. Which part of the new dataset do these more established benchmarks lack?\n\nOCW tests quantitative reasoning and scientific knowledge in a free-response setting on a more challenging class of problems, designed for undergraduate students rather than middle school and high school students. By contrast, MATH and GSM8k focus specifically on mathematics up to high-school difficulty, and MMLU is a multiple-choice benchmark.\n\nWe hope that this response has addressed your concerns, and that you will consider recommending acceptance of this work.\n", " Thank you for the useful feedback! We have updated the paper (please also see our general comment) and provided responses to your review. Please let us know if this sufficiently addresses your concerns.\n\nRegarding your comments on originality and quality, we want to highlight that our work has very important implications regarding two main areas of disagreement among researchers working on reasoning:\n\n - **The role of scaling**: Reasoning problems were perhaps the most important group of problems that seemed not to benefit significantly from scaling. For example, Hendrycks et al. 2021 evaluated GPT-3 on the MATH dataset and estimated that “Assuming a log-linear scaling trend, models would need around 10^35 parameters to achieve 40% accuracy on MATH, which is impractical.” Given these results, many experts were skeptical of the ability of language models to achieve 50% accuracy any time soon, especially without making fundamental architectural changes that facilitate symbolic manipulation. Our largest model reported in our revision reaches the 50% accuracy milestone (please see our general comment highlighting new results and updates to the paper).\n\n - **Formal vs. informal**: Formal mathematics has historically been the main approach to using AI for mathematical proving and problem solving. Our work provides strong evidence that, even though the informal direction is not able to benefit from tools such as automatic verification, it is a very promising direction given our access to a large corpus of technical data and the abilities of large language models to perform multi-step reasoning using scratchpad/chain-of-thought.\n\nWe also want to emphasize that while the story appears to be simple, successful training of large models involves a significant amount of work, and there are many decisions that could make or break it. For example, careful processing of technical data was an important element (please see Appendix C on "Training Dataset Details").\n\nRegarding "I don't think there are many insights I can draw from this paper", we believe that while there are many analyses and useful insights in the paper (and we have added more in the revision), the conference page limit and the significance of the main result might make it difficult to draw attention to them. 
Based on your feedback, we have improved the presentation of the paper. Here, we highlight some of the analyses and useful insights in the paper:\n\n - The significant role of scratchpad/chain-of-thought + majority voting in performance (Figure 3 and Table 3).\n - We study the dependence of performance on the number of generated samples per question (Table 6 and Figure 5).\n - We compare majority voting to log-likelihood reranking and show that majority voting has superior performance (Table 7).\n - We compare Few Shot, Custom prompt finetuning, Random prompt finetuning, and No prompt finetuning, and show that while finetuning our base model outperforms its few-shot prompting, Minerva's few-shot performance is superior to finetuned performance. We hypothesize that this can be attributed to the quality of our technical data used for training Minerva (Table 8).\n - We analyze the asymptotic behavior of majority voting at large k (Section D.4).\n - We investigate the mistake types, classify them into several categories, and show that 75% of them are incorrect reasoning or incorrect calculations (Table 4).\n - We show that while the false positive rate is low, it increases with the difficulty level.\n - In a memorization sanity check, we modify MATH questions and show that Minerva's accuracies before and after modifications are correlated, with no clear bias in favor of the original formulation (Figure 4).\n - Minerva has great performance on arithmetic tasks, but the performance still degrades with the number of digits (Figure 7).\n - We show that SymPy equivalence would only marginally improve the performance (Table 5).\n", " Thank you for taking the time to read our work, for the positive comments, and for the suggested clarifications and improvements!\n\n**Weaknesses**\n> The main contribution here is collecting (scraping and filtering) a dataset, and fine-tuning a large model on it. By itself, I am not sure this is a huge contribution. However, the analysis in the latter half of the paper is quite comprehensive.\n\nWe are glad that you appreciate the analysis! We agree that this paper largely relies on existing techniques. We want to emphasize that one of the major takeaways from this work is precisely that, contrary to some expectations in the field, major performance improvements on quantitative reasoning tasks can be achieved solely with currently available techniques. \n\n> I was confused about the majn@k … would help to clarify in the Fig 1 caption and 2.3\n\nThanks for the suggestion. We have now clarified this in the captions for Figure 1 and Figure 4.\n\n> how does this work for something like a proof, though, that's inherently a chain of reasoning?\n\nThis is an insightful question, and indeed a limitation of majority voting is that it can only be applied in contexts with a clear final answer. Other sample re-ranking techniques, such as verifiers [Cobbe et al. (2021)], can perhaps be repurposed during the generation process to help in more open-ended problem solving, such as theorem proving. This is an active area of future research. We have now emphasized this limitation of majority voting in Section 6.1, Limitations.\n\n> I'd like to see more discussion on the limitations of existing quantitative benchmarks.\n\nWe have modified the discussion at the end of the Our Contributions section. The quantitative benchmarks we are aware of that present chain-of-thought rationale, rather than just final answers, focus on mathematics. 
The OCW dataset helps broaden this to cover more STEM fields.\n\n**Questions**\n> Table 1 lists tokens in the entire dataset, right? Not the amount sampled for the technical training dataset? Is the general NL data for the technical dataset sampled randomly from the full general NL dataset?\n\nYes, the token count is for the entire dataset. The general NL data is sampled randomly from the larger dataset. We have clarified this now in the text.\n\n> Why truncate inputs from the left? Besides fitting in the limited transformer input. How often does this happen?\n\nThe only reason to truncate is what you describe: so that prompt, question, and answer can all fit in the transformer window. This happens in 3% of MATH examples.\n\n> What does it look like for reasoning to be wrong but the answer to be right? It's a bit surprising that the false positive rate is 30% in difficulty 5 examples.\n\nTo make it easier to inspect, we have now included all of the false positive examples analyzed in the supplementary file minerva_false_positives.json. Anecdotally, there are a few different types of false positives. Some false positives occur when the final answer is a relatively simple combination of the problem input data. In other cases the model output skips steps, or trails off and then produces the correct final answer. We took a conservative approach and marked these as false positives. More generally, understanding false positives more deeply is an interesting area for further study!\n\n> How many times each are the 20 examples in 5.2 modified to test memorization?\n\nEach question has four modifications, one for each of the modification types listed in Appendix J2.\n\n**Limitations**\n> Are you planning to release the training data?\n\nThe arXiv data is publicly available and can be downloaded [here](https://arxiv.org/help/bulk_data). We are unfortunately unable to release the math webpage dataset, as it was developed by filtering a proprietary internal dataset. We have, however, shared all the details needed to reproduce the processing of the dataset in Appendix B.\n\n> details on the contract work used to clean the dataset\n\nWe did not use a crowdsourcing service for this work, and instead hired a team of expert contractors at competitive wages. We are unable to share compensation information for these contractors.\n", " Thank you for your encouraging review of our submission. We appreciate your feedback, and address your questions below:\n\n> Any reason behind the incorrect answers for the solutions that were short?\n\nFor solutions that were evaluated by a human labeler as being too short, it is typically the case that a calculation step is missing, or that the model restates the premise of the question and then immediately jumps to producing the incorrect answer. As an example of this failure mode, one problem in MATH is \"In the row of Pascal's triangle that starts with 1 and then 10, what is the next number?\". The 8B model responds with \"In the row of Pascal's triangle that starts with 1 and then 10, the next number is 100. Final Answer: The final answer is 100.\"\n\n> Can it derive math expressions instead of numbers as the final answer, like solving an indefinite integral?\n\nYes, the model does return mathematical expressions when prompted for them. For example, when prompted with the MATH few-shot prompt followed by the problem \"Compute the antiderivative of $f(x) = x^3$\", the model responds \"Solution: The antiderivative of $f(x)=x^3$ is $\\frac{x^4}{4}+C$. 
I hope it is correct.\" Several questions in OCWCourses are of this form, requiring expressions rather than numerical answers.", " We thank all the reviewers for their detailed and thoughtful comments. \n\nWe want to first reiterate our main finding: focusing on scale and data quality can unlock new and surprising reasoning capabilities in large language models, despite the lack of any inductive bias specific to mathematics in the model architecture. Please see our response to reviewer iAwH regarding the originality and significance of our work, as well as a summary of other insights from our paper.\n\nOur model achieves SOTA results on a variety of quantitative reasoning benchmarks, with “no outside access to libraries, calculators, or other tools in performing the task” (reviewer 9sZQ). We are delighted that the reviewers seem to agree that these results are significant, for example commenting that “the reasoning skill demonstrated by the model is amazing” (reviewer iAwH) and “The work will be a founding stone in making AI for tutoring” (reviewer 3teb). \n\nWe have updated the paper, including several new results:\n - We trained and evaluated a 540B parameter model using the same procedure described in the paper. It achieves SOTA performance on *all* the benchmarks used in the paper. In particular, on the most challenging dataset, MATH, it reaches the 50% accuracy milestone! Note that the previously published SOTA is 6.9%.\n - As suggested, we evaluated the model on the EQUATE reasoning dataset, and found that our model achieves SOTA performance on this benchmark.\n - We evaluated the 62B parameter model on MMLU-STEM using majority voting, obtaining SOTA results.\n - We evaluated the model on basic arithmetic with few-shot prompting, and found that the model obtains strong performance (for example, it achieves over 80% accuracy on 8-digit addition). We included an analysis of the model's mistakes on basic arithmetic in the appendix.\n - We included all of the \"false positive\" examples we found in the appendix, so that readers can have a more complete picture of the model's capabilities and limitations.\n - We have further updated the paper to address reviewers' specific concerns.\n", " This work proposes a large language model, *Minerva*, to solve scientific and mathematical questions posed in natural language and generate step-by-step solutions using correct LaTeX notation. The model is trained on a high-quality data collection composed of scientific and mathematical information. It achieves impressive results across a wide variety of quantitative reasoning tasks, including Precalculus, Geometry, and many more.\n\n * This paper provides a well-documented analysis of results and the reasoning behind the choice of metrics.\n* The work will be a founding stone in making AI for tutoring. This can have a significant impact on students with fewer resources.\n* The authors also addressed the issues of false positives and memorization with a detailed investigation.\n 1. Any reason behind the incorrect answers for the solutions that were short?\n2. Can it derive math expressions instead of numbers as the final answer, like solving an indefinite integral? There is currently no automated mechanism to check the correctness of the model's answers, and the authors also acknowledge it. This work has no major negative societal impact, but people may use it to cheat on some STEM tests.", " This paper introduces a large language model and a new training and evaluation dataset for quantitative reasoning problems. 
It first overviews existing methods and resources for quantitative reasoning, then discusses the datasets collected for training and evaluation, and finally provides comprehensive results and analysis of a model that is finetuned on the collected training data, finding that, in comparison to existing models, the proposed large language model performs well. Strengths:\n\n* Good overview of challenges associated with quantitative reasoning\n* Very comprehensive error analysis (and examples in appendix)\n* Great analysis of the model robustness w.r.t. memorization\n\n---\n\nWeaknesses:\n\n* The main contribution here is collecting (scraping and filtering) a dataset, and fine-tuning a large model on it. By itself, I am not sure this is a huge contribution. However, the analysis in the latter half of the paper is quite comprehensive.\n* I was confused about the majn@k format in Fig 1 and 2.3 until section 2.5. Would help to clarify in the Fig 1 caption and 2.3 that it's majority voting on the final answer, not the whole generated text (how does this work for something like a proof, though, which is inherently a chain of reasoning?).\n* I'd like to see more discussion on the limitations of existing quantitative benchmarks. I was expecting this in the related work section but couldn't find anything (just briefly mentioned on L86). * Clarification question: Table 1 lists tokens in the entire dataset, right? Not the amount sampled for the technical training dataset? Otherwise the proportions don't make sense. \n* Is the general NL data for the technical dataset sampled randomly from the full general NL dataset?\n* Why truncate inputs from the left? Besides fitting in the limited transformer input. How often does this happen?\n* What does it look like for reasoning to be wrong but the answer to be right? It's a bit surprising that the false positive rate is 30% in difficulty 5 examples -- without knowing what those examples look like, I find it very surprising that there are so many false positives.\n* How many times each are the 20 examples in 5.2 modified to test memorization? I'm wondering about copyright/consent issues when it comes to scraping/re-releasing data from math webpages and ArXiv. Are you planning to release the training data?\n\nSimilarly, details on the contract work used to clean the dataset -- typically crowdsourcing contributions will have (at least in the appendix) details on compensation, etc., but I couldn't find that.", " State-of-the-art language models have generally struggled with tasks that require quantitative reasoning, such as solving mathematics, science, and engineering problems at the college level. This paper introduced Minerva, a large language model pretrained on general natural language data and further trained on technical content. The authors conducted experiments in a few-shot setting across a range of tasks that require quantitative/numerical/symbolic reasoning, including one they manually collected for better evaluation (OCWCourses). The results demonstrated that it achieved strong performance on all the datasets and set state-of-the-art performance on the MATH dataset. 
Through an error analysis, they also found that the two most common cases in which their model made wrong predictions are *Incorrect reasoning* and *Incorrect calculation*.\n\n\n - Originality: The idea of further training a pretrained language model on a math-heavy corpus is quite straightforward yet effective (as can be seen from the results).\n\n- Quality: The proposed approach is technically sound, and the experimental results showed that it outperformed the previous language model on MATH and OCWCourses by a large margin. However, I don't think there are many insights I can draw from this paper. My main takeaway about this paper is that we can achieve much better performance on quantitative/numerical/symbolic reasoning by further training a pretrained language model on a high-quality corpus containing equations and diagrams. What if we just train a language model from scratch using the same corpus? What role does the general language data play here? I think answering such questions could make this paper stronger.\n\n- Clarity: This paper is well-organized and easy to follow. \n\n- Significance: This paper achieved strong results across a range of different datasets, and the reasoning skill demonstrated by the model is amazing. As the authors point out, one of the main challenges is to automatically verify the correctness of the generated reasoning chains/answers; I believe this work will foster further work in such directions.\n n/a The authors have sufficiently addressed the limitations in the paper.", " The authors fine-tune large PLMs (pre-trained on a corpus of scientific and mathematical text) on a new quantitative reasoning dataset, evaluating on existing benchmarks and a new, custom benchmark. The main contribution is the formulation of these datasets, which allows for SOTA on MATH and strong performance on other benchmarks. Notably, this approach relies exclusively on a language model, which has no outside access to libraries, calculators, or other tools in performing the task. Strengths\n- The most obvious strength of the paper is the considerable improvement over SOTA scores, although I'm quite surprised at how low such scores are, especially when GPT-3 also exceeds them. \n- I think the majority voting idea is really cool, since sampling obviously provides a limited view of the branching probability tree characterizing the language model's processing of the original prompt, and majority rule arguably offers a more fulsome picture of the outputs. The obvious problem with it is that cost scales linearly.\n- Checking the answers using sympy is a clever way to bin different expressions of answers.\n\nWeaknesses\n- The main problem with the paper, in my view, is that the training data is the main contribution of the paper, but I cannot tell what it is like. Basically the only thing I can tell is that it contains mathematical expressions. Fig. 1 and Fig. 2 show the evaluation data, which is already outlined in other work, and I wonder why that space wasn't used to explain this paper's training corpus. At one point, the authors mention \"equations and diagrams\" juxtaposed with natural language, but how would a language model process diagrams, or how would they be converted to text? 
I have many more questions, and they might be answered in the appendix, which wasn't available to me, but this lack of clarity seems to me to preclude the publication of this paper at NeurIPS.\n- The benchmark introduced may be too small to be meaningful, and I don't think the paper makes it clear how representative it is. Furthermore, I worry, when it includes questions from \"'solid-state chemistry', 'information and entropy', 'differential equations', and 'special relativity'\", that quantitative reasoning is not what we are testing, but specific scientific knowledge that makes use of quantitative reasoning. \n- Doesn't reference EQUATE (Ravichander et al. 2019), one such recent, decently-cited quantitative reasoning benchmark.\n- The authors say in L51 that \"existing benchmarks are limited with respect to quantitative reasoning\", and I'd like to know what they mean by that. I couldn't find any corresponding discussion. - The claim in L175-177 seems unfounded to me. Why wouldn't you be sure you haven't found a local minimum with the fine-tuned model by training on the wrong dataset, whereas you haven't with the base model?\n- In L57-58, I cannot parse \"Prompting language models... unseen problems\". Do you mean to say \"apply\" instead of \"output\"?\n- I cannot tell whether the pre-trained model, on which Minerva is based, is a contribution of this paper or another paper?\n- I don't understand the need for the OCW benchmark introduced, in addition to MATH, GSM8k, and others. Which part of the new dataset do these more established benchmarks lack?\n The authors rightly identify 3 limitations of their method. There is nothing set in stone, however, about these limitations (one could engineer a dataset to give specific capabilities to a model, or use a model that had access to outside tools, etc.), and the authors do not discuss why they chose the model, data, etc. that they did __with respect to__ these limitations.", " The paper focuses on the problem of quantitative reasoning in large language models, and introduces Minerva, a large language model that is pretrained on general language data and then further trained on a dataset containing scientific and mathematical data. \n\nThe training data is collected from arXiv and web pages that were preprocessed to conserve mathematical content. The authors then use this data to finetune pretrained decoder-only transformer LMs (8B and 62B parameter models) using an autoregressive objective. At inference time, the paper suggests sampling k > 1 solutions and selecting one using majority voting. The authors show empirically that this approach considerably outperforms greedy decoding. \n\nThe authors compare their models against previous works on the MATH, GSM8k, and MMLU-STEM datasets, and also build a dataset (OCWCourses) of 272 undergraduate questions in science and mathematics to evaluate the model's quantitative reasoning abilities in a chain-of-thought context. The authors report that the Minerva 62B significantly outperforms state-of-the-art results on the MATH dataset, and achieves similar performance on MMLU-STEM. Furthermore, the authors conduct analyses to understand the types of mistakes the models make, the effect of false positives (cases where the answer is correct but the chain of reasoning is not), and whether the model is memorizing the answers. 
They find that most of the errors are incorrect reasoning and incorrect calculations, that the false positive rate increases with difficulty level (with an estimated average of 8% on the MATH dataset), and that there is little evidence that the model's performance can be attributed to memorization. \n\n\n\n **Strengths**\n\nThe topic of quantitative reasoning with large language models is of interest to the research community. By finetuning pretrained LMs on large math and science datasets and comparing against models that were not trained with such data, the paper highlights the importance of data quality for achieving quantitative reasoning abilities. Furthermore, the authors build a new dataset for evaluating quantitative reasoning, and conduct a thorough evaluation of the types of mistakes the models make, highlighting the issue with false positives, and investigating the impact of memorization on performance.\n\n**Weaknesses**\n\nAlthough the authors mention that the dataset is the main novelty of their paper (line 45), there isn't sufficient discussion of it in the paper and appendix. Will this approach scale to more complex math and science problems? Are further gains expected with larger models? Furthermore, what is the breakdown of papers by area/topic? How are diagrams (as mentioned in the intro) processed? What are examples of the heuristics used to process the pages, as mentioned in the appendix? etc.\n 1- Are the \"BaseModel\" models used publicly available? Appendix B does not contain sufficient information for reproducibility.\n2- Table 4 only reports the distribution of failure modes of the 8B model, and the text only mentions the dominating failure modes for the 62B model. What is the distribution for the 62B model, where the 8B model solved correctly and the 62B model did not? Is there a further breakdown of incorrect reasoning? The authors discuss the limitations and potential societal impact of their work." ]
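To make the maj1@k and SymPy-binning ideas discussed throughout this record concrete, here is a minimal, hypothetical Python sketch (our own illustration, not Minerva's evaluation code; the function names and the exact canonicalization are assumptions):

```python
# Sample k chain-of-thought solutions, canonicalize each final answer with
# SymPy so equivalent expressions (e.g. "x**4/4" and "0.25*x**4") fall into
# the same bin, then return the most frequent answer.
from collections import Counter

import sympy

def normalize(answer: str) -> str:
    """Map an answer string to a canonical form; fall back to the raw
    string if SymPy cannot parse it."""
    try:
        expr = sympy.nsimplify(sympy.sympify(answer))  # floats -> rationals
        return str(sympy.simplify(expr))
    except (sympy.SympifyError, TypeError, SyntaxError, ValueError):
        return answer.strip()

def majority_vote(final_answers: list[str]) -> str:
    """Return the most common canonicalized answer among the k samples."""
    counts = Counter(normalize(a) for a in final_answers)
    return counts.most_common(1)[0][0]

# Example: three of four samples agree once answers are canonicalized.
print(majority_vote(["x**4/4", "0.25*x**4", "x**4/4", "x**3"]))  # x**4/4
```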
[ -1, -1, -1, -1, -1, -1, -1, -1, 9, 7, 6, 2, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3, 5 ]
[ "IhNzpc20v2t", "OUN7kBHG2mD", "4pHn3WfOaXM", "fWyUVKIcadp", "YS4dVxZ35XK", "uxZSxiVFd4r", "JMO2HywlVNq", "nips_2022_IFXTZERXdM7", "nips_2022_IFXTZERXdM7", "nips_2022_IFXTZERXdM7", "nips_2022_IFXTZERXdM7", "nips_2022_IFXTZERXdM7", "nips_2022_IFXTZERXdM7" ]
nips_2022_9t24EBSlZOa
Attention-based Neural Cellular Automata
Recent extensions of Cellular Automata (CA) have incorporated key ideas from modern deep learning, dramatically extending their capabilities and catalyzing a new family of Neural Cellular Automata (NCA) techniques. Inspired by Transformer-based architectures, our work presents a new class of _attention-based_ NCAs formed using a spatially localized—yet globally organized—self-attention scheme. We introduce an instance of this class named _Vision Transformer Cellular Automata (ViTCA)_. We present quantitative and qualitative results on denoising autoencoding across six benchmark datasets, comparing ViTCA to a U-Net, a U-Net-based CA baseline (UNetCA), and a Vision Transformer (ViT). When comparing across architectures configured to similar parameter complexity, ViTCA architectures yield superior performance across all benchmarks and for nearly every evaluation metric. We present an ablation study on various architectural configurations of ViTCA, an analysis of its effect on cell states, and an investigation on its inductive biases. Finally, we examine its learned representations via linear probes on its converged cell state hidden representations, yielding, on average, superior results when compared to our U-Net, ViT, and UNetCA baselines.
Accept
As summarized by reviewer GAh5, this paper proposes a novel combination of vision transformers and Neural Cellular Automata (NCAs), and uses them to create denoising autoencoders. An NCA is essentially a cellular automaton whose node updates are performed by a neural net. The ViTCA proposed here adds attention heads that are focused only on neighboring cells, and includes positional encodings. The paper demonstrates superior performance to a ViT on denoising autoencoder tasks, and demonstrates the robustness of the model to various perturbations. All reviewers agree that this is a novel contribution to NCAs. By replacing convolutions with vision transformers, it opens up many possibilities (such as scale). The experiments were conducted with detail and clarity. I do agree with reviewer mu6v that, while the quality of the paper is good, the experiments go fairly in-depth on just one task, image denoising (which is less interesting than texture generation or other forms of morphogenesis); but even in its current form, the paper definitely meets the bar for a solid acceptance recommendation from me.
train
[ "5LpFefJXwUn", "8vE6vQOrU8h-", "JnKwUTCyJlrK", "wXbHChf0BSH", "6LJ5x9p5mMS", "ptQKLkLgznS", "gYJ8vwqzVE", "dt4B5562MKRR", "jhNON-Kk9Pb", "gZfOXtdsND-", "2ZMclWh6z9g", "U3uh9evRRB_", "4torJNPNNtT", "D14W8q6et0h", "7RBqn2C5JDa", "grZhuZoPgge", "ZWQIU0q-FF2", "Id7IxNZerzk", "981whYWVz_q", "o5TAmOmyc17", "Dih5Mdhd19E", "thAD35keQOs", "frWkj4wGvbh", "q8Tq-QfIXhG", "itN2a3nTANu", "xn1NWE-ONQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their very thorough reply which has indeed helped me understand the paper better and change my mind about it.\n\nA few comments about the reply: \n\nParts 1 and 2: Please disregard my comment on the three datasets, it was an oversight on my part and I apologize for it. \nIndeed this paper shows that NCA-like architectures can provide significant benefits to ViTs, which are still somewhat underwhelming compared to U-nets, and this is an interesting result.\n\nPart 3: Thanks for clarifying. I agree with the statement, I was just confused by the wording (still, I don't think that a model can be \"capable of [...] guarantees\" -- maybe it can be said better).\n\nPart 4: Thanks for clarifying. \n\nPart 5: While I agree in principle, especially if the goal of the NCA is \"just\" denoising and not modelling biological systems, I am not aware of CA models that explicitly take into account positional encodings (I do believe that self-localization, particularly in NCA, is what makes them interesting to some degree -- otherwise a morphogenesis NCA could simply overfit the map from PE to pixel output). \nHowever, I see the authors' point and I agree that it might be an interesting direction of research for the future (I especially enjoyed the closing remark). Plus, here PEs have a clear function and provide a non-trivial advantage, so they're well justified. Thanks for the discussion.\n\nPart 6: Thanks for clarifying Q4, and interesting discussion about Tables 1 and 2. \n\nPart 7: Again, interesting discussion and definitely something I suggest the authors include in the supplementary material. \n\nPart 8: What I meant to say is that NCA map inputs from and to the same domain, the authors are correct in pointing out that the domain need not be an image. \nThanks for the clarification regarding the baselines, I agree with the authors. \nThe texture generation results are probably worth a paper of their own, but they would definitely make a nice addition to the supplementary material. \n\nPart 9: Thanks for clarifying. \n\n---\n\nWhile advising the authors to add as much as possible of this interesting discussion to the paper/supplementary material, I am happy to raise my score to a full accept and I will vouch for the paper if needed in the reviewer discussion (although there seems to be quite the consensus). Good job. ", " \nAdditional clarifications we would like to make in response to some questions and comments:\n- **C1**: As more heads are added in multi-head self-attention (MHSA), for example, going from $h = 2$ to $h = 3$, the number of parameters remain the same since the dimensionality of each head is equal to the embedding size $d$ divided by the number of heads $h$. This is typical and followed by the original ViT paper. We also show this in L130 in our (unrevised) manuscript. However, when going from single-head self-attention to multi-head, $h = 1$ to $h = 2$, the number of parameters _slightly increases_ since in MHSA the heads need to be consolidated together using an additional weight matrix before adding the result to the token embedding. This matrix is $\\mathbf{W}$ in L132, which is not used when $h = 1$. 
Despite the performance gains from increasing the number of heads, as shown with ViTCA-32 in Table 2, it's important to note that doing so does negatively affect runtime performance with respect to compute time and memory usage (implied in L272-273), something we hope to alleviate in the future with custom CUDA kernels for the localized self-attention mechanism.\n- **C2**: To further clarify the meaning of the parameter counts listed in the tables, they represent the number of _trainable_ parameters in the listed model for the respective experiment. Since Tables 1 and 2 report denoising autoencoding test results, the parameter counts refer to the number of parameters in the listed models (since they're the ones being trained). In Table 3, since the experiment is linear probing, the parameter counts are for the linear probe/classifier being trained on the features of the pre-trained fixed models. The last three models listed in Table 3 do not use pre-trained models since they're being trained directly on the input images, so the parameter counts listed are _their_ numbers of trainable parameters.\n- **C3**: We would like the reviewers to consider the relative and absolute scales of the PSNR, SSIM, and LPIPS metrics used, and how that affects the interpretation of the percentage of performance difference between the models. As a reminder, PSNR is on a logarithmic scale, and SSIM and LPIPS are in a bounded range of [0, 1].\n- **C4**: The localized self-attention neighbourhood size does not affect the number of parameters, but it does affect compute time and memory usage. Furthermore, because of how localized self-attention is constructed, it can be dynamically changed during runtime. This opens avenues for future work where ViTCA is trained to dynamically alter the attention neighbourhood size or dilation to maximize the rate of information propagation, maximize accuracy, and minimize unnecessary compute (if attention past a certain neighbourhood size isn't useful, shrink it before the next iteration), all while staying within some user-chosen max compute budget.\n\nMiscellaneous:\n- Reviewer mu6v cited texture synthesis as a more interesting task, and ironically it was the first task we tested while working on this ViTCA project, since it stemmed from our interest in the Self-Organizing Textures work of [21]. We did not pursue this direction because of the following limitations. First, we felt it was too specialized a task to infer meaningful conclusions on the inductive biases of ViTCA for generalized image synthesis-based tasks. Second, there is no training involved, only optimization on a single texture exemplar. Third, texture does not cover the higher-level abstract representations and semantics which images in datasets like CelebA, CIFAR10, and Tiny ImageNet do. Finally, our other experimentation already fills the allowed space for our submission. However, we did intentionally use the LandCoverRep data to serve as a texture-oriented evaluation, since it can be viewed as essentially images of textures. Despite all this, we are delighted to share some results we had from these early experiments, linked in this [imgur link](https://imgur.com/a/5JwVHzW). To summarize what we observed from these experiments: compared to Texture ViTCA, Texture NCA takes a much longer time to output a texture which looks qualitatively worse, all while using 44% more parameters than Texture ViTCA. 
We hope to investigate this direction in the future, especially considering the exciting advancements made in [20] with its Optimal Transport Texture loss.", " We are grateful to all the reviewers for their constructive feedback and appreciate that they all recognize the novelty of enveloping Vision Transformers within an NCA framework. A quick note before proceeding to read our responses to your respective comments:\n1. All line number references are for the _unrevised_ manuscript, not the revised one we have submitted.\n2. Paper citation numbers refer to citation numbers listed in our unrevised manuscript unless otherwise stated, since we provide an additional list of references at the final part of our multi-part responses.\n\nSome common strengths listed amongst reviewers were:\n- **S1**: The idea is novel.\n- **S2**: The paper is clear, well-structured, and well-presented.\n- **S3**: The experimentation is in-depth and complete.\n\nSome common weaknesses listed amongst reviewers were:\n- **W1**: Reviewers gcLv and GAh5 found some of the figures hard to follow. Figures 2 and 3 were specifically listed.\n\nCommon questions (with our answer) shared amongst reviewers were:\n- **Q1**: _(from reviewers GAh5 and mu6v)_ Why weren't non-linear activations used to restrict the range of cell updates instead of an overflow loss?\n- **Shortened author response**: _(please see our individual responses to each reviewer for more)_ We believe there is a misunderstanding of the purpose of the overflow loss. The overflow loss is not meant for restricting the range of cell update values, but rather for encouraging the range of cell _output_ values to be within [0, 1] and cell _hidden_ values to be within [-1, 1]. Overflow losses were used in [20] and [22] (in the paper's references) and are intended to encourage long-term cell state stability. On the other hand, the cell _updates_ produced by the update rule, which _update cell values_, are unbounded. This allows for additive and subtractive cell updates, as done by [20], [21], and [30] (cited in the paper). Despite this, we _did_ experiment with restricting the bounds of cell updates by using a sigmoid, tanh, or ReLU activation after the MLP head's linear layer (which produces the update each iteration). In short: sigmoid caused explosive cell state growth, despite the overflow loss, since its outputs were always positive; ReLU had the same effect; and tanh made the training process prone to vanishing gradients, and since it forced the update range to be between -1 and 1, it prevented quick corrective cell updates when cell state values were a bit too far past the overflow ranges, which in turn harmed training.\n\nModifications made to our manuscript in response to reviewer comments and questions:\n- **M1**: Reviewer mu6v pointed out an inconsistency in the number of parameters listed in Table 3 between the 2-layer MLP with 100 hidden units and the one with 1000 hidden units, under the CIFAR10 column. The 2-layer MLP with 1000 hidden units should have 3.1M parameters and not 308.3K. We have corrected this mistake in the revised manuscript.\n- **M2**: In light of some of reviewer mu6v's misunderstandings of the parameter counts in the tables, we have clarified the meaning of the parameter counts for the linear probe experiments in the caption of Table 3. The following sentence was added: \"Parameter counts exclude fixed parameters.\" As such, they refer only to the parameters of the additional classifiers. 
Although we would like to provide even further clarification, any additional words would push the manuscript past the 9-page limit. Further clarification is provided below in **C2**.\n- **M3**: L244 shows \"(1b) xy...\", which should be \"(1c) xy...\", since (1b) already refers to the \"sincos5xy\" method of positional encoding. This has been fixed in the revised manuscript.\n", " _Why not other recurrent models?_\n\nThis is a reasonable question. We wanted to remain focused on exploring the NCA paradigm and not distract ourselves by chasing the state of the art. We set out a goal to explore the benefits (and shortcomings) of implementing localized attention within an NCA paradigm via a Vision Transformer. Initially, we felt it was perplexing to incorporate a global computation mechanism (global self-attention) within a local computation framework (NCAs) without breaking important rules that pertained to both ViTs and NCAs. However, we managed to find a way, and realized, ironically, how well suited ViTs are to an NCA paradigm, because global self-attention information can be made implicit through the hidden information stored in cells and slowly aggregated over time through repeated local self-attention operations. By designing our research goals this way, it meant that we had to include ViT as a baseline model, along with a non-ViT-based NCA that was powerful enough to serve as a fair comparison to ViTCA. Since we wanted to focus on analyzing ViTCA's inductive biases rather than its capabilities on a variety of tasks, we chose denoising autoencoding as our relatively generic image synthesis-based task. With all of this considered, we included U-Net since it is a reliable and widely-used competitive baseline for all sorts of image-based tasks, including image synthesis and image classification. From here, naturally, we had to include a UNetCA baseline. In the interest of time and keeping our narrative focused, we decided against including any other types of baselines.\n\n**References**\n\n[1] R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud, \"Neural ordinary differential equations,\" in _Neural Information Processing Systems (NeurIPS)_, 2018.\n\n[2] X. Liu, H.-F. Yu, I. Dhillon, and C.-J. Hsieh, \"Learning to encode position for transformer with continuous dynamical model,\" in _International Conference on Machine Learning (ICML)_, 2020.", " 
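As an editorial aside, the "localized self-attention" described in C1/C4 and in the response above can be sketched as follows. This is a toy, single-head PyTorch illustration under our own assumptions — not the paper's implementation. Per C1, a multi-head version would split the embedding into h heads of size d/h; and as a simplification, border cells here attend to zero-padded neighbours. Repeating this update is what lets globally organized information accumulate in the hidden channels despite strictly local computation:

```python
import torch
import torch.nn.functional as F

def localized_self_attention(cells, w_qkv, k=3):
    """Each cell attends only to its k x k spatial neighbourhood.

    cells: (B, H, W, d) grid of cell states; w_qkv: (d, 3*d) projection.
    Information propagates one neighbourhood per step, so global context
    emerges over repeated updates.
    """
    B, H, W, d = cells.shape
    q, key, v = (cells @ w_qkv).chunk(3, dim=-1)         # each (B, H, W, d)

    def neighbourhoods(x):                               # -> (B, H, W, k*k, d)
        x = x.permute(0, 3, 1, 2)                        # (B, d, H, W)
        x = F.unfold(x, kernel_size=k, padding=k // 2)   # (B, d*k*k, H*W)
        return x.view(B, d, k * k, H, W).permute(0, 3, 4, 2, 1)

    kn, vn = neighbourhoods(key), neighbourhoods(v)
    attn = torch.einsum('bhwd,bhwnd->bhwn', q, kn) / d ** 0.5
    attn = attn.softmax(dim=-1)                          # over the k*k neighbours
    return torch.einsum('bhwn,bhwnd->bhwd', attn, vn)    # (B, H, W, d)

# e.g. cells = torch.randn(2, 32, 32, 16); w = torch.randn(16, 48)
# out = localized_self_attention(cells, w)
```

Note that, as the C4 clarification above points out, changing k affects compute and memory but not the parameter count, and nothing prevents k from being varied at runtime.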
Particularly, it used \"living cell masking\" and most importantly did not employ any input injection, _i.e.,_ it was not a conditional model trained on a dataset. One can artificially bump up its parameter count or lower ViTCA's parameter count but there are still fundamental differences in the architecture that led us to believe it would not perform well for denoising autoencoding. Conversely, we felt that including another task, specifically the task of growing an image from a single seed with living cell masking, would not have been as informative as denoising autoencoding when inferring the generalizability of ViTCA on other image synthesis-based tasks, such as texture synthesis. On that note, the follow-up to [30] was on texture synthesis ([20] and [21]) and it was in fact the first task we started with when testing ViTCA. We did not include our results on texture synthesis since it would have taken up space in the main manuscript that we wanted to occupy with our extensive ablations. Texture synthesis, at least how it was implemented in [20] and [21], is a much more specialized task than denoising autoencoding in the sense that there is no \"training\" on a dataset; instead, it is an optimization on a single exemplar texture. Also, it lacks the higher-level semantics present in images like faces, vehicles, and animals, meaning it is focused on a more specific kind of image representation than denoising autoencoding is. Despite all this, we are delighted to share some results we had from these early experiments, linked in this [imgur link](https://imgur.com/a/5JwVHzW). To sum it up, compared to Texture ViTCA, Texture NCA takes a much longer time to output a texture which looks qualitatively worse, all while using 44% more parameters than Texture ViTCA. We hope to investigate this direction in the future, especially considering the exciting advancements made in [20] with its Optimal Transport Texture loss.\n\n_Why not Variational NCA?_\n\nVariational NCA is a generative model while our architecture is a discriminative model. Furthermore, the contributions vastly differ between our paper and VNCA. Our contributions are centred on making global self-attention work in a strictly local computational model. Variational NCA's contributions are centred on introducing probabilistic generative modeling within an NCA framework, specifically using variational inference. Our approach is not meant to compete with Variational NCA and can actually be combined with Variational NCA to produce a probabilistic generative ViTCA, which would be an exciting direction for future work.", " \n**Q6**: _Why use an overflow loss instead of an activation function (sigmoid, tanh) that forces the updates to be in the required ranges?_\n\n**Author response**: We believe there is a misunderstanding of the purpose of the overflow loss. The overflow loss is not meant for restricting the range of cell update values, but rather for encouraging the range of cell _output_ values to be within [0, 1] and cell _hidden_ values to be within [-1, 1]. It is intended to encourage long-term cell state stability, inspired by [20] and [22] (in the paper's references). On the other hand, the cell _updates_ produced by the update rule, which _update cell values_, are unbounded. This allows for additive and subtractive cell updates, as done by [20], [21], and [30]. 
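For concreteness, below is a minimal sketch of the kind of overflow penalty we mean. This is illustrative only: the channel split and names are assumptions of the sketch, not our exact implementation.

```python
import torch

def overflow_loss(cells: torch.Tensor, hidden_dim: int) -> torch.Tensor:
    """Penalize cell values that stray outside their intended ranges.

    Assumed (hypothetical) layout: the last `hidden_dim` channels hold the
    hidden state, and the remaining leading channels include the outputs.
    """
    output, hidden = cells[..., :-hidden_dim], cells[..., -hidden_dim:]
    # Distance of output values from [0, 1] and of hidden values from [-1, 1];
    # both terms are zero whenever the values already lie inside their ranges.
    out_overflow = (output - output.clamp(0.0, 1.0)).abs().mean()
    hid_overflow = (hidden - hidden.clamp(-1.0, 1.0)).abs().mean()
    return out_overflow + hid_overflow
```

Note that this penalizes cell _values_ after an update, not the update itself, which is why the update rule remains free to produce unbounded additive and subtractive updates.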
Despite this, we _did_ experiment with restricting the bounds of cell updates by using a sigmoid, tanh, or ReLU activation after the MLP head's linear layer (which produces the update each iteration).\n\nBefore explaining our results for each, it's important to highlight the tricks we implemented to keep training stable and manageable:\n\nFirst, during training we zero-initialize the weights and biases of the final layer of our CA-based models (mentioned in L199), which allows for the update rule to exhibit a \"do nothing\" initial behaviour (this idea was taken from [30]). Since cell output channels are initialized to 0.5, cell hidden channels are initialized to 0, target image RGB values are in the range [0, 1], and overflow loss occurs when the output RGB values are past the range [0, 1] and when the hidden values are past the range [-1, 1], this results in the first CA training iteration having a loss with low magnitude. This means gradients are also of low magnitude and the second training iteration will not result in a dramatic change in the loss. This makes the training process more stable. We experimented with initializing the weights and biases of the final layer the same as with other layers (He initialization) and noticed the training process quickly grow unstable; this was worst with UNetCA since, unlike ViTCA, it lacks any normalization layers.\n\nSecond, ViTCA's layer normalizations help keep internal activations within a manageable and stable range, which in turn improves training stability.\n\nThird, before applying gradients accumulated after the $T$ recurrent cell updates, we normalize them. Line 19 in Algorithm 1 (in Appendix A) shows this.\n\nNow we can get into the results of our activation experiments:\n\n_Sigmoid:_\n\nThis eliminated the \"do nothing\" initial behaviour since it transformed the zero output of the final layer (since its weights and biases were initialized to 0) to a 0.5 output, which resulted in unstable training by immediately increasing cell values. Furthermore, since a sigmoid outputs values between (0, 1), this meant cell updates were always additive and were never subtractive, which caused explosive cell state growth and unstable training. On a related note, recurrent models are much more susceptible to the exploding gradient problem when using sigmoid activations. This typically leads to using ReLUs instead.\n\n_ReLU:_\n\nAs experienced by [30], using a ReLU resulted in only positive cell updates and no subtractive cell updates. Also, since it's unbounded, it quickly resulted in explosive cell state growth, which destabilized training.\n\n_Tanh:_\n\nTanh allowed for both additive and subtractive cell state updates. Although its derivative quickly shrinks with non-zero inputs (even with input values with an absolute value < 1)---which can cause vanishing gradients---our use of gradient normalization, our weight/bias initialization (He init except for the final layer, which uses zero init), the fact that target RGB values are in the range [0, 1], and initial cell output values being 0.5 meant that vanishing gradients wouldn't be much of a problem, at least during the start of training when inputs to tanh were small. The problem with using tanh was that it was still susceptible to vanishing gradients _over time_, despite our mitigations, and it slowed down training progress since it prevented quickly updating cells to the desired ranges (which would also result in worse training losses). 
We found it preferable to just not use an activation function since the training process was already quite stable.\n\n\n**Q7**: _Figure 3 shows some noise sampled from a uniform distribution being injected in the cell update. Where is this described in the text?_\n\n**Author response**: The $\sigma$ shown in Figure 3 is used for asynchronous updating. It is not injected into the cell update; it is multiplied with the cell update, hence the Hadamard product symbol $\odot$ that is placed between $\sigma$ and the cell update.", " \n\n**Q4**: _Why is the ViTCA baseline configured to have a patch size of 1x1, while all the other ViTCA models have 3x3?_\n\n**Author response**: We are wondering if this is a misunderstanding stemming from L177 where the Moore neighbourhood is brought up. Unless otherwise stated, all models which perform input image patchification (_i.e.,_ ViT baseline and variants and ViTCA baseline and variants) do so with a patch size of $P_H \times P_W = 1 \times 1$, which is equivalent to not doing any patchification at all since $1\times1$ is pixel-sized. We have modified L175 to be clearer in explaining this.\n\n\n**Q5**: _The number of parameters in Tables 1, 2, and 3 appears to be misreported. A few examples:_ \n* _Table 1, no difference between ViTCA (4 heads) and ViTCA-32 (32 heads)_\n* _Table 2, no difference between ViTCA with 4-64 heads_\n* _Table 3, no difference between the variants_\n* _Table 3, no difference between MLPs with 100 or 1000 hidden units on CIFAR10_\n\n_The absence of differences in Tables 1 and 2 is especially puzzling and important to clarify, because ViTCA-32 is the only model that can consistently surpass the baselines (while, e.g., ViTCA-4 cannot)._\n\n**Author response**: Below we provide our responses to each of the bullet points mentioned in your question.\n* _Table 1, no difference between ViTCA (4 heads) and ViTCA-32 (32 heads)_\n* _Table 2, no difference between ViTCA with 4-64 heads_\n\nSince we follow ViT's formulation of multi-head self-attention (MHSA), the parameter complexity of the model is kept constant as more heads are added. This is evidenced on L130 where $d$ (embed size) is divided by $h$ (number of heads). However, when $h = 1$ (single-head self-attention), there are fewer parameters than when $h = 2$ because single-head self-attention lacks the final weight matrix that's used to combine multiple heads before adding the result into the embedding. Although this explains why ViTCA and ViTCA-32 have the same number of parameters, it does not mean that adding more heads in ViTCA results in free performance gains. Due to the lack of low-level CUDA optimizations for attention-based operations (unlike with convolutions), more heads result in more computation time and more memory usage. This is made even more complicated when spatial localization is involved. We have managed to optimize our spatially-localized MHSA implementation using only PyTorch's standard library (even investigating JIT compiling with TorchScript), but we are not completely satisfied yet since we know it can still be improved. We hope to implement our own CUDA kernels for our spatially-localized MHSA in the future.\n\n* _Table 3, no difference between the variants_\n\nThis is a valid misunderstanding. The parameter counts listed in Table 3 are for the number of trainable parameters. Since linear probing is done with frozen/fixed pre-trained models, this means the parameters listed are the number of parameters for the linear classifier. 
This was an oversight on our part and has been clarified in the caption of Table 3 in the revised manuscript.\n\n* _Table 3, no difference between MLPs with 100 or 1000 hidden units on CIFAR10_\n\nThank you for pointing this out. To succinctly answer, the linear classifier with 1000 hidden units should have 3.1M parameters. This does not change the overall narrative, though, and in fact puts it more in favour of our model.", " \n**Q3**: _I couldn't find how the positional encodings are computed. IF PEs provide unique coordinates for each pixel, doesn't that defeat the principle of locality of CAs?_\n\n**Author response**: Descriptions for how the positional encodings are computed are in L229-253. We have corrected a mistake in L244 where "(1b) xy..." should be "(1c) xy..."\n\nTo clarify, there are two groups of methods of incorporating positional information that we experiment with:\n\n1. Injecting/concatenating absolute (xy) or relative (Fourier-based) coordinates into each cell ($\gamma$ in Figure 3) which the update rule cannot modify. You can call this the NeRF style of coordinate-based positioning:\n\n (a) _sincos5_: Handcrafted Fourier features computed as described in L237-238.\n\n (b) _sincos5xy_: Handcrafted Fourier features computed as described in L237-238 but with xy-coordinates appended as well.\n \n (c) _xy_: Just xy-coordinates where each cell's coordinate in the grid is appended to it.\n2. Using the Transformer-style of positional encoding/embedding where it's added into the embedding:\n\n (a) _handcrafted_: Handcrafted Fourier features which are not computed the same way as with (1a) although the two share similarities. The way this handcrafted Transformer-style of positional encoding is computed is described in [41] and is also how Vision Transformers compute positional encodings.\n \n (b) _learned_: Instead of using a handcrafted positional encoding, a learned positional _embedding_ is added into the cell embeddings. This was experimented with in the original Vision Transformer paper [13]; the authors found it too limiting since it restricts the input/output size, so they stuck with the above handcrafted approach.\n\nInjecting positioning has an effect on cell dimensionality since it is a concatenation, which in turn has an effect on the total number of trainable parameters.\n\n"_IF PEs provide unique coordinates for each pixel, doesn't that defeat the principle of locality of CAs?_"\n\nYou raise a valid concern. However, we believe you are conflating self-organization with self-localization. The principle of locality of CAs is based on the local computational aspect of the update rule (L61-62), not on whether the cells it's operating on contain positional information. We are not aware of any CA literature which explicitly states a limitation on providing positional information or which requires cells to even be able to localize themselves. As long as the update rule is operating on a neighbourhood of cells, it is valid. We would like to point out that just because a cell knows its relative or absolute position, it does not mean that it can use that information to immediately know the state of cells past its neighbours. That information must still be propagated over time, _i.e.,_ the update rule is still operating on local information which is spatially propagated over time so that future cell updates will implicitly be a function of not only the positions of cells within a neighbourhood but also the positions of cells past its neighbourhood. 
This is no different from the situation where no explicit positioning is provided and it's hidden instead in the cell hidden channels. We also show that we don't need explicit positioning for the model to work and that although it doesn't perform as well compared to the baseline ViTCA, it's still quite close. This is in contrast to ViT without positioning, where our early experiments showed it fails miserably with no coherent output at all.\n\nAs a closing note, we feel that this idea of providing positional information to cells can be an exciting opportunity to investigate a case where cells are provided with initial positional information and are tasked to refine this information to better suit their needs depending on the task and dataset being trained on, inspired by this paper [2] (reference at the bottom). At that point one must wonder whether cells are moving themselves around in the grid or if they are remaining fixed but moving themselves in their \"imagined\" representation of their spatial surroundings. It's an interesting thought experiment.\n", " \n\n**Q2**: _The attention operation in Equations (1) and (3) is at the patch level, not at the cell level. This means that ViTCA is not comparable 1-to-1 with pixel-level NCAs unless the patch size is set to 1x1. Also, it might confuse the reader to refer to the "Moore neighborhood" in line 177 when talking about the patch size, because this term is typically used in CA literature to indicate the neighborhood of a cell (as it is used to compute the update). Here, the effective neighborhood of the cell is 9x9. Did I misunderstand something?_\n\n**Author response**: Yes, there is a misunderstanding here. We believe you are confusing patch size with cell neighbourhood size. You are correct in stating that Eq. 1 and Eq. 3 operate at the patch level. However, this is _also_ the cell level. As shown in L119, the \"flattened cell grid\" $\mathbf{Z}$ has dimensions $N \times L$ where $N$ is the number of cells and $L = C_PP_HP_W + C_h$ is the dimensionality of each cell. Note the $P_HP_W$ which imply that each cell contains an input image patch. Figure 3 and its caption explicitly describe this.\n\nUnless otherwise stated, all models which perform input image patchification (_i.e.,_ ViT baseline and variants and ViTCA baseline and variants) do so with a patch size of $P_H \times P_W = 1 \times 1$, which is equivalent to not doing any patchification at all since $1\times1$ is pixel-sized. We have modified L175 to be clearer in explaining this. The only experiment where patch size differs is in the patch size ablation in Appendix A.2, where ViTCA's patch size is modified.\n\nThe Moore neighbourhood described in L177 is used for describing the $3\times3$ cell neighbourhood size, which is synonymous with the update rule's attention neighbourhood size. L177 describes the Moore neighbourhood size as being $N_H \times N_W = 3\times3$, with $N_H$ and $N_W$ being referenced in L143-147 to describe the spatially localized attention window size. 
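To make the localized window concrete, here is a toy sketch (ours, purely illustrative, not our exact implementation) of gathering each cell's $3\times3$ Moore neighbourhood of keys/values with `torch.nn.functional.unfold`; the circular padding, which treats the grid as a torus, is an assumption of this sketch:

```python
import torch
import torch.nn.functional as F

def local_neighbourhoods(grid: torch.Tensor, n_h: int = 3, n_w: int = 3) -> torch.Tensor:
    """For each cell, gather its n_h x n_w Moore neighbourhood.

    grid: (B, C, H, W) cell grid -> (B, H*W, n_h*n_w, C), i.e. for every
    cell a flattened window of neighbouring cells to attend over.
    """
    B, C, H, W = grid.shape
    # Circular padding treats the grid as a torus (an assumption of ours).
    padded = F.pad(grid, (n_w // 2, n_w // 2, n_h // 2, n_h // 2), mode="circular")
    windows = F.unfold(padded, kernel_size=(n_h, n_w))  # (B, C*n_h*n_w, H*W)
    return windows.view(B, C, n_h * n_w, H * W).permute(0, 3, 2, 1)

# Example: a 4x4 grid of 8-channel cells with 3x3 attention neighbourhoods.
nbrs = local_neighbourhoods(torch.randn(1, 8, 4, 4))
print(nbrs.shape)  # torch.Size([1, 16, 9, 8])
```

Each cell's query then attends only over its $N_H N_W$ gathered neighbours, which is what keeps the cost of a single update linear in the number of cells rather than quadratic.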
Furthermore, the attention neighbourhood size ablation in L580-588 in Appendix A.2 gets into more detail explaining the relationship between performance and local attention size while explicitly relating the Moore neighbourhood to the $3\times3$ attention size used by the baseline ViTCA and other variants (L586-588).", " \n**Q1**: _What is the meaning of "it is a fairly contractive model capable of fixed-point convergence guarantees"?_\n\n**Author response**: The full sentence in question (L58-60):\n\n"Since ViT (and by extension, ViTCA) employs Layer Normalization (LN) [43] at each stage of its processing, it is a fairly contractive model capable of fixed-point convergence guarantees [32]."\n\nThis sentence cites the Deep Equilibrium Models (DEQ) paper [32] which is about analyzing recurrent models (RNNs), observing how their hidden states transform over recurrent updates, and analytically describing how at the limit of time the hidden states converge to an \"equilibrium point,\" or synonymously, a \"stable\" or \"fixed\" point. Inspired by the Neural ODE paper [1] (reference provided below, not to be mixed up with the paper's references), they decide that instead of recurrently updating until the fixed point is reached, they can use out-of-the-box fixed-point solvers which treat the problem of finding a fixed point as a root-finding problem (_i.e.,_ when the change in state is 0). They highlight some recurrent models to test their theory, one of them being a recurrent autoregressive Transformer for next-symbol prediction in natural language processing. Importantly, near the bottom of page 8 in their paper, they highlight how the Layer Norms in the Transformer they use help make it \"contractive (and stable)\" which is important for it to be able to transform hidden states to a stable equilibrium point.\n\nVision Transformers (and Transformers) use Layer Norm to transform activation statistics for each token to be standardized (zero mean, unit standard deviation). In ViT, it's applied before each attention block, before each feedforward MLP block, and before the final MLP head. Furthermore, there are learned affine parameters that can scale and translate the normalization as the network sees fit. The result is that the transformation of each token (tokens are cells in ViTCA's case) is more stable (fewer erratic and drastic changes to tokens as you process a sequence of tokens), predictable, and thus the model is easier to train. Furthermore, in a recurrent setting, when recurrently transforming the same tokens over time, it combats explosive growth in token values that can occur due to any unchecked activations or operators that, without the normalization, would result in an expansive mapping (i.e., a Transformer that is not Lipschitz continuous).\n\nFor better understanding the \"contractive\" part of our sentence, we point to the Banach fixed-point theorem, which states that if a mapping $F: \mathcal{Z} \to \mathcal{Z}$ is a contraction (a special kind of Lipschitz continuity where the Lipschitz constant satisfies $0 \leq K < 1$ and $F$ maps a metric space $\mathcal{Z}$ to itself), then repeated applications of $F$ on a point $\mathbf{Z} \in \mathcal{Z}$, like so: $\mathbf{Z}, F(\mathbf{Z}), F(F(\mathbf{Z})), \dots$, will converge to a unique fixed point in the limit. In other words, it will result in a stable result that does not change after more repeated applications of $F$. 
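As a toy numerical illustration of the theorem (deliberately unrelated to ViTCA's specifics):

```python
import math

# cos maps [0, 1] into itself and |cos'(x)| = |sin(x)| <= sin(1) < 1 there,
# so it is a contraction on [0, 1]; Banach then guarantees a unique fixed
# point x* = cos(x*), reached by simply iterating the map.
x = 0.0
for _ in range(100):
    x = math.cos(x)
print(x)  # ~0.739085 (the Dottie number); extra iterations no longer change it
```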
This is perhaps one of the reasons why ViTCA is much more capable of keeping cells at their converged state compared to UNetCA, despite the two of them being trained with an overflow loss. We also experimented with training ViTCA without Layer Normalization and although it was capable of stabilizing after convergence during test time on an example, the quality of the results was not as good. We are excited to explore the application of DEQs within ViTCA and NCAs in general.", " **Statement**: _The idea of the paper is novel, although it doesn't provide substantial contributions besides replacing convolutions with transformers._\n\n**Author response**: We are happy to see that you recognize the novelty of the idea. The integration of the spatially-localized attention mechanism within the NCA framework is not straightforward, and there are many pivotal points when designing such a mechanism where certain design choices can lead to pitfalls. When introducing an architecture that marries two such seemingly incompatible frameworks, we feel it is critical to explore architectural variants and other design choices in-depth to verify that the results from the baseline implementation of the core idea were not spurious. Thus, we decided to choose a well-generalizable image-based task as our vehicle for exploring our core idea as opposed to exploring multiple tasks, which can be left for future work. In doing so, we have provided extensive ablative results exploring modifications such as:\n\n- Transformer embedding dimensionality\n- The number of attention heads\n- Transformer depth\n- Cell hidden dimensionality size\n- Cell initialization methods\n- Inverted bottlenecking\n- The effects of positional encoding\n- and more...\n\nFurthermore, in combining ViTs with NCAs, we were confronted with exploring the effects of positional encodings in an NCA framework, which has not been explored previously. In relation to your third question, we had an initial hesitancy in incorporating positional information in an NCA framework but realized that the restriction pertaining to locality in CAs is just a restriction on the locality of computation and not that the cells cannot contain their own explicit positioning. Please refer to our answer to your third question for more.\n", " We thank you for your extensive feedback and for recognizing the novelty of our idea. We are particularly grateful for your keen eye in pointing out the mistake in Table 3 on the MLP parameter counts. We have made a correction and used the opportunity to clarify the meaning of the parameter counts in the caption. Below we have provided our responses to your questions and to some of your comments. We hope we left \"no stone unturned\" with our responses, but if anything remains unaddressed, we wholly welcome and look forward to more feedback. Finally, we have also added a top-level comment to summarize the changes made in response to all four reviewers' comments and questions.\n\n**Statement**: _The experiments show that the proposed architecture achieves marginally better results compared to two baselines on three simple datasets..._\n\n**Author response**: We are confused by this statement because our denoising autoencoding experiments use six (not three) datasets, including CelebA and Tiny ImageNet which are not generally considered simple datasets. 
We want to clarify that Vision Transformers have yet to achieve competitive performance against U-Nets (and convolutional networks in general) on image synthesis-based tasks, which require efficient pixel-level processing. Our work presents an architecture which simply envelops ViT within an NCA paradigm, with little change to the underlying ViT architecture (the only notable change being localized attention), resulting in a complete transformation in ViT's performance on image synthesis-based tasks, to the point where it becomes competitive against a U-Net with more parameters. Although the number of parameters in ViTCA is around 10% more than in ViT, this is only due to the dimensionality difference in the inputs and outputs, since ViTCA's inputs and outputs consist of high-dimensional cells instead of _just_ image patches. However, the performance gap between ViTCA and ViT averages in the double digits. For the sake of completeness, below we provide results comparing an inverted bottleneck ViT (which we did not provide in the paper but did have some results on) with ViTCA-i in case the argument is made that the comparison between ViT and ViTCA is unfair due to parameter count differences (the parameter count difference between ViT-i and ViTCA-i is smaller than between ViT and ViTCA). Specifically, we provide four tables of denoising autoencoding results on the test sets for LandCoverRep, CelebA, MNIST, and FashionMNIST. We did not have results on CIFAR-10 or TinyImageNet due to time constraints.\n\n| | PSNR | SSIM | LPIPS | # Params. |\n|:-------:|:-----:|:-----:|:-----:|:---------:|\n| ViTCA-i | **33.49** | **0.929** | **0.108** | 54.7K |\n| ViT-i | 29.99 | 0.878 | 0.154 | 50.4K |\n\n**Table 1**: Comparing denoising autoencoding test results on LandCoverRep between inverted bottleneck variants of ViT and ViTCA.\n\n\n| | PSNR | SSIM | LPIPS | # Params. |\n|:-------:|:-----:|:-----:|:-----:|:---------:|\n| ViTCA-i | **26.10** | **0.904** | **0.074** | 54.7K |\n| ViT-i | 18.87 | 0.737 | 0.310 | 50.4K |\n\n**Table 2**: Comparing denoising autoencoding test results on CelebA between inverted bottleneck variants of ViT and ViTCA.\n\n\n| | PSNR | SSIM | LPIPS | # Params. |\n|:-------:|:---------:|:---------:|:---------:|:---------:|\n| ViTCA-i | **26.03** | **0.930** | **0.033** | 54.3K |\n| ViT-i | 14.76 | 0.537 | 0.313 | 50.1K |\n\n**Table 3**: Comparing denoising autoencoding test results on MNIST between inverted bottleneck variants of ViT and ViTCA.\n\n\n| | PSNR | SSIM | LPIPS | # Params. |\n|:-------:|:---------:|:---------:|:---------:|:---------:|\n| ViTCA-i | **22.84** | **0.827** | **0.139** | 54.3K |\n| ViT-i | 15.64 | 0.502 | 0.405 | 50.1K |\n\n**Table 4**: Comparing denoising autoencoding test results on FashionMNIST between inverted bottleneck variants of ViT and ViTCA.\n\nAlthough ViTCA-i has **~8%** more parameters than ViT-i (due to differences in input/output dimensionality), when considering that the scale of PSNR is logarithmic, that the range of SSIM is [0, 1], and that the range of LPIPS is [0, 1], there is a substantial jump in performance in ViTCA-i compared to ViT-i. On average, there is a **43.1%** relative improvement in PSNR, a **23.4%** absolute improvement in SSIM, and a **20.7%** absolute improvement in LPIPS.", " **Q4**: _Performing experiments on MNIST and CIFAR 10 datasets is OK. It is better to validate the performance on larger datasets to show the practicality of the proposed model. 
The authors can either provide some analysis if the performance is not good._\n\n**Author response**: We are not sure if this is a question, so we are assuming it is a critique of the size of the datasets used for linear probing and will answer accordingly. To succinctly answer your concern, we agree that it would be useful to validate linear probe performance on datasets larger than MNIST, FashionMNIST, and CIFAR-10. However, considering that the linear probe experiments were meant as a brief look into the learned representations of the compared models and not as a primary motivation for ViTCA, when we triaged our list of experiments we had to forgo probing with models trained on larger datasets in the interest of time. We decided not to use our Celeb-A pre-trained models since Celeb-A has 40 annotations per image, which would result in a difficult-to-parse PCA visualization. MNIST, FashionMNIST, and CIFAR-10 each have 10 classes, making for an easier qualitative verification of the linear separation of the learned representations when visualizing their PCA space. We decided not to use our Tiny ImageNet pre-trained models because of unforeseen memory limitations, which made it incredibly time-consuming to work around with gradient checkpointing (since it reduces memory consumption at the cost of computation time). This is unlike the memory limitations found when training ViTCA and UNetCA. Specifically, since the method of linear probing was a linear layer on the concatenation of _all_ cells, and the cell grid spatial size was $64 \times 64$ instead of $32 \times 32$ because of the Tiny ImageNet image size, memory usage was too time-consuming to manage in time for submission. On top of that, we had the same concern with the number of classes as we did with Celeb-A since Tiny ImageNet has 200 classes. All this being considered, we settled on the three datasets mentioned in the paper. To conclude, this could be an interesting avenue to explore in the future for a paper-specific analysis of the learned representations of NCA models in general, which [21] also encourages.\n\n**References**\n\n[1] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen, "Hierarchical text-conditional image generation with CLIP latents," _arXiv preprint arXiv:2204.06125_, 2022.", " **Q3**: _The proposed model requires iterative training to converge. How can you ensure the stability of the proposed model?_\n\n**Author response**: We are unsure whether you are asking about the stability of the training process or the stability of cell states after they have converged to a solution during inference. We will attempt to answer both:\n\n_Ensuring training stability (preventing loss and gradient-related training issues):_\n\nThere are a couple of tricks we implement to keep training stable and manageable. First, during training we zero-initialize the weights and biases of the final layer of our CA-based models (mentioned in L199), which allows for the update rule to exhibit a "do nothing" initial behaviour (this idea was taken from [30] as referenced in the paper). Since cell output channels are initialized to 0.5, cell hidden channels are initialized to 0, target image RGB values are in the range [0, 1], and overflow loss occurs when the output RGB values are past the range [0, 1] and when the hidden values are past the range [-1, 1], this results in the first CA training iteration having a loss with low magnitude. 
This means gradients are also of low magnitude and the second training iteration will not result in a dramatic change in the loss. This makes the training process more stable. We experimented with initializing the weights and biases of the final layer the same as with other layers (He initialization) and noticed the training process quickly grow unstable; this was worst with UNetCA since, unlike ViTCA, it lacks any normalization layers. Second, ViTCA's layer normalizations help keep internal activations within a manageable and stable range, which in turn improves training stability. Third, before applying gradients accumulated after the $T$ recurrent cell updates, we normalize them. Line 19 in Algorithm 1 (in Appendix A) shows this.\n\n_Ensuring cell state stability after convergence during inference:_\n\nThe overflow loss and pool sampling training scheme are the two techniques used to directly encourage long-term cell state stability during inference time, _i.e.,_ to have a cell update rule that does not cause the cells' values to diverge after they have settled on a stable point. The overflow loss encourages the update rule to keep cell states within a certain range of values. The pool sampling training strategy (sampling cell grids that were previously iterated upon, iterating upon them again, and then putting them back into the pool) is a way of training the cell update rule (ViTCA in this case, but it applies to all NCAs since this is a common NCA training method as mentioned in the paper) to always strive to keep cells at the correct solution (the point that results in the lowest loss). As can be inferred from the max size of the pool and the fact that it grows with a minibatch of new cell grids at each _odd_ training iteration, there are multiple points during training (_i.e.,_ during _even_ training iterations) when the sampled cell grid has already reached its converged point (for example, at its 1000th iteration) yet ViTCA is still required to update it and maintain a low loss. We point to the Growing CA Distill article [30] (in the paper's references) for a well-presented explanation of the pool sampling process they originated. We also point to the cart-pole NCA paper [22] (in the paper's references) for more on the overflow loss.", " **Q2**: _Linear probes have been popular evaluation protocols to show the learning representation via self-supervised learning. I am wondering how does the proposed model perform using fine-tuning? Can the authors provide fine-tuning results or explain the reason why it is not good if fine-tuning performs poorly?_\n\n**Author response**: This is an excellent question and we are glad that you asked, as we did in fact have fine-tuning results but unfortunately did not have the time to complete all experiments and to include what we had in the main manuscript or Appendix. However, we are happy to share them below, with some commentary. We only have fine-tuning results between the baselines and not the variants. Before sharing the three tables, we would like to explain the setup. The representations used were the same ones used for linear probing, the only differences being that the pre-trained models did not have their weights frozen and that the denoising autoencoding loss was still used. 
This required us to run the model twice per training iteration: once with a clean input for the classification loss as done with linear probing, and once with a noisy input for the denoising loss as done with our previous denoising training process. As you can infer, this is part of the reason it was difficult for us to obtain results: there were issues with memory usage that weren't easily solvable with gradient checkpointing, as well as issues with computation time.\n\n\n| | Acc. | # Params. |\n|:------:|:----:|:---------:|\n| U-Net | 99.2 | 119.8K |\n| ViT | 94.6 | 1.4M |\n| UNetCA | 96.8 | 379.7K |\n| ViTCA | 98.0 | 419.4K |\n\n**Table 1:** Fine-tuning test accuracies of baseline models on MNIST. All models were pre-trained for denoising autoencoding then fine-tuned.\n\n\n| | Acc. | # Params. |\n|:------:|:----:|:---------:|\n| U-Net | 92.0 | 119.8K |\n| ViT | 87.2 | 1.4M |\n| UNetCA | 10.0 | 379.7K |\n| ViTCA | 89.6 | 419.4K |\n\n**Table 2:** Fine-tuning test accuracies of baseline models on FashionMNIST. All models were pre-trained for denoising autoencoding then fine-tuned.\n\n\n| | Acc. | # Params. |\n|:------:|:----:|:---------:|\n| U-Net | 73.4 | 122.0K |\n| ViT | 57.2 | 1.4M |\n| UNetCA | 65.5 | 381.7K |\n| ViTCA | 57.1 | 420.2K |\n\n**Table 3:** Fine-tuning test accuracies of baseline models on CIFAR-10. All models were pre-trained for denoising autoencoding then fine-tuned.\n\nNote that the parameter counts listed in the above tables are the total numbers of trainable parameters, which in this case is the number of parameters for the pre-trained model plus the number of parameters for the linear classifier. This is in contrast to the number of parameters for linear probing where it's _only_ the number of parameters for the linear classifier since the pre-trained models are frozen.\n\nThe above tables show some interesting results that motivate more experimentation for future work. The most obvious conclusion is that training a linear classifier on the U-Net's bottleneck features while permitting U-Net's weights to be fine-tuned results in the best classification performance. Despite this, classifying using ViTCA's cell hidden information results in better performance than with UNetCA's and ViT's respective representations on MNIST and FashionMNIST. Oddly enough, classifying with ViTCA's cell hidden information is marginally worse than with ViT's representation on CIFAR-10 and much worse than with UNetCA's cell hidden representation. We do not have any immediate explanation as to why this happens and it is difficult to arrive at any definitive conclusion without further testing and analysis.", " We appreciate the extensive feedback and thank you for acknowledging the effectiveness of our proposed NCA model. Below we have provided our responses to your questions. We have also added a top-level comment to summarize the changes made in response to all four reviewers' comments and questions.\n\n**Q1**: _Choosing ViT as a baseline to show the superiority of the proposed ViTCA makes sense. But why do you adopt U-Net, the CNN architecture, as the baseline? And why the proposed ViTCA succeeds while U-net fails? The authors do not describe the motivation and the reasons for such an experimental setup._\n\n**Author response**: These are fair and valid questions. 
We believed the motivations for using a U-Net for comparison on denoising autoencoding performance were implied, considering that U-Nets are widely used as \"default\" feedforward neural architectures that perform well on a variety of vision-based tasks, such as semantic segmentation, image classification, optical flow estimation, denoising autoencoding, etc. Before vision transformers, convolutional networks were the dominant class of models for image classification, and they remain highly competitive as backbone architectures for several vision-based tasks, especially image synthesis (of which denoising autoencoding is a subclass). We cite [37] in the paper as an example of a competitive U-Net-based denoising autoencoder, on which we base our U-Net and UNetCA. We also point to [1] below, a recent diffusion model which uses a U-Net backbone to provide state-of-the-art results on text-conditioned image synthesis. To summarize, we tried to cover our evaluative bases by framing the comparisons as between two popular classes of image processing models: vision transformers (ViT) and convolutional neural nets (CNNs), and between two frameworks: NCA and non-NCA, of which the intersection of NCA and ViT (NCA + ViT = ViTCA) was what we were motivating.", " **Q3**: _Instead of overflow loss, have you tried just using nonlinear activation functions that stay within the desired range?_\n\n**Author response**: We believe there is a misunderstanding of the purpose of the overflow loss. We are assuming that you are referring to restricting the cell update range with a non-linear activation; please correct us if we are assuming incorrectly.\n\nThe overflow loss is not meant for restricting the range of cell update values, but rather for encouraging the range of cell output values to be within [0, 1] and cell hidden values to be within [-1, 1]. It is intended to encourage long-term cell state stability, inspired by [20] and [22] (in the paper's references). On the other hand, the cell _updates_ produced by the update rule, which _update cell values_, are unbounded. This allows for additive and subtractive cell updates, as done by [20], [21], and [30]. Despite this, we _did_ experiment with restricting the bounds of cell updates by using a sigmoid, tanh, or ReLU activation after the MLP head's linear layer (which produces the update each iteration).\n\nBefore explaining our results for each, it's important to highlight the tricks we implemented to keep training stable and manageable:\n\nFirst, during training we zero-initialize the weights and biases of the final layer of our CA-based models (mentioned in L199), which allows for the update rule to exhibit a \"do nothing\" initial behaviour (this idea was taken from [30]). Since cell output channels are initialized to 0.5, cell hidden channels are initialized to 0, target image RGB values are in the range [0, 1], and overflow loss occurs when the output RGB values are past the range [0, 1] and when the hidden values are past the range [-1, 1], this results in the first CA training iteration having a loss with low magnitude. This means gradients are also of low magnitude and the second training iteration will not result in a dramatic change in the loss. This makes the training process more stable. 
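As a minimal sketch of this "do nothing" initialization (assuming a PyTorch-style setup; the dimensions below are placeholders, not our exact configuration):

```python
import torch.nn as nn

# With a zeroed final layer, the update rule outputs exactly zero at first,
# so cells are left untouched on the very first iteration.
embed_dim, update_dim = 128, 35  # hypothetical sizes for illustration
update_head = nn.Linear(embed_dim, update_dim)
nn.init.zeros_(update_head.weight)
nn.init.zeros_(update_head.bias)
```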
We experimented with initializing the weights and biases of the final layer the same as with other layers (He initialization) and noticed the training process quickly grow unstable; this was worst with UNetCA since, unlike ViTCA, it lacks any normalization layers.\n\nSecond, ViTCA's layer normalizations help keep internal activations within a manageable and stable range, which in turn improves training stability.\n\nThird, before applying gradients accumulated after the $T$ recurrent cell updates, we normalize them. Line 19 in Algorithm 1 (in Appendix A) shows this.\n\nNow we can get into the results of our activation experiments:\n\n_Sigmoid:_\nThis eliminated the \"do nothing\" initial behaviour since it transformed the zero output of the final layer (since its weights and biases were initialized to 0) to a 0.5 output, which resulted in unstable training by immediately increasing cell values. Furthermore, since a sigmoid outputs values between (0, 1), this meant cell updates were always additive and were never subtractive, which caused explosive cell state growth and unstable training. On a related note, recurrent models are much more susceptible to the exploding gradient problem when using sigmoid activations. This typically leads to using ReLUs instead.\n\n_ReLU:_\nAs experienced by [30], using a ReLU resulted in only positive cell updates and no subtractive cell updates. Also, since it's unbounded, it quickly resulted in explosive cell state growth, which destabilized training.\n\n_Tanh:_\nTanh allowed for both additive and subtractive cell state updates. Although its derivative quickly shrinks with non-zero inputs (even with input values with an absolute value < 1)---which can cause vanishing gradients---our use of gradient normalization, our weight/bias initialization (He init except for the final layer, which uses zero init), the fact that target RGB values are in the range [0, 1], and initial cell output values being 0.5 meant that vanishing gradients wouldn't be much of a problem, at least during the start of training when inputs to tanh were small. The problem with using tanh was that it was still susceptible to vanishing gradients over time, despite our mitigations, and it slowed down training progress since it prevented quickly updating cells to the desired ranges (which would also result in worse training losses). We found it preferable to just not use an activation function since the training process was already quite stable.", " **Q2**: _On lines 209-210, you mention that ViTCA outperforms the other baselines on 10 out of the 18 datasets, although you are reporting only 6. Are there 12 more datasets you aren't telling us about??_\n\n**Author response**: The sentence in question: \"Amongst baselines, ViTCA outperforms on most metrics across the majority of datasets used (10 out of 18).\"\n\nTo answer your question, no, there are not any more datasets beyond the six described in the paper. To clarify the meaning of the sentence, it is referring to 10 out of the total number of measurements made across all datasets, of which there are 18. Specifically, there are 6 datasets and 3 metrics (PSNR, SSIM, LPIPS), so there are $6 \times 3 = 18$ measurements in total. ViTCA gets the best result on 10 of those measurements when comparing to the other baseline models in Table 1.", " **Q1**: _What are the essential differences between your model and a universal transformer with local attention? 
Universal transformers simply use the same weights over and over, so they are recurrent (Dehghani, et al. 2018). One difference is the asynchronous update rule. Others?_\n\n**Author response**:\nYou raise a completely valid concern and listed below are other essential differences between ViTCA and Universal Transformers (UT):\n1. In each time step, UT's attention and self-attention computations are performed across all tokens, _i.e.,_ _globally_. In each time step, ViTCA's self-attention is computed across a localized neighbourhood about each cell (cells are treated as tokens), circumventing the quadratic computational complexity of global self-attention while still maintaining global information propagation through recurrent cell updates.\n2. UT is designed for natural language processing while ViTCA is designed for image processing, resulting in different computational pipelines. UT first embeds each symbol (UT treats symbols as tokens; examples of symbols are words and phonemes) in the input sequence, then uses its recurrent encoder to recurrently update the embedding with self-attention and its transition function. After $T$ steps, the embedding is then fed to the recurrent decoder where it uses attention on the encoder's embedding, self-attention on its own embedding, and its own transition function to update its own embedding. The decoder's embedding comes from embedding its current sequence of output symbols. The decoder produces the next output symbol at each decoder iteration with an argmax on its output probabilities which are computed using a softmax on the output of its transition function after $T$ steps. \nBasically, UT operates in two stages in sequence: an encoder stage followed by a decoder stage, each with their own transformer where the encoder has one attention block and the decoder has _two_ to attend to both its own tokens and the encoder's. After the encoder iterates $T$ times, the decoder iterates $T$ times to produce *one* output token. **This means the decoder needs to iterate $TN$ times to produce $N$ output tokens**. It's an autoregressive model. ViTCA, on the other hand, has its encoder and decoder operate in lockstep over cell update iterations. The encoder processes tokens which are then fed to an MLP head acting as its decoder to produce cell updates for all cells, all in one time step, done asynchronously, using a single attention block.\n3. Recurrent updates in ViTCA are performed on tokens (_i.e.,_ cells) while recurrent updates in UT are performed on token _embeddings_ (_i.e.,_ symbol embeddings). In other words, ViTCA uses cells as its state representation and UT uses symbol embeddings as its state representation. At the start of each iteration, ViTCA feeds cells to an embedding layer, while UT does not. Another important difference between the token representations used in the two models is that cells not only contain the input information (like an image patch or even a symbol if need be) but also contain hidden information that can be used to facilitate inter-cell communication, output information for storing the output from the previous iteration, and optionally, positional information (which would allow foregoing the positional encoding used within the transformer). UT's encoder and decoder tokens do not contain hidden information, they do not contain the previous iteration's output, and they're not designed to include additional information like positional encodings. 
Also, in ViTCA, the output is directly extracted from tokens (cells) while in UT the output is autoregressively decoded from a decoder transformer operating on encoder token embeddings and decoder token embeddings.", " Here we provide four tables of denoising autoencoding results on the test sets for LandCoverRep, CelebA, MNIST, and FashionMNIST. We did not have results on CIFAR-10 or TinyImageNet due to time constraints.\n\n| | PSNR | SSIM | LPIPS | # Params. |\n|:-------:|:-----:|:-----:|:-----:|:---------:|\n| ViTCA-i | **33.49** | **0.929** | **0.108** | 54.7K |\n| ViT-i | 29.99 | 0.878 | 0.154 | 50.4K |\n\n**Table 1**: Comparing denoising autoencoding test results on LandCoverRep between inverted bottleneck variants of ViT and ViTCA.\n\n\n| | PSNR | SSIM | LPIPS | # Params. |\n|:-------:|:-----:|:-----:|:-----:|:---------:|\n| ViTCA-i | **26.10** | **0.904** | **0.074** | 54.7K |\n| ViT-i | 18.87 | 0.737 | 0.310 | 50.4K |\n\n**Table 2**: Comparing denoising autoencoding test results on CelebA between inverted bottleneck variants of ViT and ViTCA.\n\n\n| | PSNR | SSIM | LPIPS | # Params. |\n|:-------:|:---------:|:---------:|:---------:|:---------:|\n| ViTCA-i | **26.03** | **0.930** | **0.033** | 54.3K |\n| ViT-i | 14.76 | 0.537 | 0.313 | 50.1K |\n\n**Table 3**: Comparing denoising autoencoding test results on MNIST between inverted bottleneck variants of ViT and ViTCA.\n\n\n| | PSNR | SSIM | LPIPS | # Params. |\n|:-------:|:---------:|:---------:|:---------:|:---------:|\n| ViTCA-i | **22.84** | **0.827** | **0.139** | 54.3K |\n| ViT-i | 15.64 | 0.502 | 0.405 | 50.1K |\n\n**Table 4**: Comparing denoising autoencoding test results on FashionMNIST between inverted bottleneck variants of ViT and ViTCA.\n\nAlthough ViTCA-i has **~8%** more parameters than ViT-i (due to differences in input/output dimensionality), when considering that the scale of PSNR is logarithmic, that the range of SSIM is [0, 1], and that the range of LPIPS is [0, 1], there is a substantial jump in performance in ViTCA-i compared to ViT-i. On average, there is a **43.1%** relative improvement in PSNR, a **23.4%** absolute improvement in SSIM, and a **20.7%** absolute improvement in LPIPS.", " Thank you for your constructive feedback and for recognizing the novelty and significance of our work. Below we have provided our responses to your questions as well as to some of your comments on the strengths and weaknesses of the paper. We have also added a top-level comment to summarize the changes made in response to all four reviewers' comments and questions.\n\n**Strength 3**: _While the baseline ViT used for comparison has slightly fewer parameters than the baseline ViTCA, making for an unfair comparison, smaller versions of the ViTCA (e.g., ViTCA-i) also outperform it with many fewer parameters. This could be better emphasized in the paper. Although this does beg the question - how would an inverted bottleneck ViT with a similar number of parameters perform on this task?_\n\n**Author response**: You raise an excellent question on how an inverted bottleneck ViT would perform. Fortunately, we had performed this experiment prior to submission. Unfortunately, we did not include it due to space and time constraints. 
Before we provide the table of results and commentary, we would like to respond to your comment on the fairness of the comparison between the baseline ViT and baseline ViTCA on the grounds of slightly differing parameter counts.\n\nWe feel that ViT having slightly fewer parameters than ViTCA should not disqualify the fairness of comparisons between the two. ViTCA and ViT use the same backbone architecture and, parameter-wise, only differ in the size of their respective inputs and outputs, which affects their embedding layer and head layer parameter counts (keep in mind that the attention size does not affect parameter count). Their internal representation dimensionality remains in parity. ViTCA operates with higher-dimensional inputs, _i.e.,_ cells, and higher-dimensional outputs, _i.e.,_ cell updates. Each cell contains an input image patch (for our experiments, input/output image patches are $1 \times 1$, meaning a single RGB pixel), an output image patch (produced from the previous iteration), hidden information, and (optionally) positional information (which would allow foregoing the positional encoding used within the transformer). Cell updates consist of updating the cell output channels and the cell hidden channels. ViT, on the other hand, consumes only input image patches and produces only output image patches. If we wanted parameter parity between ViT and ViTCA, we would have to change the inputs/outputs of ViT to match those of ViTCA, specifically, that the inputs are cells and the outputs are cell updates. The issue with this approach is that without recurrent updates, the cell-based setup would be pointless since the cell hidden information isn't being updated and previous cell states are not being considered. Furthermore, without recurrent updates, the additional information in the input, namely the cell hidden information and cell output information (which are initialized to 0 and 0.5, respectively), will act as useless noise for ViT. Once you add recurrent updates, it is only one step away from being ViTCA: localizing attention. The main motivation of our paper is that by moving ViT to a CA framework without changing its internal representation dimensionality, and by localizing its attention (which also doesn't change dimensionality), you end up with a more parameter-efficient ViT.", " We are very thankful for your positive feedback and recognition of the contributions of ViTCA. Please find our response to your questions and comments below. We have also added a top-level comment to summarize the changes made in response to all four reviewers' comments and questions.\n\n**Q1**: _One of the main contributions here is the propagation of local information through CA. However, with large CA systems, I can imagine that there is a separation of timescales between local information propagation and the information being implicitly available to make global changes. Have the authors considered (empirically or theoretically) the rate of information propagation through the system and whether this scales with system size?_\n\n**Author response**: This is a great question. Yes, this was considered and discussed at some point during the project, with rudimentary calculations made to estimate the rate of information propagation over time. 
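A toy version of that rudimentary calculation (ours, simplified; the reasoning behind it follows below):

```python
# Receptive field of one cell after T localized-attention updates, assuming
# a fixed n x n attention neighbourhood (here n = 3, as in baseline ViTCA).
def receptive_field(T: int, n: int = 3) -> int:
    return T * (n - 1) + 1

print([receptive_field(t) for t in (1, 2, 3, 10)])  # [3, 5, 7, 21]
```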
Excluding the effects of asynchronous updating, the rate of information propagation due to the recurrent applications of localized self-attention (at a $3 \times 3$ attention neighbourhood size) is analogous to how the theoretical receptive field of neurons, with respect to the input, linearly grows through layers of $3 \times 3$ convolutions. So the same formulas apply. Specifically, when looking at just the height dimension (same applies for width),\n\n$$r_T = \left(\sum_{t=1}^{T} \left( N_H - 1 \right) \right) + 1 = T \left( N_H - 1 \right) + 1 \quad ,$$\n\nwhere $r_T$ denotes the height-specific receptive field of the final attention operation after $T$ iterations and $N_H$ is the attention height. So for example, with ViTCA's $3 \times 3$ attention, after one iteration each cell is updated using information from its $3 \times 3$ neighbourhood. After two iterations each cell is updated using information, implicitly, from its $5 \times 5$ neighbourhood. Third iteration, $7 \times 7$ neighbourhood, and so on. Receptive fields grow linearly with respect to time, and you can interpret this in a transposed manner to say that the rate of spatial information propagation from one cell is linear in the exact same way (_i.e.,_ it's the same equation). We suggest looking at [1] (references provided below) for more commentary on receptive fields and for interactive demos visualizing receptive field growth, which similarly applies to the rate of information propagation from localized attention.\n\nThe rate of information propagation does not scale with the size of the cell grid (assuming that is what you meant by system size); it scales with the size of the cell neighbourhood (for ViTCA, the size of its attention neighbourhood and for UNetCA, the size of its first convolution). Although in this case the scale is linear, one can increasingly dilate or grow the attention size over time for an exponential growth in the rate of information propagation, although we feel this pushes against the philosophy of local computations in cellular automata (although locality is relative, after all). As a matter of fact, considering that the attention size does not affect the number of parameters, an experiment we would like to investigate in future work is to allow ViTCA to dynamically alter the attention neighbourhood size or dilation to maximize the rate of information propagation, maximize accuracy, and minimize unnecessary compute (if attention past a certain neighbourhood size isn't useful, shrink it before the next iteration), while staying within some user-chosen max compute budget. Another one of our experiment ideas for future work is to visualize the actual region of influence of a cell over time, similar to Figure 14 in the cart-pole NCA paper [2].\n\n**References**\n\n[1] A. Araujo, W. Norris, and J. Sim, "Computing receptive fields of convolutional neural networks," _Distill_, 2019, https://distill.pub/2019/computing-receptive-fields/ .\n\n[2] A. Variengien, S. Nichele, T. Glover, and S. Pontes-Filho, "Towards self-organized control: Using neural cellular automata to robustly control a cart-pole agent," in _Innovations in Machine Intelligence (IMI)_, 2021, pp. 1-14.", " The authors combine the Vision Transformer architecture with Neural Cellular Automata (NCA, which combine artificial neural networks with cellular automata). They demonstrate that introducing self-attention and positional encodings to NCAs significantly improves performance. 
One of the main contributions of this work is the authors' adaptation of the global self-attention mechanism to respect the localisation condition for NCAs. The receptive field of individual cells is implicitly grown due to cell updates during CA iterations such that there is a global propagation of information from localised self-attention. This self-organisation is shown to give performance advantages when compared to other architectures.\n\n\n This is a welcome extension of NCAs to include self-attention and, as such, is highly original. As I understand it, one of the main challenges of achieving this has been how to combine the local computation of NCAs with global updates in ViT, without having a highly inefficient learning process. The main contribution from this paper has been an attempt to do exactly this. \n\nThe authors have provided a thorough analysis of ViTCA, across six benchmark datasets and multiple baseline architectures including ViT, U-Nets and UNetCA. They find that ViTCA can outperform other approaches on most benchmark datasets. Particularly interesting is the discussion of cell states, as compared to UNetCA, where the authors find that ViTCA maintains cell stability even when damaged, whereas UNetCA diverges. \n\nIn general, I found the manuscript clear and full of information. Some of the figures had almost too much information and it was difficult to relate the caption to the figure. For example, in Figure 2 it took me some time to understand what 'middle' and 'left,right' exactly referred to. Overall, I thought it was well structured and presented.\n One of the main contributions here is the propagation of local information through CA. However, with large CA systems, I can imagine that there is a separation of timescales between local information propagation and the information being implicitly available to make global changes. Have the authors considered (empirically or theoretically) the rate of information propagation through the system and whether this scales with system size? As the authors note, there are memory constraints with using recurrent models such as CA, which they offer potential future solutions to. ", " This paper proposes a novel combination of vision transformers and Neural Cellular Automata (NCAs), and uses them to create denoising autoencoders. An NCA is essentially a cellular automaton with the node updates being performed by a neural net. The ViTCA proposed here adds to this attention heads that are only focused on neighboring cells, and includes positional encodings. The paper demonstrates superior performance to a ViT on denoising autoencoder tasks, and demonstrates the robustness of the model to various perturbations.\n\nI have read the authors' responses and am happy to update my rating to a 7. Their responses were quite thorough, although (correct me if I'm wrong) they didn't respond to the \"one task\" critique.\n\n\n Strengths\n\n1. This is a novel combination of transformers and neural cellular automata.\n\n2. The results look very good compared to a vanilla ViT. E.g., Figure 1 and parts of Figure 2 make a good case for the model.\n\n3. While the baseline ViT used for comparison has slightly fewer parameters than the baseline ViTCA, making for an unfair comparison, smaller versions of the ViTCA (e.g., ViTCA-i) also outperform it with many fewer parameters. This could be better emphasized in the paper.
Although this does beg the question - how would an inverted bottleneck ViT with a similar number of parameters perform on this task?\n\n4. The writing is fairly clear, although Figures 2 and 3 are not.\n\n5. I believe, given the results, that this work is significant, but they only demonstrated efficacy on one task, which reduces enthusiasm somewhat.\n\n\nWeaknesses:\n\n1. It is very hard to understand what is going on in Figures 2 and 3. I can make some sense of Figure 3, but Figure 2 is a mystery. I don’t know what a “splat map” is, and I can’t tell that the cells are focusing on foreground, noise and background, as mentioned in the caption. The authors should try showing these figures to a colleague and finding out what they have to explain to get the idea across, and then simplify the figure to make this clear.\n\n2. This reviewer, at least, would have appreciated a demonstration of this architecture on another problem (e.g., semantic segmentation), rather than a deep dive into the model's robustness to ablations and various perturbations (sections 4.1.1-4.1.3), which could have been relegated to the supplementary material. \n\n3. This model is similar to a universal transformer, which also uses essentially depth 1 (since the parameters are used over and over), with iterative processing.\n\n 1. What are the essential differences between your model and a universal transformer with local attention? Universal transformers simply use the same weights over and over, so they are recurrent (Dehghani et al., 2018). One difference is the asynchronous update rule. Others?\n\n2. On lines 209-210, you mention that ViTCA outperforms the other baselines on 10 out of the 18 datasets, although you are reporting only 6. Are there 12 more datasets you aren’t telling us about??\n\n3. Instead of overflow loss, have you tried just using nonlinear activation functions that stay within the desired range?\n\nMostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, Łukasz Kaiser (2018) Universal Transformers. arXiv:1807.03819. Yes.", " This paper proposes a new family of Neural Cellular Automata (NCA) based on vision Transformers for the first time. The main difference from the plain ViT is the use of a specially localized self-attention mechanism that is still globally organized, thus yielding a parameter-efficient model. Quantitative and qualitative experiments show that the proposed ViTCA outperforms U-Net, UNetCA, and ViT on various benchmarks. The learned representations are also verified via linear-probe evaluation. Strengths:\n+ This paper provides a new perspective on neural cellular automata with a vision Transformer architecture. Although it is a combination of problem and architecture, the authors propose an effective self-attention scheme to improve the model.\n+ The experiments are relatively complete and can verify the proposed contributions.\n+ The paper is well organized and provides informative figures.\nWeaknesses:\n- The biggest problem is that the authors do not explain why U-Net is adopted as a baseline model for comparison. This makes experimental comparisons confusing. Did you choose U-Net as a CNN structure for comparison?\n- It is better to validate the performance of linear probes on larger datasets since experiments are only evaluated on small MNIST and CIFAR 10 datasets. However, I do not mean that this will be the reason for the rejection of the paper. - Choosing ViT as a baseline to show the superiority of the proposed ViTCA makes sense.
But why do you adopt U-Net, the CNN architecture, as the baseline? And why does the proposed ViTCA succeed while U-Net fails? The authors do not describe the motivation and the reasons for such an experimental setup. \n- Linear probes have been popular evaluation protocols to assess representations learned via self-supervised learning. I am wondering how the proposed model performs with fine-tuning. Can the authors provide fine-tuning results, or explain the reason if fine-tuning performs poorly?\n- The proposed model requires iterative training to converge. How can you ensure the stability of the proposed model?\n- Performing experiments on MNIST and CIFAR 10 datasets is OK. It is better to validate the performance on larger datasets to show the practicality of the proposed model. Alternatively, the authors can provide some analysis if the performance is not good. The authors have discussed the limitations in Sec 5.", " The paper presents a neural cellular automata (NCA) architecture for image denoising that replaces the typical convolutional layers with a vision transformer.\n\nThe experiments show that the proposed architecture achieves marginally better results compared to two baselines on three simple datasets, and the authors show that their model appears to be stable after converging to a target image (NCAs are known to be unstable when evaluated on longer horizons than the one seen at training time). The idea of the paper is novel, although it doesn't provide substantial contributions besides replacing convolutions with transformers.\nThe quality of the paper is good; the experiments go fairly in-depth, although on just one task of image denoising (which is less interesting than texture generation or other forms of morphogenesis, in the reviewer's personal opinion).\n\nThe clarity of the paper is good: almost all details are reported clearly and there are only a few details that require clarification (see below).\nThe significance of the paper is modest, since it is not even clear that the proposed method provides real advantages compared to the two simple baselines tested by the authors. The paper is unlikely to have an impact on the NCA community, and surely not on the much larger computer vision community. Overall, I lean towards a borderline reject which can turn into a borderline accept if the following questions are addressed.\n\n1. What is the meaning of \"it is a fairly contractive model capable of fixed-point convergence guarantees\"?\n\n2. The attention operation in Equations (1) and (3) is at the patch level, not at the cell level. This means that ViTCA is not comparable 1-to-1 with pixel-level NCAs unless the patch size is set to 1x1. \nAlso, it might confuse the reader to refer to the \"Moore neighborhood\" in line 177 when talking about the patch size, because this term is typically used in CA literature to indicate the neighborhood of a cell (as it is used to compute the update). Here, the effective neighborhood of the cell is 9x9.\nDid I misunderstand something?\n\n3. I couldn't find how the positional encodings are computed. If PEs provide unique coordinates for each pixel, doesn't that defeat the principle of locality of CAs?\n\n4. Why is the ViTCA baseline configured to have a patch size of 1x1, while all the other ViTCA models have 3x3?\n\n5. The number of parameters in Tables 1, 2, and 3 appears to be misreported.
\nA few examples: \n - Table 1, no difference between ViTCA (4 heads) and ViTCA-32 (32 heads)\n - Table 2, no difference between ViTCA with 4-64 heads\n - Table 3, no difference between the variants\n - Table 3, no difference between MLPs with 100 or 1000 hidden units on CIFAR10\n\n The absence of differences in Tables 1 and 2 is especially puzzling and important to clarify, because ViTCA-32 is the only model that can consistently surpass the baselines (while, e.g., ViTCA-4 cannot).\n\n6. Why use an overflow loss instead of an activation function (sigmoid, tanh) that forces the updates to be in the required ranges?\n\n7. Figure 3 shows some noise sampled from a uniform distribution being injected in the cell update. Where is this described in the text?\n\n8. Since all NCAs are image-to-image maps, why didn't the authors consider other NCA baselines like the original NCA [30] or the Variational NCA [7]? Why not consider other recurrent baselines too? As it is (and considering point 5 above), the comparisons are not very informative.\n\nI'm looking forward to the discussion with the authors. The limitations are not discussed extensively besides what concerns memory costs. \nI have pointed out a few issues in my questions that could be discussed better (e.g., point 3). \n\nThere are no societal concerns with the work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "wXbHChf0BSH", "JnKwUTCyJlrK", "nips_2022_9t24EBSlZOa", "6LJ5x9p5mMS", "ptQKLkLgznS", "gYJ8vwqzVE", "dt4B5562MKRR", "jhNON-Kk9Pb", "gZfOXtdsND-", "2ZMclWh6z9g", "U3uh9evRRB_", "xn1NWE-ONQ", "D14W8q6et0h", "7RBqn2C5JDa", "grZhuZoPgge", "itN2a3nTANu", "Id7IxNZerzk", "981whYWVz_q", "o5TAmOmyc17", "Dih5Mdhd19E", "q8Tq-QfIXhG", "frWkj4wGvbh", "nips_2022_9t24EBSlZOa", "nips_2022_9t24EBSlZOa", "nips_2022_9t24EBSlZOa", "nips_2022_9t24EBSlZOa" ]
nips_2022_v9Wjc2OWjz
The price of ignorance: how much does it cost to forget noise structure in low-rank matrix estimation?
We consider the problem of estimating a rank-$1$ signal corrupted by structured rotationally invariant noise, and address the following question: \\emph{how well do inference algorithms perform when the noise statistics is unknown and hence Gaussian noise is assumed?} While the matched Bayes-optimal setting with unstructured noise is well understood, the analysis of this mismatched problem is still in its infancy. In this paper, we take a step towards understanding the effect of the strong source of mismatch which is the noise statistics. Our main technical contribution is the rigorous analysis of a Bayes estimator and of an approximate message passing (AMP) algorithm, both of which incorrectly assume a Gaussian setup. The first result exploits the theory of spherical integrals and of low-rank matrix perturbations; the idea behind the second one is to design and analyze an artificial AMP which, by taking advantage of the flexibility in the denoisers, is able to "correct" the mismatch. Armed with these sharp asymptotic characterizations, we unveil a rich and often unexpected phenomenology. For example, although AMP is in principle designed to efficiently compute the Bayes estimator, the former is \\emph{outperformed} by the latter in terms of mean-square error. We show that this performance gap is due to an incorrect estimation of the signal norm. In fact, when the SNR is large enough, the overlaps of the AMP and the Bayes estimator coincide, and they even match those of optimal estimators taking into account the structure of the noise.
Accept
This paper studies precise high-dimensional asymptotics in a simple low-rank matrix estimation problem. When there exists a distributional mismatch between the true noise distribution and the Gaussian noise assumption imposed to run the AMP algorithm, the authors observe, and formally quantify, the performance gap between the AMP algorithm and the Bayes estimator. While the model considered in this paper might still be too idealistic (which limits its broader impacts), the paper is well-written and solid. All reviewers recommend acceptance of this paper, and I echo their recommendation.
train
[ "DPtJRWy-pa", "XEwzERKE196", "7JbIvSjd7xJ", "hFgMe16jjIw1", "kZ8QtRkFOi-", "PETbs6VDG2y", "WHZ8e1slaz4", "Rw8gBrdIjcT", "7-mndDrPc2D", "zbikadWyf7v", "Is06A1nhSVG", "HpDDQeOazel", "FaWi64Mr-og", "RicTV0sC2p", "5LuitNkWlVR" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for detailed response to my comments. I'll raise the rating to 6. ", " Thank you to the authors for the detailed response, to which I am satisfied. I will raise the rating from 5 to 6.", " Thank you for the positive feedback and evaluation of our paper.\n\nWe agree with this last comment, and we have just updated the revision accordingly: we now use the notation $h_t$ and $\\tilde h_t$ to refer to the two types of denoisers, see l. 290-294.", " Thanks the authors for responding to my previous questions.\nMy rating for this submission remains unchanged (6: Weak Accept).", " Thank you for your response, and particularly for the clarification regarding the two types of denoisers involved in the algorithm analysis. I would suggest to specifically refer to these functions when mentioned using the notation $h_t$ and $\\tilde{h}_t$ (e.g., line 290).\n\nMy rating for this submission remains unchanged.", " We thank the reviewer for the helpful comments. We reply to the various points raised in the review below. \n\n---------\n\n*\"Some instances of simplified notation do not seem necessary\"* \n\nWe thank the reviewer for this remark. We have changed Eq. (4) and l. 179-180 according to the suggestion. We realize that the paper is rather dense in the notation and the terminology employed (also because of the strict 9 page limit); hence, if the reviewer has any additional suggestions on how to improve the clarity, we would be happy to revise accordingly.\n\n---------\n\n*\"The discussion after Theorem 2 would be more clearly stated as a corollary\"*\n\nThis is a really good point, and it was also raised by reviewer *MSna*. In the revision, we have added Corollary 1, which gives the expression for the asymptotic MSE and overlap achieved by the Gaussian AMP iterates.\n\n---------\n\n*\"For the rescaled overlap measure (5), is it the same as the asymptotic cosine distance?\"* \n\nThe rescaled overlap measure (5) and the asymptotic cosine distance are indeed the same, and a comment has been added to note this (see l. 184-185 of the revision).\n\n---------\n\n*\"concern on the Bayes estimator [...] would not also apply to the MSE?\"* \n\nThis is in fact the case. However, it only applies to the *vector MSE* and not to the *matrix MSE* studied in the paper. In particular, the comment the reviewer refers to implies that $\\frac{1}{N} \\mathbb E \\|\\| \\int \\boldsymbol x P_{{\\rm mis}}(d\\boldsymbol x\\mid \\boldsymbol Y) -\\boldsymbol X\\|\\|^2 = 1$ always (which means that the vector MSE is not a proper error measure for this model). However, the (matrix) MSE we analyze is $\\frac{1}{2N^2} \\mathbb E \\|\\| M_{\\rm mis}(\\boldsymbol Y) - \\boldsymbol X \\boldsymbol X^T\\|\\|_F^2$, which is non-trivial.\n\n---------\n\n*\"Where is the function ht in (8) described? [...] Can there be a more explicit discussion of what \"carefully\" choosing denoisers entails in practice?\"*\n\nWe believe there is a misunderstanding here, and we would like to clarify these points. \n\nFirst, the function $h_t$ in (8) is *generic* and the algorithm designer is free to choose it. The idea is that $h_t$ can be picked in order to exploit any information available for the signal prior. The only constraint on $h_t$ is given by Assumption 3, which is however rather mild and satisfied by commonly used functions, such as soft-thresholding and ReLU, see also point 1 in the response to reviewer *MSna* and l. 
\n\nSecond, in the sketch of the proof of Theorem 2, we are referring to the denoisers $\\tilde{h}_t$ of the abstract AMP iteration, see (33)-(34) in Appendix C.1. The denoisers $\\tilde{h}_t$ need to be chosen carefully, in order to ensure that the AMP iteration (8) (i.e., the object we are interested in analyzing) is close to the AMP iteration (33) (i.e., the object for which a state evolution result can be proved).\n\nFinally, let us highlight that the denoisers that matter in practice are $\\{h_t\\}$, and the algorithm designer is free to choose them. The denoisers $\\{\\tilde{h}_t\\}$ are introduced solely as a proof technique, in order to show the state evolution result of Theorem 2; hence, they do not have an impact on the practicality of the algorithm.\n\nWe hope our explanation answers the question of the reviewer, and we have added clarifications at l. 199-200 and 290-292 of the revision. \n\n---------\n\n*\"In line 254, should \"converges... to the random vector\" be \"converges... to that of the random vector\"?\"*\n\nFor $i= 1, \\ldots, t$, the empirical distribution of the random vector $\\boldsymbol x^i$ converges to the random variable $x_i$. Similarly, for $i= 1, \\ldots, t+1$, the empirical distribution of the random vector $\\hat{\\boldsymbol x}^i$ converges to the random variable $\\hat{x}_i$. Thus, the joint empirical distribution in l. 262 “converges… to the random vector”, and not \"converges... to that of the random vector\". \n\nWe agree with the reviewer that it is worth mentioning that the second random vector provides the state evolution, and we have modified the manuscript accordingly (see l. 263 of the revision). \n\n---------\n\n*\"more detailed discussion on the applicability of Assumptions 1, 2, and 3\"*\n\nThis is a very good point. Some remarks on the generality of Assumptions 1-2 have been added to the text (see l. 139-140 and 151-152 of the revision). Assumption 3 is rather mild and standard in the AMP literature: we have added l. 278-279 in the revision to discuss this point; see also point 1 in the response to reviewer *MSna*.\n\n
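For concreteness, here is a minimal sketch of denoisers of the kind allowed by Assumption 3 (the threshold below is an arbitrary illustrative choice, not a value from the paper):

```python
# Illustrative sketch: Lipschitz, weakly differentiable denoisers such as
# soft-thresholding and ReLU; their derivatives are discontinuous only on a
# set of measure zero. The threshold value here is arbitrary.
import numpy as np

def soft_threshold(x, tau=0.5):
    # derivative jumps only on {|x| = tau}, a measure-zero set
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def relu(x):
    # derivative jumps only at x = 0
    return np.maximum(x, 0.0)

x = np.linspace(-2.0, 2.0, 9)
print(soft_threshold(x))
print(relu(x))
```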
", "*\"I'd appreciate it if the authors can provide some comments about the applicability of the algorithms discussed in this paper. Under what type of scenarios might one favor one approach over the other?\"*\n\nIn this paper, we study the effect of mismatch on three types of inference procedures: a Bayes estimator, AMP algorithms, and spectral methods. Spectral methods consist in suitably rescaling the principal eigenvector of the data matrix. The scaling factor takes into account the spectrum of the data matrix, and it is chosen in order to minimize the MSE. This approach is unable to take advantage of any information on the signal prior. In fact, as the noise matrix is rotationally invariant, both the MSE and the overlap of the spectral estimator do not depend on the signal prior (see l. 225-226 of the revision). In contrast, in the AMP algorithm (8), the algorithm designer can choose the denoiser $h_t$ in order to adapt to the signal prior (see l. 199-200 of the revision). This is a key advantage of AMP, and it is what makes it Bayes-optimal in certain (matched) settings, see e.g. [29, 30, 61, 13]. Finally, the Bayes estimator directly samples from the posterior, but it does not lead to a practical algorithm (see l. 176 in the revision).\n\n----\n\n*\"a dedicated discussion of the limitations of the scope and applicability of the results might still be helpful\"*\n\nThis is a very good point. As mentioned by the reviewer, our contribution is of theoretical nature. Thus, an in-depth investigation of the practical applicability of our findings is out of the scope of our paper. At the same time, AMP algorithms have been recently considered for several practical scenarios. Examples include problems related to genetics (see [86]), inpainting (see [66]) and MRI image recovery (see [76]). Thus, we can foresee an application of the results of this paper in such application domains. We have included reference [76] in the revision and edited l. 37-38. Furthermore, if the reviewer thinks it would be helpful, we could incorporate a brief discussion on the scope and applicability of our results in a revision.\n", " We thank the reviewer for the useful comments. We reply to the various points raised in the review below, and we are happy to answer additional follow-up questions. In case the reviewer is satisfied with our responses (especially the first one concerning the impact of our contribution), we hope that raising the rating will be considered.\n\n----\n\n*\"I am not fully convinced of the level of impact that this paper makes. This paper analyzes two existing algorithms with mismatched spherically symmetric noise, which is a step up from the analysis in [71] of mismatched Gaussian SNR\"*\n\nWe clarify below the impact of our contribution, which goes well beyond the existing analysis in [71].\n\nFirst, let us highlight that the assumptions of our paper are much weaker than assuming a ‘mismatched spherically symmetric noise’. In fact, our only requirement on the noise is that it is rotationally invariant, namely the eigenvector matrices are uniformly sampled among rotation matrices. This allows for an arbitrary spectrum of the noise, and it can therefore model the complex correlations that often appear in application domains. Indeed, only in the case of the semicircle law for the noise eigenvalues are the entries of the noise independent. For any other choice, the matrix entries are dependent, and this is covered by our analysis. We emphasize that taking into account dependent random variables in such precise high-dimensional analyses is usually considered a rather involved task and a strong generalization, and has been much less studied than Gaussian/independent settings for that reason (as we point out in the introduction).\n\nSecond, while [71] simply proposes a proof strategy, the results in our manuscript are fully rigorous. In fact, following the strategy suggested in [71] might prove rather difficult, since it would require a very precise control of many error terms when applying Laplace’s method. Having a completely rigorous analysis is key in order to assess the unexpected phenomenology we discuss in the experimental part of the paper. In particular, given the interest of a broad community concerning AMP algorithms, we believe that our surprising discoveries on its behavior will trigger new research directions.\n\nFinally, in [71] the authors mention as an interesting open problem the performance comparison between the Bayes estimator and AMP. This is exactly what our contribution is able to achieve.
Let us conclude by mentioning that our paper not only provides analytical results, but also uses such results to display the remarkable phenomenology presented in Section 4. This phenomenology is absent in [71], and we regard it among the most surprising results of our contribution. \n\nGiven all this and the fact that rank-one matrix estimation is one of the most paradigmatic models of high-dimensional inference, we foresee an impact in the broad and interdisciplinary community of researchers working on the fundamental and algorithmic limits of inference and message-passing algorithms.\n\n----\n\n*\"The discussion about Bayes estimator and AMP from the numerical results is interesting, but it might be helpful to provide more insights into the performance gap of AMP and Bayes in this problem setup\"*\n\nThis is an excellent point. Our numerical simulations show that the overlaps of Gaussian AMP and of the Bayes estimator match at high SNR, see the plots on the right in Fig. 1. Hence, this performance gap must be due to an incorrect estimation of the norm of the signal, see also the discussion at l. 347-352 of the revision. It is an open problem (and an exciting direction for future research) to bridge the gap between the Gaussian AMP algorithm and the mismatched Bayes estimator. See also the answer 3 to reviewer *RAkD*.\n", " We thank the reviewer for the useful feedback. We reply to the various points raised in the review below. \n\n----\n\n*\"The Gaussian AMP algorithm was provided in equation (8), but not much details were provided on how it is derived.\"*\n\nThis is an excellent point. A very good pointer here is the recent review [39], which discusses in detail how such an AMP can be derived. We have updated the paper accordingly (see l. 203-204 of the revision).\n\n----\n\n*\"the mismatch performance metrics were provided in equation (9), but not much details were provided on the meaning of these metrics\"*\n\nWe thank the reviewer for this remark, which has allowed us to improve the clarity of Theorem 1. In fact, the quantities $M(\\lambda,\\lambda_*)$ and $Q(\\lambda,\\lambda_*)$ have a simple interpretation in terms of the mismatched Bayes estimator: $Q(\\lambda,\\lambda_*)$ represents the squared norm of the mismatched Bayes estimator and $M(\\lambda,\\lambda_*)$ represents its inner product with the signal. We have edited l. 230-232 of the revised version to address this comment. \n \n----\n\n*\"A natural question is whether the AMP algorithm could be modified accordingly\"*\n\nThis is also an excellent point. If the designer is aware of the noise statistics, then one natural choice is what we call the ‘correct AMP’. This algorithm is used for comparison in the numerical results of Section 4; it can be obtained from [37, 87], and the details are provided in Appendix D of the supplementary material. In contrast, if the designer still incorrectly assumes Gaussian statistics, it is an open problem to bridge the gap between the Gaussian AMP algorithm and the mismatched Bayes estimator. Given that our numerical results show that the performance gap is due to an incorrect estimation of the signal's norm, we may consider running Gaussian AMP and a-posteriori normalizing its final estimate (like for the spectral estimators). Thanks for the interesting suggestion, which we will consider for future work.
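To make the role of norm calibration concrete, here is a small self-contained sketch (synthetic toy data and a generic unit-norm direction estimate; an illustration of the rescaling idea, not our algorithm):

```python
# Illustrative sketch: with a fixed estimated direction v, the matrix MSE
# (1/(2 N^2)) ||c v v^T - X X^T||_F^2 depends strongly on the scale c, and is
# minimized at c = (<v, X>/N)^2, so a-posteriori rescaling can reduce the MSE
# without changing the overlap. Toy data, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
N = 500
X = rng.choice([-1.0, 1.0], size=N)          # signal with ||X||^2 = N
v = X + 2.0 * rng.standard_normal(N)         # noisy direction estimate
v *= np.sqrt(N) / np.linalg.norm(v)          # normalize so ||v||^2 = N

def matrix_mse(c):
    D = c * np.outer(v, v) - np.outer(X, X)
    return np.sum(D**2) / (2 * N**2)

for c in [0.25, 0.5, 1.0]:
    print(c, matrix_mse(c))
overlap = np.dot(v, X) / N
print("optimal scale:", overlap**2, "mse:", matrix_mse(overlap**2))
```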
\n\n----\n\n*\"It would be nice if the authors can relate the research to some real world scenarios\"*\n\nThank you for this remark. We would like to clarify that providing experimental results on real data is out of scope for the current contribution, whose goal is to provide a rigorous performance analysis for various estimators (Bayes, AMP, spectral) in the presence of mismatch. Having said that, we also remark that AMP algorithms have been recently considered for several practical scenarios. Examples include problems related to genetics (see [86]), inpainting (see [66]) and MRI image recovery (see [76]). Thus, we can foresee an application of the results of this paper in such application domains. We have included reference [76] in the revision and edited l. 37-38.\n", " We thank the reviewer for the various detailed comments. We reply to the 6 questions raised in the review below, and we are happy to answer additional follow-up questions. We have also modified both the paper and the appendix accordingly. In case the reviewer is satisfied with our response, we hope that raising the rating will be considered.\n\n----------------------\n\n**Question 1**\n\nAssumption 3 is rather mild, and it allows us to cover most practically relevant choices of $h_{t+1}$, such as soft-thresholding or ReLU. In fact, $h_{t+1}$ is even allowed to have discontinuous partial derivatives, as long as this discontinuity occurs on a set with 0 measure (this is the case e.g. of ReLU). Let us also point out that such an assumption is standard in the AMP literature: it appears for example in [39] (see Assumption (M2) in Section 3.1), [60] (see the paragraph ‘Continuous differentiability and other technical assumptions’ in Section 3.1), [82] (see Assumption (A1)), and [87] (see Assumption 2.2(b)), among other related works. We have incorporated this comment in l. 278-279 of the revision. \n\n----------------------\n\n**Question 2**\n\nThis is an excellent suggestion, and it was also pointed out by reviewer *ggxP*. In the revision, we have added Corollary 1, which gives the expressions for the asymptotic MSE and overlap achieved by the Gaussian AMP iterates.\n\n----------------------\n\n**Question 3**\n\nThe key point here is that the statistician *assumes* the signal to be spherically distributed. As we analyze a mismatched model, this does not imply that the distribution of the signal *is really* uniform on the sphere. As stated in Assumption 1, we only need that the signal has a fixed radius which, without loss of generality, we take to be $\\sqrt{N}$. All this means that the signal may very well be uniform in the hyper-cube, for example. Some clarifications on this matter have been added to the revised version of the manuscript (see l. 170-174 of the revision). This is the reason why we decided to focus on spherically distributed signals in the numerical part.\n\n----------------------\n\n**Question 4**\n\nThis is a good suggestion, and we would like to thank the reviewer for this comment, which has allowed us to improve the quality of our manuscript. Actually, by following steps similar to those detailed in Appendices C.1-C.3 to show Theorem 2, we can derive a state evolution result for the Gaussian AMP with spectral initialization. We have added a comment in l. 206-207 of the revision.
We now describe in detail the state evolution corresponding to the Gaussian AMP with spectral initialization in the newly created Appendix C.4 contained in the supplementary material of the revision.\n\nLet us finally remark that the simulation results of Section 4 analyze the *fixed point* of the Gaussian AMP algorithm and, therefore, no difference is expected when considering a spectral initialization instead of the initialization of Eq. (7).\n\n----------------------\n\n**Question 5**\n\nTheorems 1 and 2 provide an exact characterization in the high-dimensional limit of the performance of the Bayes estimator and of an AMP algorithm, respectively. More specifically, Eq. (10) gives the asymptotic MSE of the Bayes estimator sampling from the (mismatched) posterior, and Eq. (18) provides a general state evolution result for AMP which holds for any pseudo-Lipschitz function $\\psi$. Following the second question of this reviewer and a comment from reviewer *ggxP*, we have added Corollary 1, which explicitly gives the asymptotic MSE and overlap of the AMP iterates. The proofs of Theorems 1 and 2 are rather technical and, for space reasons, we had to defer them to the appendices. However, we provide a proof sketch for both these results (see l. 249-259 and 289-299 of the revision).\n\nOur theoretical analysis directly fuels the numerical experiments presented in Section 4, where we investigate in detail the resulting phenomenology. Thus, there is a direct and tight connection between the theoretical and the empirical parts of our contribution.\n\nIf the reviewer thinks it would increase the clarity of our paper, we would be happy to add a paragraph at the end of Section 1.1. In this paragraph, we could summarize the content of the following sections and make the connection between Theorems 1-2 and the simulation results of Section 4 more clear.\n\n----------------------\n\n**Question 6**\n\nThe reviewer is correct: the characterization of the spectral estimators does not appear in the “Main Results” section, because it is not an original contribution of our paper. As explained in l. 222-224, the asymptotic formulas for the MSE and for the overlap of these estimators follow from previous results in literature, which are reviewed in Appendix A, Theorem 4. We have opted not to incorporate such results in the main text for space reasons: our preference was to use the 9 pages to highlight our novel rigorous results and the surprising phenomenology that arises from them.", " We would like to thank the reviewers for their numerous valuable comments, which have allowed us to improve the quality of our manuscript. We are glad to read their overall positive evaluation of our work: “good match between predictions and true performance levels” (*Rev. ggxP*); “thorough study [...] with in-depth theoretical analysis and interesting empirical findings” (*Rev. RAkD*); “theoretically solid performance guarantees” (*Rev. oMyw*).\n\nWe provide separate replies to the four reviewers, in which we address all their points. We have uploaded a revision of the body of the paper and of the supplementary material as well. In our responses, we point to the parts of the paper that have been modified. The numbering of lines, equations and references refers to the revised version, and the main changes are highlighted in blue color.
We have tried to keep such changes to a minimum, in consideration of the strict 9-page limit holding also for the revision uploaded at this time.\n", " This paper studies rank-1 matrix estimation and provides performance guarantees for two inference methods, a Bayes estimator and an AMP algorithm. 1. The authors should comment on whether Assumption 3 holds for some representative examples like the soft-thresholding function under the setting considered in this paper.\n2. The results and quantities mentioned below Theorem 2 should be formalized into a theorem or corollary.\n3. How crucially does Theorem 1 depend on the prior distribution on the signal X (line 163)? Given that the topic of this paper is to study whether these estimators are robust to a wrong noise distributional assumption, it doesn't make sense if the result relies highly on the distributional assumption on the signal X (which seems to be an even stronger assumption that might not be correct).\n4. When analyzing AMP, the authors assume access to an initialization that is independent of the noise and satisfies certain probabilistic conditions (line 186). The authors also claim that in prior literature it is possible to analyze AMP with a practical spectral initialization (line 190). Given the assumed model is already simple enough (rank-1 spiked model), I recommend the authors improve their analysis to adapt to such a practical initialization scheme.\n5. I suggest the authors make an effort to discuss their theoretical results in more detail and show how they are related to the main topic of the paper. Currently it is difficult to see this from Theorems 1 and 2.\n6. If I am not missing something, the main result for spectral estimators is missing, although from Section 2.3 it seems that there should be some theory for spectral estimators. See \"Strengths And Weaknesses\". See \"Strengths And Weaknesses\".", " The paper considers the penalty to be paid when recovering a rank-1 matrix from noisy partial observations when the noise is assumed to be Gaussian in the case where the noise isn't so. The paper's analysis considers two algorithms and regimes: a Bayes estimator in an asymptotic regime and an approximate message passing algorithm in a state evolution regime. The analysis finds a gap in the performance of the two estimators that depends on the level of noise, due to an inaccurate estimation of the signal norm in AMP.\n The experimental results show a good match between the performance predictions of the analytical results and the true performance levels from the numerical simulations, and of the dependence of the performance gap between the algorithms on the SNR and noise model mismatch level.\n\nSome instances of simplified notation do not seem necessary, and can detract from clarity (e.g., dropping the index N in line 171, the lack of description of the terminology used in Assumption 2). \n\nThe discussion after Theorem 2 would be more clearly stated as a corollary: given that the statement of Theorem 2 appears to be very broad, the authors could provide a statement that is more relevant to assessing the performance of the estimators using metrics like those shown before. For the rescaled overlap measure (5), is it the same as the asymptotic cosine distance between $\\bf X$ and $\\bf{\\hat{x}}({\\bf Y})$? If yes, this may be easier to describe.
Additionally, the concern on the Bayes estimator in line 179 seems to be independent of the overlap measure - would it not also apply to the MSE?\n\nWhere is the function ht in (8) described? The sketch of the proof of Theorem 2 refers to \"choosing denoisers carefully\". Can there be a more explicit discussion of what \"carefully\" choosing denoisers entails in practice?\n\nIn line 254, should \"converges... to the random vector\" be \"converges... to that of the random vector\"? It may help with clarity to state at this point that the second random vector provides the state evolution for this AMP analysis. The paper could use a more detailed discussion on the applicability of Assumptions 1, 2, and 3 in practical settings.", " The authors considered the problem of rank-one signal estimation corrupted by structured rotationally invariant noise, and studied the situation when the actual noise statistics deviates from the Gaussian assumption. Three estimation procedures were proposed and discussed: a mismatched Bayes estimator, approximate message passing, and spectral estimators. Both theoretical analysis and simulation experiments were provided to compare these different methods, together with reasonable conclusions and interesting insights. Strengths:\n1) A thorough study on the mismatched estimation problem for rank-one matrices was provided, with in-depth theoretical analysis and interesting empirical findings.\n2) Different estimation methods were proposed and compared, showing the strengths and weaknesses of each method.\n3) An interesting phenomenon was identified, namely that Gaussian AMP does not perform as well as the mismatched Bayes estimator. Discussions were provided on potential causes of this phenomenon.\n\nWeaknesses:\n1) The Gaussian AMP algorithm was provided in equation (8), but not many details were provided on how it is derived. It is unclear what objective function this procedure is optimizing. More technical details would help the readers to better understand the algorithm.\n2) Similarly, the mismatch performance metrics were provided in equation (9), but not many details were provided on the meaning of these metrics. Better explanations of these metrics may help the readers to better understand the motivations behind the equations. The authors mentioned that the performance gap between the AMP algorithm and the Bayes estimator is due to an incorrect estimation of the signal norm. A natural question is whether the AMP algorithm could be modified accordingly, so that a better estimation of the signal norm can be achieved, so as to narrow the gap to the Bayes estimator. The paper focused on theoretical analysis and simulated experiments. It would be nice if the authors could relate the research to some real-world scenarios, and show some experimental results on real data.", " This paper presents a theoretical analysis of the performance of two algorithms for rank-1 matrix recovery under mismatched noise. Specifically, a closed-form MSE for the Bayes estimator and a state evolution analysis of the AMP algorithm are provided. The paper also discusses some phenomena arising from numerical experiments regarding these algorithms under two different performance metrics. Strengths:\n- This paper gives theoretically solid performance guarantees for the Bayes estimator and AMP algorithm for mismatched rank-1 matrix recovery.
Settings with unknown noise distributions are common in real-world applications, where this work helps to understand the behavior of these algorithms under mismatched noise. \n- Numerical experiments comparing the theoretical, experimental, and noise-aware performances are presented, which corroborate the results of this paper and shed light on the points of discussion.\n- The writing and presentation of this paper are clear.\n\nWeaknesses:\n- Aside from the technical contributions, I am not fully convinced of the level of impact that this paper makes. This paper analyzes two existing algorithms with mismatched spherically symmetric noise, which is a step up from the analysis in [71] of mismatched Gaussian SNR, but might not suffice for this venue. I would consider raising the score if this concern is properly addressed. The discussion about the Bayes estimator and AMP from the numerical results is interesting, but it might be helpful to provide more insights into the performance gap of AMP and Bayes in this problem setup, in addition to this phenomenon itself.\n\nI understand that this is a theoretical work, and the authors may consider this out of scope, but I'd appreciate it if the authors can provide some comments about the applicability of the algorithms discussed in this paper. Under what type of scenarios might one favor one approach over the other?\n While this is very theoretical work that proves performance guarantees, a dedicated discussion of the limitations of the scope and applicability of the results might still be helpful." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "zbikadWyf7v", "WHZ8e1slaz4", "kZ8QtRkFOi-", "7-mndDrPc2D", "PETbs6VDG2y", "FaWi64Mr-og", "5LuitNkWlVR", "5LuitNkWlVR", "RicTV0sC2p", "HpDDQeOazel", "nips_2022_v9Wjc2OWjz", "nips_2022_v9Wjc2OWjz", "nips_2022_v9Wjc2OWjz", "nips_2022_v9Wjc2OWjz", "nips_2022_v9Wjc2OWjz" ]
nips_2022_48TmED6BvGZ
Biological Learning of Irreducible Representations of Commuting Transformations
A longstanding challenge in neuroscience is to understand the neural mechanisms underlying the brain’s remarkable ability to learn and detect transformations of objects due to motion. Translations and rotations of images can be viewed as orthogonal transformations in the space of pixel intensity vectors. Every orthogonal transformation can be decomposed into rotations within irreducible two-dimensional subspaces (or representations). For sets of commuting transformations, known as toroidal groups, Cohen and Welling proposed a mathematical framework for learning the irreducible representations. We explore the possibility that the brain also learns irreducible representations, using two biologically plausible learning mechanisms. The first is based on SVD of the anti-symmetrized outer product of the vectors representing consecutive images and is implemented by a single-layer neural network. The second is based on PCA of the difference between consecutive frames and is implemented in a two-layer network, but with greater biological plausibility. Both networks learn image rotations (replicating Cohen and Welling’s results) as well as translations. It would be interesting to search for the proposed networks in nascent connectomics and physiology datasets.
Accept
This manuscript presents novel biologically plausible algorithms for learning representations for Lie groups. The derivation of the algorithms and the networks are based on previously studied biologically plausible networks. Although there are some limitations, the reviewers agree that this work is sound, clearly presented, and represents a valuable contribution to both computational/theoretical neuroscience and machine learning.
train
[ "WqoQAyQnQvi", "P69Cpfz9Hi", "oQoCOw_gWIm", "NrJho8N2NBH", "AWo0WetEIYF", "mRErNVPW02G", "34XDhdE1Ohv", "TMFSzVuwiOf", "zmYScOqmfHj", "j8aMGivvtog", "jNwRq6fspyu", "joEuus4fSr", "qrGsgQnB3cj", "ygpIDpFsm8k" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \n### Lack of evidence that the structure of image transformations due to movement is learnt\n\n> The type of results that you provide on real images in Figure 6b-c also witnesses the mismatch between abstraction of the provided results and the concrete problem that was intended to be addressed. Couldn't you **simply leverage the learnt representation to perform rollouts such that we can see that a meaningful representation of the \"*transformations of stimuli due to their physical movement and changes of perspective*\" has actually been learnt**? While I would naturally expect something along the lines of Figure 1, all I see is a collection of filters that look similar to the ones obtained with completely synthetic experiments, suggesting some form of generic basis has been learnt, as in many previous works based on biologically plausible algorithm. But unless I missed it, the reader is left with no indication that the concrete and biologically relevant problem intended to be solved has been addressed.\n\n* We thank the reviewer for suggesting this idea for validating our approach. Theoretically, we know the transformations $A_t$ can be recovered from the full set of filters $Q$ and all of the angles $\\theta_{t,i}$. We now also demonstrate numerically that the output $\\hat\\theta_{t,i}$ of our network (Algorithm 1) contains sufficient information to reconstruct the transformed images. To this end, we train a network $f(x_{t-1},\\hat\\theta_{t})$ to minimize the mean-squared error $|f(x_{t-1},\\hat\\theta_{t})-x_t|^2$ and show that the output of the decoder performs well at reconstructing the transformed images (see Appendix D). In this way, we demonstrate that one can \"simply leverage the learnt representation to perform rollouts such that we can see that a meaningful representation of the '*transformations of stimuli due to their physical movement and changes of perspective*' has actually been learnt.\"", " We thank the reviewer for their thoughtful comments and suggestions.\n\n### Novelty of biologically plausible SVD\n\n> In principle, once PCA is solved SVD is solved too, as the singular vectors of matrix $A$ are the eigenvectors of $AA^\\top$ and $A^\\top A$ so I am wondering how the need for a new algorithm will be justified.\n\nThis is a good suggestion and we have considered this. While it is true that one can recover the left- and right-singular vectors of the matrix $A$ by performing PCA on $AA^\\top$ and $A^\\top A$, it is quite challenging to implement this in a biologically plausible neural network (i.e., online with local learning rules).\n\nBiologically plausible PCA networks have been derived for the special case that the goal is to perform PCA on the covariance of the input dataset (see Oja 1982 and Pehlevan et al. 2015). In our setting, we perform SVD on the matrix $B=\\langle x_tx_{t-1}^\\top-x_{t-1}x_t^\\top\\rangle_t$, which is equivalent to performing PCA on $BB^\\top$ or $B^\\top B$ which are not covariance matrices themselves. We are unaware of any biologically plausible PCA algorithms that can be applied in this setting. In fact, we are unaware on any online algorithm that can solve this problem: it is not clear how to get an online estimate of $BB^\\top$ or $B^\\top B$ since $\\mathbb{E}[B_t]=B$ does not imply that $\\mathbb{E}[B_tB_t^\\top]=BB^\\top$ or $\\mathbb{E}[B_t^\\top B_t]=B^\\top B$. 
\n\n---\n\n### Whitening assumption and relation to the concrete problem of learning transformations due to movement\n\n> The fact that specific neurons may perform some sort of whitening does not mean that this whitening is/can be simply plugged into a module that extracts an irreducible representation that meaningfully represents \"transformations of stimuli due to their physical movement and changes of perspective\", a state you claim to learn in the introduction. If I take an image of an object undergoing transformations such as translations and rotations, it is very unclear that I would still be able to uncover the structure of those changes after a mapping from the pixel space to the whitened space. Moreover, your setup is framed in the pixel space at the beginning of section 2, not in an abstract space.\n\n* Whitening of images performed in the brain is traditionally modeled by some kind of low-pass filtering (for the study of motion, see e.g. Cadieu and Olshausen 2012). Examples are the \"Mexican hat\" filter and ZCA filtering. These whitening transformations do not map data into some arbitrary space. Rather, the whitened images retain close visual similarity to the original images (see, e.g., Figures 1 and 2 of Pal & Sudeep 2016). Therefore, the transformations of the whitened images are highly informative of the transformations of the original images.\n\n* Also, consider an unwhitened stream of inputs $\\{s_t\\}$ such that $s_t=T_ts_{t-1}$, where $T_t$ is a commutative transformation in the sense that $T_t=P\\Gamma_t P^{-1}$ for some invertible matrix $P$ and block diagonal matrix $\\Gamma_t$ of rotation blocks. Let $\\{x_t\\}$ be the ZCA-whitened transformation of $\\{s_t\\}$; that is, $x_t=Ws_t$, where $W=C_{ss}^{-1/2}$ and $C_{ss}$ is the covariance of $\\{s_t\\}$. This whitening transformation can be implemented in a biologically plausible neural network (see Pehlevan and Chklovskii 2015). Then \n$$x_t=Ws_t=WT_ts_{t-1}=WT_tW^{-1}Ws_{t-1}=A_tx_{t-1},$$\nwhere $A_t=WT_tW^{-1}$. Suppose the $A_t$ lie in a toroidal group; that is, $A_t=Q\\Gamma_tQ^\\top$ for some fixed $Q$. Then for all $t$,\n$$T_t=(Q^\\top W)^{-1}\\Gamma_t(Q^\\top W).$$\nTherefore, by finding an irreducible representation of the transformations of the whitened stimuli $A_t$, we also find an irreducible representation of the original transformations. In particular, if one's goal is to reproduce the transformations $T_t$ acting on the pre-whitened signal $\\{s_t\\}$ (though we do not believe this is the brain's goal), these can be recovered given the *fixed* filters $Q^\\top W$ and the *time-varying* angles $\\theta_i$.\n\n* Finally, we note that our approach is more general than looking at pixel space. We can address transformations of any time series. Here we focus on operations on pixel space because they are easy to visualize.\n
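The conjugation argument above is also easy to verify numerically; a minimal sketch, assuming synthetic stimuli and an arbitrary invertible $P$:

```python
# Illustrative sketch: for ZCA whitening W = C_ss^{-1/2}, whitened frames obey
# x_t = A x_{t-1} with A = W T W^{-1}, and T can be recovered by undoing the
# conjugation. Synthetic toy quantities throughout.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
d = 4
M = rng.standard_normal((d, d))
C_ss = M @ M.T + d * np.eye(d)              # synthetic input covariance (PSD)
W = np.linalg.inv(sqrtm(C_ss)).real         # ZCA whitening matrix C_ss^{-1/2}

P = rng.standard_normal((d, d))             # invertible P in T = P Gamma P^{-1}
a = 0.2
J = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
Gamma = np.kron(np.eye(d // 2), J)          # block-diagonal rotation blocks
T = P @ Gamma @ np.linalg.inv(P)            # commuting transformation on raw stimuli

Sp = rng.standard_normal((d, d))            # d raw stimulus frames (columns)
Xp, Xn = W @ Sp, W @ (T @ Sp)               # whitened frames before/after T
A_hat = Xn @ np.linalg.inv(Xp)              # transformation seen in whitened space
print(np.linalg.norm(A_hat - W @ T @ np.linalg.inv(W)))   # ~0: A = W T W^{-1}
print(np.linalg.norm(np.linalg.inv(W) @ A_hat @ W - T))   # ~0: T recovered
```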
", " The author's reply clearly states the difference between the current manuscript and ref. 1. And I am positive that the typos and clarity of the writing can be improved later by the authors.", " Thank you for your reply. While I agree one should not expect to have the most realistic setting to make an interesting contribution to this question, I am struggling to see concrete and specific evidence that progress is made here.\n\n**Novelty of biologically plausible SVD**\n\nRegarding your comment:\n> \"We will add further details highlighting the relationship and the novelty. One of our approaches uses a previously developed biologically plausible PCA algorithm as a component of motion detection learning. We are not claiming novelty in biologically plausible PCA. However, our use of PCA in the context of motion detection, as well as our biologically plausible SVD algorithm, are novel.\"\n\nI am looking forward to reading these further details. In principle, once PCA is solved SVD is solved too, as the singular vectors of matrix $A$ are the eigenvectors of $AA^T$ and $A^TA$, so I am wondering how the need for a new algorithm will be justified.\n\n**Whitening assumption and relation to the concrete problem of learning transformations due to movement**\n\nFor example, let us take the question of whitening. As you say:\n> \"Whitening of sensory inputs is considered one of the important functions of the sensory systems so we believe it is reasonable to assume that the circuit receives prewhitened inputs. Furthermore, a number of biologically plausible algorithms for input whitening have been proposed and experimentally observed in some cases [3,4].\"\n\nThe fact that specific neurons may perform some sort of whitening does not mean that this whitening is/can be simply plugged into a module that extracts an irreducible representation that meaningfully represents *\"transformations of stimuli due to their physical movement and changes of perspective\"*, a state you claim to learn in the introduction. If I take an image of an object undergoing transformations such as translations and rotations, it is very unclear that I would still be able to uncover the structure of those changes after a mapping from the pixel space to the whitened space. Moreover, your setup is framed in the pixel space at the beginning of section 2, not in an abstract space. \n\n**Lack of evidence that the structure of image transformations due to movement is learnt**\n\nThe type of results that you provide on real images in Figure 6b-c also witnesses the mismatch between the abstraction of the provided results and the concrete problem that was intended to be addressed. Couldn't you simply leverage the learnt representation to perform rollouts such that we can *see* that a meaningful representation of the *\"transformations of stimuli due to their physical movement and changes of perspective\"* has actually been learnt? While I would naturally expect something along the lines of Figure 1, all I see is a collection of filters that look similar to the ones obtained with completely synthetic experiments, suggesting some form of generic basis has been learnt, as in many previous works based on biologically plausible algorithms. But unless I missed it, the reader is left with no indication that the concrete and biologically relevant problem intended to be solved has been addressed. \n\n", " We thank the reviewer for their feedback and constructive suggestions. We are encouraged that the reviewer values biologically plausible implementations of algorithms. We are also pleased that the reviewer finds our contribution valuable for both neuroscience and machine learning. Below we address some of the concerns of the reviewer.\n\n> It would be nice to see a quantitative analysis on the learned filters, explicitly comparing to the standard translational and rotational Fourier bases, and reporting how well the learned weights approximate these bases.\n\nWe agree with the reviewer that such an analysis is important, and we are adding it to the manuscript (Section 6 for translations, Appendix C for rotations). To summarize, we find that our filters are qualitatively similar to the Fourier modes but are not exactly the same.
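One way to make such a comparison quantitative is sketched below (a hedged illustration of a possible alignment metric, not the exact analysis in the revised manuscript):

```python
# Illustrative sketch: score each learned filter by its best absolute normalized
# inner product with a discrete Fourier mode (values near 1 = Fourier-like).
import numpy as np

def fourier_alignment(filters):
    # filters: (n_pixels, n_filters) with unit-norm columns
    n = filters.shape[0]
    dft = np.fft.fft(np.eye(n)) / np.sqrt(n)            # unitary DFT
    modes = np.concatenate([dft.real, dft.imag], axis=1)
    norms = np.maximum(np.linalg.norm(modes, axis=0), 1e-12)
    overlaps = np.abs(filters.T @ (modes / norms))      # |<filter, mode>|
    return overlaps.max(axis=1)                         # best match per filter

rng = np.random.default_rng(0)
F = rng.standard_normal((64, 8))                        # stand-in "learned" filters
F /= np.linalg.norm(F, axis=0)
print(fourier_alignment(F))                             # random filters score low
```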
To summarize, we find that our filters are qualitatively similar to the Fourier modes but are not exactly the same. \n \n> It would be useful to see how well the model trains on natural datasets.\n\nWe thank the reviewer for this suggestion. We carried out a numerical experiment by looking at rotations on naturalistic images from the Van Hateren database. We again see that the filters that our algorithm finds are qualitatively similar to Fourier modes (see Appendix C).\n \n> The paper is framed as a neurobiological model, but it really seems more \"neuromorphic\" as the emphasis is on how to learn all these things with local learning rules in a neural circuit, as opposed to specific neurobiological substrates in the visual system. There are issues such as how to compute arctan, and many others, that would need to be discussed or elaborated more to think of this in neurobiological terms.\n\nWe thank the reviewer for this suggestion. We agree that our approach can potentially prove beneficial for neuromorphic hardware and will mention this as one of the implications of our work. We will also clarify that, as the reviewer correctly points out, the relation of our model to actual neurobiological circuits in living organisms is only conjectural and there remain hurdles in turning our algorithm into a biologically realistic model.\n \n> It will be interesting to see how these results could be extended to real-world applications, such as estimating optical flow from natural videos.\n\nThis is a great suggestion. We believe that our framework can potentially handle real-world use cases when augmented and properly layered. We thank the reviewer for their encouragement and leave this direction to future work.\n\n", " We thank the reviewer for their feedback and constructive suggestions. We are greatly encouraged that the reviewer finds the proposal of biologically plausible algorithms valuable. We are also pleased that the reviewer finds our exposition accessible. Below we address some of the concerns of the reviewer.\n\n> Not clear which parts are contributions, i.e. labeled as \"Theorem\"\n\nThe main contribution of this work is the derivation of two biologically plausible online algorithms for learning toroidal group transformations. One approach is based on SVD of the bivector between consecutive frames and involves deriving a novel algorithm for SVD. The other approach is based on PCA of the discrete time derivative of the time series and applies previously developed algorithms for PCA to a new context. We will highlight the novel contributions of the work more clearly by adding a short contributions section. We will also clarify this in the text.\n\n> Sadly no code provided.\n\nWe thank the reviewer for highlighting this. The omission of the code was unintentional and we have now added the code in the supplementary materials. \n\n> Quantitative comparison of the experimental results with \"biologically implausible\" counterparts (i.e. Cohen and Welling) would've been interesting\n\nWe agree with this. Cohen and Welling (2014) performed similar experiments with rotations of randomly generated images. Unfortunately, no code was provided and no quantitative measures were presented in that work to compare against. 
We are now adding a quantitative comparison against theoretical predictions (Section 6 and Appendix C).\n \n> How do the non-biologically plausible SVD-based algorithm and the altered version compare in terms of accuracy?\n\nThe altered version converges to the correct result; however, the speed of convergence is somewhat slower (i.e., it is less sample efficient).\n", " We thank the reviewer for their detailed feedback and constructive suggestions. Some writing mistakes have been fixed in the text based on the reviewer's comments. Below we address some of the reviewer's concerns.\n\n> The network model implementing PCA (similarity matching specifically) to learn transformations is based on a recent study (ref. 1), so it is not clear how much this part differs from the mechanism presented in ref. 1.\n\nThis work is different from that of ref. 1 in two ways. First, a major limitation in ref. 1 is that they work with infinitesimal transformations, whereas we can also handle larger transformations. This allows us to apply our algorithm in a much broader range of settings. Second, in ref. 1, they use the outer product $(x_t-x_{t-1})x_t^\top$, which leads to multiplicative synapses. We instead use bivectors and avoid this requirement. We will clarify these differences in the text. \n\n> The derivation of Eq. 18 from Eq. 16 is not clear, and some motivations are missing.\n\nWe thank the reviewer for pointing out the lack of clarity and typos in our manuscript. Eq. 18 is obtained by substituting Legendre transforms for the terms Tr $(\dot X^\top\dot X\dot Y^\top\dot Y)$ and Tr $(\dot Y^\top\Lambda^{-1}\dot Y\dot Y^\top\Lambda^{-1}\dot Y)$. The motivation for these substitutions is to encode the covariance matrices $\frac1T\dot X\dot Y^\top$ and $\frac1T\dot Y\dot Y^\top$ as synaptic weight matrices so that we can obtain an online learning algorithm. We will elaborate on this further in the manuscript.\n\n> Also, the meaning of the matrices W and M in Eq. 18 is not clear, and I need to guess whether they correspond to feedforward and recurrent weight matrices, respectively.\n\nThe matrix $W$ corresponds to feedforward weights and the matrix $M$ corresponds to recurrent lateral weights. We will clarify these further in the manuscript.\n\n> Fig. 5a: not clear which algorithm leads to the presented filters. That is, are those filters learnt from the SVD (representing the u and v), or from the network implementation of PCA (W and M)?\n\nWe thank the reviewer for pointing this out. We clarified this in the text.\n\n> The Introduction (lines 44-45) mentions that an advantage of learning irreducible representations is that it removes the need for multiplicative units, so that only a linear number of synaptic connections will be needed per neuron. However, later in the Results section, I don’t see a detailed explanation of how this problem is solved in the proposed algorithms. Some elaboration is needed.\n\nIn ref. 1, the authors use the outer product $x_t(x_t-x_{t-1})^\top$, which is vectorized and represented by neural activities (denoted by $\chi_t$). Calculating this outer product requires $d^2$ multiplicative units. 
The filters are then learned by performing PCA or nonnegative matrix factorization on $\chi_t$ (hence the number of synapses scales as $d^2$).\n \nIn this work, by assuming an irreducible representation, we show that the filters can be learned as the singular vectors of $B=\langle B_t\rangle_t$, where $B_t$ is the bivector $x_tx_{t-1}^\top-x_{t-1}x_t^\top$, or the principal components of the time series $\{x_t-x_{t-1}\}$ (assuming $x_t$ is white). This allows us to compute the SVD without explicitly computing $B$ in our online algorithm, thus avoiding the need for multiplicative synapses or a quadratic number of neurons. Rather, in our algorithm, each neuron computes projections of the form $u^\top B_tv$, which can be computed using $2d$ synapses (which are encoded in $u$ and $v$). Similarly, the PCA algorithm only requires $d$ (feedforward) synapses per neuron. We will clarify this point further in the manuscript.\n \n \n> Both models (Fig. 2 and Fig. 3) explicitly output the rotation angles $\theta$. In contrast, there are solid neuroscience studies showing that the angle is represented in a neural population code and can be read out by a linear decoder such as the population vector. I am wondering how much the proposed network model and learning rules will be changed if we use a neural population code to represent the rotation angle.\n \nIndeed, angles are often encoded in the brain using a population code, for example in the case of heading direction or wind direction. However, in our work, the transformation angle is the amount of rotation of the image vector *per unit of time*, so it is better thought of as an angular velocity, or the speed of the transformation. This is often encoded in single neurons in the brain. For instance, the L1 and L2 neurons in the fly lamina encode the magnitude of changes in light intensity. Projection neurons in the invertebrate olfactory system also encode the magnitude of changes in their inputs (from olfactory receptor neurons).\n\n\n", " > Both models considered in this paper only passively estimate the rotation angle. It would be more interesting to synthesize the transformed images, as in an autoencoder framework.\n\nWe agree that the synthesis of transformed images is an important and interesting problem. However, this goes beyond the scope of our current work. Our submission can indeed be viewed as a first step in this direction, since estimating the transformation angle is presumably necessary for synthesis. On a biological level, we are not aware of any evidence that a full reconstruction of visual inputs is performed in the brain.\n\n", " We thank the reviewer for their feedback and constructive suggestions. We are encouraged that the reviewer finds the problem of finding bio-plausible algorithms for transformation learning interesting and relevant. Below we address some of the concerns of the reviewer.\n\n\n> The setting is restrictive, reducing the problem to previously proposed biologically plausible algorithms for PCA and SVD.\n\nWe agree with the reviewer that the setting of the problem in terms of the toroidal group is restrictive. However, we believe that the reduction of this difficult problem to a simpler problem is non-trivial and novel. The use of SVD and PCA in the context of biological learning of finite transformations is new. 
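To make this reduction concrete, here is a minimal offline sketch of the SVD route (our own illustration; the biologically plausible circuit in the paper computes this online and never forms $B$ explicitly):

```python
import numpy as np

def rotation_subspace(X, k=2):
    """X: (T, d) array of whitened observations x_0, ..., x_{T-1}.
    Form the time-averaged bivector B = <x_t x_{t-1}^T - x_{t-1} x_t^T>
    and return its top-k singular vectors, which span the subspace of
    maximal average rotation."""
    C = X[1:].T @ X[:-1] / (len(X) - 1)   # <x_t x_{t-1}^T>
    B = C - C.T                           # antisymmetric bivector average
    U, S, _ = np.linalg.svd(B)
    return U[:, :k], S[:k]
```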
Furthermore, the proposed biologically plausible algorithm for SVD is itself a novel contribution.\n\n> Experiments are very toy-like, with very simplified assumptions whose influence is not tested\n\nWe have expanded our numerical simulations section to include an experiment on natural images and find that our algorithms' performance here is similar to that on the synthetic dataset (see Appendix C for more details). However, we agree with the reviewer that the experiments are of small scale. We designed the experiments to prove the viability of our algorithms in principle.\n\n> Relation to previous work on biologically inspired PCA and other such algorithms is elusive. It is difficult to figure out what is the originality of the proposed approaches, and why previous approaches could not directly be applied.\n\nWe thank the reviewer for pointing out this shortcoming. We will add further details highlighting the relationship and the novelty. One of our approaches uses a previously developed biologically plausible PCA algorithm as a component of motion detection learning. We are not claiming novelty in biologically plausible PCA. However, our use of PCA in the context of motion detection, as well as our biologically plausible SVD algorithm, are novel.\n\n> The assumptions of lines 89-90 seem fundamental to the whole paper, but are not discussed. What do they entail? One particular concern is the independence between $A_t$ and $x_t$, as it is likely that in many practical settings, transformations are performed depending on the previous state.\n\nThis is an important point and we thank the reviewer for bringing it up. It is indeed possible that in many practical settings transformations may depend on the previous state. However, in the context of animal vision, it is critically important for the animal to be able to detect the same kind of transformation being applied to an arbitrary image. This need to decouple images and transformations in the study of vision was recognized by Rao and Ruderman in 1999 [1]. Many subsequent works also assume and build on top of this separation [2].\n\n> Moreover, assuming that the pixel intensity vector is white is also very unusual and unlikely in practice.\n\nWhitening of sensory inputs is considered one of the important functions of the sensory systems, so we believe it is reasonable to assume that the circuit receives prewhitened inputs. Furthermore, a number of biologically plausible algorithms for input whitening have been proposed and experimentally observed in some cases [3,4].\n\n> The experiments seem to assume that rotations in each subspace are also chosen independently, which is not necessarily the case in practice, and may play a key role in identifying the subspaces.\n\nIt is true that in some of our experiments (Fig. 4) subspace rotations are independent. However, this is not an assumption used in the proofs. In particular, in our other experiments (Fig. 5) subspace rotations are correlated. We will clarify this in the manuscript.\n \n> Limitations are not discussed.\n\nWe will add a section to highlight the shortcomings of the approach. An important limitation of our approach is the assumption that all observed transformations come from one commutative group (toroid). We intend to overcome this limitation in our continuing work.\n \n[1] Rajesh P. N. Rao and Daniel L. Ruderman. Learning Lie groups for invariant visual perception. 
In Advances in Neural Information Processing Systems, pages 810–816, 1999.\n\n[2] Matthias Bethge, Sebastian Gerwinn, and Jakob H. Macke. Unsupervised learning of a steerable basis for invariant image representations. Human Vision and Electronic Imaging XII, edited by Rogowitz, 6492:64920C–64920C–12, 2007.\n\n[3] Yang Dan, Joseph J. Atick, and R. Clay Reid. Efficient coding of natural scenes in the lateral geniculate nucleus: Experimental test of a computational theory. Journal of Neuroscience, 15 May 1996, 16(10):3351–3362.\n\n[4] Wanner, A.A., and Friedrich, R.W. Whitening of odor representations by the wiring diagram of the olfactory bulb. Nat Neurosci 23, 433–442 (2020).\n", " We thank the reviewers for their thoughtful feedback. We are pleased that they found the subject of our study important and interesting. We are also encouraged that the reviewers believe that the algorithms can be of interest to the ML community and that the novel biologically plausible algorithms would be of great value to the neuroscience community.\n\nWe found the reviewer comments very constructive and, in accordance with their suggestions, have made modifications. We summarize the changes and additions here and provide more detailed responses to each reviewer individually.\n\n* **Numerical experiments on naturalistic images.** To verify our algorithm in a more challenging setting, we have performed an experiment looking at transformations on naturalistic images. \n \n* **Quantitative comparison with Fourier basis.** We have included a quantitative comparison of the filters obtained by our algorithm (for both the naturalistic and synthetic experiments) with the Fourier basis filters. \n ", " The authors present two online biologically plausible algorithms for the irreducible representation learning method proposed by Cohen and Welling (2014). \nThe first algorithm uses SVD to extract the subspace of maximal average rotation. As part of this algorithm, the authors present a novel algorithm for online SVD: cross-power iteration.\nThe second is based on PCA of time-difference vectors and uses local learning rules to preserve biological plausibility.\nThe authors propose that these models may serve as hypotheses to guide future connectomics and neurophysiology research.\n\nResults. \nThe authors find that both the SVD and PCA methods are able to recover the irreducible representations of SO(2) and S1 on synthetic datasets.\nThese synthetic datasets are sets of random images or vectors sampled from the standard normal distribution, followed by the application of the transformations of interest (translation or rotation).\n Strengths:\n- The paper contributes two novel and interesting methods for learning the irreducible representations of toroidal groups with biologically plausible mechanisms. This is a valuable contribution for both computational neuroscience (as a hypothesis for neural circuitry) and machine learning (as a novel algorithm for learning and modeling transformation structure from data). 
\n\nWeaknesses:\n- It would be nice to see a quantitative analysis on the learned filters, explicitly comparing to the standard translational and rotational Fourier bases, and reporting how well the learned weights approximate these bases.\n- It would be useful to see how well the model trains on natural datasets.\n\nGeneral comments:\n- the paper is framed as a neurobiological model, but it really seems more \"neuromorphic\" as the emphasis is on how to learn all these things with local learning rules in a neural circuit, as opposed to specific neurobiological substrates in the visual system. There are issues such as how to compute arctan, and many others, that would need to be discussed or elaborated more to think of this in neurobiological terms.\n- it will be interesting to see how these results could be extended to real-world applications, such as estimating optical flow from natural videos.\n\n see above see above", " This manuscript proposes biologically plausible algorithms for learning group transformations from sequences of observations. Two algorithms are proposed, one based on SVD and the other on PCA. The performance of these algorithms is evaluated in synthetic experiments.\n Strengths:\n- the authors address a very interesting and challenging question: the biological plausibility of the learning of transformation representations.\n\nWeaknesses:\n- the setting is quite restrictive, reducing the problem to previously proposed biologically plausible algorithms for PCA and SVD.\n- experiments are very toy-like, with very simplified assumptions whose influence is not tested.\n- the relation to previous work on biologically inspired PCA and other such algorithms is elusive. It is difficult to figure out what is the originality of the proposed approaches, and why previous approaches could not directly be applied. The assumptions of lines 89-90 seem fundamental to the whole paper, but are not discussed. What do they entail?\nOne particular concern is the independence between $A_t$ and $x_t$, as it is likely that in many practical settings, transformations are performed depending on the previous state.\nMoreover, assuming that the pixel intensity vector is white is also very unusual and unlikely in practice.\n\nIn addition, the experiments seem to assume that rotations in each subspace are also chosen independently, which is not necessarily the case in practice, and may play a key role in identifying the subspaces. Limitations are not discussed. ", " The present paper studied the neural implementation of learning irreducible representations of commuting group transformations. Specifically, using the 2d rotation group as an example, the paper studied how the neural network learns the rotation groups by implementing different algorithms, including singular value decomposition and principal component analysis, and the different algorithms eventually lead to different network architectures. It is a fundamental question how neural circuits learn commutative group transformations. This study provides two novel solutions, which rely on a single-neuron mechanism (implementing SVD) and a network mechanism (implementing PCA), respectively, to learn the group transformations. The network model implementing PCA (similarity matching specifically) to learn transformations is based on a recent study (ref. 1), so it is not clear how much this part differs from the mechanism presented in ref. 1. 
The whole paper is well structured, but some of the presentation is not clear and the comparison with earlier studies was not done systematically (see details below).\n\n### Writing\n- Eq. 3: the index t should be t-1 if I understood correctly.\n\n- Line 141: should the $\sigma$ be the g in Lemma 1? If so, please use consistent notation across the manuscript. \n\n- The derivation of Eq. 18 from Eq. 16 is not clear, and some motivations are missing. Also, the meaning of the matrices W and M in Eq. 18 is not clear, and I need to guess whether they correspond to feedforward and recurrent weight matrices, respectively.\n\n- Fig. 5a: not clear which algorithm leads to the presented filters. That is, are those filters learnt from the SVD (representing the u and v), or from the network implementation of PCA (W and M)?\n - The Introduction (lines 44-45) mentions that an advantage of learning irreducible representations is that it removes the need for multiplicative units, so that only a linear number of synaptic connections will be needed per neuron. However, later in the Results section, I don’t see a detailed explanation of how this problem is solved in the proposed algorithms. Some elaboration is needed.\n\n- Both models (Fig. 2 and Fig. 3) explicitly output the rotation angles $\theta$. In contrast, there are solid neuroscience studies showing that the angle is represented in a neural population code and can be read out by a linear decoder such as the population vector. I am wondering how much the proposed network model and learning rules will be changed if we use a neural population code to represent the rotation angle.\n - Both models considered in this paper only passively estimate the rotation angle. It would be more interesting to synthesize the transformed images, as in an autoencoder framework.", " The authors depict two novel biologically plausible algorithms for online estimation of transformations using irreducible representations. By building on prior work, existing algorithms making use of commutative Lie groups for decomposing transformations are altered for biological plausibility. Possible neural implementations are depicted, which can be searched for in connectomics. The results are evaluated experimentally on two small examples. Strengths:\n- Originality: The general idea of representing rotations by toroidal groups is not new but nonetheless interesting, and coming up with actual biologically plausible implementations is invaluable.\n- The text is accessible: starting with a general introduction and a good explanation of the problem at hand. \n- All proofs, including the proof of convergence for \"cross-power iteration\", are readily available in the appendix.\n\n\nWeaknesses:\n- Not clear which parts are contributions, i.e. labeled as \"Theorem\"\n- Sadly no code provided\n- Quantitative comparison of the experimental results with \"biologically implausible\" counterparts (i.e. Cohen and Welling) would've been interesting\n\nSuspected errors:\n- Line 73: canonical (ly)\n- Line 227: canter How do the non-biologically plausible SVD-based algorithm and the altered version compare in terms of accuracy? Limitations, such as the biological implausibility of the original SVD-based algorithm, are stated clearly." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "P69Cpfz9Hi", "NrJho8N2NBH", "qrGsgQnB3cj", "zmYScOqmfHj", "jNwRq6fspyu", "ygpIDpFsm8k", "qrGsgQnB3cj", "qrGsgQnB3cj", "joEuus4fSr", "nips_2022_48TmED6BvGZ", "nips_2022_48TmED6BvGZ", "nips_2022_48TmED6BvGZ", "nips_2022_48TmED6BvGZ", "nips_2022_48TmED6BvGZ" ]
nips_2022_Bct2f8fRd8S
The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning
Does prompting a large language model (LLM) like GPT-3 with explanations improve in-context learning? We study this question on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. We test the performance of four LLMs on three textual reasoning datasets using prompts that include explanations in multiple different styles. For these tasks, we find that including explanations in the prompts for OPT, GPT-3 (davinci), and InstructGPT (text-davinci-001) only yields small to moderate accuracy improvements over standard few-show learning. However, text-davinci-002 is able to benefit more substantially. We further show that explanations generated by the LLMs may not entail the models’ predictions nor be factually grounded in the input, even on simple tasks with extractive explanations. However, these flawed explanations can still be useful as a way to verify LLMs’ predictions post-hoc. Through analysis in our three settings, we show that explanations judged by humans to be good—logically consistent with the input and the prediction—more likely cooccur with accurate predictions. Following these observations, we train calibrators using automatically extracted scores that assess the reliability of explanations, allowing us to improve performance post-hoc across all of our datasets.
Accept
The authors perform an analysis that suggests that explanations may not provide reliable signal in few-shot in-context learning, showing that adding explanations yields only minimal gains over raw in-context learning. They then develop an approach to approximate the reliability of predictions automatically using these explanations. In the initial reviews, the reviewers pointed out issues with the empirical rigor of the study and its framing. However, they seem to have addressed these concerns by narrowing the scope of their contributions and providing additional experiments supporting their claims.
train
[ "7wOapfV8m7", "gu3l-nETHbi", "RNd3XKs7237", "CycfcwLfXx", "7cOyFuSdTpKV", "BYl0AIcaXiN", "eGavVUhIVIK", "3x8rNW3WYuL", "SWgkJoKqIln", "K8GZ4Ryirl8", "rC5W4rhbJLz", "9vzRj9H6mTo" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the response! We'd like to respectfully let you know the review score above is not actually updated.", " I think the new framing of the paper \"The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning\" makes more sense. Though the mathematical reasoning chain of thoughts are also likely to be unreliable, the paper didn't provide any evidence to show that. Restricting the domain to textual reasoning makes the paper more consistent. \n\nAlso, thanks for adding comprehensive results for Davici-001 and 002, the results seem very interesting. Davinci-001 and Davinci-002 both use instruction to fine-tune GPT model, but their trends are totally different. Davinci-001 performs very similarly as Davinci and other GPT models, however, Davinci-002 benefits significantly from the explanation in the front. There might be some dark magic behind the scene, but we are not sure what's the actual reason. The author conjectured that this improvements come from data leakage, which I'm not quite convinced because your synthetic dataset also gets huge bump. Anyway, these results seem be consistent with the \"Large Language Models are Zero-Shot Reasoners\".\n\nOverall, I'm satisfied with the author response and would like to raise my score. ", " **Q:** Have you considered the datasets in the FEB benchmark? Is there any reason why those aren't appropriate?\n\n**A:** We only became aware of FEB when it was publicized in April 2022 after its acceptance at NAACL. We agree that including more datasets will make the claims more general, but we primarily focused on the most commonly used tasks at the time of our experiments.\n\nWe initially considered CoS-E due to its prominence, but like the FEB authors, we disregarded it due to the poor quality of the gold standard explanations. In fact, as in Wei et al. (2022), simply including explanations in CommonsenseQA only leads to mild improvements, similar to what we reported in our paper. In any case, solving instances in ECQA/CommonsenseQA often involves materializing commonsense knowledge that’s relevant to the question at hand, rather than producing a reasoning chain grounded in a question and some given evidence, which was our focus here. \n", " Thank you for the very clear answers, the large number of new experiments, and the appropriate reframing of your work.\n\nMy technical correctness concerns have been alleviated, so I have raised my score significantly, from 4 to 6.\n\nI agree that the problem of how much explanations help is deeply important, but I do still worry that the evidence provided here is so limited by the current datasets that it is difficult conclude very much about textual reasoning in general. However, I also agree that this is a necessary first-step.\n\nOn the subject of what other datasets to use, it seems that the Marasović et al. 2022's FEB benchmark (which you appropriately cite) has three datasets that you do not consider? Is there any reason why those aren't appropriate? Including those would make me, and I am guessing many other readers, more confident your claims are sufficiently general.\n\nMarasović, Ana, et al. \"Few-shot self-rationalization with natural language prompts.\" _NAACL Findings_ (2022).", " **Q:** The claim that incorporating explanation as in-context examples only marginally improves performance is a strong claim that contradicts prior work. \n\n**A:** Please see the general response about framing.\n\n---\n\n**Q:** Why are there so many missing cells in Table3&4? 
Will those results affect the conclusion?\n\n**A:** Please see the general response.\n\n---\n\n**Q:** Only 250 testing examples are considered. Can the authors justify their choice, as compared to using the official test set? \n\n**A:** In our preliminary experiments, we found that a test set of 250 examples is enough to yield statistically significant differences between the models we examine. Using larger datasets would significantly increase the cost of this work. We instead focused our experimentation on investigating the effects of choosing different “shots”, running multiple trials for each experiment instead of running on more instances per trial. We also include information about cost in Appendix L.\n\n---\n\n**Q:** For results in Table 1, I do not agree that such improvements (10% relative improvements) are \"mild\". They are just not that surprising compared to some other tasks.\n\n**A:** Thanks for pointing this out. When assessing the scale of the improvements and choosing to describe them as \"mild\" or \"not substantial\", we are using as calibration the facts that (a) Synth is a synthetic dataset, easily solved by a rule-based system, and therefore we expect these models to do very well on it; and (b) supervised models on Hotpot can achieve substantially higher performance as well. (We have added this as footnote 8 in the Appendix.)\n\nWe will describe the improvements more precisely in any future version.\n\n
Please see Appendix K for more details.\n\n---\n\n**Q:** Did you also test on different math-based datasets like MultiArith and GSM8K to evaluate their factuality, consistency, etc.?\n\n**A:** Please see the general response about framing.\n", " **Q:** What automatic rules were used for analyzing the factuality and consistency of the synthetic dataset?\n\n**A:** Please see Appendix G.\n\n---\n\n**Q:** Despite suggesting that E-SNLI can’t be assessed as factual/non-factual, the authors attempt to calibrate E-SNLI using factuality, without discussion of how this works other than using “an analogous score following the same principle”. Why wasn’t the pseudo-factuality setup used for calibrating E-SNLI factuality also used to annotate E-SNLI?\n\n**A:** \nLines 223-224 describe how the principle works. The intuition is that a more accurate predicted explanation should at least cover some aspects of the premise instead of being purely based on the hypothesis. This in turn correlates with an explanation being more grounded in the input.\n\nHowever, while this is a rough heuristic that can be useful for calibration, it isn’t suitable for accurately assessing factuality. For example, in the second example in Table 11 of Appendix F (inlined below), the premise and explanation have low lexical overlap, but the explanation is grounded in the premise. Assessing this is highly subjective and requires human judgments, likely multiple judgments by annotators agreeing on some standard. We felt that devising an annotation protocol for it was outside the scope of this paper. Because of the extensive design and vetting process that would be necessary for this, we omitted it to avoid putting forth a human evaluation that may be untrustworthy.\n\n> A person wearing sunglasses, a visor, and a British flag is carrying 6 Heineken bottles.\n>\n> Q: The person has beer bottles and has an england flag and glasses. True, False, or Neither?\n>\n> A: False, because the person could be carrying any number of items and not just six Heineken bottles. \n\nHere is another example of a particularly subjective case:\n\n> A group of rollerskaters skate on a patch of cement with palm trees in the background.\n>\n> Q: A boy is walking around on the grass. True, False, or Neither?\n>\n> A: False, because the boy is not rollerskating.\n\n---\n\n**Q:** Why is the reliability of the E-P method not assessed on E-SNLI?\n\n**A:** We did not evaluate the reliability of E-P for E-SNLI as its performance was far behind Few-Shot and P-E.\n\n---\n\n**Q:** Why weren’t any standard few-shot learning datasets considered, especially ones where previous work found explanation generation helpful?\n\n**A:** We considered a range of commonly used datasets in the literature (see the Wiegreffe and Marasović survey, 2022) and filtered by the following criteria: (1) textual explanations were annotated in prior work; (2) the datasets focus primarily on textual reasoning (not math reasoning); (3) GPT-3 is not already solving the dataset in a few-shot setup without explanations.\nIf the reviewer has suggestions of standard few-shot datasets that satisfy these criteria, we will be happy to consider them for a future version.\n\n---\n\n**Q:** No evidence is given that the scores used for approximating factuality correlate with the actual factuality annotations reported in Table 2.\n\n**A:** We verified the correlations in the following way. 
Consider using the approximate factuality scores (real values) of explanations generated using the E-P paradigm to predict the ground-truth factuality annotations (binary labels). If we treat these scores as a single feature, a classifier based on that feature has an AUC of 76.5, substantially higher than a random baseline of 50.0.\n", " As shown in the table above, the results on OPT and davinci are consistent with our findings on text-davinci-001. E-P does consistently provide the strongest performance on the AdvHotpot setting, but the improvements are 5% absolute or less. On Synth and ESNLI, E-P typically degrades performance (except on Synth for text-davinci-001) and P-E is inconsistent across the different models. Overall, vanilla LLMs (OPT and davinci) see limited benefit from using explanations, and even text-davinci-001 does not see substantial improvement.\n\nThe only exception is text-davinci-002, which greatly benefits from explanations in the prompt across all three tasks; E-P is consistently more effective than P-E. However, it is unclear what makes the difference. As far as we are aware, the differences between text-davinci-002 and text-davinci-001 are not described in any publication or blog post. (One publicly described difference is the addition of editing and insertion, discussed at https://openai.com/blog/gpt-3-edit-insert/, but this does not explain the performance differences we observe.)\nComparing davinci and text-davinci-001, we see that the move to Instruct series models is *not* sufficient to explain the difference.\n\nOne possibility is that 002 is an updated version of 001 that includes more Instruct data collected using the API. One hypothesis for the improvement is data leakage from our test set. Because we started running experiments for this work in late 2021, it is conceivable that text-davinci-002 was trained on human-written completions for our data. Another hypothesis is that text-davinci-002 features T0-like fine-tuning on some available datasets such as HotpotQA, which would also change the interpretation of the results.\nGiven the lack of transparency around this model, we hesitate to make scientific claims here. In any case, the fact that the stronger results only appear in this setting means that we cannot broadly argue that explain-predict is the superior setting.\n\n\n**Missing Numbers in Tables 3 and 4:**\n\nThe missing numbers either do not apply to those rows or do not affect the claims of the paper.\n\nLine 2 in Table 3: given the limited prompt length, the extra examples cannot be added to the prompt. Naively adding these examples and truncating the prompt would lead to results identical to the leftmost value in this row. So the other values are intentionally left blank.\n\nLine 3 in Table 3: we use the performance obtained using 128 examples as an estimate of the upper-bound performance of Few-shot (NN) (Liu et al., 2022). This performance (58.9) is already lower than that of any of the other calibrator-based approaches (FewShot+ProbCalib, P-E+ProbCalib, and P-E+ExplCalib), even those using only 32 examples. So these values would not affect our claims. \n\nLine 5 in Table 3: similar to Line 2, these are not applicable given the prompt length limit.\n\nIt’s worth noting that filling in Few-Shot (NN) for Tables 3 and 4 would cost roughly $600. Given that it doesn’t affect the claims, we intentionally avoid doing it to reduce the cost. 
(We also included information about cost in Appendix L, which is useful for reproducibility.)\n", " Thanks to all reviewers for the thoughtful comments and feedback! We added substantial content covering new experiments in the appendices of the paper, which we describe below. We'd also like to clarify the main framing of the paper as well as answer some common questions posed by more than one reviewer.\n\n**Changes to the paper**\n\nWe added Appendices G-L and left the body of the paper mostly unchanged. We will integrate this information throughout the paper in any future version. The updated paper PDF contains the full appendix for easier viewing, so you do not need to consult the supplementary materials.\n\nThe changes are as follows:\n\n* Added the results of Few-Shot, E-P, and P-E for other language models, including OPT-175B, davinci (GPT-3 API non-Instruct series), and text-davinci-002. Please see Appendix H for details. We also inlined the new table below.\n* Added the reliability evaluation for the synthetic dataset on these other language models (Appendix I).\n* Added the results of using an alternative style of explanations for the synthetic dataset (Appendix J).\n* Added the results of including \"let's think step by step\" in the prompt (Appendix K).\n* Added the details of the automatic rules used for assessing the consistency and factuality of explanations for the synthetic dataset (Appendix G).\n* Added information about the cost of running our experiments (Appendix L).\n\n**Framing of the paper**\n\nOur paper focuses on textual reasoning tasks. As a result, we do not extensively discuss symbolic and arithmetic reasoning datasets; some of these bridge textual and mathematical reasoning, but we still consider these problems to have different characteristics. To make this clearer, we changed the title of the paper to:\n\n*The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning*\n\nWe view our results as complementary to Scratchpad and chain-of-thought: using step-by-step explanations clearly works well for those types of symbolic reasoning and math problems, and even gives some gains here as we note in the paper. But these methods do not increase accuracy as much as we might expect here, and most surprisingly, the explanations they generate have flaws that we did not expect, most notably regarding factuality. These limitations are, in our view, worth calling out.\n\n**Results using Other LLMs:**\n\nFor the revision, we include additional results on OPT-175B, davinci (GPT-3 non-Instruct series API), and text-davinci-002 (the latest Instruct series API). The detailed results can be found in Appendix H and Appendix I. 
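For reference, here is a rough sketch of how the two paradigms order the explanation and the answer in a prompt (our own illustration with made-up examples; the exact templates used in the paper may differ):

```python
# Illustrative demonstration templates (not the paper's exact prompts).
EP_DEMO = "Q: {question}\nExplanation: {explanation}\nA: {answer}\n\n"  # explain, then predict
PE_DEMO = "Q: {question}\nA: {answer}\nExplanation: {explanation}\n\n"  # predict, then explain

def build_prompt(demos, test_question, template=EP_DEMO):
    """Concatenate few-shot demonstrations, then append the test question;
    the LLM completes the remaining explanation/answer fields."""
    shots = "".join(template.format(**d) for d in demos)
    return shots + f"Q: {test_question}\n"

demos = [{"question": "Is the sky blue on a clear day?",
          "explanation": "The atmosphere scatters blue light most strongly.",
          "answer": "Yes"}]
print(build_prompt(demos, "Is grass typically green?"))
```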
\nWe include the table from the paper; here is our discussion of the results (also copied from the appendix of the revised paper):\n\n| **LM** | **Approach** | **Synth** | **AdvHotpot** | **ESNLI** |\n|------------------|--------------|:---------:|:-------------:|:---------:|\n| OPT-175B | Few-Shot | **40.5** | 49.7 | **44.0** |\n| OPT-175B | E-P | 29.6 | **52.6** | 39.3 |\n| OPT-175B | P-E | 40.2 | 43.3 | 43.4 |\n| davinci | Few-Shot | 49.5 | 49.1 | 43.3 |\n| davinci | E-P | 47.1 | **54.1** | 40.4 |\n| davinci | P-E | **51.3** | 48.7 | **48.7** |\n| text-davinci-001 | Few-Shot | 54.8 | 53.2 | 56.8 |\n| text-davinci-001 | E-P | **58.5** | **58.2** | 41.8 |\n| text-davinci-001 | P-E | 53.6 | 51.5 | **59.4** |\n| text-davinci-002 | Few-Shot | 72.0 | 77.7 | 69.1 |\n| text-davinci-002 | E-P | **86.9** | **82.4** | **75.6** |\n| text-davinci-002 | P-E | 81.1 | 77.2 | 69.4 |\n\n", " The authors suggest that conditioning on and generating explanations of a task or problem yields only minimal gains over raw in-context learning, but that the validity of such explanations may be correlated with the accuracy of answers the model produces. They use this hypothesis to motivate approximating reliability automatically as a method of calibrating in-context learning.\n\nThree datasets are used. One is an automatically generated synthetic dataset that is engineered so that automatically testing the reliability of explanations is possible, at least in theory. One is an adversarial subset of a previous QA dataset (HotPotQA) that is balanced so that GPT-3's default performance is split 50/50 between correct and incorrect answers. The third is a Natural Language Inference (NLI) dataset with human-annotated explanations.\n\nThe authors consider two methods for using explanations: explain-then-predict and predict-then-explain. In the former, the model first provides reasoning and then makes a prediction, with past work making the claim that this explanation helps the model predict more accurately. In the latter, the model predicts then explains, so that its explanation does not directly impact the prediction, but the explanations in the few-shot demonstrations still impact the final prediction. Varying numbers of few-shot demonstrations are used, as can fit in GPT-3's context window.\n\nThe results show only mild gains from using explanations on these benchmarks (which were specifically intended to probe the use of these explanations and thus may not be representative of the average few-shot learning case). The comparison to previous work is narrativized on lines 123-129, but evidence is not given for whether this narrative holds, which makes it difficult to assess.\n\nFactuality and consistency are inspected. For the synthetic dataset, rules are used to judge consistency, but this rule system is not described, even in the appendices. Again, while it is possible these rules are sound, it seems there are many potential pitfalls and not describing these rules makes their validity impossible to assess. For the other datasets, the authors manually annotate 100 of the same examples to report an impressively high annotator agreement. The results suggest that non-factual explanations indicate incorrect prediction, as a rule of thumb. Reporting of results is somewhat inconsistent: on E-SNLI E-P is not reported and factuality is not judged, though the authors give some reasoning for both of these. 
Not reporting the E-P strategy because results are so poor still seems strange.\n\nNext, the authors attempt to calibrate the model, using a handful of extra data and human judgements. This is especially interesting, as the authors are more-or-less using the full context window available to GPT-3, so these examples could not have been used as in-context demonstrations. A method for calibrating factuality on each dataset is proposed. While sensible, these methods are dataset specific and somewhat ad hoc, with no evidence given that they actually correlate with the factuality annotations reported in Table 2.\n\nDespite suggesting that E-SNLI can't be assessed as factual/non-factual, the authors attempt to calibrate E-SNLI using factuality, without discussion of how this works other than using \"an analogous score following the same principle\" where they consider the premise of the NLI inference as the context. If this is how calibration happens, why couldn't this have been done when annotating E-SNLI for factuality?\n\nAll the calibration methods are shown to improve the accuracy of the results.\n\nRelated work is reviewed. A clear discussion of the potential risks of explanations given by models, especially trusting such explanations too much, is presented. Strengths\n\n- The research questions are well-motivated: despite impressive performance on selected datasets it is unclear how much or why explanations help models.\n- Evidence given that factuality is correlated with consistency, and non-factuality with incorrect prediction, is an interesting result.\n- Exploiting extra data that can't be put into the context window of a model for calibration is potentially a useful technique for many tasks.\n- The calibration shows seemingly meaningful performance improvements (though on highly specific and sometimes custom datasets).\n\nWeaknesses\n\n- ~~Only one version of GPT-3 is considered—GPT-3 Instruct, which is finetuned on human feedback, making the results somewhat less comparable to other work.~~ Edit: Authors have added significant new results.\n- Result reporting is selective—on E-SNLI factuality is not reported, the E-P method is not assessed, and Table 3 is missing many values. Edit: Authors have provided reasoning as to why some numbers were not included, though in some cases (e.g., the expense of calculating those numbers) this is still a weakness, albeit a less intense one.\n- While the bespoke datasets for looking at this hypothesis are useful tools, it is unclear what these results mean for tasks where few-shot learning is commonly used. Edit: the authors have revised their framing in a way that partially alleviates this, focusing on textual reasoning. However, the available datasets still comprise a tiny fraction of textual reasoning datasets.\n- ~~The comparison to previous work is described, but evidence is not directly given. 
(Lines 123-129)~~ Edit: the authors have specified a new framing that makes this comparison significantly more valid.\n- ~~The automatic rules for evaluating factuality and consistency are not described, even in the appendices.~~ Edit: Authors have added a description of their synthetic task.\n- ~~Despite suggesting that E-SNLI can’t be assessed as factual/non-factual, the authors attempt to calibrate E-SNLI using factuality, without discussion of how this works other than using \"an analogous score following the same principle\".~~ Edit: Authors have described why they can heuristically calibrate factuality without being able to assess it.\n- ~~The calibration methods are dataset specific and somewhat ad hoc, with no evidence given that they actually correlate with the factuality annotations reported in Table 2.~~ Edit: Authors have provided reasoning why their calibration methods should correlate with factuality annotations. - Why wasn’t the pseudo-factuality setup used for calibrating E-SNLI factuality also used to annotate E-SNLI?\n- What automatic rules were used for analyzing the factuality and consistency of the synthetic dataset?\n- Why weren’t any standard few-shot learning datasets considered, especially ones where previous work found explanation generation helpful? ~~Authors have adequately discussed the limitations of their work.~~ Edit: Given the new framing, it will be very important for the authors to revise the paper to make it clear how the available datasets limit the claims that can be drawn from this work, as the types of textual reasoning studied are rather limited.", " This paper aims to investigate different aspects of explanation-enhanced few-shot in-context learning, especially under textual reasoning scenarios with the tasks of question answering and natural language inference. The paper mainly discusses: 1) whether the explanation fed into the prompt in P-E or E-P format can help the model improve its accuracy on downstream tasks, 2) whether the explanation is consistent with its prediction, 3) whether the explanation itself is factual enough to be trusted, 4) whether we can use the explanation to help the model calibrate its confidence in prediction. The paper performs experiments on three datasets, one synthetic dataset plus two existing annotated datasets. Through the paper, the authors claim that: 1) the explanation can mildly help the accuracy, 2) the explanation is mostly consistent with the prediction, 3) explanations are quite unfaithful, causing significant hallucination, 4) using the explanation to calibrate the model could further help the model. Strength:\n1. the paper builds on top of the recent breakthrough of explanation-based in-context few-shot learning (chain-of-thought) to investigate its reliability issue.\n2. the paper investigates a highly important problem in large language models, which could contribute a lot to the community and influence follow-up research.\n3. the paper dives deeper into the issue of explanations and proposes explanation-based calibration methods.\n\nWeakness:\n(See the Limitation Section)\nWithout sufficient experiments (synthetic + human-annotated datasets across different tasks), it's hard to draw a more convincing conclusion. On the other hand, re-running the experiments on Davinci-002 will help tease out the confounding factor of prompt/model. \n\n Did you also test on different math-based datasets like MultiArith and GSM8K to evaluate their factuality, consistency, etc.? 
The limitation of this paper is mainly in its deficiency of experiments to support its claims. First of all, the paper only tests two tasks, namely question answering and natural language inference. Furthermore, for each of these two tasks, the authors only pick one human-annotated dataset. I think this might not be sufficient to draw a more general conclusion about LLMs/GPT-3. I would hope to see more tasks; even many synthesized tasks would help in better understanding LLMs' explanations. \n\nThe other limitation is that the paper doesn't provide enough detail about its prompt engineering. According to \"Large Language Models are Zero-Shot Reasoners\", picking a correct prompt could totally change the landscape. I'm afraid that the lack of comprehensive prompt engineering will make the claims weaker. For the experimental result in Table 1 (E-P, E-SNLI), I'm not sure whether better prompt engineering or Davinci-002 would make the number totally different. ", " This paper explored the capabilities of GPT-3 in using explanations in in-context learning. Although using in-context explanations has been demonstrated to be quite helpful for symbolic reasoning tasks, this paper finds that it's not the case for textual reasoning tasks such as NLI. Upon investigation, they find that simply including an explanation as an in-context example does not always yield big improvements. Their analysis shows that this is because the explanations generated by GPT-3 could be nonfactual. Given those observations, they showcase how to use in-context explanations to calibrate GPT-3's predictions. The proposed method achieves significant improvements over three datasets. The observation in this paper is interesting and could be a good complement to current in-context explanation research. \nThe proposed explanation-based calibration method is simple yet effective. \nThe paper is clear and easy to follow.\n\nHowever, I also have the following concerns. The claim that incorporating explanations as in-context examples only marginally improves performance is a strong claim that contradicts prior work. To back up the claim, I suggest the authors include a few more datasets in Table 1. Besides, for experiments in S2, only 250 testing examples are considered. Can the authors justify their choice, as compared to using the official test set? Since on such a small test set, the conclusion is not that convincing. \nBesides, for results in Table 1, I do not agree that such improvements (10% relative improvements) are \"mild\". They are just not that surprising compared to some other tasks. Why are there so many missing cells in Tables 3 & 4? Will those results affect the conclusion? The authors did not address the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "gu3l-nETHbi", "BYl0AIcaXiN", "CycfcwLfXx", "eGavVUhIVIK", "9vzRj9H6mTo", "rC5W4rhbJLz", "K8GZ4Ryirl8", "SWgkJoKqIln", "nips_2022_Bct2f8fRd8S", "nips_2022_Bct2f8fRd8S", "nips_2022_Bct2f8fRd8S", "nips_2022_Bct2f8fRd8S" ]
nips_2022_-vXEN5rIABY
Inductive Logical Query Answering in Knowledge Graphs
Formulating and answering logical queries is a standard communication interface for knowledge graphs (KGs). Alleviating the notorious incompleteness of real-world KGs, neural methods achieved impressive results in link prediction and complex query answering tasks by learning representations of entities, relations, and queries. Still, most existing query answering methods rely on transductive entity embeddings and cannot generalize to KGs containing new entities without retraining entity embeddings. In this work, we study the inductive query answering task where inference is performed on a graph containing new entities with queries over both seen and unseen entities. To this end, we devise two mechanisms leveraging inductive node and relational structure representations powered by graph neural networks (GNNs). Experimentally, we show that inductive models are able to perform logical reasoning at inference time over unseen nodes generalizing to graphs up to 500% larger than training ones. Exploring the efficiency--effectiveness trade-off, we find the inductive relational structure representation method generally achieves higher performance, while the inductive node representation method is able to answer complex queries in the inference-only regime without any training on queries and scale to graphs of millions of nodes. Code is available at https://github.com/DeepGraphLearning/InductiveQE
Accept
It is very hard to make a final decision on this paper; the scores are 4, 6, 8, and 5. The research problem raised in this paper is interesting and worth further study. However, reviewers have raised some concerns about the experimental results. In the original paper, the authors did not compare with any external baselines. In the discussion period, they presented additional results with BetaE, which was originally designed for transductive reasoning. Not surprisingly, BetaE delivers poor results. How strong is this comparison? One would expect some experiments with better initialization for unseen entities, e.g., with KB embeddings (e.g., TransE and ConvE used in KBC tasks).
val
[ "78ydZOlM_G", "KT8qv_ElOMl", "ubVwLdolJH6", "WcPGQPDP4nw", "cp9Q_DGJ-zM", "qGfbE2rMfQ4", "gsLDVbtnFHC", "a3SbpB61uc", "soEikxEsk7h", "supK8F7dA1G", "GRI7twJrfts", "_Nf1cD8YXfu", "oWx_x0LvcSV", "cVY0WL46uV-", "aAxW4MsAkSt", "XqTaP3kTYn", "c8ma3t7hoN", "H9GfKHAHvI", "MTd_7R_WP1x", "QlQyyvwltlM", "skRJ9zJA4J", "bOmJy3B1Zrf", "V7wszp_QQHJ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestion - the edge-type baseline should indeed be stronger. \nWe finished implementing this baseline within those hours after your suggestion but did not manage to complete the experimental evaluation until the end of the discussion period. Nevertheless, we will add this baseline's results to both small and large datasets in the camera-ready version. ", " > Do you plan to discuss this in the main paper?\n\nYes, camera-ready versions allow for one more page where we plan to include the ROC AUC discussion.\n\n> I do not see the updated appendix and cannot judge the section text. Can you post the text here?\n\nSure, posted below. The text is also available in Appendix E - please note that we included the whole Appendix into the revised manuscript here on OpenReview - once you open a PDF, it should include 10 more pages of the appendix after the main text and references. All new materials are highlighted with the blue font.\n\n\n**Identifying Easy and Hard Answers**\n\nIn addition to evaluating *faithfullness* that measures whether a model could recover easy answers, it is also interesting to measure whether all easy answers can be ranked higher than hard answers. That is, a reliable query answering model would first recover all possible easy answers and would enrich the answer set with highly-probable hard answers.\nTo this end, we apply a ROC AUC metric over original *unfiltered* scores.\n\nThe idea of computing ROC AUC is as follows:\nSuppose we have a list of unfiltered raw scores from which we extract scores of all easy and hard answers. Suppose we have a query with 4 easy and 1 hard answer: [5, 6, 7, 8, 32] where 8 is a rank of a hard answer while 5, 6, 7, 32 are ranks of easy answers. \nWe then create binary labels for the scores assigning 1 to the hard answers, e.g., [0, 0, 0, 1, 0].\n\nGiven those two arrays, we then compute the ROC AUC score that would measure how many hard answers are ranked after easy answers, e.g., in our example ROC AUC is 0.75. \nNote that the score does not depend on actual values of ranks, that is, the metric will be high when easy answers are, e.g., ranked 1000-1004 as long as hard answers are ranked 1005 and lower. \nTherefore, ROC AUC still needs to be paired with MRR to see where easy and hard answers are ranked absolutely.\n\nWe compute ROC AUC for each query and average them over each query type thus making it macro-averaged ROC AUC. \nOur experimental results on all query types using the models reported in Table 1 on the reference 175% dataset are compiled in Table 14.\n\nNBFNet-QE performs almost perfectly w.r.t. ROC AUC as it was trained on complex queries. \nNodePiece-QE models are acceptable for inference-only models that were only trained only on 1p simple link prediction and have never seen any complex query at training time.\n\n> That is, I don't understand what curve leads to an area of 0.75 under it given your example\n\nROC AUC measures the probability of a random positive sample to have a higher rank than a random negative sample. In our example, we have 1 positive and 4 negative samples - choosing negatives randomly, we get 4 possible pairs (8 vs 5, 8 vs 6, 8 vs 7, 8 vs 32). In 3 out of 4 cases, the rank of positive sample 8 is higher, hence ROC AUC is also ¾ = 0.75.\n\nIn simple words, ROC AUC measures how many data points with positive labels have higher scores (or higher ranks in our case) than data points with negative labels. In our case, positive labels are hard answers, and negative labels are easy answers. 
Having an array of binary labels and absolute scores (ranks), we can construct a ROC ([Receiver Operating Characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)) curve (e.g., using a [standard sklearn function](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)) that depicts the False Positive Rate on the X axis and the True Positive Rate on the Y axis. \n\n> Is this a sufficient metric for describing faithfulness? What if we consider the easy answers as the gold target and calculate hits@k?\n\nROC AUC does help to understand whether hard answers are ranked after easy ones (and it is a sufficient metric for that task), but we would argue that ROC AUC **alone** is not a sufficient metric to properly assess faithfulness - that is, we still need to combine it with filtered ranking metrics to assess absolute rank values.\n\nGenerally, faithfulness as defined by Sun et al [1] is different from ranking easy answers before hard answers. Faithfulness is only about recovering easy answers in full graphs. Technically, we can rank all easy answers perfectly with a subgraph matching algorithm (which cannot solve any hard answer); it would be perfectly faithful but meaningless in a realistic setting with graphs with missing links. \n\nWe admit that a good query answering model generalizes to missing links and is faithful to existing links. However, it is not meaningful to look at faithfulness alone, as a subgraph matching algorithm has perfect faithfulness but no generalization to missing links. We should also emphasize that faithfulness is much easier to achieve than generalization. For any neural model, we can ensemble it with a subgraph matching algorithm to achieve nearly perfect faithfulness.\n\n[1] Sun et al. Faithful Embedding for Knowledge Base Queries. NeurIPS 2020.\n\n", " Yes, this is correct. We provide dataset stats in Table 3 of Appendix B - the average degree decreases from 27 to 5 in training graphs and from 40 to 20 in inference graphs.\n\nPlease note that we included the whole Appendix in the revised manuscript here on OpenReview - once you open the PDF, it should include 10 new pages of appendix after the main text and references. Appendix B is also available in the separate PDF in the zip archive with the supplementary materials.\n", " As we are at the end of the discussion period, we won’t be able to add new experiments to the current revision. Still, we would argue that any fine-tuning of embeddings of new nodes at inference time essentially leads to: \n\n(1) either creating a new inductive model - like averaging of neighboring training entities - which has actually been proposed in [1] and is likely to work only in relatively dense graph scenarios where the number of new nodes is much smaller than the number of known trained nodes. That is, for example, in our scenarios of up to 550% of new nodes, most new nodes will be aggregating randomly initialized vectors, and the signal of the few trained embeddings is likely to disappear in the noise. In fact, any node neighborhood aggregation is akin to message passing, and we have shown in the original experiments with NodePiece-QE and NBFNet-QE that message passing GNNs do help to enrich node representations but still deteriorate with the growth of the inference graph. \n\nOr (2) re-training node embeddings over the inference graph - that is the transductive scenario again with all nodes known (like the proposed task of additional link prediction training after adding new nodes). 
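As a concrete illustration of option (1) - and of why it degrades when most neighbors are themselves new - here is a minimal sketch (hypothetical names, not the implementation from [1]):

```python
import torch

def init_unseen(emb, is_trained, neighbors):
    """Initialize each unseen entity as the mean of its neighbors' embeddings,
    falling back to random vectors for neighbors that are themselves unseen.
    emb: (num_nodes, dim) tensor; is_trained: boolean per-node flags;
    neighbors: dict mapping a node id to a list of neighbor ids."""
    for node, nbrs in neighbors.items():
        if is_trained[node] or not nbrs:
            continue  # keep trained embeddings; isolated nodes stay random
        vecs = [emb[n] if is_trained[n] else torch.randn_like(emb[n]) for n in nbrs]
        # With up to 550% new nodes, most entries in vecs are random draws,
        # so the mean carries little signal from the few trained neighbors.
        emb[node] = torch.stack(vecs).mean(dim=0)
    return emb
```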
Particularly in scenarios with large inference graphs, like those of 217% and higher, where the inference graph is 2.5-25x larger in the number of edges, any additional fine-tuning / optimization would take much more time than the original training. Practically, this step does not conform with the goal of learning inductive representations and fast inference, which we can achieve in one model forward pass - on the order of (milli)seconds - whereas re-training every time the graph changes costs hours.\n\nBoth options mentioned by the Reviewer differ from the standard inductive setup (e.g., as defined in the GraIL paper [2], where new nodes in the inference graph arrive only with the connectivity information without any input node features), where we are not allowed such fine-tuning and we would like to obtain inductive node / relational structure representations in the true inductive zero-shot way. \n\n[1] Albooyeh et al. Out-of-Sample Representation Learning for Knowledge Graphs. EMNLP Findings 2020\n[2] Teru et al. Inductive Relation Prediction by Subgraph Reasoning. ICML 2020. \n", " Thanks for taking the time to respond. I don't have any more queries. I do appreciate your work and the contribution. In light of your responses, I am still comfortable sticking to my current score. ", " Thank you for thinking about this. I feel like this is an important aspect of the task that you are proposing. Do you plan to discuss this in the main paper?\n\nI do not see the updated appendix and cannot judge the section text. Can you post the text here?\n\n1. I am missing some links here, but I'm unsure how this is an \"area under the curve\". That is, I don't understand what curve leads to an area of 0.75 under it given your example.\n\n2. Is this a sufficient metric for describing faithfulness? As you point out, this metric ignores the overall ranking quality. It does not account for predicting incorrect answers before the easy answers.\n\n3. Suggestion: What if we consider the easy answers as the gold target and calculate hits@k? This would be for the evaluation queries as opposed to the train queries already reported. Do you see value/drawbacks in this metric?", " > Yes, the performance is non-trivial w.r.t random guessing - the probability of an answer to be in top-100 predictions over 2M nodes is 100 / 2*10^6 = 0.00005.\n\nI think the random baseline can be made much stricter with answer type matching (entities that have an incoming edge of type *r*, assuming the query has a type *r* edge leading into the answer node). \n\nMoreover, this can be made stronger with some kind of reordering by the distance from the anchor node, e.g., entities of the expected answer type within a 3-hop neighborhood of the anchor entity (I don't expect this will give much of a boost given your previous argument about graph diameter).\n\nThe claim about non-trivial performance should be compared against such a stronger, non-trivial baseline.", " > W1. 
They did not compare the performance of their model to any other external baselines, so it's not clear how well the proposed models perform.\n\nIt's not surprising that random initialization of transductive baselines shows poor performance in inductive settings (thank you for demonstrating this). I feel that, to really drive home the point that simple modifications to transductive baselines will not work in this setting, the authors can try to initialize embeddings for new entities smartly, e.g., embeddings for new entities can be initialized to minimize the 1p training objective while holding relation and old entity embeddings frozen.\n\nThis would further strengthen the original claim that the proposed approaches have strong performance without training on growing graphs.\n\nThis may require fine-tuning for BetaE but might be easier to do for transductive link prediction models that are based on geometry, like RotatE (new entity embeddings are the average of rotated embeddings of neighbors).", " We thank the reviewers for the valuable feedback that helped to improve the manuscript. In addition to the individual responses, in this general response we would like to draw your attention to major manuscript updates (marked in blue in the new version) and 4 new experimental results.\n\n1. Addressing the comments of Reviewers **YzXG** and **YfZT**, we updated all results of NodePiece-QE and NodePiece w/ GNN thanks to the improved pre-training strategy. In particular, for the GNN version, it leads to almost 2X better performance on both test queries (Table 1, Figure 3 in the main text, and Appendix C) and 1.5X on training queries (Figure 4 in the main text, and Appendix C) as a direct outcome of better fitting the training data. With that, the gap between the inference-only NodePiece-QE w/ GNN and the trainable NBFNet-QE is significantly reduced, and for some queries and dataset ratios, the inference-only model even outperforms the trainable model, e.g., on *ip* and *2u* queries (Figure 5 in Appendix C). We believe this is a strong signal that better fitting of a simple link prediction model might result in significant performance gains.\n\n2. Addressing the comments of Reviewer **tzzT**, we experimented with transductive BetaE where embeddings of new nodes at inference time are initialized randomly. BetaE results on the reference 175% dataset are attached below and added to Table 1, with the hyperparameter description in Appendix D. The results suggest that state-of-the-art transductive complex query answering approaches do not work in the inductive setup.\n\n| | avg\\_p | avg\\_n | 1p | 2p | 3p | 2i | 3i | pi | ip | 2u | up | 2in | 3in | inp | pin | pni |\n| -------------------- | ------ | ------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | --- |\n| *BetaE (transductive)* | *1.8* | *0.4* | *2.8* | *0.8* | *0.4* | *3.2* | *5.1* | *2.1* | *1.1* | *0.3* | *0.3* | *0.3* | *0.7* | *0.4* | *0.3* | *0.2* |\n| NodePiece-QE | 11.3 | \\- | 24.1 | 10.5 | 10 | 11 | 12.8 | 9.1 | 8.6 | 7.3 | 8.3 | \\- | \\- | \\- | \\- | \\- |\n| NodePiece-QE w/ GNN | 37.4 | \\- | 56.5 | 27.1 | 16 | 49.3 | 57.1 | 35.7 | 31.8 | 39.7 | 23.6 | \\- | \\- | \\- | \\- | \\- |\n| NBFNet-QE | 51.1 | 31.4 | 66.1 | 40.9 | 31.2 | 73 | 83.3 | 58.3 | 41.3 | 37.8 | 27.8 | 31.1 | 44.3 | 28.4 | 25.2 | 28 |\n\n3. 
Addressing the comments of Reviewer **YzXG**, we ran an experiment to identify whether all “easy” answers (not requiring link prediction) are ranked higher than “hard” answers (which do require inferring missing edges). For that, we compute a **macro-averaged ROC AUC** score from the **unfiltered** predictions. Results on the reference 175% dataset are presented below and added to Appendix E with further details on the methodology and setup. We will add ROC AUC results for the other splits in the final version.\n\n| | avg_p | avg_n | 1p | 2p | 3p | 2i | 3i | pi | ip | 2u | up | 2in | 3in | inp | pin | pni |\n| ------------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |\n| NodePiece-QE | 0.653 | | 0.609 | 0.656 | 0.666 | 0.628 | 0.627 | 0.645 | 0.681 | 0.677 | 0.686 | | | | | |\n| NodePiece-QE w/ GNN | 0.675 | | 0.688 | 0.689 | 0.667 | 0.655 | 0.624 | 0.646 | 0.69 | 0.713 | 0.701 | | | | | |\n| NBFNet-QE | 0.978 | 0.901 | 0.997 | 0.981 | 0.962 | 0.987 | 0.978 | 0.975 | 0.975 | 0.982 | 0.965 | 0.919 | 0.884 | 0.888 | 0.913 | 0.904 |\n\nThe results suggest that NBFNet-QE performs almost perfectly w.r.t. ROC AUC as it was trained on complex queries. NodePiece-QE models are pretty good for inference-only models that were trained only on 1p simple link prediction and have never seen any complex query at training time. \n\n4. Addressing the comments of Reviewers **YzXG** and **YfZT**, we updated Table 2 with the performance of NodePiece-QE on the large WikiKG dataset, increasing the average EPFO Hits@100 performance by a relative 40% (from 6.6 to 9.2), and also made an attempt to train NodePiece-QE w/ GNN in the full-batch mode for 1 epoch. GNN results are slightly higher on average, but that is achieved mainly by 1p accuracy. We will run the GNN pre-training for a longer time and update the results in the final version.\n\n| | EPFO | 1p | 2p | 3p | 2i | 3i | pi | ip | 2u | up |\n| ------------------- | ---- | ---- | --- | --- | ---- | ---- | --- | --- | --- | --- |\n| NodePiece-QE | 9.2 | 22.6 | 5.2 | 3.9 | 11.6 | 17.4 | 7 | 4.5 | 7.4 | 3.2 |\n| NodePiece-QE w/ GNN | 10.1 | 66.6 | 0.9 | 0.6 | 5.4 | 8.2 | 2.3 | 0.8 | 5.2 | 0.5 |\n\n\nThank you, and please feel free to ask any further questions.\n", " (A2) Better scaling of relational GNNs, especially in terms of GPU memory consumption. As discussed in the answer to Reviewer YzXG pertaining to the WikiKG dataset, GraphSAGE-style neighborhood sampling won’t help much in the link prediction setup as the graph diameter is relatively small, and with a high number of negative samples, the input edge index quickly becomes comparable to the full graph with 3-hop and higher neighborhood sampling. Besides, at inference time, we still have to rank all 2M nodes, and it is still more efficient to run a GNN in the full-batch mode. That said, during the rebuttal period, we managed to pre-train a 50d NodePiece-QE w/ GNN in the full-batch mode for 1 epoch on a 48 GB GPU (which takes about 20 hours) and reported initial results - the 1p performance is surprisingly high while the complex query performance is quite low. We commit to running pre-training in the distributed mode for a longer time and will report updated numbers in the camera-ready version. \n\n(A3) Distillation of trained NBFNet-QE into more efficient shallow node embeddings. This is a rather non-trivial task as it implies distilling a model of one architecture into a totally different (or simpler) one. 
But that would theoretically allow running a model with performance like NBFNet-QE in the inference-only mode.\n\n(A4) More expressive decoding strategies for inference-only methods. We observe from the experiments that for CQD-Beam, reaching high 1p simple link prediction pre-training accuracy does not immediately translate into improved performance on complex queries. For example, increasing 1p accuracy by 10 Hits@10 points does not correspond to increasing the accuracy on 2p, 3p, and other queries by 10 points. We hypothesize it might happen due to a simple pointwise scoring function (when processing a projection atom query $(h, r, ?)$) over entity vectors. We hypothesize that incorporating the graph structure into the decoding process might further improve the prediction quality. \n\n(A5) Alleviating the OOD generalization issue of GNNs when running inference on larger graphs than a model saw during training. This is a known issue of GNNs, such that any progress on this problem would lead to improvements in both inference-only and trainable models. For example, a recent work by Zhou et al [4] proposes some theoretical and practical insights into how to imbue GNNs with better generalization capabilities, but at the cost of increased computational complexity. We believe this is a promising area for new inductive GNN architectures.\n\nWe will add those discussions in the final version.\n\n> **Q1. Since the node representation is learned using the edge structural patterns, I am wondering if the model will output the same representations for different nodes (which I think is not ideal).**\n\nFor NBFNet-QE, the model *might* return the same representations for two nodes if they form an automorphism (including relation types). It is theoretically possible but very rarely happens in any real-world non-regular graphs such as our sampled datasets. We would further hypothesize that if two nodes do form an automorphism, it then does make sense to predict links to both of them.\n\nFor NodePiece-QE, two nodes tokenized with the same sequence of incident relations and/or anchors will have the same initial encoded representation (akin to a hash collision). The tokenization procedure will return the same tokens for two nodes only if they share exactly the same set of unique incident relations (and top-k nearest anchors if using a vocabulary of anchors). However, in our datasets we observe that more than 90% of nodes have unique tokenizations, which leads to unique initial encoded representations.\n\nWe can further alleviate the issue of exactly the same node representations by adding any message passing GNN on top of NodePiece hashes, i.e., it would work akin to a 1-WL kernel that assumes all input features are the same and after several message passing steps often returns unique node representations (subject to isomorphism, but, again, in any large non-regular graph this is practically rare). That is, a GNN enriches initial NodePiece representations by aggregating neighboring features and makes features more distinguishable. In practice, this hypothesis holds, as we experimentally see a better performance of NodePiece-QE w/ GNN compared to a plain NodePiece-QE.\n\nReferences: \n[1] Teru et al. Inductive Relation Prediction by Subgraph Reasoning. ICML 2020 \n[2] Zhang and Yao. Knowledge Graph Reasoning with Relational Digraph. WWW 2022 \n[3] Zhu et al. Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction. NeurIPS 2021 \n[4] Zhou et al. 
OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs. arxiv:2205.15117 ", " We thank the Reviewer for acknowledging the novelty of the studied problem, experimental setup, and presentation of the manuscript. \nBelow, we would like to comment on the weaknesses and limitations.\n\n> **W1. Both models are based on existing works on link prediction … The contribution to the model design is rather incremental.**\n\nAlthough the proposed baseline models are adaptations of published works to the inductive setup, we would like to emphasize that we did not claim any contribution as to the model design in this work. Instead, we aim at designing a principled approach towards studying the inductiveness (and defining the inductive logical query answering task) and possible ways to achieve it with existing graph representation learning methods that we categorize into inductive node representations and inductive relational structure representations. Choosing (or baking into a model) a suitable inductive bias is still an open challenge in representation learning and it is often accompanied with certain pros and cons on which we focused in this work. \n\nIn the paragraph describing contributions (lines 61-69) we introduce the new inductive complex query answering setup and important challenges of those models such as OOD generalization to larger inference graphs, scaling to large graphs, and the ability to recover new answers for train queries. Solving some of those challenges (eg, OOD generalization) is an ongoing effort of the Graph ML community, and, of course, we could not tackle all of them in a single piece of work proposing the “best model” that would immediately solve the task. Nevertheless, we believe that the proposed baselines demonstrate a non-trivial performance and would be strong competitors in their respective branches (such inference-only models and end-to-end trainable).\n\n> **W2. The proposed framework is only applicable to unseen nodes as the node representation learning process heavily depends on the edges (relations). It cannot deal with unseen relations.**\n\n(This issue is also brought up by Reviewer tzzT.) Albeit it is correct that the set of relations has to be fixed, we would not attribute this fact as a weakness of this particular work, but rather as a weakness (or rather a deliberate design choice) of the whole inductive link prediction literature with its influential examples like GraIL [1], RED-GNN [2], and NBFNet [3].\n\nWe follow that formulation of the inductive task in Section 3 and assume that the set of relation types remains the same. The problem of inductive reasoning over *unseen relations* is highly complex and deserves a dedicated research paper - to the best of our knowledge, this setup has not been studied on theoretical or practical (like simple link prediction) levels in recent ML venues. This is definitely a solid avenue for a future work.\n\n> **W3. The discussion on the efficiency-effectiveness trade-off between NodePiece-QE and NBFNet-QE is a bit trivial. Solutions/insights that can address this trade-off are expected + This limitation makes me doubt the usability of the proposed two approaches + Is there any way to marry the benefits of both models?**\n\nWe could not delve into this discussion due to space limitations, but we see at least five avenues towards bridging the trade-off.\n\n(A1) Improved pre-training of inference-only models. 
Please find in the revised version of the manuscript that we have updated the performance of NodePiece-QE w/ GNN on all datasets thanks to a better training strategy with almost 2X average EPFO improvement. With that, the gap between the inference-only NodePiece-QE w/ GNN and trainable NBFNet-QE is significantly reduced, and in some queries and dataset ratios, the inference-only model even outperforms the trainable model, e.g., in *ip* and *2u* queries (Figure 5 in Appendix C). We also observe an improved performance of NodePiece-QE w/ GNN on training queries. We believe this is a strong signal that better fitting of a simple link prediction model might result in significant performance gains (we briefly touch upon this fact in lines 292-294 in Section 5.3 of the manuscript but will include a more detailed discussion in the camera-ready version)\n\n", " We thank the Reviewer for acknowledging the soundness of our contribution to the field of query answering and KG completion, its positioning among related work, and overall clarity of the manuscript. \nWe would like to ask whether you have any further question so we might convince you to increase the final score?\n", " > **Performance in Table 2 on the large 2M node graph shows the accuracy of <10%. Why is this considered non-trivial? Is trivial just considered random guessing performance? There should be better baseline estimates that can scale to this graph size with graph neighborhood sampling.**\n\nYes, the performance is non-trivial w.r.t random guessing - the probability of an answer to be in top-100 predictions over 2M nodes is $100 / 2*10^6 = 0.00005$. We would like to emphasize the hardness of this large-scale task, i.e., at inference time a model sees 500k new nodes and 4M new edges, and we perform ranking against all 2M nodes for each query.\nPlease note that we have updated Table 2 in the manuscript with (i) new numbers for NodePiece-QE improving the average EPFO H@100 from 6.6 to 9.2 thanks to a better pre-training strategy as well as (ii) added a preliminary version of a full-batch NodePiece-QE w/ GNN. \n\nFor any relational GNN, GraphSAGE-style neighborhood sampling won’t help much here (most of sampling approaches were created for node classification) in the link prediction setup as graph diameter is relatively small, and with a high number of negative samples, the input edge index quickly becomes comparable to the full graph with 3- and higher hops neighborhood sampling. Besides, at inference time, we still have to rank all 2M nodes and it is still more efficient to run a GNN in the full-batch mode. That said, during the rebuttal period, we managed to pre-train a 50d NodePiece-QE w/ GNN in the full-batch mode for 1 epoch on a 48 GB GPU (which takes about 20 hours) and reported initial results - the 1p performance is surprisingly high while complex queries performance is quite low. We commit to running pre-training in the distributed mode for a longer time and report updated numbers in the camera-ready version.\n\n\n[1] Ren et al. Query2box: Reasoning over Knowledge Graphs in Vector Space using Box Embeddings. ICLR 2020 \n[2] Ren and Leskovec. Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs. NeurIPS 2020 \n[3] Sun et al. Faithful Embeddings for Knowledge Base Queries. NeurIPS 2020 \n[4] Zhang et al. Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning. NeurIPS 2021 \n[5] Zhou et al. 
OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs. arxiv:2205.15117 ", " > **Can you flesh out how the mechanism of NBFNet is a labeling trick?**\n\nSure, the idea of the labeling trick [4] assumes that certain nodes of interest have some unique “label” (often expressed as an additional feature vector) to break neighborhood symmetries that might end up with nodes having the same final updated vector.\nNBFNet is designed for the link prediction task, particularly, for answering simple queries $(h, r, ?)$. For *each* training or inference query, NBFNet initializes a graph with **unique** starting node features - for instance, for a $(h, r, ?)$ query, one single node $h$ is initialized with a “query” vector corresponding to relation $r$ while **all other** nodes in a graph are initialized with all-zero vectors (edges in the graph retain their relation type features). Therefore, we uniquely label a node $h$ with a unique query vector and perform message passing through NBFNet layers to get final node states and a distribution over all nodes as possible answers - we denote it as a *conditional distribution* $p(t|h,r)$ since the predicted tail node $t$ is conditioned on the unique initialization of head node $h$ and query relation $r$. The labeling trick is powerful but computationally expensive, e.g., at inference, we need a separate GNN forward pass over the whole graph for each query.\n\nWe’d add that the query embedding matrix is not the same as edge features used in the NBFNet layer - the edge features are obtained either via a layer-specific linear projection $f: \\mathcal{R}^d \\rightarrow \\mathcal{R}^{|R| \\times d}$ that maps from a query vector to a matrix of all relation types, or via a layer-specific learnable embedding matrix of all edge types. For example, in a 3-layer NBFNet we would have 1 $\\mathcal{R}^{|R| \\times d}$ query matrix and 3 more relation embedding matrices of the same shape, one per layer.\n\n> **Are the validation and test queries guaranteed to require missing edges to find the full answer set? Does the query sampling ensure this?**\n\nYes, it is guaranteed by the query sampling procedure that validation and test queries have at least one missing edge to predict at inference time. Without predicting this missing edge, a model won’t be able to recover “hard” answers and, therefore, the full answer set.\n\n> **How do you explain the initial increase in performance as the graph grows (e.g. in Fig 3)?**\n\nPlease note that we have updated the manuscript with new results of NodePiece-QE w/ GNN. \nGenerally, we observe that GNN-based models like NodePiece-QE w/ GNN and NBFNet-QE are more susceptible to this performance bump. This performance bump phenomenon is indeed interesting and we definitely plan to investigate it further to make sure it is not just a dataset noise. So far, we hypothesize the final performance on larger inductive graphs is affected by two components: (1) positively by reducing the average node degree in the inference graphs while the number of unique relations stays more or less the same (440-470 relations, more stats in Table 3 in Appendix B), such that edge types are more “informative“; and (2) negatively by a known fact that GNNs have troubles generalizing to graphs larger than training ones - a recent work by Zhou et al [5] shows interesting theoretical reasons for that. 
In simpler words, we hypothesize that the performance grows as long as the positive component outweighs the negative effect of OOD generalization, and eventually all GNN-based models deteriorate as the negative effect prevails. We will include this discussion in the camera-ready version.\n\n\n", " We thank the Reviewer for acknowledging the novelty and presentation of the manuscript. \nBelow, we would like to comment on the weaknesses and answer the questions. \n\n> **There should be more discussion on evaluating FOL execution systems in the inductive setting and what the right metric for the task is… Can we trust a system that does not achieve 100% accuracy on \"easy\" answers?**\n\nWe agree that existing evaluation protocols might be far from perfect - in this work, we mostly follow the transductive query answering literature, i.e., seminal works Query2Box [1] and BetaE [2] who employed link prediction metrics MRR and Hits@k in evaluation, and the community decided to stick to that. The problem of logical query answering is definitely more challenging than simple link prediction and we believe it is a reasonable assumption that MRR / Hits@k might not capture all characteristics of a more complex task. \n\nThat said, we observe that more refined metrics started to appear, e.g., “faithfulness” introduced by EmQL [3] as an ability to recover easy answers. In the inductive setup where the inference graph extends the training one, we extend the notion of “faithfulness” and measure whether query answering models are able to identify new “easy” answers to known training queries by traversing newly added entities and edges (no link prediction required here and no “hard” answers) - this is the experiment reported in Section 5.3.\n\nGenerally, we are inclined to trust predictions of a system that does not perfectly recover all easy answers - some models / inductive biases / hyperparameters might work better in generalization than entailment. Still, it is always possible to equip a well-generalizable model with a purely symbolic model (edge traversal) that would perfectly solve easy answers but would be rather useless for hard answers. \n\n> **the evaluation metric should penalize any entity being predicted as an answer before all the \"easy\" answers are enumerated … how many times does the system predict a \"hard\" answer higher than an \"easy\" answer?**\n\nIndeed, measuring whether all “easy” answers can be found before guessing “hard” ones makes a lot of sense - we believe ROC AUC over **unfiltered** predictions including both easy and hard answers fits here the best. We performed an inference experiment on the reference 175% dataset (the trained models reported in Table 1) computing ROC AUC of unfiltered predictions.\n\nThe idea of computing ROC AUC here is as follows:\nSuppose we have a list of unfiltered raw scores from which we extract scores of all easy and hard answers. Suppose we have a query with 4 easy and 1 hard answer:\n```\n[5, 6, 7, **8**, 32] # 8 is a rank of a hard answer while 5, 6, 7, 32 are ranks of easy answers\n```\nWe then create binary labels for the scores assigning 1 to the hard answers, e.g. \n```\n[0, 0, 0, 1, 0]\n```\nGiven those two arrays, we then compute the ROC AUC score that would measure how many hard answers are ranked after easy answers, e.g., in our example ROC AUC is 0.75. 
Note that the score does not depend on actual values of ranks, that is, the metric will be high when easy answers are eg ranked 1000-1004 as long as hard answers are ranked 1005 and lower. So ideally we would still need to pair ROC AUC with MRR to see where easy and hard answers are ranked absolutely.\n\nWe compute ROC AUC for each query and average them over each query type thus making it **macro-averaged ROC AUC** (we note that queries with few answers and one-two misplaced hard predictions give a low score, so ideally we’d need some normalization by the answer set size). Our experimental results on all query types on the reference 175% dataset:\n\n| | avg_p | avg_n | 1p | 2p | 3p | 2i | 3i | pi | ip | 2u | up | 2in | 3in | inp | pin | pni |\n| ------------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |\n| NodePiece-QE | 0.653 | | 0.609 | 0.656 | 0.666 | 0.628 | 0.627 | 0.645 | 0.681 | 0.677 | 0.686 | | | | | |\n| NodePiece-QE w/ GNN | 0.675 | | 0.688 | 0.689 | 0.667 | 0.655 | 0.624 | 0.646 | 0.69 | 0.713 | 0.701 | | | | | |\n| NBFNet-QE | 0.978 | 0.901 | 0.997 | 0.981 | 0.962 | 0.987 | 0.978 | 0.975 | 0.975 | 0.982 | 0.965 | 0.919 | 0.884 | 0.888 | 0.913 | 0.904 |\n\nNBFNet-QE performs almost perfectly w.r.t. ROC AUC as it was trained on complex queries. NodePiece-QE models are pretty good for inference-only models that were only trained only on 1p simple link prediction and have never seen any complex query at training time. We added this experiment to the Appendix E of the updated manuscript and will add more ROC AUC results for other dataset splits in the camera-ready version.\n\n", " > **L1. .. consider discussing … to what extent will this model work with (i) unseen relations, (ii) sparse graphs with very few edges for nodes, (iii) types of supported reasoning operations**\n\nThanks for bringing up those points!\n\n(i) Current inductive link prediction literature (e.g., GraIL [4], RED-GNN [5], NBFNet [3], and many more) assumes the set of relations is fixed and is not extended at inference time. We follow that formulation in Section 3 and assume that the set of relation types remains the same. The problem of inductive reasoning over *unseen relations* is highly complex and deserves a dedicated research paper - to the best of our knowledge, this setup has not been studied on theoretical or practical (like simple link prediction) levels in recent ML venues. This is definitely a solid avenue for a future work.\n\n\n(ii) GNNs, on which we base our models (NodePiece-QE w/ GNN employs CompGCN [6] and NBFNet-QE employs NBFNet, respectively) demonstrate a reasonably good performance on sparse graphs depending on chosen inductive biases. For example, NBFNet in the original paper shows high performance on inductive link prediction datasets where the average degree might be as low as 2. Our sampled datasets, in turn, vary in average degree from 5 to 37 (Appendix B).\n\n\n(iii) NodePiece-QE + CQD supports Existential Positive First Order (EPFO) queries - with projection, intersection, and union operators (lines 179-181 in the manuscript, Section 4.1). NBFNet-QE supports EPFO and queries with negation (lines 213-215, Section 4.2). Generally, inductive representation approaches (be it node or relational structure approaches) are quite orthogonal to the set of supported logical operators. 
That is, those approaches provide an *encoder* that yields representations, and it is a task of a *query decoder* to leverage those representations - whatever set of operators is supported by the query decoder, it would work in the inductive setup. That said, if CQD is extended to work with negation queries, NodePiece-QE + CQD will work with negation queries. It is also surely possible to swap CQD with another query answering decoder with its own set of logical operators.\n\nWe will include that discussion in the camera-ready version as well.\n\nReferences:\n\n[1] Ren and Leskovec. Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs. NeurIPS 2020. \n[2] Arakelyan et al. Complex Query Answering with Neural Link Predictors. ICLR 2021. \n[3] Zhu et al. Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction. NeurIPS 2021. \n[4] Teru et al. Inductive Relation Prediction by Subgraph Reasoning. ICML 2020 \n[5] Zhang and Yao. Knowledge Graph Reasoning with Relational Digraph. WWW 2022 \n[6] Vashishth et al. Composition-based Multi-Relational Graph Convolutional Networks. ICLR 2020. \n", " > **Q1. For both methods, do you make the assumption that \"anchor\" entities are seen at training time?**\n\nNo, “anchor” nodes might be totally new and unseen at inference time. In complex logical queries, anchor nodes are merely the starting entities from which a query is executed. At training time, the models only see the nodes from the training graph. At inference time (validation/test), when a graph is updated with many new nodes and edges, validation/test queries might start from those recently added nodes. In fact, in the first experiment reported in Section 5.2, we assume anchor entities of inference queries are new and unseen, and models have to predict missing links among unseen nodes; while in Section 5.3 we study the performance of original training queries (with anchors from the training graph) but executed on the larger inference graphs that add new answers to original training queries.\n\n> **Q2. Could you please provide more information about the CQD-Beam method for decoding? How are the entity embeddings used in CQD-Beam?**\n\nApologies, the description didn’t make it to the main text due to space constraints - CQD [2] is a Continuous Query Decomposition method of answering complex queries with two variations - Continuous Optimization (CQD-CO) and Beam Search (CQD-Beam), one of the state-of-the-art query answering methods published at ICLR 2021. We employ CQD-Beam due to its inference-only non-parametric nature. CQD does not require training on actual complex queries and takes as input only entity and relation embedding matrices pre-trained on a simple link prediction task (1p in terms of complex queries) with a fixed scoring function. CQD is agnostic to the source of those pre-trained embeddings which allows us to replace transductively trained embeddings (as in the original paper) with inductive NodePiece representations (still pre-trained on 1p link prediction)\n\nLogical queries (with projection, intersection, and union) are decomposed into atomic *projection queries* (aka simple 1p link prediction that we solve by scoring each single triples with a scoring function) and non-parametric logical operations on top of them (t-norms and t-conorms that are essentially algebraic operations over two vectors). 
Projection queries are executed as a scoring function (ComplEx in our case) and induce a distribution of all entities in the current graph as possible answers. In queries that require multi-hop reasoning, we keep top-k scored entities as intermediate variables and execute further atomic queries using those top-k ranked nodes as starting nodes - this is essentially a Beam search well-known in natural language generation, hence the name CQD-Beam.\n\nWe will extend the Section 4.1 with a more detailed description of CQD-Beam in the camera-ready version.\n\n> **Q3. In the NBFNet-QE method, the authors made an assumption that all edges (KB triples) that are required for prediction are present in the graph. How will your model be affected by an incomplete KB?**\n\nWe might have chosen vague formulations - could you please point to the line in the manuscript where this assumption is made so we could clarify it? Generally, the task of complex query answering assumes the graph is incomplete by default and we have to predict missing links on the fly (at inference) to find *hard* answers. Both NodePiece-QE and NBFNet-QE are designed to work in incomplete graphs, so it’s totally fine.\n\nNBFNet-QE does indeed rely on the relational structure of a query (combinations of colored relations in Figure 2), but it does *not* require all edges to be physically present in the graph - in fact, the projection operator modeled with NBFNet [3] is able to predict those missing edges (or, to be more precise, predict a correct tail node of a query $(h, r, ?)$). This is achieved by the training procedure (similar to the original NBFNet link prediction objective) where, given an atomic projection query $(h, r, ?)$, we remove from the edge index all outgoing from $h$ edges of type $r$, and NBFNet has to learn to infer this edge of type $r$ outgoing from $h$ to possible intermediate or final answer nodes.\n\n\n", " We thank the Reviewer for the valuable feedback and for acknowledging the plausibility of the proposed inductive models for complex query answering over unseen nodes.\n\nWe would like to clarify the Reviewer's concerns about evaluation and comments on the limitations.\n\n>**W1. They did not compare the performance of their model to any other external baselines, so it's not clear how well the proposed models perform.**\n\nAs you correctly mentioned in the summary, previous QE methods are unable to handle the inductive complex query answering task. To the best of our related work knowledge, there do not exist any inductive QE baselines with which we could directly compare NodePiece-QE and NBFNet-QE in the proposed task. \n\nThat said, to further verify the effectiveness of inductive learning methods vs transductive ones, we trained BetaE [1], one of the most prominent state-of-the-art _transductive_ logical query answering models, with the convention that new, unseen entities in the inference graphs are initialized with random embeddings. It means that during training the model only updates the embeddings of training nodes (and some internal parameters), but at inference time it has to reason over both trained seen and randomly initialized unseen nodes. \nThe BetaE configuration is configured similar to the best performing setup on transductive datasets including 400d embeddings and 200k training steps. 
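Returning to the Q2 answer above, the beam-search decoding can be summarized in a minimal sketch (illustrative names and a product t-norm; this is not the actual CQD implementation):

```python
import torch

def cqd_beam_chain(anchor, relations, score_fn, k=10):
    """Answer a chain query (anchor, r1, ?x).(?x, r2, ?y) by beam search:
    score all entities per projection atom, keep the top-k as intermediate
    variables, and combine atom scores with a product t-norm."""
    frontier = [(anchor, 1.0)]  # (entity id, accumulated score)
    for r in relations:
        candidates = []
        for head, acc in frontier:
            scores = score_fn(head, r)      # scores over all entities in the graph
            top = torch.topk(scores, k)
            for s, tail in zip(top.values, top.indices):
                # sigmoid maps raw scores into [0, 1] before taking the product
                candidates.append((tail.item(), acc * torch.sigmoid(s).item()))
        frontier = sorted(candidates, key=lambda c: -c[1])[:k]
    return frontier  # top-k candidate answers with their t-norm scores
```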
The results on a reference 175% inductive split (reported in Table 1 & Section 5.2) are summarized below (and we also updated Table 1 in the manuscript and Appendix D with precise hyperparameters):\n\n| | avg\\_p | avg\\_n | 1p | 2p | 3p | 2i | 3i | pi | ip | 2u | up | 2in | 3in | inp | pin | pni |\n| -------------------- | ------ | ------ | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | --- |\n| *BetaE (transductive)* | *1.8* | *0.4* | *2.8* | *0.8* | *0.4* | *3.2* | *5.1* | *2.1* | *1.1* | *0.3* | *0.3* | *0.3* | *0.7* | *0.4* | *0.3* | *0.2* |\n| NodePiece-QE | 11.3 | \\- | 24.1 | 10.5 | 10 | 11 | 12.8 | 9.1 | 8.6 | 7.3 | 8.3 | \\- | \\- | \\- | \\- | \\- |\n| NodePiece-QE w/ GNN | 37.4 | \\- | 56.5 | 27.1 | 16 | 49.3 | 57.1 | 35.7 | 31.8 | 39.7 | 23.6 | \\- | \\- | \\- | \\- | |\n| NBFNet-QE | 51.1 | 31.4 | 66.1 | 40.9 | 31.2 | 73 | 83.3 | 58.3 | 41.3 | 37.8 | 27.8 | 31.1 | 44.3 | 28.4 | 25.2 | 28 |\n\nClearly, a baseline transductive model does not demonstrate any reasonable performance on inductive tasks. The numbers, in turn, demonstrate that the proposed inductive baselines perform reasonably well and are non-trivial.\n\nAs the crux of inductive reasoning in obtaining good representations of unseen nodes at inference time, our query answering framework might employ any existing QE approach known in transductive models as long as there is _some_ inductive representation mechanism. In this work, we paired NodePiece with CQD, an inference-only approach, to demonstrate the capabilities of inductive representation learning in the challenging inference-only setup, that is, when we do not train a model on complex queries at all. On the other hand, it is surely possible to pair NodePiece with BetaE (or anything else from the literature) where NodePiece will serve as an encoder learning inductive representations while BetaE (or anything else from the literature) will be performing query answering based on those representations. \n\n> **The reported metric, i.e. Hits@10, is highly dependent on the incoming / outgoing degree of the \"anchor\" nodes.**\n\nWe would like to clarify that we compute Hits@k in the filtered setting - that is, if there are several true answers of a query, we mask out the scores of all the other true answers with (-inf) such that they do not interfere with the ranking of a target answer. Hence, the filtered Hits@k metric does not depend on incoming/outgoing degree. Besides, ranking metrics like MRR and Hits@k are rather standard in the query answering literature [1,2]. In the Appendix C, we also report a more challenging Hits@3 metric. Finally, following the question of Reviewer YzXG, we compute the ROC AUC score of easy vs hard answers of their original unfiltered scores showing that all proposed models are able to identify easy answers (via edge traversal) before hard answers (via link prediction).\n\n\n", " The paper proposed a graph-based reasoning task in an inductive setting. Different from previously studied transductive reasoning task, a reasoning task contains unseen entities at inference time. Previous QE methods are unable to handle unseen entities. The paper discussed two methods to handle unseen entities at inference time, NodePiece-QE and NBFNet-QE. In NodePiece-QE, the model computes embeddings for unseen entities from the relations of their neighbors. In NBFNet-QE, the model does not explicitly compute entity embeddings, but instead matching the graph structures locally. 
Both methods are plausible in solving the problem of novel entities. Experiment results show that the proposed methods achieve reasonable results. \n\nHowever, I am a bit concern about the soundness of experiments. The authors experimented their models on a modified datasets to test models' ability in handle novel entities. They did not compare the performance of their model to any other external baselines, so it's not clear how well the proposed models perform. The reported metric, i.e. Hits@10, is highly dependent on the incoming / outgoing degree of the \"anchor\" nodes. Please consider carefully compare your models with other external baselines. 1. For both methods, do you make the assumption that \"anchor\" entities are seen at training time?\n\n2. Could you please provide more information about the CQD-Beam method for decoding? How are the entity embeddings used in CQD-Beam? CQD-Beam is not a widely known methods so it may be confusing to readers.\n\n3. In the NBFNet-QE method, the authors made an assumption that all edges (KB triples) that are required for prediction are present in the graph. How will your model be affected by an incomplete KB?\n The authors claim the most important limitation of this work is the tradeoff between performance and computation complexity. While this is true, the authors should consider discussing more general limitations of this work, e.g. to what extent will this model work with unseen relations, sparse graphs with very few edges for nodes, types of supported reasoning operations, etc. More importantly, please compare your model with external baselines to justify your experiment results.", " This work proposes the task of Inductive Logical Query Answering which is the natural next step after past work on:\n* KG Completion: Predicting new triples in a knowledge graph (KG) with a fixed set of entities\n* Inductive KG Completion: Predicting triples for new entities (and some of their triples) added to an existing KG\n* Logical Query Answering: Executing First-Order Logic (FOL) queries on a KG where we assume that the KG is incomplete and the set of answers contains more entities than those that can be predicted from just FOL query execution\n\nInductive Logical Query Answering is then executing FOL queries in a KG where new entities are added at inference time. The inductive settings are motivated by real-world scenarios where KGs are constantly being updated with new information and new entities.\n\nThe authors then propose two approaches (each with its own strengths and weaknesses) to solve the task. Strengths\n\n[+] Inductive query answering is well-motivated as an extension of past work. This work is also well-placed with respect to related work.\n\n[+] The paper is well-presented and descriptions are mostly sufficient to understand the work.\n\n[+] The baseline approaches described are suitable for the task and have complementary strengths and weaknesses.\n\nWeaknesses\n\n[-] There should be more discussion on evaluating FOL execution systems in the inductive setting and what the right metric for the task is. \n\nReporting accuracy of the systems on train queries at inference is an important first step toward establishing \"faithfulness\". Can we trust a system that does not achieve 100% accuracy on \"easy\" answers?\n\nSimilarly, just reporting Hits@10 (after filtering out \"Easy\" answers) hides under the rug what it means to predict a \"hard\" answer at a higher rank than an \"easy\" answer (before filtering). 
Personally, I am of the opinion that the evaluation metric should penalize any entity being predicted as an answer before all the \"easy\" answers are enumerated. This ensures trust in the system to first execute the FOL query when possible and only then resort to \"guessing\" more answers by filling in missing edges. * Line 208: Can you flesh out how the mechanism of NBFNet is a *labeling trick*?\n* This is somewhat implied from the text. Are the validation and test queries guaranteed to require missing edges to find the full answer set? Does the query sampling ensure this?\n* How do you explain the initial increase in performance as the graph grows (e.g. in Fig 3)?\n* Line 347: Performance in Table 2 on the large 2M node graph shows the accuracy of <10%. Why is this considered non-trivial? Is trivial just considered random guessing performance? There should be better baseline estimates that can scale to this graph size with graph neighborhood sampling.\n* Following up on the discussion of better evaluation metrics, how many times does the system predict a \"hard\" answer higher than an \"easy\" answer? Is this a strict constraint that should be applied when evaluating systems on these tasks? The weakness section highlights a missing discussion on the evaluation of systems in the inductive setting. This is a limitation of the query answering task in general and not just of this work. However, given that this work adds an additional component of inductive KG growth, this discussion seems relevant and necessary for completeness.", " This work studies the inductive query answering task where inference is performed on a graph containing new entities with queries over both seen and unseen entities. Which is a sound contribution to the field of query answering and knowledge graph completion.\n\nThe authors argue that several existing studies on complex query answering with logical operators have been conducted in transductive mode where the set of entities does not change at inference time.\n\nThe aim of this study is to extend the complex logical query answering to inductive representation learning algorithms. Focusing on the inductive setup where an inference graph is a superset of a training graph such that:\n- Inference queries require reasoning over both seen and new entities; \n- Original training queries might have more correct answers at inference time with the addition of new entities.\n\nIn this work;\n- They show that inductive models are able to perform logical reasoning at inference time over unseen nodes generalizing to graphs up to 500% larger than training ones. \n- They also explore the efficiency-effectiveness trade-off, and the findings indicate that the inductive relational structure method generally achieves higher performance.\n- Also the inductive node representation method is able to answer complex queries in the inference regime only without any training on queries and scale to graphs of millions of nodes.\n - The paper extends the idea of inductive representation learning algorithms to logical query answering where inference queries require reasoning over both seen and new entities which is a sound contribution to the field of query answering and knowledge graph completion. \n- The authors also cite/explain the related works in both knowledge graph completion and complex query answering. 
The paper also makes a good differentiation between the proposed approach and the existing work.\n\n- The paper is well written, with elaborative figures and experimental results support the proposed method.\n N/A N/A", " In this paper, the authors studied the problem of logical query answering over knowledge graphs under an inductive setting. Existing methods usually focus on the transductive setting where all nodes and edges are seen during the training stage. This work goes beyond and investigates the scenario where there are unseen nodes in the validation and testing stages. To tackle this problem, they proposed two methods and demonstrated their effectiveness via extensive experiments. ### Strength\n- The studied problem is novel and interesting. The inductive setting is novel and more generalized.\n- They set up new benchmarks for the inductive logical query task by constructing two new datasets, one using FB15K-237 and a large one using OBG WikiKG.\n- The proposed NBFNet-QE performs well on the new FB15K-237 dataset (The numbers are relatively higher compared to the transductive setting) and outperforms NodePiece-QE by a large margin. This finding is interesting and may provide directions for future improvement.\n- The presentation is clear and easy to follow.\n\n### Weakness\n- The contribution to the model design is rather incremental. Both models (NodePiece and NBFNET) are based on existing works on link prediction (a simpler yet closely related task). The query answering part is also based on existing transductive logical query answering models (e.g., CQD) on KG. \n- The proposed framework is only applicable to unseen nodes as the node representation learning process heavily depends on the edges (relations). It cannot deal with unseen relations.\n- The discussion on the efficiency-effectiveness trade-off between NodePiece-QEand NBFNet-QE is a bit trivial. Solutions/insights that can address this trade-off are expected.\n - Since the node representation is learned using the edge structural patterns. I am wondering if the model will output the same representations for different nodes (which I think is not ideal). \n- Is there any way to marry the benefits of both models?\n Although the NBFNet-QE can achieve good performance, it is not scalable, and no results are reported for the WikiKG dataset. However, the more efficient NodePiece-QE performs rather worse. This limitation makes me doubt the usability of the proposed two approaches." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "a3SbpB61uc", "qGfbE2rMfQ4", "gsLDVbtnFHC", "soEikxEsk7h", "oWx_x0LvcSV", "XqTaP3kTYn", "aAxW4MsAkSt", "cVY0WL46uV-", "MTd_7R_WP1x", "nips_2022_-vXEN5rIABY", "_Nf1cD8YXfu", "V7wszp_QQHJ", "bOmJy3B1Zrf", "aAxW4MsAkSt", "XqTaP3kTYn", "skRJ9zJA4J", "H9GfKHAHvI", "MTd_7R_WP1x", "QlQyyvwltlM", "nips_2022_-vXEN5rIABY", "nips_2022_-vXEN5rIABY", "nips_2022_-vXEN5rIABY", "nips_2022_-vXEN5rIABY" ]
nips_2022_4v7PSPp-TAe
Regularized Gradient Descent Ascent for Two-Player Zero-Sum Markov Games
We study the problem of finding the Nash equilibrium in a two-player zero-sum Markov game. Due to its formulation as a minimax optimization program, a natural approach to solve the problem is to perform gradient descent/ascent with respect to each player in an alternating fashion. However, due to the non-convexity/non-concavity of the underlying objective function, theoretical understandings of this method are limited. In our paper, we consider solving an entropy-regularized variant of the Markov game. The regularization introduces structures into the optimization landscape that make the solutions more identifiable and allow the problem to be solved more efficiently. Our main contribution is to show that under proper choices of the regularization parameter, the gradient descent ascent algorithm converges to the Nash equilibrium of the original unregularized problem. We explicitly characterize the finite-time performance of the last iterate of our algorithm, which vastly improves over the existing convergence bound of the gradient descent ascent algorithm without regularization. Finally, we complement the analysis with numerical simulations that illustrate the accelerated convergence of the algorithm.
Accept
The paper studies the problem of finding the Nash equilibrium of a two-player zero-sum Markov game. Despite nonconvexity, this min-max optimization satisfies the PL condition and hence it is "easy" to solve. The authors rigorously studied the iteration complexity of this problem. The reviewers found the paper well organized and self-contained. The reviewers also found the contributions of the work solid. I recommend acceptance of the work. It would be great if the authors can address the reviewer's concerns and particularly the following concerns in the final version of the paper (most of which were answered in the discussion forum, but not in the paper): - Please include discussions on various related works (such as[Cen 2021], [Perolat 2015], [Yu 2020] in the paper.) Please avoid having a "laundry list" type of citations, and do your best to discuss each of these works to the extent possible in the paper given one additional page (as done in the discussion forum). In addition, some of the references are closely related. For example, [Yang et al 2020]'s two-sided PL setting is closely related to the paper. Also, [Ostrovskii et al 2021] considers non-Euclidean geometries which is closely related to entropy regularization. - Please include more details on the experiments (see the individual reviews) - Please include the discussion on Assumption 2 (see the individual reviews)
train
[ "g9PR4n4YizQ", "qIj7QWvrka1", "CFJYvRd2-rj", "PcbgxIoetN", "zLknPT-bNyc", "4_cW-71k0YT", "gfeLn_YR7gm", "5AsLGrfWnW", "epqk9yIEyKm", "vuwS3_bENB", "r1AsSAqJmnG", "_VEA53u8QOrU", "xFHZFLF__dm", "KPoRmJ2IQHO", "lEGeYfAZVX" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We do not understand why the reviewer mentioned: \"there is no theoretical advantage of the author's algorithm compared with [Yu 2020]\"? The work in [Yu 2020] is the value-based method and provides regret analysis. On the other hand, our paper is about policy gradient descent ascent and study finite-time analysis. The nature of these two studies is very different, and both are important areas in machine learning literature. Directly comparing our work with [Yu 2020], in our opinion, is inappropriate, like comparing apple vs orange. ", " We want to point out that using function approximation, such as neural networks (NNs), in the context of game theory is much more challenging than a single MDP problem. Indeed, in minimax problem considered in this paper, if we use NN, the objective function is in general nonconvex and nonconcave. In this case, the notion of a stationary point is not well defined in the literature. Even if we can adopt some definition of a stationary point in the existing work, it is also unclear how can we relate this point to the Nash equilibrium of the game. Moreover, the gradient descent ascent algorithm can also diverge in this general setting. On the other hand, in a single MDP problem, the existing literature always shows that gradient descent converges to a stationary point when using function approximation. Even in this simple setting, it is still challenging to characterize the quality of this stationary point. Thus, we do not think that an extension to the case of function approximation would be trivial in the context of game. Of course, one can use function approximation and the concept of our algorithm will remain the same. However, theoretical guarantees are largely open. \n\nIt is also worth noting that even in the tabular setting, the theoretical understanding of gradient descent ascent is very sparse. Indeed, as mentioned our results significantly improve the existing results by using different techniques. \n", " We thank the reviewer for the follow-up feedback.\n\nWe very much agree with the reviewer that the investigating the convergence of the algorithm under NN is an interesting future direction. However, it is worth noting that the understanding of reinforcement learning algorithms for both MDP and Markov game needs to start from the tabular case which lays the foundation (for example, see [Bai 2019, Bhandari 2019, Cen 2021]). The extension of the result to the NN function approximation is certainly non-trivial, and so is the extension to linear function approximation and/or stochastic gradients. Exactly for this reason, we think it is more proper to leave NN as a future work and present the paper as it is, as it may be relevant to the researchers in the community seeking different extensions.\n\nWith regards to the experiments, the sole purpose of the experiment in the paper is to suggest that the algorithm may work for non-completely mixed games and to motivate the future work in this direction. On the comparison to [Yu 2020], we note that [Yu 2020] derives a regret bound in the number of episodes (and a large number of parameter updates are carried out in each episode) and our result is a convergence rate in the number of parameter updates. Such differences make the direction comparison of the results inapplicable. In addition, the regret bound in [Yu 2020] does not imply a last-iterate guarantee, while our algorithm guarantee the convergence of the last iterate.\n\nReference\n\nBai, Yu, and Chi Jin. 
\"Provable self-play algorithms for competitive reinforcement learning.\" International conference on machine learning. PMLR, 2020.\n\nBhandari, Jalaj, and Daniel Russo. \"Global optimality guarantees for policy gradient methods.\" arXiv preprint arXiv:1906.01786 (2019).\n\nCen, Shicong, Yuting Wei, and Yuejie Chi. \"Fast policy extragradient methods for competitive games with entropy regularization.\" Advances in Neural Information Processing Systems 34 (2021): 27952-27964.", " Thank the author's detailed response for $\\nabla J$. However, most of my concerns remain unsolved.\n\n1. I agree with the author that completely mixed Markov games are considered a hard problem to solve. Therefore I still recommend the authors include the \"completely mixed Markov games\" in the title, which is more rigorous and gives a better summarization of this work. \n\n2. Nowadays, policy optimization algorithms are often applied with NN or features, but I don't know how to extend this paper's result to practical settings, e.g., Lemma 4 will no longer hold for the model with function approximations. \n\n3. The author said in response c) that policy optimization algorithms are \"an important subject to study due to the wide use of such algorithms in practice.\" Therefore, the authors should add more practical experiments to show their algorithm indeed works in practice. I believe this is important since there is no theoretical advantage of the author's algorithm compared with [Yu 2020]. \n", " We thank the reviewer for the follow-up. We agree that if a Bernoulli random variable with probability .5 is sampled and depending on the outcome $\\pi_1$ or $\\pi_2$ is used to generate the entire trajectory, then we get a value function of $.5(V^{\\pi_1}+V^{\\pi_2})$. In fact, this argument holds for any arbitrary function $V$ (even if it is not the objective function of a policy optimization problem). \n\nTo show $V$ is linear, we want to show $\\alpha V^{\\pi_1}+(1-\\alpha)V^{\\pi_2} = V^{\\alpha\\pi_1+(1-\\alpha)\\pi_2}$. The $\\pi_3$ suggested by the reviewer is actually not a linear combination of $\\pi_1$ and $\\pi_2$ but rather by construction the policy that produces the LHS of the equation, and the argument shows that the LHS equals the LHS and does not touch on the RHS. ", " a) thank you for your answer now I understand your point better. \n\nHere is mine: The non-convexity depends on how you define the convex-combination of $\\pi_1$ and $\\pi_2$. By using the definition of [Agarwal 2021] $\\pi_3 = .5(\\pi_1+\\pi_2)$ is defined as $\\pi_3(a|s) = .5\\pi_1(a|s)+.5*\\pi_2(a|s)$. In that case, I agree that $V^\\pi$ may be non-linear. \n\nHowever, if you define the convex combination of $\\pi_1$ and $\\pi_2$ as the policy where you decide *at the beginning* which policy you will follow and then follow this policy *all along the trajectory*, then it seems that $V^\\pi$ will be linear. More precisely $\\pi_3 = .5(\\pi_1+\\pi_2)$ is defined as the policy that samples a benouilli with probability $.5$ and follow either $\\pi_1$ or $\\pi_2$ depending on the result on that bernouilli. \n\nDoes it make sense?", " a) In response to the reviewer's question on the non-concavity of the value function, we note that the linear relation only holds in matrix games (single-state Markov game). For general Markov games, the reward collected in the current state obeys this linear relation, but $\\pi_1$ and $\\pi_2$ can result in different transitions to the future states and the expected future reward is non-linear. 
In Lemma 3.1 of [Agarwal 2021], counter-examples show that the objective function is not convex or concave even in a single-agent MDP.\n\nb) Regarding the question on the tightness of our upper bound, we do not have any lower bound yet, and we are not aware of any result in the literature that we can invoke with modest changes to derive a lower bound. In fact, we do believe that our rate is likely sub-optimal. As we mention in the third bullet in our response to reviewer KhY2, value-based methods and policy optimization methods are the two main classes of algorithms that achieve comparable convergence guarantees in single-agent MDPs. Value-based methods for Markov games and MDPs have been shown to have similar convergence rates, but our rate $\mathcal{O}(k^{-1/3})$ is far from the optimal convergence rate of the policy gradient algorithm for MDPs.\n\nc) We thank the reviewer for the thoughtful comment on the connection of Assumption 2 to the MaxEnt Nash. We have investigated a connection similar to the one the reviewer suggests. In a single-agent MDP, we can show that the smallest entry of the MaxEnt optimal policy is a lower bound on the optimal policy of regularized MDPs for any $\tau>0$. But in a two-player Markov game, this is not the case, and we have examples where the smallest entry of the MaxEnt Nash is larger than the smallest entry of the regularized Nash under some small $\tau>0$. We do hope to formally translate Assumption 2 to a condition on the MaxEnt Nash. At the same time, as we discuss in the second bullet in our response to all reviewers, we are refining the PL-type condition derived in Lemma 4 to completely remove Assumption 2.\n\nd) On the inaccessibility of the exact gradient in practice, we discuss in the first bullet in our response to all reviewers how sample-based extensions of our algorithm can be made with some extra technical but straightforward analysis.\n\nReferences\n\nAgarwal, Alekh, et al. \"On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift.\" J. Mach. Learn. Res. 22.98 (2021): 1-76.", " a) On the connection of the work to [Cen 2021], we note that as the reviewer described, [Cen 2021] is essentially a value-based method which uses value iteration in the outer loop and runs an extragradient algorithm only in the inner loop to solve a regularized bimatrix game. [Perolat 2015] is one of the first works to extend value-based methods from single-agent MDPs to two-player Markov games. Since then, the basic techniques for analyzing value-based methods have been known. In fact, the key to the linear convergence established in [Cen 2021] is a contraction property that their value-based method takes advantage of. On the other hand, policy optimization algorithms are much less understood in Markov games, but this is an important subject to study due to the wide use of such algorithms in practice. Our analysis treats the Markov game purely from the optimization perspective and is very different from the analysis in [Cen 2021] in nature, though a small portion of the steps may look similar, as both our work and [Cen 2021] need to take advantage of the structure of the game. In single-agent MDPs, value-based methods and policy optimization methods have comparable convergence guarantees today, and our work aims to narrow the gap between the understanding of these two classes of algorithms in two-player Markov games. 
An additional difference in the algorithm structure is that our algorithm is a single-loop algorithm, which may be more convenient to implement in practice than nested-loop algorithms.\n\nb) On the difference in convergence metric compared to [Cen 2021], both our metric and the duality gap considered in [Cen 2021] are common metrics in the literature. The duality gap metric arises more naturally when value-based algorithms are considered. Our metric is more connected to the bi-level/Stackelberg viewpoint where the game is treated as a bi-level optimization problem. Our convergence metric being 0 for a policy pair implies that the policy pair is the Nash equilibrium. The same is true under the duality gap metric.\n", " a) Regarding how to compute $\nabla J$, we note that the main bottlenecks are the discounted visitation distribution and the advantage value function. In small systems where we have explicit knowledge of the transition probability kernel, these quantities can be computed via matrix inversion and the exact gradient is accessible. When we do not explicitly know the transition kernel and can only take samples according to it, sample-based algorithms such as REINFORCE and actor-critic can be used to estimate $\nabla J$. In our first bullet in the response to all reviewers, we discuss how these sample-based extensions are straightforward extensions of the deterministic gradient method considered in this paper.\n\nb) On whether the algorithm can be applied with function approximations, we note that most of our analytical techniques should still be applicable, but a key necessary change is to replace Lemma 4 with a gradient domination condition derived for the specific function approximation. We agree with the reviewer that solving the problem beyond the tabular setting under linear or general function approximations is interesting, but we believe that our result stands on its own as the foundation and the analysis for function approximations can be built on top.\n\nc) On the connection and novelty of our work compared to the suggested prior work, we note that [Yu 2020], along with [Cen 2021] mentioned by reviewer 7QJ7 and a number of other prior works [Xie 2020, Sayin 2022], is a value-based method at the core. [Yu 2020] considers a value iteration algorithm with confidence bounds. In [Cen 2021], the outer loop uses value iteration and only the inner loop runs an extragradient algorithm to solve a regularized bimatrix game. [Perolat 2015] is one of the first works to extend value-based methods from single-agent MDPs to two-player Markov games. Since then, the basic techniques for analyzing value-based methods have been known. On the other hand, policy optimization algorithms are much less understood in Markov games, but this is an important subject to study due to the wide use of such algorithms in practice. Our analysis treats the Markov game purely from the optimization perspective and is very different from the analysis in [Yu 2020, Cen 2021]. In single-agent MDPs, value-based methods and policy optimization methods have comparable convergence guarantees today, and our work aims to narrow the gap between the understanding of these two classes of algorithms in two-player Markov games.\n\nReferences\n\nBai, Yu, and Chi Jin. \"Provable self-play algorithms for competitive reinforcement learning.\" International conference on machine learning. PMLR, 2020.\n\nCen, Shicong, Yuting Wei, and Yuejie Chi. 
\"Fast policy extragradient methods for competitive games with entropy regularization.\" Advances in Neural Information Processing Systems 34 (2021): 27952-27964.\n\nPerolat, Julien, et al. \"Approximate dynamic programming for two-player zero-sum Markov games.\" International Conference on Machine Learning. PMLR, 2015.\n\nSayin, Muhammed O., Francesca Parise, and Asuman Ozdaglar. \"Fictitious play in zero-sum stochastic games.\" SIAM Journal on Control and Optimization 60.4 (2022): 2095-2114.\n\nXie, Qiaomin, et al. \"Learning zero-sum simultaneous-move markov games using function approximation and correlated equilibrium.\" Conference on learning theory. PMLR, 2020.", " a) We thank for the reviewer for suggesting more discussion on the experiment design. The two games considered in Section 5 are minimal scale examples of completely mixed games and deterministic Nash games, respectively. To create a completely mixed game with $|\\mathcal{A}|=|\\mathcal{B}|=2$, we simply need to choose the reward function such that $r(s,\\cdot,\\cdot)$ as a 2x2 matrix is diagonal dominant or sub-diagonal dominant for any state $s$, and we can use an arbitrary transition probability kernel. Regarding why the last iterate of GDA without regularization also converges in Fig.2 (in the game with deterministic Nash equilibrium), we note that finding the Nash equilibrium in a deterministic game is often considered easier since the gradient of each player has the same sign regardless of the action of the other player, even if no regularization is used at all. Please see the second bullet in our response to all reviewers for more discussion. We will also consider adding experimental details to the main text in the next revision.\n\nb) Also in the second bullet in our response to all reviewers, we discuss how Assumption 2 comes up, why we believe it is an artifact of the analysis, and how we may remove the assumption in the future.", " We greatly appreciate the feedback from the reviewers. We will carefully consider and incorporate the suggestions in the next revision of the paper.\n\nThe major questions for the paper are on the inaccessibility of the exact gradient $\\nabla J$ and Assumption 2. \n\na) We agree with the reviewers that the exact gradient is difficult to compute in large systems where we do not have explicit knowledge of the transition probability kernel. However, it is important to note that our deterministic gradient algorithm lays the mathematical foundation and its sample-based extensions are technical but straightforward. By introducing a critic variable that tracks the Q function, we can show that the purely sample-based online (non-batch) actor-critic version of the regularized GDA converges under additional standard assumptions on the critic. Or if we assume having access to an unbiased, bounded-variance stochastic estimate of the true gradient returned by some sampling procedure like REINFORCE, our exact analysis can be used to guarantee convergence, with small changes to the choice of the step sizes. We will add discussion on these data-driven extensions in the next revision of the paper.\n\nb) Assumption 2 is indeed a restrictive assumption that does not seem necessary but rather arises as an artifact of the current analysis. When we apply the weaker PL-type condition (Lemma 4) in the analysis, the entries of the iterates $\\pi_{\\theta_k},\\phi_{\\theta_k}$ need to be uniformly lower bounded, which is difficult to establish using the game structure. 
We come up with an innovative induction approach to quantify the connection between $\min_{s,a}\pi_{\theta_k}(a\mid s),\min_{s,b}\phi_{\psi_k}(b\mid s)$ and the optimality gaps $\delta_k^{\pi},\delta_k^{\phi}$. This approach allows us to transform the uniform lower bound requirement on $\pi_{\theta_k},\phi_{\psi_k}$ to that on the Nash equilibrium, leading to the uniformly completely mixed game assumption. \n\nHowever, completely mixed Markov games are widely considered harder to solve than games with a deterministic Nash equilibrium, for which we do not have convergence guarantees yet. [Mertikopoulos 2018, Vlatakis-Gkaragkounis 2020] show that a classic algorithm fails only when the game is completely mixed.\nIn some sense, the difficulty of analyzing GDA for non-convex non-concave optimization is directly related to the mixed nature of the Nash equilibrium, which causes the gradient of one player to constantly change signs as the other player updates and results in the oscillation observed in the first two plots of Fig. 1. In games with only deterministic Nash equilibria, the gradient of one player does not change sign regardless of the policy of the other player, which makes convergence easier and explains why the last iterate of GDA without regularization converges in Fig. 2. \n\nWe believe that we can remove Assumption 2 in the future, and the key is to use an enhanced version of Lemma 4. By refining the use of the performance difference lemma and KL divergence, we may be able to establish Lemma 4 with $\min_{s,a}\pi_{\theta_k}(a\mid s),\min_{s,b}\phi_{\psi_k}(b\mid s)$ replaced by $\min_{s,a:\pi^{\star}(s,a)\neq 0}\pi_{\theta_k}(a\mid s),\min_{s,b:\phi^{\star}(s,b)\neq 0}\phi_{\psi_k}(b\mid s)$, which combined with careful mathematical treatment may remove Assumption 2. But this will likely require more sophisticated analysis beyond that in the current paper.\nFinally, we note that the completely mixed game assumption is not uncommon in the literature when entropy-flavored regularization is used (see for example [Mertikopoulos 2016] and Theorem 6.1 of [Perolat 2021]). \n\nReferences\n\nMertikopoulos, Panayotis, and William H. Sandholm. \"Learning in games via reinforcement and regularization.\" Mathematics of Operations Research 41.4 (2016): 1297-1324.\n\nMertikopoulos, Panayotis, Christos Papadimitriou, and Georgios Piliouras. \"Cycles in adversarial regularized learning.\" Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 2018.\n\nPerolat, Julien, et al. \"From Poincaré recurrence to convergence in imperfect information games: Finding equilibrium via regularization.\" International Conference on Machine Learning. PMLR, 2021.\n\nVlatakis-Gkaragkounis, Emmanouil-Vasileios, et al. \"No-regret learning and mixed Nash equilibria: They do not mix.\" Advances in Neural Information Processing Systems 33 (2020): 1380-1391.", " This work studies the so-called two-player zero-sum Markov game. They provide a theoretical analysis of the convergence (to the Nash equilibrium) of practical gradient descent ascent (GDA) algorithms with some new numerical tools. 
In particular, they introduce an entropy regularization to the value function of the original game, and show the linear convergence of GDA to the equilibrium point of the regularized game.\n\nIn addition, in order to derive practical algorithms to solve the original problem, they also study a critical parameter, the weight parameter of the regularizer. Since there is a clear trade-off between improving convergence rate and minimizing the gap between the regularized game and the original one, they provide some practical schemes to adjust the regularization weight. The hope is that when the regularization weight is iteratively reduced to zero, the GDA algorithm will converge to the Nash equilibrium of the original game and hence the original game will be solved. Strength:\n1. The paper is well-written, self-contained and organized. \n2. This work extends the current research on finding the Nash equilibrium of two-player zero-sum Markov games. They provide significant contributions to the convergence analysis of practical gradient-based algorithms.\n3. They also derive some practical algorithms to apply their regularization technique to solve the original game.\n\nWeakness:\n1. The description of the numerical experiments part is too simplified. I would like to see more details, for example, more information about the game environment, why the current setting of the synthetic game is chosen, more analysis on the results. Although some of the details have been provided in the appendix, I think that is not enough, and some details should be put into the main paper.\n2. The role played by Assumption 2 in the main results of this work is not clear. As the paper itself mentioned, some of the experimental results show that Algorithm 2 still converges correctly when it is applied to a Markov game that does not satisfy Assumption 2. \n3. In some of the experimental results, the pure GDA approach without regularization also has a last-iterate convergence and does not exhibit the oscillation behavior. Did the authors choose an appropriate game environment to validate their contributions?\n4. The idea of introducing an entropy regularizer to improve the structure of the game is straightforward.\n The questions are listed above. no.", " This paper provides a new algorithm to find the Nash equilibrium for a two-player zero-sum Markov game. The new algorithm is based on gradient descent with KL regularization. For completely mixed Markov games, the authors prove that the algorithm can find the Nash equilibrium efficiently. They also conduct a simple experiment for tabular Markov games with low dimensions. The experiments support their main theoretical results and show that their algorithm may also work for Markov games that are not completely mixed. This paper is well organized, and the proofs seem sound to me. However, I still have several concerns about the clarity and significance of the main results. \n\n1. The main result of this paper only works for \"completely mixed Markov games.\" I would recommend the authors indicate that in the title. Otherwise, it would be kind of misleading. Moreover, the completely mixed assumption seems quite strong in applications, which may limit the significance of the theoretical result.\n\n2. The authors conduct experiments and show that their algorithm may work for Markov games without the completely mixed assumption. However, the experiment is too simple to support their claims, since it is a tabular Markov game with dimension 2. \n\n3. 
One of the biggest advantages of using gradient descent to solve Markov games is that it can deal with complex models. However, this paper only conducted experiments on simple tabular Markov games using a tabular softmax policy parameterization. Can the algorithm be applied to Markov games with more complex structures, such as with linear function approximation [1,2] or general function approximation [3]?\n\n4. This paper assumes that we can access the expected value function during training and doesn't consider the exploration problem. \n\n\n[1] Xie, Qiaomin, et al. \"Learning zero-sum simultaneous-move Markov games using function approximation and correlated equilibrium.\" Conference on learning theory. PMLR, 2020.\n\n[2] Chen, Zixiang, Dongruo Zhou, and Quanquan Gu. \"Almost optimal algorithms for two-player Markov games with linear function approximation.\" arXiv preprint arXiv:2102.07404 (2021).\n\n[3] Huang, Baihe, et al. \"Towards general function approximation in zero-sum Markov games.\" arXiv preprint arXiv:2107.14702 (2021).\n 1. How can we access the gradient $\nabla J$?\n\n2. Can this algorithm be applied to Markov games with more complex structures?\n\n3. What's the advantage of the new algorithm compared with other algorithms that are designed for tabular settings such as [4]?\n\n\n[4] Bai, Yu, and Chi Jin. \"Provable self-play algorithms for competitive reinforcement learning.\" International conference on machine learning. PMLR, 2020. The authors have addressed their work's limitations and potential negative social impact.", " This paper develops policy gradient algorithms for entropy-regularized zero-sum games. Under some assumptions, the authors show that the entropy-regularized objective function obeys a Polyak-Łojasiewicz type condition, which allows linear convergence of policy-gradient-based GDA to the equilibrium point of the regularized game. Then, the authors propose two scheduling schemes for the regularization parameter and analyze their convergence and complexities. Strength: comprehensive theoretical result.\n\nWeakness: It is still not clear why Assumption 2 is needed and justifiable. Overall, the paper is well written, and the references are properly cited. I have the following questions/comments.\n\n1. The entropy-regularized zero-sum game has been studied in the existing literature (Cen 2021), and I think this paper provides a policy-gradient-based solution and complexity analysis. \n\n2. The convergence metric used in this paper is different from the duality gap metric used in Cen 2021; is there a connection between these two metrics? How do your convergence rate and complexity compare with those established in Cen 2021?\n\n3. Compared with the analysis of Cen 2021, the authors stated that they discuss and leverage structure in the regularized Markov game that was previously unknown. While I agree that the policy gradient approach and PL condition of the entropy-regularized zero-sum game were not discussed in Cen 2021, their analysis actually leverages certain similar structures implicitly. Specifically, Cen 2021 follows the standard value iteration algorithm that has a linear convergence rate. In the inner loop, they need to solve an entropy-regularized bilinear matrix game, which satisfies a strongly-convex-strongly-concave type geometry due to the entropy regularization (w.r.t. the $\ell_1$ norm). In their convergence proof, this geometry helps them achieve a linear convergence. 
Yes", " This paper shows last iterate convergence for regularized gradient descent-ascent on markov games with finite state and action space under the assumption that the smallest coefficients of the support of the regularized markov games are uniformly (uniformly lower bounded over the regularization parameter.) Additionally, the authors assume that the initial state distribution has a full support. \nstrengths:\n- To the extent of my knowledge, the result seems novel and relevant to the community.\n- This paper is well written. \n\nweaknesses: \n- Assumption 2 is strong and does not seem necessary.\n- There is no discussion on the optimality of the rate proposed rate. \n- Algorithm is not practical since one needs access to the (full) gradient of J.\n- The authors could do more experiments to investigate these questions left open. \n - After (1), you mention that J is non-concave with respect to the policy of the first player. But it seems to me that J is actually linear (with respect to the policy) since we have $$V^{\\lambda \\pi_1 + (1-\\lambda)\\pi_2,\\phi} = \\lambda V^{\\pi_1,\\phi} + (1-\\lambda) V^{\\pi_2,\\phi}$$ (playing according to the mixed policy is equivalent to tossing a coin and playing according to either the first or the second policy). Can you comment on that point?\n- What is your opinion on the optimal convergence rate? Are there lower bounds ? (since the problem is nonconcave-nonconvex in the parameters pre-softmax the standard convex-concave lower bounds do not hold) \n- It seems that Assumption 2 could be connected to the Maximum entropy Nash (MaxEnt). (the lower bound should be on the support of the MaxEnt. One way to prove such a result could be by saying that the MaxEnt is the limit of the regularized Nash. Since the Regularized Nash always has full support (the regularization goes to +\\infty for non-full support solutions) and its limit (the MaxEnt for no regularization and the uniform distribution for infinite regularization) also has full support, there exists a (uniform) lower bound on the support of any regularized Nash. However, it does not show that this lower bound is the lower bound on the MaxEnt (which seems intuitively true). Any comment on that?\n\n\nI will be eager to increase my score if the authors answer my questions and are able to show that Assumption 2 can be relaxed. - This work is limited to the full batch case which is not practical.\n- The proposed rate may be suboptimal. It could be interesting to investigate experimentally different decays ($1/t$, $1/\\sqrt{t}$). \n- Also a log-log scale for the plots could help to visualize the experimental convergence rate. \n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "PcbgxIoetN", "PcbgxIoetN", "PcbgxIoetN", "epqk9yIEyKm", "4_cW-71k0YT", "gfeLn_YR7gm", "lEGeYfAZVX", "KPoRmJ2IQHO", "xFHZFLF__dm", "_VEA53u8QOrU", "nips_2022_4v7PSPp-TAe", "nips_2022_4v7PSPp-TAe", "nips_2022_4v7PSPp-TAe", "nips_2022_4v7PSPp-TAe", "nips_2022_4v7PSPp-TAe" ]
nips_2022_UqA1mcOxiq
Posted Pricing and Dynamic Prior-independent Mechanisms with Value Maximizers
We study posted price auctions and dynamic prior-independent mechanisms for (ROI-constrained) value maximizers. In contrast to classic (quasi-linear) utility maximizers, these agents aim to maximize their total value subject to a minimum ratio of value per unit of payment made. When personalized posted prices are allowed, posted price auctions for value maximizers can be reduced to posted price auctions for utility maximizers. However, for anonymous posted prices, the well-known $\frac 1 2$ approximation for utility maximizers is impossible for value maximizers and we provide a posted price mechanism with $\frac12(1 - 1/e)$ approximation. Moreover, we demonstrate how to apply our results to design prior-independent mechanisms in a dynamic environment; and to the best of our knowledge, this gives the first constant revenue approximation with multiple value maximizers. Finally, we provide an extension to combinatorial auctions with submodular / XOS agents.
Accept
This paper got uniformly positive reviews. That said, reading into the actual text of the reviews, it is evident that the results are not as strong as one might like. The biggest limitation is that the results rely heavily on personalized pricing for reducing the problem to one of utility maximization for utility maximizers; posted prices are often not a very attractive approach in practice.
train
[ "3ES3Ks_6D_f", "iVnc14hLMPC", "CY10lVO6XiO", "CyQQ5EiuOS", "WMZqgWw7IWV", "ixMZdpU4Zx", "DcmopXNHkc", "-1sY6yTYjK", "JWYLke0KTJb" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Yes, that is correct (sorry for the confusion). Please let us know if you have any other questions or comments.", " Thanks for the response. Perhaps I was unclear, but it seems to me that the proof of Proposition 7 makes use of the fact that ROI = 1. My question was (1) is this necessary and (2) does the 1/4 bound hold when ROI is not equal to 1 (i.e., prior to rescale/adjustment). Given your response, it seems to me that the 1/4 guarantee is against the liquid welfare (and not the unadjusted welfare), and so the result holds when ROI \\neq 1 as well. Am I understanding this correctly? ", " Thanks for your response. It addressed my concern.", " Thank you for your insightful and helpful comments. Re proof of Proposition 4: this indeed deserves further explanation. Recall that we require $q_i$ to be in the support of $D_i$ in line 177 (this is without loss of generality, because if $q_i$ is not in the support, we can increase it such that the probability that the buyer accepts stays the same, until $q_i$ is back in the support). Then we can choose $p_i$ such that $\\theta(D_i, p_i) = q_i$ in line 180, and $p_i$ must be unique since we also assume $D_i$ is non-atomic, which also means $\\mathbb{E}[v_i \\mid v_i \\ge q_i] = p_i$. On the other hand, we know that $\\mathbb{E}[v_i \\mid v_i \\ge x]$ increases monotonically in $x$, and $q_i \\ge 0$, so $p_i = \\mathbb{E}[v_i \\mid v_i \\ge q_i] \\ge \\mathbb{E}[v_i \\mid v_i \\ge 0] = \\mathbb{E}[v_i]$. We are sorry for the confusion, and will clarify in the paper.", " Thank you for your detailed and insightful comments.\n\nRe Proposition 1 (and 4): the reviewer is definitely right that that result is not particularly deep in the technical sense, and we are quite frank about it in the paper (we analyze personalized prices as a warm-up, and even say upfront that the proof is \"fairly simple\"). The proof we provide (which may appear a bit lengthy but definitely not complex) is merely a means to highlight some connections and differences between traditional buyers and value-maximizers. The argument suggested by the reviewer captures another aspect of the problem (which of course also provides nice insight), and we are happy to discuss that in the paper.\n\nRe \"dynamic incentive-compatibility\": the reviewer is right that patient buyers may still want to lie in response to that mechanism. We intended for that mechanism to be a proof of concept. It is not hard to design a dynamic IC mechanism based on our posted-price mechanisms, as long as buyers are less patient than the seller (which is also a common assumption in dynamic auctions). For example, one can adopt the exploration-exploitation framework as in [Deng and Zhang, 2021] in the following way: we first run the exploration mechanism by Deng and Zhang for each buyer for a while to learn the approximate value distributions of all buyers. Then we run our posted-price mechanism with the price slightly lowered to account for potential inaccuracy in the value distributions learned earlier. By trading off between the lengths of the exploration phase and the exploitation phase, one should be able to get regret $\\tilde{O}(T^{2/3})$ against a constant fraction of the optimal revenue. 
We will add a remark about this in the paper.\n\nRe \"gap between the bounds\": while there is a gap between our upper and lower bounds, we note that closing the gap likely involves a fundamentally different pricing scheme, and/or a fundamentally different upper bound argument, from the existing ones in the literature on posted pricing and prophet inequalities. In particular, most existing pricing schemes are based on variants of the common price $\frac12 \mathbb{E}[\max_i v_i]$. However, as we show in Proposition 5, this price cannot give a better ratio than $\frac12 (1 - 1/e)$. Our analysis of the price also deviates significantly from common ones used before. On the other hand, most existing upper bound arguments are fairly simple, and can be derived by considering instances with only 2 buyers. However, doing so can never give upper bounds strictly smaller than $1/2$. In this paper, we already make nontrivial progress and break the barrier of $1/2$. So in summary, we narrow down the range of the right ratio from the trivial $[0, 0.5]$ to $[0.316, 0.479]$, breaking barriers on both ends of the range. Pinning down the tight ratio would likely take a whole different paper or two.\n\nRe \"randomization may help\": we agree that randomization in general is a promising direction. As for the randomized VCG paper in particular, as far as we understand, the techniques there work only when there are only 2 buyers. We suspect some new ideas are needed to utilize randomization in posted pricing with many buyers.", " Thank you for your insightful and encouraging comments. Re \"target ROI is 1\": throughout the paper, we assume each buyer's target ROI ratio is 1. This is without loss of generality since we consider the liquid welfare (i.e., welfare as measured by each buyer's value adjusted by that buyer's target ROI ratio), which is standard in the study of ROI-constrained value-maximizers (see, e.g., [1-3]). With the liquid welfare as the benchmark, one can replace the actual value with the adjusted value and rewrite the ROI constraint such that the target ROI ratio is 1. The liquid welfare is also the maximum amount of money that can be extracted as revenue in the idealized world where all buyers' values are publicly known, which makes it an appropriate benchmark to compare to. On the other hand, the standard (unadjusted) notion of welfare is less meaningful with value-maximizers, since the full unadjusted welfare can never be extracted as revenue under ROI constraints, even disregarding incentive issues.\n\n[1] Gagan Aggarwal, Ashwinkumar Badanidiyuru, and Aranyak Mehta. Autobidding with constraints.\n\n[2] Santiago Balseiro, Yuan Deng, Jieming Mao, Vahab Mirrokni, and Song Zuo. Robust auction design in the auto-bidding world.\n\n[3] Yuan Deng, Jieming Mao, Vahab Mirrokni, and Song Zuo. Towards efficient auctions in an autobidding world.", " The paper studies posted-price auctions for ROI-constrained value-maximizing bidders, who seek to maximize their total value subject to the condition that the return-on-investment (ROI) on their spend is at least some prespecified ratio. This is in contrast to utility-maximizing bidders who seek to maximize their total utility (value - spend). Past results have shown that in the utility-maximizing case, using an anonymous posted price mechanism can achieve a (1/2) approximation of the welfare (or revenue). The current paper shows that if personalized prices are allowed, the same (1/2) approximation continues to hold. 
However, the paper presents instances where no anonymous posted price mechanism can achieve an approximation ratio on welfare greater than 0.483. Furthermore, the authors show that using the optimal anonymous posted price mechanism for the utility-maximizing case obtains a (1/2)(1 - 1/e) approximation of the optimal welfare. The authors also relate their results to prior-independent dynamic auctions and combinatorial auctions.\n The paper extends the results for utility-maximizing bidders to value-maximizing bidders with ROI constraints, an approach common in online advertising. The results are obtained by directly reducing the analysis to that in the utility-maximizing case using personalized prices.\n\nThe authors construct explicit instances where anonymous posted prices achieve strictly less than a (1/2) approximation to welfare, though a tight characterization for this setting is missing.\n\nThe authors also provide a tight characterization in the value-maximizing setting for the usual anonymous posted price mechanism used in the utility-maximizing setting.\n For the combinatorial auctions setting, the proof of Proposition~7 uses the fact that the welfare is at least as large as the max of revenue and buyers' utility. For the welfare to be at least as large as the revenue, the implicit assumption that the ROI is equal to 1 is required. Does the same (1/4) guarantee hold for general ROI, or does the guarantee deteriorate?\n None", " The paper studies the price of anarchy / efficiency in auctions for ROI-constrained value-maximizing bidders. More specifically, it studies posted-price types of auctions. First, it states that an approximation ratio of $\frac{1}{2}$ is attainable when non-anonymized posted prices are possible, as the problem can be reduced to finding posted prices for utility-maximizing buyers, which is a solved problem. Then, it shows that when using an anonymized posted price, the approximation ratio is upper-bounded by 0.479. Next, it shows that an approximation ratio of $\frac{1}{2}(1-1/e)$ can be obtained with the same allocation as for utility-maximizing buyers. Finally, it provides a $\frac{1}{4}$ approx ratio guarantee when valuations are non-additive. 1. I find Prop. 1 (or 4) straightforward: one can write the Lagrangian version of the buyer optimization problem between lines 147-148 and observe that choosing $q = \frac{p \lambda^*}{1+\lambda^*}$, where $\lambda^*$ is the optimal Lagrange multiplier, leads to the optimization problem of a utility-maximizing buyer faced with price $q$. It does not feel like a hugely novel result.\n\n2. I'm not a big fan of Sec. 5. The mechanism has a prior-dependent / non-measurable parameter $p^*$ and the section shows it can be tracked using an adapted sequence of parameters $p_t$ by discretizing its support and running a multi-armed bandit. In my mind, the real challenge about removing prior-dependence is to do so without introducing incentives for the buyers to lie. It's not clear to me that buyers wouldn't be incentivized to lie here, as accepting a price at time $t$ influences the price they get in the future (see [1,2] or [3] for a survey).\n\n3. My main concern about the paper is the strength of the claims. The main results are Th. 
3 and 4 in my opinion, but the gap between the upper and the lower bound is large.\n\n[1] Repeated contextual auctions with strategic buyers, Amin et al 2014\n[2] Characterizing Truthful Multi-armed Bandit Mechanisms, Babaioff et al 2014\n[3] Learning in repeated auctions, Nedelec et al 2022 I know the paper I'm going to refer to has been published right before the NeurIPS submission deadline, so I only put this as a question. Recent results showed that approximation ratios better than $\frac{1}{2}$ can be attained with randomized VCG [4]. Do you think similar ideas could help improve the efficiency of posted-price auctions?\n\n[4] Auction Design in an Auto-bidding Setting: Randomization Improves Efficiency Beyond VCG. Mehta 2022. I don't have many suggestions on the form of the paper as I find it particularly well-written and easy to read. \n\nI'd say if the authors want to \"thicken\" the content, they can look towards randomized posted-prices as it seems to be the promising direction for such problems.", " The paper studies posted price auctions for ROI-constrained value maximizers. Personalized posted prices are shown to be equivalent to posted price auctions for utility maximizers. For anonymous posted prices, a mechanism is provided and proven to achieve a $\frac12(1-1/e)$ approximation. The mechanism is further applied to obtain a prior-independent mechanism with constant approximation, and extended to combinatorial auctions with submodular / XOS agents. Strengths:\n1. The studied problem, auction designs for value maximizers, is important and has attracted much attention in the field of auctions recently. The paper is probably one of the first few papers on posted price auctions for value maximizers. \n2. Concrete theoretical results are provided. Most proofs, as far as I checked, are correct. \n3. The paper is clearly written and well organized.\n\nWeaknesses:\nOne proof is not fully justified. See questions. In the proof of Proposition 4 (Line 193), why does the construction satisfy the condition $p\geq \mathbb{E}_{v\sim D}[v]$? Not applicable." ]
[ -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iVnc14hLMPC", "ixMZdpU4Zx", "CyQQ5EiuOS", "JWYLke0KTJb", "-1sY6yTYjK", "DcmopXNHkc", "nips_2022_UqA1mcOxiq", "nips_2022_UqA1mcOxiq", "nips_2022_UqA1mcOxiq" ]
nips_2022_pAq8iDy00Oa
Incorporating Prior Knowledge into Neural Networks through an Implicit Composite Kernel
It is challenging to guide neural network (NN) learning with prior knowledge. In contrast, many known properties, such as spatial smoothness or seasonality, are straightforward to model by choosing an appropriate kernel in a Gaussian process (GP). Many deep learning applications could be enhanced by modeling such known properties. For example, convolutional neural networks (CNNs) are frequently used in remote sensing, which is subject to strong seasonal effects. We propose to blend the strengths of deep learning and the clear modeling capabilities of GPs by using a composite kernel that combines a kernel implicitly defined by a neural network with a second kernel function chosen to model known properties (e.g., seasonality). Then, we approximate the resultant GP by combining a deep network and an efficient mapping based on the Nystrom approximation, which we call Implicit Composite Kernel (ICK). ICK is flexible and can be used to include prior information in neural networks in many applications. We demonstrate the strength of our framework by showing its superior performance and flexibility on both synthetic and real-world data sets. The code is available at: https://anonymous.4open.science/r/ICK_NNGP-17C5/.
Reject
The submission considers fusing prior knowledge into neural networks by *modulating* the learnt features. The modulation is either additive or multiplicative using another set of features outputted by a kernel-based mapping with linear/periodic kernels. The method is akin to using composite kernels (or combining kernels) in the GP literature and is tested on several regression benchmarks. The reviewers acknowledged the method is simple and seems to give a desirable performance when the prior knowledge is known. I (AC) read the paper and have several concerns (some of which have already been raised by one or more reviewers): - the clarity around the connection to composite GP kernels could be improved (there was also a question about the goal of this connection and the authors stated that it is to motivate the proposed approach), - many desirable properties of GPs (hyperparameter learning using the marginal likelihood, predictive uncertainty) seem to be lost since the objective function is only MSE/MAE and the parameters are not treated probabilistically [the authors promised to consider this in the next iteration]. - limited experiments to show the capability of the ICK framework beyond two data sources, and for various forms of prior knowledge beyond seasonality and the PM 2.5 forecasting task. Some further analyses of the forecasting results + errorbars would be appreciated. Despite the simplicity of the approach, for the above reasons, I feel that the submission is not ready for publication. I hope, however, to see an updated version published at a conference soon. The reviews are already fairly positive and the discussions here could be useful for polishing up the experiments and writing.
train
[ "u6V9iRNHe9c", "Cx5kURhXdkf", "bRh_QuKc4_", "J6mSkG7haCD", "3tkIcg4iOKo", "uXC-qS3oNFG", "umpqbFQ_Oyw", "NdWPtR0YZK6", "UxNRXpPSfJ0", "JPyAgg45_Pk", "OfRRwd4IquA", "DN8v6MafF-0", "5vAkcB2IJ4L", "Iep8FBMFwJE", "sNcjo6L7ECd", "lXiq4OAkSCI", "klSjXjlfBHw", "XZ3ARXKB-UX" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " i thank the authors for their response. i think it's valuable that the authors added some additional empirical evaluations. in my opinion, this is still less empirical evaluation than we would see in an ideal neurips submission of this kind, but it is a step in the right direction. for this reason i have raised my score. frankly, i'm surprised that at least two of the reviewers are apparently satisfied with the empirical evaluation included in the original submission.\n\nin particular, given the details provided in the draft i'm unable to determine if the attention-based models that were added make use of attention in a way that is canonical. my concern is that this may not be the case, but it is difficult to evaluate. \n\nthe bayesian deep learning literature is gigantic, uses a mix of different languages and, consequently (and unfortunately) can be somewhat difficult to follow. i recall at least one work that was somewhat similar to the present work, but unfortunately i am unable to come up with a reference. for what it's worth, here are some additional references that may be of interest to the authors:\n\n- https://arxiv.org/abs/2011.12829\n- https://dl.acm.org/doi/abs/10.5555/3546258.3546415", " Thank you for your suggestion! The sample-then-optimize framework is more closely to our proposed model and would make our work more self-contained. We will update the references and text accordingly. For clarity, we note that the NNGP reference compares a GP to an SGD point estimate, not a BNN. They state this conclusion on their page 7, \"We additionally find the performance of the best finite-width NNs, trained with a variant of SGD, approaches that of the NNGP with increasing layer width...\" It hints at a possible relationship between SGD and Bayesian inference in certain regimes – were the neural networks trained in a fully Bayesian fashion, rather than by SGD, the approach to NNGP in the large width limit would be guaranteed.", " Thanks for the reply. I disagree that NNGP [1] sheds light on this work. The difference is obvious. In NNGP, the Gaussian prior on weight space corresponds to a GP in function space, and then the training of *Bayesian neural networks* amounts to the exact analytical Bayesian inference in the GP form. But this work, in practice, learns a point-estimate model, thus it is meaningless to say the model approaches the NNGP posterior after training.\n\nIt is possible to tell the story from the \"sample-then-optimize\" [*1, *2] viewpoint where the point estimate is regarded as a posterior sample corresponding to some GP prior. It will make the whole paper more self-contained.\n\n\n[*1] Bayesian Deep Ensembles via the Neural Tangent Kernel. Bobby He et al.\n[*2] Bayesian Inference with Anchored Ensembles of Neural Networks, and Application to Exploration in Reinforcement Learning\nTim Pearce et al.", " Thank you so much for your question!\n\nTo clarify, we do not expect Theorem 1 to hold exactly after training, but we would expect that the learned network is much closer to the composite GP due to the model structure. We tried to be clear with the limitations in our writing. 
We only claimed that the properties held a priori on lines 146-148 by stating \"we first analytically show that our ICK framework is approximately equivalent to a composite GPR model a priori using a multiplicative kernel between the kernel implicitly defined by the NN on $x^{(1)}$ and the chosen kernel on $x^{(2)}$\" and explicitly stated that we did not expect it to hold a posteriori in the limitations. Regardless, to prevent any misconceptions on the theoretical claims, we have added the following clarifying text on lines 148-150: \"This theory is used to motivate the model form. The model will deviate from the GP solution after learning, but we note that recent work suggests that the predictions from a trained neural network may not vary too much from its GP equivalent [1].\"\n\nReference:\n[1] Lee, Jaehoon, et al. \"Deep neural networks as Gaussian processes.\" arXiv preprint arXiv:1711.00165 (2017).", " I am glad to see your reply. I am largely satisfied with it. \n\nYet, as you agreed, Assumption 1 holds only in the prior case and would fail in the course of training. And you only learn a point estimate $f(\cdot, \theta_1)$. Does this mean the learning outcome is not a composite kernel GP anymore? If so, it is better to clarify that Theorem 1 serves as a motivation for this work instead of a theoretical basis, and you should revise arguments like \"Then, we approximate the resultant GP by combining a deep network and an efficient mapping based on the Nyström approximation\". In my opinion, during training, you don't approximate the composite kernel GP as Theorem 1 doesn't hold, right?", " Thanks, authors, for your helpful response. This work is quite strong, and I look forward to the Bayesian variant, which I think could be very high impact.", " \"Incorporating Prior Knowledge\" is a pretty strong title and motivation for the paper, since the technique is feasible only in the very specific setting described in the summary.\"\nI agree it's not trivial, I had in mind something along the lines of \"incorporating prior knowledge from auxiliary data\", but I am unsure whether it's clearer. Anyway, it did not impact my evaluation, so it's okay.", " -- I'm not sure that reverting to 0 (Fig. 3 bottom) counts as \"failing gracefully\" when the standard deviation is so small. Why is it confident in its bad prediction, and is there a way to mitigate this?\n\nWe note that this reverting-to-the-mean behavior is idealized for the point estimate prediction and would match the prediction that you would expect from a GP. Therefore, there is little uncertainty about the mean estimate, as the network is determining that little is known in this case and is therefore confident that the prior mean is the best estimator. However, as the reviewer points out, fully evaluating the uncertainty and the posterior distribution is necessary to fully evaluate the out-of-distribution estimates. For now, we have added a clarifying comment to those results in Section 5.1, and we will evaluate this property more fully in a full Bayesian implementation in future work.\n\n-- As mentioned in related work, there are at least 1-2 other methods that equip NNs with priors using GPs. Can you compare with them or at least explain when/why they would be less effective than your approach? \n\nIn our revision, we add 2 more benchmarks in the revised version of our paper based on the Gaussian Neural Process (GNP), which is a class of latent variable models that are capable of learning efficiently from data and adapting rapidly to new observations. 
However, as stated by [1], the kernel function in GNP cannot be stationary, as it models a posterior covariance, which makes it more challenging to specify an appropriate kernel function. Thus, we find that straightforwardly choosing the kernel in ICK results in an easier algorithm to specify and implement. As shown in Section 5.3, GNP underperforms ICKy by a large margin, as it fails to capture the seasonality in the time dimension. Another related work is the Bayesian NNs developed by Pearce et al. [2], which model their GP counterparts. However, as we discussed in the Discussion section, their framework cannot simulate complicated kernels such as periodic with linear decay or spectral mixture.\n\nReferences:\n[1] Markou, Stratis, et al. \"Efficient Gaussian Neural Processes for Regression.\" arXiv preprint arXiv:2108.09676 (2021).\n[2] Pearce, Tim, et al. \"Expressive priors in Bayesian neural networks: Kernel combinations and periodic functions.\" Uncertainty in Artificial Intelligence. PMLR, 2020.\n\n-- It's not entirely clear to me what integral is being approximated with Nystrom quadrature. Is it the expectation in (10)? Might be helpful to briefly mention somewhere in 4.2.1 when you talk about integrating points. \n\nThe Nyström quadrature approximates the quantity in Equation 4, which is then used in the sum in (10) over z^{(2)}. We have added a clarifying comment in Section 4.2.1 to alleviate future confusion.\n\n-- Consider rescaling the numbers in Table 1 so there aren't so many decimal places. Line 90 chose -> choose\n\nWe have fixed the typos and updated the format of Table 1 (Table 2 in the new version) so there are now only 2 decimal places. Thank you for your suggestion.\n\n-- Perhaps the conjugate gradient method is a feasible alternative to Cholesky decomposition if p needs to be larger? For future consideration.\n\nWe agree with the reviewer that this is an interesting suggestion, and have added a comment about it in the Discussion section.\n\n-- Assumption 1 seems reasonable to me, but the authors should try to characterize the cases where it doesn't hold. Perhaps a rigorous condition is hard to derive, but is there any intuitive argument that the cases where it doesn't hold are ill-behaved?\n\nAssumption 1 is a rather mild assumption (see response to Reviewer pXCj) that holds true with most initialization procedures. We have not determined an experiment that breaks this structure in a natural way, so we are unaware of any ill-behaved cases where it doesn't hold. ", " -- I think computational time should be discussed more. The Cholesky decomposition is cubic in the number of pseudo-inputs, and the overhead required by differentiating through it is unclear to me.\n\nTo elaborate on the computational time, we added an additional experiment to Appendix E to show the effect that the number of pseudo-inputs has on the total computation time. The total time is dependent on the per-iteration time, which is what the Cholesky decomposition directly impacts, and the total number of epochs. We note that the training time is not heavily dependent on the complexity of the Cholesky decomposition, which is a relatively small computation compared to the neural networks, and instead is largely dependent on how the number of pseudo-inputs impacts the number of epochs. It appears that when the number of pseudo-inputs is very small, the model takes more steps, which leads to a longer training time compared to larger values. 
After around 80 pseudo-inputs, the algorithm is fairly stable and the total training time is fairly flat.\n\n-- \"Incorporating Prior Knowledge\" is a pretty strong title and motivation for the paper, since the technique is feasible only in the very specific setting described in the summary. It is impossible (as far as I understand) to include generic prior information if it is relative to the high-dimensional data (e.g., an image). I think the claims in the paper should be downsized accordingly.\n\nWe concede the reviewer's point that the title does not convey that our proposed method only covers cases where there is a clear assumption/kernel model form on part of the data and combines it with a standard neural network. This does cover many applications in geostatistics, so it is not too narrow of a task either. Regardless, we struggle to think of a title that would be clearer and are happy to consider a suggestion from the reviewer.\n\n-- Since the kernel is included in the autodiff process, I wonder whether it is also possible to optimize some of its parameters (e.g., the seasonality) automatically. Have the authors considered this point?\n\nYes, in our implementation of ICK, we can freely determine whether the kernel parameters (e.g., length scale, period, noise level, etc.) are trainable or not.", " -- The method isn't specified in sufficient detail. We are told (line 143ff) that \"the parameters of both the NN and the kernel function are learned via gradient-based optimization methods.\" OK but with what objective function? Mean squared error for regression tasks? Something else? Does the objective function matter?\n\nWe use the MSE objective for Sections 5.1 and 5.2 and the MAE objective for Section 5.3 (to put less weight on outliers and thus enhance the model performance). Please see Appendix G in the revised version of our paper for details of the model structures and training procedure. We believe that the primary contribution to the predictive performance is our model formulation. We also note that code was included with the submission that documents all the detailed modeling choices and implementation.\n\n-- The number/variety of benchmark algorithms used in experiments is inadequate. What about a MLP that is given access to cyclical/fourier features in the time domain in the experiments in Sec. 5.3? What about various neural processes, e.g. reference [1] below? What about attention-based mechanisms?\n\nWe find the reviewer's observation that attention-based methods have mathematical similarities to our proposed approach quite intriguing, and we thank the reviewer for pointing it out. While we note that we are unaware of any attention-based mechanisms that were explicitly designed to model similar data structures, we have attempted to construct attention-based models to address this criticism. In the revised version of our paper, we add 4 benchmarks that leverage the Vision Transformer (ViT) structure to process the remote sensing data in Section 5.2. However, ViTs are not directly applicable to data with multiple information sources/modalities. Therefore, we again combine them with Random Forests (RFs) to form a joint model, expand the temporal information, and again consider the sinusoidal trick that was used in other baselines. The results in the updated Table 1 show that ViT-based models have better performance than CNN-based models, but still underperform ICKy. Furthermore, we replaced the NN part in ICKy with a ViT-based structure, which had minimal impact on performance. 
\nAdditionally, in Section 5.3, we have added a cyclic MLP and two Gaussian Neural Process (GNP) benchmarks. It appears that all these benchmarks yield much larger prediction errors than ICKy. We believe these additional benchmark methods help demonstrate the strength of ICKy and thus make our claim more convincing.\n\n-- In general, the appendix reveals very little information about the MLPs used in the experiments. How many hidden units? How were these trained? etc.\n\nFor synthetic data, the MLP consists of just a single hidden layer of width 1000. For remote sensing data, the last layer of the backbone NN for ICK and all benchmark models has 1000 hidden units. It is then followed by one or two hidden layers with 512 units (two hidden layers for benchmark models and one hidden layer for ICK). For UCI ML repository data, all MLPs contain 3 hidden layers of width 128, 32, and 32, respectively. Please see Appendix G for the details of the MLPs and training procedure (i.e., optimizer, loss function, etc.) in the revised version of our paper.\n\n-- For M>2 the authors suggest using a \"chained inner product\" to form predictions (Appendix A). The authors also state (line 295ff) that \"A major limitation of ICK lies in our method of combining latent representations as the nature of inner product (i.e. the effect of multiplying small numbers) may cause vanishing gradient problems when we have a large number of sources of information (i.e. M is large).\" Have you tried M>3? When do you expect ICK to break down?\n\nIn Figure A1(c), we can see that the true and predicted values of the target are almost aligned with the line y = x. We tried M = 4 in our experiment but did not observe substantial improvement upon the results of M = 3. When M > 4, the gradient over both the parameters of the NN and the GP starts to change very slowly. Therefore, we do not recommend setting the value of M greater than 4. There are some additional tricks that could be used to make this scale to a higher value of M, such as combining modalities with a single Nyström map, but we have not evaluated these schemes and this is left as future work.\n\n-- What happens when Assumption 1 (line 158ff) does not hold? Can you test this experimentally with an artificial dataset that breaks this assumption?\n\nAssumption 1 is a rather mild assumption (see response to Review 1) that holds true with most initialization procedures. We have not determined an experiment that breaks this structure in a natural way, so it is difficult to test.", " -- The novelty and empirical validation presented would seem to be more in line with a workshop submission than a full paper. The suggested method is more trivial than not and I am somewhat skeptical that very similar and/or identical methods do not already exist in the literature.\n\nWe disagree on the lack of novelty of our work. As stated in our introduction, to the best of our knowledge, no existing approach allows a modeler to choose any appropriate kernel of known prior information from multiple data sources. If the reviewer believes that an identical method exists in the literature, we would sincerely appreciate a reference. If not, we would appreciate the reviewer removing this comment. \nWe would also argue that by no means does simplicity equate to lack of novelty or unconvincingness. Our final framework is presented in a simple and elegant form, but it was derived from a novel model and required a complete theoretical derivation to motivate it, which does not seem trivial to us. 
Furthermore, we would argue that the fact that our theoretical motivation yields a comparatively simple algorithm is a positive aspect of this algorithm. The machine learning field frequently adopts relatively simple modifications over more complex contributions; for example, consider skip-connections (e.g., resnet), which are straightforward to implement but have been widely adopted.\n", " -- Assumption 1 seems to apply to NN-GP kernels but does not apply to the approximate NN-GP kernels corresponding to finite-width NNs. It does not apply to typical kernels like RBF/polynomial ones either. Can you clarify its validity theoretically/empirically?\n\nAssumption 1 does hold in the finite-width neural network case under zero-mean iid random parameters, which is the typical initialization strategy. To see this, note that z_i will be determined by the inner product of a random vector w_i^L and the output of the previous layer, where L is the number of layers in the neural network, and z_j is likewise determined by w_j^L. If w_i^L and w_j^L are zero-mean independent random variables, then the expectation of the product turns into the product of the expectations, so the covariance of z_i and z_j will be 0, as E[w_j^L]=E[w_i^L]=0. Likewise, this assumption is straightforward for random Fourier features, as each one of the random Fourier features integrates to 0 with the other features in expectation. This is more challenging to prove for the Nyström approximation but seems to reasonably hold in practice. We do note that Assumption 1 only holds over the expectation of the a priori parameter distribution, and we do not claim it holds for realized values nor after training, as discussed in our limitations.\n\n-- In the case of an NN (whose parameters are θ(1)) plus a classic kernel (whose parameters are θ(2)), the learning is to find a good distribution p(θ(1)) and a point-estimate θ(2)? If so, can you clarify how you parameterize p(θ(1))? (E.g., a variational BNN with a Gaussian variational whose mean is fixed as 0?) If not, and if you are just learning a point-estimate θ(1), Lemma 1 would not apply and Theorem 2 does not hold.\n\nIn our algorithm, we initialize parameters from a zero-mean scaled random Gaussian and proceed with learning a point estimate. We did implement this approach using variational Bayesian layers to estimate a Gaussian posterior over the parameters, but because we did not see an increase in performance we preferred the simpler point-estimate formulation. Regarding Lemma 1, we state that this property holds a priori, but as we discuss in our limitations, we do not claim that it holds after the network is learned. Rather, we use this as motivation to choose our model form. We note that existing empirical results suggest that training sufficiently wide NNs yields similar models to their infinite GP equivalent [1], so we believe that this is a reasonable strategy for model construction, and our empirical results support this conclusion. Future work will consider Bayesian learning strategies in more detail.\n\nReference: [1] Lee, Jaehoon, et al. \"Deep neural networks as gaussian processes.\" ICLR (2017).\n\n-- The loss function for training is just the MSE error? \n\nWe use the MSE objective for Sections 5.1 and 5.2 and the MAE objective for Section 5.3, which puts less weight on outliers and thus enhances the model performance. Please see Appendix G for all model architectures and training details in the revised version of our paper. 
\n\n-- Can the framework be extended to handle classification problems? It seems that it is non-trivial as the output should only be 1-D.\n\nThe framework can be extended to handle classification problems based on existing approaches for multi-class classification from GPs. Briefly, a separate ICK architecture can be run for each class, and then the scalar output for each class can be concatenated and put through a softmax to create class probabilities. Please see Appendix H for more details and a proof-of-concept experiment on Rotating MNIST on how our method can be adapted for classification.", " -- How to pre-define the integrating points? Can they be updated by the training signals?\n\nThere are several approaches to pre-define the integrating points in the literature, such as uniform sampling without replacement [1], adaptive sampling [2], and pseudo-inputs [3]. In our experiments, we take evenly spaced points over the input space and use them as integrating points (which can be viewed as a simplified approach to generate pseudo-inputs) as described in Section 4.2.1. For example, if we believe the input has a periodic relationship with T = 100 with the output and we want to use 10 integrating points, then these points are simply defined as [0, 10, 20, …, 90]. We take this approach because it gives empirically better results compared to other sampling methods. \nIn the current implementation, the integrating points are treated as hyperparameters and are not updated. We note that there is prior work to suggest that the integrating or inducing points can be learned in a joint framework (e.g, [4]). We are unaware of any reason why these learning approaches could not be adapted for our situation, but we have not empirically evaluated this strategy. As our performance has largely saturated with the current number of integrating points (see Supplemental Figure E1), we would not expect a large improvement by learned better points.\n\nReference: \n[1] Kumar, Sanjiv, Mehryar Mohri, and Ameet Talwalkar. \"Sampling techniques for the nystrom method.\" Artificial intelligence and statistics. PMLR, 2009\n[2] Deshpande, Amit, and Santosh Vempala. \"Adaptive sampling and fast low-rank matrix approximation.\" Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques. Springer, Berlin, Heidelberg, 2006. 292-303.\n[3] Snelson, Edward, and Zoubin Ghahramani. \"Sparse Gaussian processes using pseudo-inputs.\" Advances in neural information processing systems 18 (2005).\n[4] Li, Yitong, et al. \"Targeting EEG/LFP synchrony with neural nets.\" Advances in Neural Information Processing Systems 30 (2017).\n\n-- How does the proposal compare to the recent work NeuralEF?\n\nThe NeuralEF paper proposes to estimate the kernel, including NNGP kernel and NTK, by using a series of NNs to approximate eigenfunctions under the principle of eigen-decomposition. However, in our proposal, instead of estimating the kernel itself, we use the Nystrom method to map the kernel matrix into a latent space with desired dimension. Therefore, we do believe that NeuralEF is highly related to our proposed approach. We have included it in the Related Work section, but have not implemented it as a benchmark model for comparison.\n", " We thank all the reviewers for their insightful comments. We have listed our answers to your questions below. 
We have also uploaded a revised manuscript that addresses many of the questions and comments from all reviewers, including several clarifications and many additional competing methods. Major additions and edits are marked in blue, and we highlight below when we made specific manuscript edits. The anonymous code repository has also been updated to reflect the new comparison methods.", " The paper proposes to blend the strengths of deep learning and the clear modeling capabilities of GPs by using a composite kernel that combines a kernel implicitly defined by a neural network with a second kernel function chosen to model known properties. Technically, the authors approximate the resultant GP by combining a deep network and an efficient mapping based on the Nyström approximation. Results on both synthetic and real-world data sets show the superior performance and flexibility of the proposal. Strengths\n- The paper proposes a practical method to incorporate prior knowledge into NNs. It offers an effective way for processing data with multiple sources of information where some explicit priors can be introduced in the form of GPs.\n- The paper establishes a way to integrate NNs and GPs efficiently and train them jointly.\n- The empirical results are interesting and insightful.\n\n\nWeaknesses\n- Assumption 1 seems to apply to NN-GP kernels but does not apply to the approximate NN-GP kernels corresponding to finite-width NNs.\nIt does not apply to typical kernels like RBF/polynomial ones as well. Can you clarify its validity theoretically/empirically?\n\n- In the case of an NN (whose parameters are $\\theta^{(1)}$) plus a classic kernel (whose parameters are $\\theta^{(2)}$), the learning is to find a good distribution $p(\\theta^{(1)})$ and a point-estimate $\\theta^{(2)}$? If so, can you clarify how you parameterize $p(\\theta^{(1)})$? (E.g., a variational BNN with Gaussian variational whose mean is fixed as 0?) If not, and if you just learning a point-estimate $\\theta^{(1)}$, Lemma 1 would not apply and Theorem 2 does not hold.\n\n- The loss function for training is just the MSE error? Can the framework be extended to handle classification problems? It seems that it is non-trivial as the output should only be 1-D. \n\n- How to pre-define the integrating points? Can they be updated by the training signals?\n\n- How does the proposal compare to the recent work NeuralEF?\n\nNeuralEF: Deconstructing Kernels by Deep Neural Networks. ICML 2022. See Weaknesses the authors have adequately addressed the limitations", " The authors introduce a neural network construction that forms predictions by computing inner products between a hidden representation generated by a neural network and a kernel-derived basis of functions. This construction is motivated by a desire to combine the flexibility of neural networks with the domain-specific knowledge that can be encoded in kernels (e.g. seasonality). The properties of the resulting method are explored on three data sets. The main strength of this submission, as I see it, is that one of the empirical comparisons includes a remote sensing data set, which would seem to be a rich test bed for these kinds of methods---though it should be noted that accessing this data set requires a PlanetScope license.\n\nUnfortunately, the weaknesses of this submission outweigh the strengths by a good margin. The novelty and empirical validation presented would seem to be more in line with a workshop submission than a full paper.\n1. 
The suggested method is more trivial than not and I am somewhat skeptical that very similar and/or identical methods do not already exist in the literature. However, even if we take it for granted that nothing like this has been done before, the empirical comparisons are too shallow, see below.\n2. The method isn't specified in sufficient detail. We are told (line 143ff) that \"the parameters of both the NN and the kernel function are learned via gradient-based optimization methods.\" OK but with what objective function? Mean squared error for regression tasks? Something else? Does the objective function matter? \n3. The number/variety of benchmark algorithms used in experiments is inadequate. What about a MLP that is given access to cyclical/fourier features in the time domain in the experiments in Sec. 5.3? What about various neural processes, e.g. reference [1] below? What about attention-based mechanisms? The omission of the latter is particularly troubling given the empirical success of attention-based methods and the similarity of attention-based mechanisms with your inner product construction. Given the simplicity of the proposed construction, a thorough empirical assessment (using a variety of datasets and benchmark methods) is absolutely crucial to convince the reader that the proposed method is useful in practice. In general the appendix reveals very little information about the MLPs used in the experiments. How many hidden units? How were these trained? etc. The proposed method may be interesting, but these toy experiments do not convince. \n\nReferences\n1. \"Convolutional Conditional Neural Processes,\"\nJonathan Gordon, Wessel P. Bruinsma, Andrew Y. K. Foong, James Requeima, Yann Dubois, Richard E. Turner\n2. \"Attention Is All You Need,\"\nAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin - For M>2 the authors suggest using a \"chained inner product\" to form predictions (Appendix A). The authors also state (line 295ff) that \"A major limitation of ICK lies in our method of combining latent representations as the nature of inner product (i.e. the effect of multiplying small numbers) may cause vanishing gradient problems when we have a large number of sources of information (i.e. M is large).\" Have you tried M>3? When do you expect ICK to break down?\n- What happens when Assumption 1 (line 158ff) does not hold? Can you test this experimentally with an artificial dataset that breaks this assumption? The authors do not discuss any potential negative societal impacts of their work.", " The paper proposes a way to combine prior knowledge inside deep networks. It assumes data comes from different sources (at least one low-dimensional source and one high-dimensional source), and prior knowledge is known about the low-dimensional one (e.g., seasonality). They build a composite kernel where the low-dimensional data is processed with a user-defined kernel, while the high-dimensional one with a neural network. The former is then processed though kernel approximation methods to obtain a latent representation which is multiplied by the embedding of the neural network to get the final prediction. They perform several experiments ranging from a synthetic dataset to a realistic use case in remote sensing. The idea is simple but (as far as I know, as I am not an expert in neural Gaussian Processes) novel. 
The paper is easy to read, the technique sound, and the experiments are interesting and relatively clear in the benefit of the method. - I think computational time should be discussed more. The Cholesky decomposition is cubic in the number of pseudo-inputs, and the overhead required by differentiating through it is unclear to me.\n- \"Incorporating Prior Knowledge\" is a pretty strong title and motivation for the paper, since the technique is feasible only in the very specific setting described in the summary. It is impossible (as far as I understand) to include generic prior information if it is relative to the high-dimensional data (e.g., an image). I think the claims in the paper should be downsized accordingly.\n- Since the kernel is included in the autodiff process, I wonder whether it is also possible to optimize some of its parameters (e.g., the seasonality) automatically. Have the authors considered this point? NA", " The authors propose ICK, a regressor that can naturally incorporate priors over low-dimensional subsets of the input. A high-dimensional input is processed with a neural network to produce a latent vector, and the low-dimensional input produces a latent vector of the same size via a Gaussian process (GP), where the kernel function incorporates the known priors. The GP is efficiently integrated using the Nystrom method and Cholesky decomposition. Strengths\nThe approach is an elegant combination of GPs and neural networks for incorporating useful priors, supported by a number of experiments.\nThe paper is well-written and the figures are very clear.\nRe: vanishing gradients as a limitation. I actually think your approach improves on the naive approach wrt vanishing gradients! Normally a network may ignore low-dimensional inputs even if they are informative, but your projection of both high-dimensional and low-dimensional data sources into a common latent space encourages them to be treated more equally.\n\nWeaknesses\nOne of the nice properties of GPs is that they can capture uncertainty in the prediction. However, the toy experiment suggests that this uncertainty is perhaps lost or not well calibrated. For example, I'm not sure that reverting to 0 (Fig. 3 bottom) counts as \"failing gracefully\" when the standard deviation is so small. Why is it confident in its bad prediction, and is there a way to mitigate this? It would be nice to see an experiment/metric that also demonstrates that your method matches distributions well.\nIt may be helpful to discuss what kinds of priors are amenable to being represented as the kernel function of a GP. It seems that this work focuses on continuous scalar variables where the prior represents periodicity, and the experiments are a little limited in this respect. Discuss some other types of priors and their corresponding kernels, and if possible add an experiment along this line.\n As mentioned in related work, there are at least 1-2 other methods that equip NNs with priors using GPs. Can you compare with them or at least explain when/why they would be less effective than your approach?\nIt's not entirely clear to me what integral is being approximated with Nystrom quadrature. Is it the expectation in (10)? Might be helpful to briefly mention somewhere in 4.2.1 when you talk about integrating points.\nConsider rescaling the numbers in Table 1 so there aren't so many decimal places.\nPerhaps the conjugate gradient method is a feasible alternative to Cholesky decomposition if p needs to be larger? 
For future consideration.\nLine 90 chose -> choose Yes. Assumption 1 seems reasonable to me, but the authors should try to characterize the cases where it doesn't hold. Perhaps a rigorous condition is hard to derive, but is there any intuitive argument that the cases where it doesn't hold are ill-behaved?" ]
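To ground the Nyström-plus-Cholesky mechanics debated throughout the ICK thread above, the following is a minimal sketch of the kernel-to-latent-feature mapping and the inner-product combination being described. It is an illustration under our own assumptions (an RBF kernel, evenly spaced integrating points, and a random stand-in for the NN embedding); none of the function names or shapes are taken from the submission's code.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Pairwise RBF kernel between the rows of a and b.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def nystrom_features(x, inducing, kernel, jitter=1e-6):
    # Factor K_zz = L L^T by Cholesky (cubic only in the number of
    # inducing/integrating points), then map phi(x) = K_xz L^{-T},
    # so that phi(x) phi(x)^T approximates the full kernel matrix.
    K_zz = kernel(inducing, inducing) + jitter * np.eye(len(inducing))
    L = np.linalg.cholesky(K_zz)
    K_xz = kernel(x, inducing)
    return np.linalg.solve(L, K_xz.T).T

rng = np.random.default_rng(0)
x2 = rng.uniform(0.0, 1.0, size=(5, 1))           # low-dimensional input with a known prior
z = np.linspace(0.0, 1.0, 10)[:, None]            # evenly spaced integrating points
phi_kernel = nystrom_features(x2, z, rbf_kernel)  # (5, 10) latent kernel features
phi_nn = rng.normal(size=(5, 10))                 # stand-in for the NN embedding of x^(1)
prediction = (phi_nn * phi_kernel).sum(axis=1)    # inner-product combination
```

The sketch also makes the cost discussion concrete: the Cholesky factorization grows cubically only in the number of integrating points, which stays small next to the network's forward pass, consistent with the timing behavior reported in the thread.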
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "JPyAgg45_Pk", "bRh_QuKc4_", "J6mSkG7haCD", "3tkIcg4iOKo", "DN8v6MafF-0", "NdWPtR0YZK6", "UxNRXpPSfJ0", "XZ3ARXKB-UX", "klSjXjlfBHw", "lXiq4OAkSCI", "lXiq4OAkSCI", "sNcjo6L7ECd", "sNcjo6L7ECd", "nips_2022_pAq8iDy00Oa", "nips_2022_pAq8iDy00Oa", "nips_2022_pAq8iDy00Oa", "nips_2022_pAq8iDy00Oa", "nips_2022_pAq8iDy00Oa" ]
nips_2022_tadPkBL2gHa
Recruitment Strategies That Take a Chance
In academic recruitment settings, including faculty hiring and PhD admissions, committees aim to maximize the overall quality of recruited candidates, but there is uncertainty about whether a candidate would accept an offer if given one. Previous work has considered algorithms that make offers sequentially and are subject to a hard budget constraint. We argue that these modeling choices may be inconsistent with the practice of academic recruitment. Instead, we restrict ourselves to a single batch of offers, and we treat the target number of positions as a soft constraint, so we risk overshooting or undershooting the target. Specifically, our objective is to select a subset of candidates that maximizes the overall expected value associated with candidates who accept, minus an expected penalty for deviating from the target. We first analyze the guarantees provided by natural greedy heuristics, showing their desirable properties despite the simplicity. Depending on the structure of the penalty function, we further develop algorithms that provide fully polynomial-time approximation schemes and constant-factor approximations to this objective. Empirical evaluation of our algorithms corroborates these theoretical results.
Accept
We thank the authors for their submission. The paper studies a hiring problem, arguably a more realistic formulation compared to prior work. An agent has access to a pool of candidates, each with its own value and probability of accepting a hiring offer. The goal is to select a batch of candidates so as to maximize the cumulative value of the candidates accepting the offer, minus a penalty term for deviating from a target number of candidates. The problem is fully explored, considering multiple types of penalty terms. Optimal algorithms are provided whenever possible, and otherwise approximation algorithms are shown, all attained using simple greedy strategies. Synthetic experiments are also provided. The paper is well-written and easy to follow.
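The objective and the prefix-greedy strategies summarized in this meta-review are compact enough to state in a few lines. The sketch below is our own illustration, not code from the paper: the candidate numbers, the Monte-Carlo evaluation of the penalty term, and the helper names are all assumptions. It scores every prefix of a candidate ordering against the expected accepted value minus the expected deviation penalty, and keeps the best batch.

```python
import numpy as np

def expected_utility(chosen, x, p, M, lam, penalty, n_mc=2000, seed=0):
    # Monte-Carlo estimate of E[value of accepted candidates]
    # minus lam * E[penalty(#acceptances - M)].
    rng = np.random.default_rng(seed)
    accept = rng.random((n_mc, len(chosen))) < p[chosen]   # Bernoulli acceptances
    value = (accept * x[chosen]).sum(axis=1)
    deviation = accept.sum(axis=1) - M
    return (value - lam * penalty(deviation)).mean()

def best_prefix(order, x, p, M, lam, penalty):
    # A prefix-greedy strategy: score every prefix of the ordering
    # and keep the best-scoring batch of offers.
    best_set, best_u = order[:1], -np.inf
    for k in range(1, len(order) + 1):
        u = expected_utility(order[:k], x, p, M, lam, penalty)
        if u > best_u:
            best_set, best_u = order[:k], u
    return best_set, best_u

x = np.array([0.9, 0.8, 0.7, 0.6, 0.5])    # candidate values
p = np.array([0.3, 0.9, 0.5, 0.95, 0.8])   # acceptance probabilities
one_sided_l1 = lambda d: np.maximum(d, 0.0)
order_xp = np.argsort(-(x * p))            # an "xpGreedy"-style ordering
print(best_prefix(order_xp, x, p, M=2, lam=1.0, penalty=one_sided_l1))
```

For the value term the expectation has the closed form $\sum_i x_i p_i$, but the Monte-Carlo estimate keeps the sketch agnostic to the penalty shape (one-sided or two-sided, $L_1$ or $L_2$).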
train
[ "eUfdZLFE8V", "GNfC88gD2Vh", "4iuGiJJzgZIG", "3IKsiNEHCCU", "qtWb9XwPs3o", "v6LindN6rao", "drxf5aJf8Sg", "ENMtGjIo-BH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are grateful for the kind review. ", " ### Questions\n\n> In all experiments presented, xGreedy seems to be equivalent to, if not better than, LowValueL1+. What is the reason behind this result? Is it because LowValueL1+ was run with $\\tau=0$? If so, how sensitive is LowValueL1+ to the choice of $\\tau$ and how would one tune $\\tau$ for their specific problem instance? Is it because $p_{min}$ is not low enough to see the advantage of LowValueL1+ being a constant-factor approximation (as opposed to xGreedy whose performance depends on $p_{min}$)? If so, how does the performance difference between xGreedy and LowValueL1+ varies with respect to $p_{min}$? Since the performance xGreedy is lower bounded in terms of $p_{min}$, I assume it should be possible to make LowValueL1+ arbitrarily better than xGreedy. Is it possible to show this empirically? I believe more experiments are needed to answer these questions.\n\nThe reviewer's point is well taken. In response, we have included a broader range of experiments in Section E of the revised appendix. Note that even in our original experiments, xGreedy does not always outperform LowValueL$_1^+$: in particular, it underperforms relative to both LowValueL$_1^+$ and xpGreedy in the settings with intermediate penalty regularizer and no correlation (see Figure 1, center) and with high penalty regularizer and negative correlation (Figure 2, right). As $\\tau$ controls the number of small solutions which are checked by brute force, choosing $\\tau > 0$ matters most for settings when the target M is small. If M is large and there are few candidates with high $x_i$ and $p_i$, then we should not expect the choice of $\\tau$ to impact the quality of the solution. Finally, it is possible demonstrate instances where LowValueL$_1^+$ dramatically outperforms xGreedy, by constructing distributions which mimic the instance used to show that xGreedy is at most a p-min approximation (as described in the proof of Remark 5 in Appendix A.2). \n\n> Also, I think evaluating the performance of the strategy proposed for $L_2$ with respect to the greedy strategies would have still been valuable. I understand that $L_2$ can be made arbitrarily close to being optimal but how would that compare to the greedy strategies? LowValueL1+ had different theoretical guarantees that made it more preferable over the greedy strategies, but the experiments revealed that xGreedy could perform better than a practical implementation of LowValueL1+ for most problem instances considered.\n\n\nOur additive inapproximability results notwithstanding, evaluating the greedy algorithms on instances with two-sided $L_2$ error may well shed light on how they interact with the problem. In response to the reviewer's question, we now include experiments for this objective in Section E.2 of the revised appendix. The new experiments suggest that xGreedy performs quite poorly relative to xpGreedy for two-sided objectives, and we include a tentative explanation of why this might be the case. We paste it here for convenience:\n\n\"*We observe that xpGreedy seems to dramatically outperform xGreedy when the loss is two-sided. We provide an informal explanation, using the two-sided $L_2$ loss as an example. Under this loss, a candidate $i$ contributes $x_i p_i$ to the reward term of the objective, while contributing $p_i(1 - p_i)$ to the variance of the realized size. 
When faced with two candidates of equal value $x_i$, we should therefore at the margin prefer the candidate with the higher probability, since this candidate contributes less to the variance per contribution to the reward. Note that for sufficiently large $\\lambda$, two-sided loss functions encourage algorithms to choose solutions with expected size very close to $M$, meaning that the variance and the two-sided $L_2$ loss are nearly equal. Here xpGreedy prefers this higher-probability candidate, while xGreedy is indifferent, explaining the superior performance of xpGreedy.*\"", " \n### Weaknesses\n\n> There are a few inconsistencies in the numerical experiments section. In the description of Figure 1, the paper stated \"n=50\". However, in line 304, at the end of the line, the paper stated \"n=40\". In line 337, \"\\lambda = 1\", there is no plot for such value. Instead, in Figure 2, there are plots for $\\lambda$ with values 1.5, 5, and 30.\n\nWe used $n=50$ in the experiments. We appreciate the proofreading, and have corrected these typos in the revised version of the paper. \n\n### Questions\n\n> While the greedy heuristic is a natural benchmark, is there any other algorithm/benchmark that can be used for comparison in this setting?\n\nWe believe the presented greedy heuristics are intuitive and natural, and for additional comparison, we now also benchmark against OPT for small instances (Figure 3 and Figure 4 in Section E.1 of the revised appendix). For completeness, here are two more heuristics we considered: (1) pGreedy, selecting the best prefix of candidates when ordered by decreasing probability; and (2) random, selecting the best prefix of candidates when ordered randomly. pGreedy fails when the highest-probability candidates have very low values, and random fails when only a small proportion of candidates have high values and probabilities. We opted not to include these two heuristics due to these practical weaknesses. If there are suggestions for other compelling baselines, we are open to including them.\n\n> Although the result for L_2 loss is discussed, does the same result hold for the two-sided L_1 loss?\n\nUnfortunately, it does not. The two-sided $L_2$ loss is handled by writing the problem as an unconstrained binary quadratic program, with a constant rank for the quadratic coefficient matrix. This approach fails for the two-sided $L_1$ loss, and we did not identify other means to provide an approximation to it.\n", " We believe we can provide satisfying answers to some of the reviewer's main questions and concerns. \n\n**Weakness 1**\n\nOur perspective on this issue is different. We agree that in practice we don't know the penalty function — but this actually aids the deployment of our methods. Instead of coming up with an exact penalty function (which we all agree is impossible), a user of our method would choose a \"standard\" penalty function whose structure seems to best match the scenario at hand. If the user doesn't know what penalty function to choose either, they may try different ones and inspect how their solutions differ. From this viewpoint, the goal should be to provide a rich menu of penalty functions to choose from; our results take significant steps in this direction.\n\n**Weakness 2 and Question 3**\n\nFinding the optimal solution appears to be intractable on many instances. 
However, in response to the reviewer's comment, we have included figures which compare our various algorithms with the optimal solution on smaller instances (where we can compute the optimal solution by brute force) in Section E.1 of the revised appendix. The figures demonstrate that, on these small instances, the algorithms are close to optimal.\n\nThe drop in objective function for negative correlation does not point to a weakness of the algorithms. Rather, this is a fundamentally more difficult setting, and the optimum is indeed lower (as evidenced by the experiments in Section E.1 of the appendix). The reason is quite intuitive: in general, we desire high values and high probabilities (i.e., less variance). Under positive correlation, these two criteria coincide. The high-value candidates have high probabilities, so we recruit many high-value candidates whose probabilities are close to one, \"filling up\" the capacity while introducing very little variance for the penalty term. Conversely, under negative correlation, high-value candidates have low probabilities, introducing a fundamental trade-off as to whom to select.\n\n\n**Question 1**\n\nBy quadratic factorization we mean rewriting the two-sided $L_2$ objective (Equation 3) as the objective of a quadratic program (Equation 5 in Appendix B). That is, the two-sided $L_2$ objective is rewritten in the form $x^T A x + c^T x + d$, where $x_i$ is the binary decision variable for whether to select candidate $i$. Obtaining this form with a matrix $A$ of constant rank is what allows us to construct an FPTAS for the two-sided $L_2$ objective. The $L_2^+$ objective does not take this form, and this approach fails.\n\nNote that we do not claim in Lines 215-218 that the one-sided loss $L_2^+$ is necessarily inapproximable due to its nonlinearity; only that our algorithm for $L_1^+$ does not readily generalize. Concretely, our algorithm for one-sided $L_1^+$ attains a constant-factor approximation by first grouping candidates based on their values and addressing each case separately (Algorithm 2). The low-value case (Algorithm 4 in Appendix C.1) relies on rounding the probabilities of the instance and bounding the effect of this rounding on the objective; the high-value case uses the fact that for the $L_1^+$ objective, if the value $x_i > \\lambda$ then adding item $i$ to the selection always improves the objective. In either case, the same analysis does not apply to $L_2^+$.\n\n**Question 2**\n\nThe distributions proposed by Purohit et al. capture our intuition about real-world phenomena. On the one hand, candidates may be high value because they are an excellent match with our institution, and are therefore more likely to accept an offer (positive correlation). On the other hand, candidates may be high value because they are objectively strong, and would therefore have many competing offers and be less likely to accept ours (negative correlation). Regarding performance on data from other distributions, it is difficult to give a precise answer without referring to a specific distribution. That said, we do note that our theoretical results — while more conservative — do not make specific distributional assumptions, and that is an advantage of our analysis. \n\n**Question 4**\n\nWe agree that it is worth better understanding the empirical performance of our algorithms for other losses. 
In response to the reviewer's comment, we have added experiments evaluating the greedy heuristics against the one-sided $L_2$ loss function as well as two-sided loss functions in Section E.2 of the revised appendix. The experiments suggest that the performance of xGreedy relative to xpGreedy is similar under the one-sided $L_1^+$ and $L_2^+$ losses, with more marked differences between the two-sided $L_1$ and $L_2$ losses.\n\n**Question 5**\n\nThe notation $x|_S$ is the restriction of the $n$-dimensional vector $x$ to the subset $S\\subseteq [n]$ of its coordinates. We thank the reviewer for pointing this out, and we now introduce this notation in the revised version of the paper (Lines 250-251).", " The paper considers a problem of academic recruitment where the committee aims to select a set of candidates that maximizes the total expected reward minus the penalty caused by deviation from the target (i.e., over-hiring or under-hiring). It considers three simple greedy algorithms: (1) PGREEDY: selecting candidates in order based on their likelihood to accept the offer $p_i$; (2) XGREEDY: selecting candidates in order based on their value (qualification) $x_i$; (3) XPGREEDY: selecting candidates in order based on $x_i\\cdot p_i$. The authors conduct an accuracy analysis of these greedy algorithms under certain settings and show both positive and negative results. For different penalty functions (i.e., one-sided and two-sided linear/square loss), the paper also proposes algorithms that attain fully polynomial-time approximation schemes and constant-factor approximations to the optimal objective. Strengths: \n\n1. The paper studies an interesting problem. In practical academic recruitment, multiple offers are typically made in parallel and the target number of positions is typically treated as a soft constraint. This is captured in the model considered in this work.\n2. The paper considers several simple greedy algorithms and conducts theoretical analysis. These results provide insights for better hiring practices. \n\n\nWeaknesses:\n\n1. My biggest concern is the restrictive conditions/assumptions of the results. The analytical results rely heavily on the structure of the penalty functions (i.e., the induced cost as a function of the target and the actual number of hires). The paper primarily considers two types of penalty function (one-sided linear loss and two-sided square loss), and most results and algorithms are limited to these function types. In practice, we don't know the exact penalty function, and we see from the results that different penalty functions may lead to completely different conclusions. As a result, the proposed method may not be easily deployed in practice. \n2. The experiments only validate the proposed algorithms in synthetic settings; there is no real-world data that can be used for validation. Moreover, there is no other baseline method to compare with. The experiments show that when the candidate's value $x_i$ and probability $p_i$ are negatively correlated (a realistic setting), the accuracy seems to be very low (e.g., in Figure 1, accuracy drops by almost half compared to the positive correlation case). It's not clear how serious such a decrease is. What is the best attainable objective value $U(S*)$ for each target? It is important to highlight $U(S*)$ in the figures.\n \n1. In lines 215-218, the authors explained briefly why it's difficult to analyze the algorithms under one-sided L2+ loss. Can you elaborate more? What does “quadratic factorization” mean, and why is it an important property the objective needs to have? Why is linearity necessary for analyzing the one-sided loss? \n\n2. In experiments, the authors construct synthetic data based on the approach of Purohit et al. Can you explain why this is good modeling? How will the algorithms perform on data with other distributions?\n\n3. The experiments show that when $x$ and $p$ are negatively correlated, the performance of all algorithms drops significantly compared to positive and no correlation (Figure 1). Does it suggest that algorithms should be developed by incorporating such a negative correlation?\n\n4. In practice, penalty functions may deviate from one-sided linear functions. While the performance of the algorithms is theoretically guaranteed under a one-sided linear function, they can still be evaluated empirically on other types of loss functions. I suggest the authors conduct experiments on settings with other loss functions to examine the robustness of the algorithms. \n\n5. Algorithm 2, missing “{“ in lines 1-3; what are the notations $x|_{N_L}$, $p|_{N_L}$? Yes", " The paper considers a recruitment problem in which an agent can make offers to a set of possible candidates. Each candidate is characterized by a value and a probability of accepting the offer. The agent must make a single batch of offers, choosing a subset of the possible candidates. The objective is to maximize the expected value associated with the candidates that accept, minus a penalty term that depends on the deviation from a target number of desired acceptances.\nThey consider different types of deviations: $L_1$ and $L_2$ losses, and $L_1^+$ and $L_2^+$ losses in which recruiting fewer candidates than the target number is not penalized.\nThe authors characterize the computational complexity of computing optimal strategies and provide approximation results when the problem is intractable.\nFinally, they perform an experimental analysis. The paper introduces a new computational problem. A similar problem was studied with sequential interaction, while this paper considers a different interaction with a single batch of offers. This model is more realistic in many settings.\n\nThe paper is well written, easy to follow, and provides a detailed analysis of previous works.\n\nThe results are technically involved and provide a complete picture of the problem. No questions. Yes", " The paper investigates the problem of batch hiring in an academic recruitment setting. While previous works consider a hard budget constraint on the number of acceptances, the authors argue that this assumption is inconsistent in practice and carries computational hardness. Instead, they view the budget as a soft constraint with a penalty for overshooting the target. Under these assumptions, the paper shows that greedy heuristics demonstrate some desirable properties. Finally, the authors provide an algorithm that approximates the optimal solution within a constant factor in polynomial time. Strengths:\n\n- The paper provides a novel algorithm for the case of one-sided L_1 loss between the realized outcome and the target number of hires.\n\n- The paper is well-written and well-organized. \n\n- The related work section is sufficient. \n\n- The discussion of the greedy heuristics' performance is sufficient and provides good baseline intuition for the algorithm.\n\n- The theoretical guarantee of algorithm ONESIDEDL1+ is significant and improves upon the greedy heuristics in experiments. 
\n\n- The theoretical claims are supported by discussions that provide intuition for the proofs.\n\n- The numerical experiments section contains a sufficient discussion of the setting and explanations for the choices of hyperparameters. The results of these experiments support the theoretical claims made. \n\nWeaknesses:\n\n- There are a few inconsistencies in the numerical experiments section. In the description of Figure 1, the paper stated \"n=50\". However, in line 304, at the end of the line, the paper stated \"n=40\". In line 337, \"\\lambda = 1\", there is no plot for such value. Instead, in Figure 2, there are plots for $\\lambda$ with values 1.5, 5, and 30.\n - While the greedy heuristic is a natural benchmark, is there any other algorithm/benchmark that can be used for comparison in this setting? \n\n- Although the result for L_2 loss is discussed, does the same result hold for the two-sided L_1 loss? The authors have adequately addressed the limitations and potential negative societal impact in their paper. ", " This paper proposes strategies to find the optimal batch of offers to make in a recruitment scenario, where the goal is to maximize the quality of the recruited candidates but there is a soft constraint on the maximum number of candidates that can be recruited (and it is uncertain whether candidates will accept an offer or not). When the penalty function that characterizes the soft constraint is based on the mean squared error, the authors provide a polynomial-time strategy that can be made arbitrarily close to being optimal; when the penalty function is one-sided and linear, they provide a polynomial-time strategy that has a constant approximation factor. The problem seems to be well motivated. I especially like the explanation of why the sequential and hard-constrained setting studied previously in the literature fails to capture the more practical complexities of recruitment (as opposed to the single-batch and soft-constrained setting considered in this paper). However, most of the evidence for this seems to be anecdotal; maybe it could be supported with more references.\n\nThe problem is explored fully. In particular, a practical strategy is proposed for both a two-sided penalty function (where recruiting fewer candidates than targeted is penalized explicitly) as well as a one-sided penalty function (where recruiting fewer candidates is not penalized explicitly, but it would implicitly lower the total quality of the recruited candidates). However, it is not discussed when one strategy/penalty function would be preferred to the other. While a one-sided penalty function seems natural to me, it would have been nice to have examples where under-recruiting would require an explicit penalty (on top of the opportunity cost of not recruiting as many candidates as possible).\n\nThe experiments seem to be the weakest part of the paper; I have put my questions regarding the experiments below. In all experiments presented, xGreedy seems to be equivalent to, if not better than, LowValueL1+. What is the reason behind this result? Is it because LowValueL1+ was run with $\\tau=0$? If so, how sensitive is LowValueL1+ to the choice of $\\tau$ and how would one tune $\\tau$ for their specific problem instance? Is it because $p_{\\text{min}}$ is not low enough to see the advantage of LowValueL1+ being a constant-factor approximation (as opposed to xGreedy whose performance depends on $p_{\\text{min}}$)? 
If so, how does the performance difference between xGreedy and LowValueL1+ vary with respect to $p_{\\text{min}}$? Since the performance of xGreedy is lower bounded in terms of $p_{\\text{min}}$, I assume it should be possible to make LowValueL1+ arbitrarily better than xGreedy. Is it possible to show this empirically? I believe more experiments are needed to answer these questions.\n\nAlso, I think evaluating the performance of the strategy proposed for $L_2$ with respect to the greedy strategies would have still been valuable. I understand that $L_2$ can be made arbitrarily close to being optimal but how would that compare to the greedy strategies? LowValueL1+ had different theoretical guarantees that made it preferable to the greedy strategies, but the experiments revealed that xGreedy could perform better than a practical implementation of LowValueL1+ for most problem instances considered. One limitation is that the strategies proposed in the paper are specifically designed for $L_2$ or $L_1^{+}$ penalty functions, whereas a user might want to design their own penalty function according to the characteristics of their own problem instance based on historical data." ]
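The "quadratic factorization" raised in Question 1 of the reviews above admits an explicit reconstruction. Writing $s \in \{0,1\}^n$ for the selection and $N = \sum_i s_i B_i$ with independent $B_i \sim \mathrm{Bernoulli}(p_i)$ for the number of acceptances, and assuming the two-sided $L_2$ penalty, the following derivation is ours rather than a quotation from the paper:

$$
\begin{aligned}
U(s) &= \sum_i s_i x_i p_i - \lambda\,\mathbb{E}\big[(N - M)^2\big] \\
&= \sum_i s_i x_i p_i - \lambda\Big[\sum_i s_i p_i(1 - p_i) + \Big(\sum_i s_i p_i - M\Big)^2\Big] \\
&= -\lambda\, s^\top p\, p^\top s + \sum_i s_i\big(x_i p_i - \lambda p_i(1 - p_i) + 2\lambda M p_i\big) - \lambda M^2,
\end{aligned}
$$

using $s_i^2 = s_i$ and $\mathbb{E}[(N - M)^2] = \mathrm{Var}(N) + (\mathbb{E}[N] - M)^2$. The quadratic coefficient matrix $A = -\lambda\, p p^\top$ therefore has rank one, which is the constant-rank structure that the FPTAS for the two-sided $L_2$ objective relies on.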
[ -1, -1, -1, -1, 5, 8, 7, 6 ]
[ -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "v6LindN6rao", "ENMtGjIo-BH", "drxf5aJf8Sg", "qtWb9XwPs3o", "nips_2022_tadPkBL2gHa", "nips_2022_tadPkBL2gHa", "nips_2022_tadPkBL2gHa", "nips_2022_tadPkBL2gHa" ]
nips_2022_XByg4kotW5
When does return-conditioned supervised learning work for offline reinforcement learning?
Several recent works have proposed a class of algorithms for the offline reinforcement learning (RL) problem that we will refer to as return-conditioned supervised learning (RCSL). RCSL algorithms learn the distribution of actions conditioned on both the state and the return of the trajectory. Then they define a policy by conditioning on achieving high return. In this paper, we provide a rigorous study of the capabilities and limitations of RCSL, something that is crucially missing in previous work. We find that RCSL returns the optimal policy under a set of assumptions that are stronger than those needed for the more traditional dynamic programming-based algorithms. We provide specific examples of MDPs and datasets that illustrate the necessity of these assumptions and the limits of RCSL. Finally, we present empirical evidence that these limitations will also cause issues in practice by providing illustrative experiments in simple point-mass environments and on datasets from the D4RL benchmark.
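Before the reviews, it may help to fix ideas with a minimal tabular sketch of the RCSL recipe the abstract describes: fit the action distribution conditioned on state and return, then act by conditioning on a high return. The return-bin discretization and the nearest-bin fallback below are our own simplifications for illustration; the paper's analysis covers general function classes.

```python
import numpy as np

def fit_rcsl(trajectories, n_states, n_actions, n_bins=10):
    # Tabular RCSL: count actions conditioned on (state, binned return-to-go).
    counts = np.zeros((n_states, n_bins, n_actions))
    for traj in trajectories:                       # traj: list of (state, action, reward)
        rewards = [r for _, _, r in traj]
        for t, (s, a, _) in enumerate(traj):
            rtg = sum(rewards[t:])                  # return-to-go from step t
            b = min(int(rtg * n_bins), n_bins - 1)  # assumes returns lie in [0, 1]
            counts[s, b, a] += 1.0
    return counts

def rcsl_act(counts, state, target_return, n_bins=10):
    # Condition on a high target return; fall back to the nearest
    # populated return bin if the requested one was never observed.
    b = min(int(target_return * n_bins), n_bins - 1)
    for bb in sorted(range(n_bins), key=lambda k: abs(k - b)):
        if counts[state, bb].sum() > 0:
            return int(np.argmax(counts[state, bb]))
    raise ValueError("state never visited in the dataset")

# Toy usage: one state, two actions, action 1 tends to earn more reward.
data = [[(0, 1, 0.9)], [(0, 0, 0.1)], [(0, 1, 0.8)]]
counts = fit_rcsl(data, n_states=1, n_actions=2)
print(rcsl_act(counts, state=0, target_return=0.9))   # -> 1
```

The fallback rule is where the return-coverage condition discussed in the paper bites: if the dataset never achieves returns near the conditioning value, the sketch silently answers from a different return bin.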
Accept
This paper theoretically analyzes a newly popular class of RL algorithms, referred to as Return-Conditioned Supervised Learning (RCSL). The paper theoretically shows that this class of methods requires stronger assumptions than standard DP-based approaches for learning the optimal policy. The paper shows that the RCSL method requires near-deterministic dynamics as well as a priori knowledge of the optimal conditioning function to perform well, and constructs examples where the stated assumptions are necessary for empirical performance. This paper is solid on all four axes: originality, quality, clarity, and significance. All reviewers advocated for acceptance.
train
[ "_fzlLLAIOy", "FXUIIrtZ52q", "Jmz8e5BwH2I", "hSiukmIQef", "x_C-ZfMB-U", "ldH7l9wucwo", "9RCVDvDLYJP", "wiQ9caBS4n", "gL0em4Eoviv", "cVeFKbJqU2s", "MfU4GpRjgEN", "zu_3rwoZkDL", "IFI56J_2HUo" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for engaging in the discussion!\n\n1. It is still not clear to us why the reviewer thinks that RCSL is just BC. This does not seem to be substantiated and there are key differences, namely the ability to condition on returns and outperform the behavior policy.\n\n2. We want to reiterate that there is no need for the function class to be fully expressive. We agree that using tabular function approximation with full realizability of all policies in an adversarially constructed problem RCSL requires a larger function class than the policy in a DP algorithm. But no one is advocating for this function class! These are methods intended to be used with function approximation and rewards are not naturally represented as discrete variables. In practice, we use neural nets of very similar expressivity for all methods. RCSL depends on approximation error as defined in assumption 3 of corollary 3 and this only requires approximation of the optimal policy not the full policy class (for example epsilon could be zero even if the function class only has one function if it is the optimal one). The reviewer instead is discussing the assumptions 1 and 2 in section 5.1 that are necessary for guarantees about the DP algorithm. In contrast, those DP assumptions require realizability and completeness in the space of Q functions rather than policies. The assumptions are not directly comparably, but we would argue that the DP assumptions are stronger since they require both a realizability assumption and a completeness assumption while RCSL only requires a notion of realizability. Of course it is possible that the RCSL map from S x R -> A is more complicated that the DP map from S x A -> R, but we don’t think there is a clear argument either way in general and think the two function classes should be viewed as having similar complexity in theory and in practice.\n\nPlease let us know if anything is still unclear or if you would be willing to raise your score.\n", " Thanks for engaging in the discussion!\nThere seem to be two fundamental issues raised by the reviewer: (1) the title and (2) the experimental results.\n\n1. On the title: there seems to be some confusion caused by our title. We ask the question: when does RCSL work for offline RL? We answer this question by our main contribution which is the theory that gives precise conditions under which RCSL works and examples where it fails. However, the conditions are in fact quite strong (and necessary) so that in practice we often expect RCSL to fail, which we see in the experiments. We want to make clear that our main claim is not that RCSL works, but that it can work under specific settings while failing under others. Would the reviewer be more satisfied if we changed the title to something like “Analyzing RCSL for offline RL”?\n\n2. We have added some additional experimental results and discussion to a new Appendix A for several different types of datasets in the three D4RL environments from the main text for a total of 9 experiments across 6 algorithms in D4RL now. This way we have coverage of multiple dataset types in each of three substantially different environments from the D4RL benchmarks (one from each of the classes of environments). 
The results confirm those that were already in the paper, mainly highlighting that RCSL is usually beaten by DP-based algorithms (as we might expect in cases with poor return coverage as the theory suggests), but there are situations where this may not be the case (specifically cases with high-quality trajectories in the dataset, but poor coverage over the state space, as we may expect from the theory). Hopefully these results are sufficient to convince the reviewer to raise their score. We again want to emphasize that our primary results are theoretical and these experiments are not our main contribution, they simply provide one demonstration of how to interpret the insights from the theory.\n\nLet us know if this response was satisfactory to raise your score or if there is anything else that is still unclear. \n", " I think all my concerns except for the evaluations on stochastic examples were well addressed in the rebuttal. I think this paper is worth being accepted by NeurIPS, but I will keep my score unchanged considering the current experimental results.\n\n", " Thanks for the detailed responses from the authors.\n\nThe reviewer appreciates the honesty of the authors. As discussed in the main review, the theoretical bound for DP-based methods and that of RCSL are not directly comparable. Also, it is better to put the theoretical results of top-% BC into the main text.\n\nWhereas the reviewer still deems that it is vital to figure out why RCSL fails while DP-based method works from a theoretical view, as this paper discusses \"when does RCSL work for offline RL\". The authors are also suggested to fulfill it through empirical experiments. While unfortunately, as the reviewer commented in the main review, the experiments are not sufficient. The authors can take some more datasets from MuJoCo or Adroit and consider the datasets from kitchen.\n\nWhen commenting \"RCSL fails/underperforms baselines on 3 out of 4 datasets\", the reviewer does not mean that the authors need to make RCSL beat DP-based methods. The reviewer wants to challenge the authors on the key claims of the paper, i.e., the title. With only one tested task where RCSL beats DP-based method, can we really say that the evidence is enough to convince readers on the proposed claims in the paper?", " Dear authors,\n\nThank you for your detailed responses. \n\n(1) I understand that \"this algorithm\" has never been studied before, but my point is that in this restrictive setting, \"this algorithm\" is just BC, so it's not a super novel result in its own right that RCSL, given enough data, converges to optimality. \n\n(2): The dimension blow-up arises from history, not an additional state observation. Consider your reward to be discrete over R integer values, and the environment is H step. Then, to have a look-up table storing all possible policies, you would need to increase the size of the look up table from |S||A| to roughly the order of |S||A||R||H|. \n\nNow, in the non-tabular setting this paper focuses on, the size of the function class is absorbed into the epsilons in Assumption 1 and 2. However, to obtain the same epsilon, the function class for RCSL policies necessarily is bigger than that of regular policies. 
This key detail is not accounted for when comparing the regret bound to DP algorithms.\n\nWhile the authors state in the rebuttal that for RCSL algorithms, the function class does not need to be able to represent all policies, Assumptions 1 and 2 in Section 5.1 appear to contradict this point (for the bound to be reasonably meaningful).\n\nFinally, I understand that in practice, the implementation would not be so different. But the authors have positioned this paper to be a theory paper, so this point should be properly addressed in theory. ", " First, we would like to thank the reviewer for their thoughtful review and positive evaluation of our paper!\n\nThe reviewer essentially raises two main areas for improvement: (1) more experiments and (2) a more explicit characterization of lower bounds.\n\n1. We agree with the reviewer that more experiments could always make our point stronger. We are working on the following experiments based on the reviewer's advice, to add for the camera ready: (1) we will run on more of the D4RL datasets with different behavior policies, (2) we will add a variety of behavior policies rather than just reward functions in the toy example to see how performance degrades/improves as the behavior changes, and (3) we are also planning to add a stochastic toy environment in response to reviewer a7eE that may also be of interest.\n\n2. We agree that formalizing the lower bounds could sharpen the contribution. This should not be too difficult as we have already derived the hard instances and presented them in the figures, but we will add a more formal treatment of lower bounds for the camera ready when we are allowed a longer page limit. The reviewer also suggests that we add more discussion of limitations, which we will add for the camera ready with the longer page limit. \n\nIf anything still seems to be lacking or unclear, please let us know.\n", " First, we would like to thank the reviewer for their thoughtful review and positive comments about the originality, significance, and quality of our paper. If we are able to resolve some misunderstandings, we hope that the reviewer will be willing to raise their score.\n\nWeaknesses:\n\na) Indeed the reviewer is right that some of the figures are slightly blurred; we will fix this in the revision. Thank you for catching this!\n\nb) We are happy to make the requested change to line 199 in the revision, but it is not clear to us why the first version is not acceptable to the reviewer. This seems to us to be a fairly standard way to write papers in machine learning, so we are not sure where else to make changes to satisfy the reviewer. If you are able to point out more specific instances that are problematic, please let us know and we can make the changes.\n\nc) We agree with the reviewer that the known bounds for the two algorithms are not directly comparable since they make different assumptions. In this section we hope to provide some contrast that weighs the relative strengths of the algorithms, and we think that our analysis provides solid high-level guidance comparing the assumptions needed for the different algorithms. However, we agree with the reviewer that these bounds are not perfect and will not tell us precisely when one algorithm will work and the other will not in practice, because they are not tight in every instance and they all rely on assumptions that may not be satisfied. However, we think the bounds and the discussion are instructive to give general guidance about the types of factors worth considering (e.g. 
stochasticity of the environment, coverage of optimal behavior in the dataset, coverage of optimal state distributions in BC, or expressivity concerns in DP).\n\nd) We want to make it very clear that we are not arguing that RCSL is better than the alternative algorithms we implement in the experiments. The purpose of our paper is to examine and analyze how RCSL compares to more well-established methods like BC and DP. To that end the toy experiments and examples provide the clearest evidence of the claims of the paper and then we consider the benchmark datasets to see how the insights can be applied to problems where the exact correspondence to the theory is less clear and where we would expect to see less clear cut results. We chose a sample of datasets from D4RL that have highly different properties to see what happens in different cases since we think there is not much insight to be gained from the minor variation across most of the D4RL datasets. However, we can run the experiments on more datasets for the camera ready, just let us know which datasets would be useful to see results from. \n\nQuestions:\n\na) We make a brief comment about the connection between RCSL and distributional RL to aid readers in understanding the connection between the two. Both algorithms deal with the same object which is the distribution over future returns. In distributional RL this distribution over returns is modeled explicitly, while RCSL implicitly uses this same distribution to define its policy. We can remove the comment if the reviewer finds it too confusing, but we thought it provided some useful context.\n\nb) Indeed, top-% BC suffers from the same issues as RCSL in stochastic environments since it filters using the stochastic returns. Since this algorithm is not the focus of the paper we did not explicitly create these example MDPs for top-% BC, but we can add them in the revision if the reviewer thinks it would be useful. \n\nc) RCSL training is simply supervised learning and as such is as easy to train and debug as supervised learning is. We agree that when using complex models like transformers this is not trivial, but supervised learning is widely regarded as easier to train than DP-based methods, this is all we are trying to claim. \n\nd) RCSL has the potential to be beneficial in cases where the dataset already contains some high-return trajectories, but where the dataset does not have good coverage over the state space (as would be necessary for DP-based methods). We see this somewhat in the results on the human-pen-v1 dataset. However, we agree with the reviewer that our results are largely negative about the RCSL algorithm. Again, we want to reiterate that we are not trying to claim that RCSL is the best algorithm. Rather, we are analyzing a recently created but not well understood approach. As to the questions of stochasticity, we have shown that RCSL as presented has fundamental issues. Trajectory transformer is a different method that is essentially model-based RL and as such can deal with issues like stochasticity. It is beyond the scope of the paper to comment more deeply about how to resolve these issues. \n\nHopefully these responses can clear up some misunderstandings and the reviewer is willing to increase their score. If anything is still unclear that may impact the score, please let us know! ", " First, we'd like to thank the reviewer for their thoughtful review and largely positive assessment of the paper. 
We will respond to the weaknesses first and then the question.\n\nWeaknesses:\n\n1. It is not exactly clear to us what the reviewer means here by an augmented state space. In some sense the idea of RCSL is to add the future return to the state during training, but this algorithm has never before been analyzed to our knowledge. If the reviewer has a specific result in mind, we would be happy to compare it to our work more directly. We also want to clarify that while the dynamics are (nearly) deterministic, the behavior policy that generates the data can be highly stochastic, so the return itself is not at all a deterministic quantity, even when conditioned on the state.\n\n2. We don't see any obvious issue with the comparison of function class sizes between RCSL and DP. There seem to be two points of misunderstanding here. (1) It seems that the reviewer is taking issue with our presentation of RCSL as history-conditioned rather than just conditioning on the most recent observation (and this is where the exponential dependence is arising). We presented RCSL as history conditioned to be fully general and to capture the Decision Transformer work, but in Markovian environments, all of our analysis holds for simple state-conditioned policies without any history. Moreover, in non-Markovian settings even standard DP algorithms will have to condition on the entire history. (2) It seems that the reviewer is assuming that our policy class must be fully representative of all possible policies (as in a full lookup table of all states and actions). This need not be the case, and for our positive results we only need the policy class to be able to realize the optimal policy to compete with it. This is a strength of RCSL compared to DP, since DP algorithms require stronger assumptions on the Q function class beyond realizability. Finally, we just want to note that in practice there is essentially no difference in the actual neural networks we use to represent the RCSL policies versus the DP policies and Q functions. For example the RvS policy, TD3+BC policy and Q function, and IQL policy and Q function in our implementation have the same base architecture and are not history dependent (with only minor differences in input space: (s, g) for RvS policies, (s) for DP policies, and (s,a) for DP Q functions). The decision transformer does indeed have a different architecture, but this is somewhat orthogonal to the point we are making about the underlying loss functions that guide our analysis.\n\nQuestions:\n\n1. We agree with the reviewer that section 6.2 is not a particularly novel experiment since we are just applying prior methods to a standard benchmark. We simply include it for completeness of our discussion and do not try to claim that it is our core contribution.\n\nWe hope that these responses clarify our contribution and encourage the reviewer to raise their score. If there are any remaining questions about our response that may impact the score, please let us know!\n\n", " First, we'd like to thank the reviewer for their thoughtful review and largely positive evaluation of the paper. Below we respond to each of the questions the reviewer raises and then the weaknesses (although there is some overlap).\n\n1. Perhaps we were not clear enough in our explanation of how RCSL works and we will try to clear this up here. There is no mismatch between g(tau) and f(s), they are both meant to represent the return over trajectories. 
At test time, we don't have access to the future, so RCSL conditions on a desired value for the future return of the trajectory from the current timestep onwards and we denote this desired return as f(s). We also want to emphasize that we are not proposing the RCSL algorithm as our contribution, rather we are providing an analysis on prior work that introduces the algorithm and pointing out the flaws and strengths of the approach.\n\n2. Indeed N is dataset size, thanks for catching our omission! We will add the definition to the revision.\n\n3. Initially we omitted stochasticity from the experiments since we already had simple examples where stochasticity provably causes RCSL to fail. However, the reviewer raises a reasonable point that a more full characterization of the issue in a toy experiment could improve our argument. We are working on implementing this now, and hope to have results soon.\n\nThe only weakness that the reviewer lists is to add an experiment to address question 3, which we are working on now. Hopefully this response clarifies things enough for the reviewer to increase their score. If there are any other weaknesses that need to be addressed to improve the score, please let us know.\n", " This paper studies the capacities and limitations of return-conditioned supervised learning (RCSL) methods and illustrates their relationship to behavior cloning (BC) and dynamic programming (DP). The main conclusion is that RCSL methods are theoretically incapable of achieving high performance when the underlying environment is non-deterministic and the return coverage is insufficient. These findings are both intuitive and theoretically sound. The novelty of this paper lies in the theoretical part. The authors start with the setting with infinite data and a fully expressive policy and then extend the derived theorems to a more practical setting with finite data and a limited policy class. The theoretical derivations are solid and the theorems are easy to understand. The core conclusion is that RCSL is fundamentally limited when the environment is stochastic and the return coverage is insufficient, which is intuitive and theoretically sound. However, from my perspective, I think the authors should strengthen the experimental parts. For example, to verify that RCSL is indeed limited to near-deterministic environments, the authors should conduct experiments in environments with different stochasticity. 1. In the training phase, RCSL minimizes the empirical negative log-likelihood loss described by Equation (1), while the test-time policy is defined as Equation (2). It seems that there is a mismatch between the training-time policy and the test-time one: the training-time policy is conditioned on the return of an entire trajectory $g(\\tau)$ but the test-time policy is conditioned on a condition function $f(s)$ which varies with the states in a trajectory. The authors should explain it. Besides, will this discrepancy affect the conclusion?\n2. The authors did not pre-define the variable $N$, which firstly appears in line 182 and Equation 9. (I guess it refers to the dataset size)\n3. Though the authors claim that RCSL is incapable of learning high-performance policies when the environment is stochastic, which is described by Theorem 1 and explained by Figure 1, I think the authors may construct toy MDPs to provide experimental evidence in the experimental part.\n It would be better if the authors could conduct experiments to support their theoretical claims. 
For example, experiments investigating the return coverage and stochasticity of the dataset and the environment, respectively.", " This paper theoretically analyzes a popular new class of RL algorithms, referred to as Return-Conditioned Supervised Learning (RCSL). The paper theoretically shows that this class of methods requires stronger assumptions than standard DP-based approaches for learning the optimal policy. The paper shows that the RCSL method requires near-deterministic dynamics as well as a priori knowledge of the optimal conditioning function to perform well, and constructs examples where the stated assumptions are necessary for empirical performance. Strength:\n1. This paper is solid on all four axes: originality, quality, clarity, and significance. It is the first paper to provide a theoretical analysis of a new class of practically relevant RL algorithms. The exposition is clear, and the theoretical results are mostly proper (though, I did not check the supplement fully). The experiments are well thought out, and complement the theoretical analysis. Given that this class of algorithms has received significant attention from the RL community, this work is timely and relevant. \n\nWeakness:\n1. It's unclear that the result that RCSL, under deterministic dynamics, can recover the optimal policy is novel. Under this assumption, the return can be practically treated as a new state dimension, and RCSL reduces to Behavior Cloning with an augmented state space.\n2. The dependence on the size of the policy class needs to be clarified when comparing RCSL and DP-based algorithms in Section 5. Even in the tabular case, just to represent the policy, it would require a lookup table that is exponentially larger than what is required to represent a standard policy for DP approaches (i.e., |S||A|). For this reason, it's not clear that the comparison is very meaningful. \n 1. It's unclear what insight Section 6.2 adds; I believe similar findings have already been reported in prior papers, for example, https://arxiv.org/abs/2112.10751. \n\nI am willing to increase my score if the authors can address my qualm about the theoretical analysis in the above section. Overall, this is a sound paper. Yes. ", " This paper discusses the conditions under which return-conditioned supervised learning (RCSL) works for offline reinforcement learning (RL). The paper's contributions are mainly theoretical, concerning the optimality of RCSL algorithms and their sample complexity. Under some strong assumptions, the authors present theoretical guarantees for RCSL given both infinite data and finite data. Furthermore, empirical evidence on the failure and success of RCSL methods is provided. **Originality and Significance**: this paper is quite original; it theoretically studies a very new topic in RL. The conclusions in this paper are of importance to the RL community and can benefit the progress of offline RL.\n\n**Quality**: this paper has good quality overall. The theoretical results seem sound to some extent. The reviewer randomly checked some proofs in the appendix (but not all of them), and they seem correct, despite some minor errors in some of them. For example, in appendix Eq. (56), the reviewer thinks it ought to be $\epsilon (\frac{1}{\alpha_p}+1)$ instead of $2\epsilon (\frac{1}{\alpha_p}+1)$. The theoretical contribution of this paper is clear. The empirical experiments, however, are somewhat disappointing.\n\n**Clarity**: this paper is of medium clarity. The reviewer likes the way the authors tell a story like this. 
However, the figures are of poor quality, which negatively affects the clarity of this paper. Some figures, e.g., Figures 3 and 4, are somewhat blurry.\n\n**Summary of Strengths**:\n\na) the discussed topic is important and interesting. It is vital to figure out when and under what conditions RCSL will work for offline RL\n\nb) the theoretical contribution of this paper is of importance for better understanding \"when return-conditioned supervised learning works for offline reinforcement learning\"\n\nc) the toy examples presented in the paper are quite helpful for understanding why RCSL fails in stochastic environments\n\n**Weaknesses**:\n\na) some of the figures are blurry.\n\nb) there are some grammatical issues, e.g., line 199 \"To see why this is the case, consider the MDP illustrated\" --> \"To see why this is the case, we consider the MDP illustrated\". Please do not use colloquial expressions in submitted papers. This is just one example, though; please double-check the paper to ensure that there are no grammatical issues.\n\nc) the discussion lacks depth. For example, in Section 5.1, the authors compare the sample complexity bounds of RCSL and dynamic programming (DP). The reviewer cannot tell what the point is here, since the basic assumptions for these bounds are different, whereas the bound for top-% BC is omitted from the main text. Meanwhile, the discussion in Section 5.1 does not reveal why RCSL fails while DP-based methods work.\n\nd) the empirical evidence is expected to be enriched. Currently, RCSL is only tested on some selected datasets, e.g., 1 D4RL MuJoCo dataset, 1 D4RL Adroit dataset, and 2 D4RL Antmaze datasets. RCSL fails/underperforms baselines on 3 out of 4 datasets. The reviewer agrees that MuJoCo/Antmaze datasets come from \"stochastic\" environments, but there is only a single test on datasets with a narrow distribution (e.g., human). The reviewer cannot say that this is enough to convince readers of the claims proposed in the paper. The reviewer has the following questions:\n\na) why discuss the connection of RCSL to distributional RL? They actually lie in quite different categories.\n\nb) the theoretical results in the appendix (on top-% BC) also involve the assumption of \"near determinism\". Will top-% BC then fail in stochastic environments? If we include all the data, i.e., top-100% BC or vanilla BC, are the derived bounds still meaningful? Any discussion here?\n\nc) the authors claim that DP-based methods are often \"unstable and difficult to debug\" (lines 371-372); are RCSL methods then easy to debug, and do they exhibit stability during training? Why are RCSL methods easy to debug? It is hard to say that the decision transformer (DT) is easy to tune and debug.\n\nd) what, then, are the actual benefits of RCSL, given that IQL, a DP-based method, shows very good performance over all of the tested datasets? Do there exist methods that can make RCSL work in stochastic environments? The authors mention that the trajectory transformer works on MuJoCo datasets as it involves planning; will planning break this dilemma? N/A", " The paper considers return-conditioned supervised learning (RCSL), a class of methods that learn a return-conditioned policy from offline data and recover the evaluation policy by conditioning on a given desired return. It then analyzes these methods theoretically, bounding the regret of RCSL policies under the assumption of close-to-deterministic transitions and rewards and return coverage of the behavior policy. 
Moreover, the paper provides examples of MDPs where RCSL is not able to recover the optimal policy, which demonstrate that the additional assumptions are required. Lastly, in the experimental evaluation, the paper shows toy-example scenarios where dynamic programming has advantages over RCSL and also evaluates performance on the D4RL dataset. The paper is clearly written, easy to follow, and studies a class of methods that has been of interest lately. The theoretical analysis clearly illustrates when we can expect RCSL to work, and the example MDPs are useful for understanding when it doesn't. The comparison to dynamic programming and behavior cloning is interesting, but comparing lower bounds would probably be more convincing. The experiment section illustrates the main points of the paper, but is relatively limited in scope. Overall I think this is a well-written and interesting paper. It would perhaps be useful to explicitly state a lower bound on performance, given that the introduction is largely motivated from that perspective. It would be nice to expand the experiment section a bit further: for the toy example, an ablation study on how performance changes depending on the behavior policy would be useful, and more D4RL baselines could be run. For the latter, it would be particularly interesting how performance compares on different behavior datasets for the same environment. While the theory clearly states its assumptions, limitations of the analysis are not discussed." ]
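To ground the RCSL recipe debated throughout these reviews, here is a minimal sketch of the generic method: fit a policy conditioned on the observed return-to-go by maximum likelihood (the empirical negative log-likelihood loss the reviews refer to as Equation (1)), then condition on a desired return at test time (Equation (2)). All names are illustrative; this is a hedged reconstruction of the generic approach, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RCSLPolicy(nn.Module):
    """Return-conditioned policy pi(a | s, g): the conditioning return g
    enters simply as one extra input feature alongside the state."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, states: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
        # Concatenate the conditioning return to the state features.
        return self.net(torch.cat([states, returns.unsqueeze(-1)], dim=-1))

def rcsl_loss(policy: RCSLPolicy, states, actions, returns_to_go):
    """Training: empirical negative log-likelihood of dataset actions,
    conditioned on the return actually achieved from each state onward."""
    logits = policy(states, returns_to_go)
    return F.cross_entropy(logits, actions)
```

At test time one feeds a desired conditioning return f(s) (for example, the highest return observed in the dataset) instead of an observed return-to-go, which is exactly where the reviews' concerns about return coverage and environment stochasticity bite.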
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "x_C-ZfMB-U", "hSiukmIQef", "gL0em4Eoviv", "9RCVDvDLYJP", "wiQ9caBS4n", "IFI56J_2HUo", "zu_3rwoZkDL", "MfU4GpRjgEN", "cVeFKbJqU2s", "nips_2022_XByg4kotW5", "nips_2022_XByg4kotW5", "nips_2022_XByg4kotW5", "nips_2022_XByg4kotW5" ]
nips_2022_G2kkDEujOw
Detection and Localization of Changes in Conditional Distributions
We study the change point problem that considers alterations in the conditional distribution of an inferential target on a set of covariates. This paired data scenario is in contrast to the standard setting where a sequentially observed variable is analyzed for potential changes in the marginal distribution. We propose new methodology for solving this problem, by starting from a simpler task that analyzes changes in conditional expectation, and generalizing the tools developed for that task to conditional distributions. Large sample properties of the proposed statistics are derived. In empirical studies, we illustrate the performance of the proposed method against baselines adapted from existing tools. Two real data applications are presented to demonstrate its potential.
Accept
This manuscript enjoyed universal recommendation of acceptance from the reviewers after the initial review phase. The reviewers did note several minor issues in these initial reviews, many of which were resolved by insightful responses from the authors. I encourage the authors to edit the manuscript to reflect the insights gained from this interaction when preparing an updated version.
val
[ "vCLuGpdBkT7", "Wbf6_GTNtyi", "RUhnJoeQENW", "raLyEEEBk9n", "FMgFC3oro57", "VUtw1EKJqLc", "a5DdXDn4V3v", "6OMlzfxnXtf", "-Bq5pRuM8SG", "_jBUbG196xD", "CsEUkqcV07V" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the suggestion. We will look carefully at multiple points in the real data for the final paper (it is easy to implement a binary segmentation algorithm - one for which we have no theoretical guarantees). ", " Dear authors,\n\nThank you for your response and for adding those discussions to the manuscript. I know it is more of a gimmick, but for the real-world experiments you have chosen, multiple change point detection would be a great extension. So please consider adding a simple experiment to the appendix with either windowing or binary segmentation, even though there is no theoretical guarantee for this. To me, it would nicely round up the story for the experiments. However, this is not critical and does not change my evaluation of the paper.\n\nBest regards\n\n", " We thank the reviewer for the encouraging feedback and insightful comments!\n\n**Whether our method detects a second change point around February 2016**\n* No, our method is designed for a single change point setup, so once we find the first change point, we stop. But we agree that generalizing to multiple change points setting is a realistic and interesting direction for future work. As the reviewer mentioned, a sliding window approach is a possibility; and simple binary segmentation is also worth investigating. The theoretical analysis that is needed to guarantee that the proposed method can still work when multiple change points exist is less straightforward and we leave it for future work. We will add some discussion to this generalization in our final version. Thank you for raising this important point!\n\n**Whether the method is able to detect less abrupt changes in practice**\n* This is an interesting question for which we currently do not have an answer. Note that for gradual changes, defining a proper type of alternative is more difficult. In Task II (conditional mean shift), we agree that we can define the alternative as a smooth function added to the original mean function (which is also the setup considered in some previous literature in this strand, for example, [1]). But for Task I, this is less obvious. We will add some discussion to this question in the final version and leave it as an interesting direction for future work.\n\n**References**\n\n[1] Michael Vogt. Testing for structural change in time-varying nonparametric regression models. 398 Econometric Theory, 31(4):811–859, 2015.\n", " We thank the reviewer for the positive feedback and constructive comments!\n\n**On the choice of $h_X$ or $h_Y$**\n* As it is common for all kernel methods, the bandwidth is either set a priori or it requires tuning. In our simulations we tune it; in our real data analysis, we subjectively set a value $h_X^2=h_Y^2=0.1$ without tuning and the proposed method still works well. \n\n**On the permutation/bootstrap procedure**\n* The reviewer is correct - permutation/bootstrap procedures can be expensive. In the nonparametric setup of our paper, deriving an accurate analytic formula for p-values can be very difficult, leading to our adoption of this option. Please note that this has been used for some previous change point methods, for example, [1].\n\n**Connection to standardization of $\\hat\\Delta_t$ and CUSUM**\n* Thank you for the insightful comment! Yes, they indeed share many similarities both from their form and the mathematics hidden behind. We will add more discussion to this point in the final version.\n\n**Is it possible to have an asymptotic requirement i.e. 
as sample size $n$ goes to infinity, $\Delta$ remains bounded above zero?** \n* If we understand correctly, the reviewer is saying that, from Theorem 5.1, the proposed method succeeds only when $\Delta\ne 0$, and the reviewer is asking whether it can also work when $\Delta$ goes to zero at a certain rate. We note that the final statistic we use is $\hat\Delta_t=[t(n-t)/n]\tilde\Delta_t$. Our hypothesis is that the proposed method can still work as long as $n\Delta$ goes to infinity, although a strict conclusion will definitely involve more careful analysis.\n\n**Are we allowed to have discrete covariates $x$'s in the settings the authors proposed?** \n* Yes, the methodology works for discrete covariates as long as the semi-metric space they lie in satisfies the technical assumptions in Section 5.\n\n**How is the empirical power calculated in the experiments in Section 6?** \n* It is calculated using bootstrap distributions. In more detail: we first calculate $\max_{n_0\le t\le n_1}\hat\Delta_t$ using the original sequence $\{(x_1,y_1),...,(x_n,y_n)\}$, and let us denote its value as $S$. Then we use permutation/bootstrap to get a new sequence, say, $\{(x^1_1,y^1_1),...,(x^1_n,y^1_n)\}$, and we calculate $\max_{n_0\le t\le n_1}\hat\Delta_t$ on this new sequence. Say we get $S^{(1)}$. We repeat this re-sampling step $m$ times and get $S^{(1)}, …, S^{(m)}$. The p-value is then calculated as $1/m\sum_{i=1}^mI(S^{(i)} \ge S)$, i.e., the proportion of resampled statistics that are at least as large as the observed value $S$. We note that this is also the approach used in much of the existing change point literature, for example, [1][2][3].\n\n**References**\n\n[1] Dubey, P. and Müller, H.-G. Fréchet change point detection. arXiv:1911.11864, 2019.\n\n[2] Chen, H., Zhang, N., et al. Graph-based change-point detection. The Annals of Statistics, 43(1):139–176, 2015.\n\n[3] Chu, L., Chen, H., et al. Asymptotic distribution-free change-point detection for multivariate and non-Euclidean data. The Annals of Statistics, 47(1):382–414, 2019. ", " We thank the reviewer for the positive review and thoughtful questions!\n\n**Reasons for defining $\hat\Delta_t$ at the beginning of page 6**\n* We apologize for the confusion - we should have clarified the reasoning for the choice of $\hat\Delta_t$, and we will certainly do it in the final version. The motivation for defining $\hat\Delta_t$ is Theorem 5.1 in Section 5. In short, Theorem 5.1 studies the limiting behavior of $\tilde\Delta_t$, and says that its variance is bounded by the complicated multiplicative constant in the definition of $\hat\Delta_t$. So we use that constant to rescale $\tilde\Delta_t$. \n* The confusion might come from the way we organize the paper. Since the derivation of Theorem 5.1 requires a lot of technical assumptions, we deliberately keep it as a separate section at the end so readers can more easily focus on and understand the main method. Since this is causing some confusion, we will make the cross-reference clearer in the final version.\n\n**How realistic are the assumptions**\n* Thank you for raising this important issue! Our assumptions are technical, but in general they are mild and not that restrictive. For example, when the space of $x$ is compact, and $k_X$ and $k_Y$ are Gaussian kernels, assumptions 1, 3, and 4 are all satisfied. If we further assume the space of $x$ is a Euclidean space with a density function, then assumption 5 is also satisfied. 
Assumption 2 is probably the hardest to give an intuition for, but it essentially states that the conditional distribution $p(Y\mid X=x)$ changes smoothly, in the sense that $p(Y\mid X=x_0)$ is close to $p(Y\mid X=x_1)$ when $x_0$ is close to $x_1$. We will add a paragraph that details these in the final version.\n\n**Relevance of Theorem 5.1 in the context of finite data sets**\n* The results in Theorem 5.1 are purely asymptotic; the theorem does not consider a finite sample size setup. \n* As the reviewer points out, a finite sample theoretical result would be needed if we were to tell 'how big' the abrupt change should be in order for the proposed method to detect it. However, in our nonparametric setup, the finite-sample theoretical analysis can be technically challenging. We agree with the reviewer that this issue is worth looking into, and a feasible trade-off is perhaps to derive a bound (either finite-sample or asymptotic) that explicitly depends on the signal-to-noise ratio (which will involve the data distribution and the kernel we choose in some way). This is an interesting area for future work.\n\n**Computational complexity**\n* We thank the reviewer for raising this important issue. The computational complexity of KCE is $O(n^2)$, and that of KCD is $O(n^3)$. In comparison, the computational complexities of our baselines ($D_Y$, $D_{XY}$, [1]) are all $O(n^2)$. The running times for both are acceptable in our simulations with an $n=1000$ sample size: KCE takes about 1 second and KCD about 30 seconds to calculate the corresponding statistics in R on a 2017 MacBook Pro. We agree that for larger datasets this can be slow, and developing faster algorithms is an important topic for subsequent work. \n\n**References**\n\n[1] Clive R. Loader. Change point estimation using nonparametric regression. The Annals of Statistics, 24(4):1667–1678, 1996.\n", " We thank the reviewer for the positive review and thoughtful questions!\n\n**On more discussion on the transition from Task II to Task I** \n* Our general motivation for starting from Task II (the simplified problem) and later transitioning to Task I (the original problem) is that we find Task II to be a better understood problem in the literature, and a problem that is much easier to solve. The solution to Task II allowed us to immediately identify components (pairwise inner products between $y_i$ and $y_j$) which can be generalized to solve Task I by using an established kernel trick. \n* On why Kolmogorov's entropy is involved: in short, it is an important concept used for the technical part of the paper, both for assumptions and proofs. We next provide more details. Recall that Kolmogorov's entropy is needed when we standardize $\tilde\Delta_t$ so that it has the same asymptotic mean and variance across different $t$'s. To achieve that, we need to understand the limiting behavior of $\tilde\Delta_t$. Notice that for the simpler Task II, we can use Equation (6). For Task I, we have a similar expression\n$$\tilde\Delta_t=n^{-1}\sum_{i=1}^n\|\hat f_-(t,x_i)-\hat f_+(t,x_i)\|_{\mathcal{H}}^2 \quad\quad\quad (1)$$\n\nwhere $\hat f_-, \hat f_+$ are functions taking values in the infinite-dimensional space $\mathcal{H}$. Let us ignore all details here and focus on Equation (1). Understanding the limiting behavior of $\tilde\Delta_t$ ultimately requires deriving an upper bound. To bound each term in the sum in Equation (1) (say, the $i$-th term), we can derive a bound that depends on that particular $x_i$. 
But the $x_i$'s are random, and further we take the sum, so that bound will not work! In the end, we will need a 'uniform bound'. To obtain such uniform bounds, regularity conditions must be placed on the space of $x$'s. Kolmogorov's entropy is introduced to restrict the complexity of the space of $x$'s. We will add more discussion of this important point (thank you for raising it!) in the final version of the manuscript.\n\n**Missing references on CP detection with Bayesian approaches**\n* Thank you for pointing out this lack of references! We will add them as a separate paragraph in Section 2 in the final version.\n\n**Performance of proposed statistics when change point is close to 0 or 1**\n* Thank you for raising this important issue. Yes, the performance of our method will be affected by the value of $\rho^*$. When it is close to 0 or 1, it does not perform as well as when it is close to 0.5, for the reason pointed out by the reviewer. That is why we place a constraint for it to be bounded away from 0 and 1, in the sense that $0<\rho_0\le\rho^*\le\rho_1<1$, which is a common assumption in the existing change point literature; see, for example, [1][2][3].\n* To better answer your question, we perform an additional experiment for KCE with $\rho^*=0.9$, and all other settings exactly the same as in Experiment A in the paper (results with $\rho^*=0.7$ are directly taken from the paper):\n| $f_1(x)$ | $5x$ | $\cos(x)$ | $x^2$ | $abs(x)$ | $0.1\max(0,1-x)$ | $e^x$ | $1/[2(x+3)]$ |\n| ------ | ---- | ---- | ------| ----| ----------------| ---- | ---------- |\n| $\rho^*=0.7$ | $2.1\pm0.3$ | $2.7\pm0.5$ | $4.9\pm1.3$ | $2.9\pm0.5$ | $6.4\pm1.6$ | $1.9\pm0.4$ | $3.1\pm0.6$ |\n| $\rho^*=0.9$ | $3.4\pm0.9$ | $6.1\pm1.9$ | $17.4\pm6.3$ | $10.7\pm2.6$ | $6.0\pm2.0$ | $8.2\pm2.7$ | $15.7\pm6.8$ |\n \nNote that the performance does become worse from $\rho^*=0.7$ to $\rho^*=0.9$, but even for $\rho^*=0.9$ we find it to be good.\n\n**How to pick $c_2(x)$ in practice**\n* In practice we use permutations/bootstrap to calculate p-values, so $c_2(x)$ is not required. For the purpose of standardization, we do not actually need $c_2(x)$ since it does not depend on $t$, and it always appears as a multiplicative constant for the variance of the $\Delta_t$'s, so it can be safely canceled out (in the same way we cancel out any constants).\n\n**References**\n\n[1] Dubey, P. and Müller, H.-G. Fréchet change point detection. arXiv:1911.11864, 2019.\n\n[2] Chen, H., Zhang, N., et al. Graph-based change-point detection. The Annals of Statistics, 43(1):139–176, 2015.\n\n[3] Chu, L., Chen, H., et al. Asymptotic distribution-free change-point detection for multivariate and non-Euclidean data. The Annals of Statistics, 47(1):382–414, 2019. ", " We thank all reviewers for their positive feedback and constructive comments! \n\nWe are happy to see that all reviewers agree that this work tackles a problem relevant to the NeurIPS community and provides a novel and general solution, with theoretical guarantees and promising empirical results. \n\nWe appreciate all comments pointing out insufficiencies and providing ideas for future improvements, including more explanations/examples for the technical assumptions, and generalizing to multiple or gradual change points. Below we answer each reviewer's questions and issues raised as separate comments. 
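As a concrete illustration of the procedure spelled out in these responses (Nadaraya-Watson fits before and after each candidate time $t$, the scaled statistic $\hat\Delta_t=[t(n-t)/n]\tilde\Delta_t$, and the permutation p-value $1/m\sum_{i=1}^m I(S^{(i)}\ge S)$), here is a minimal sketch for the scalar Task II case. It assumes a Gaussian kernel and uses the simplified $t(n-t)/n$ scaling rather than the full standardization constant from Theorem 5.1; all names are illustrative, not the authors' implementation.

```python
import numpy as np

def nw_estimate(x_train, y_train, x_query, h):
    """Nadaraya-Watson estimate of E[Y | X = x_query] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_query - x_train) / h) ** 2)
    return np.sum(w * y_train) / (np.sum(w) + 1e-12)

def max_statistic(x, y, h, n0, n1):
    """max over t in [n0, n1] of [t(n-t)/n] * tilde(Delta)_t, where
    tilde(Delta)_t = mean_i (f_-(t, x_i) - f_+(t, x_i))^2."""
    n = len(x)
    best = -np.inf
    for t in range(n0, n1 + 1):
        diffs = np.array([
            nw_estimate(x[:t], y[:t], xi, h) - nw_estimate(x[t:], y[t:], xi, h)
            for xi in x
        ])
        best = max(best, t * (n - t) / n * np.mean(diffs ** 2))
    return best

def permutation_p_value(x, y, h, n0, n1, m=200, seed=0):
    """p-value: fraction of order-permuted sequences whose max statistic
    is at least the observed one, i.e. (1/m) * sum_i I(S^(i) >= S)."""
    rng = np.random.default_rng(seed)
    s_obs = max_statistic(x, y, h, n0, n1)
    exceed = 0
    for _ in range(m):
        perm = rng.permutation(len(x))  # permute the (x_i, y_i) pairs jointly
        exceed += max_statistic(x[perm], y[perm], h, n0, n1) >= s_obs
    return exceed / m
```

Note that a naive loop like this costs more than the $O(n^2)$ quoted above for KCE; an efficient implementation would reuse kernel weight sums across candidate times $t$.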
\n", " The paper studies a problem of detections and localizations of changes using the discrepancies between the Nadaraya-Watson estimator on data before and after a time. The paper develops the methods from scalar output cases to a general settings of outputs using kernel techniques.\nTo detect and localize the changes in data, a statistical test is proposed.\n Strengths:\n- The paper is well-written in general. It states the problem clearly, giving solutions for simple tasks then the genel ones.\n- The experiment results look interesting and promising. For example, in real world data, the method can detect change points which agree with actual events.\n\nWeakness:\n- Lack of discussion or emphasis on the transition from task I to task II. Specifically, it would be better to give an explanation as to why Kolmogorov’s entropy is involved. It might be technical but will provide a sense of directions for readers\n- Missing some references of CP detection with Bayesian approaches. I believe the covariate shift would be related to this line of research, especially there is work using kernel maximum mean discrepancy to tackle the covariate shift problems that resembles the approach in this paper.\n\nReference:\nGretton et al. 2008. Covariate Shift by Kernel Mean Matching. In Dataset Shift in Machine Learning.\n - Let’s consider a case $t=1$, obviously ${\\hat{f}}(t, \\cdot)$ (\\hat{f}{-}, sorry Latex does not work here) is a not-so-good estimator because it uses one data point. On the other hand $\\hat{f}_{+}(t, \\cdot)$ can be a reasonable estimation as it uses $n-1$ data points. I think the statistical test should rely on presumably perfect estimators. It seems like $\\tilde{\\Delta}_t$ may not reflect well on the statistical test when there is a big difference in the number of left-side and right-side data. Of course, the variance $c_2(x)/[\\rho (1-\\rho)]$ can describe this. Do you have any results showing that the method can perform well when $\\rho^*$ close $0$ or $1$ (i.e. $0.1$ or $0.9$)? \n\n- As stated in Remark C.2, I wonder how to pick $c_2(x)$ in practice? It can greatly affect p-values.\n \n", " This paper addresses the problem of detecting abrupt changes in the conditional distribution of two (multivariate) random variables, using a finite time indexed data sample. The presented proposal entirely builds on a non-parametric approach. The presented approach is theoretically studied and some consistency results are provided. An empirical validation is also presented with both synthetic and real data sets. \n Strengths:\n- The presented approach is quite general and its implementation is simple. It does not assume the existence of a likelihood model, in opposition to many previous proposals. \n\n- Authors provide a non straightforward theoretical analysis of the consistency of the presented approach. \n\n- The presented methodology applies to multidimensional problems, which is a problem not widely addressed in the existing literature. \n\nWeaknesses:\n- Some parts of the approach are not derived using well-defined principles. For example, at the beginning of Page 6: “we suggest setting” the tilde delta variable according to some equation whose rationale is not discussed and it is hard to understand. \n\n- All the theoretical results established in this work are based on complex technical assumptions which are hard to interpret and, in many cases, impossible to verify. 
\n\n- A central issue in this kind of problem is “how big” the “abrupt change” should be for the proposed method to be able to detect it from a finite data sample. This is not explicitly addressed in this work. \n\n- The computational complexity of the approach makes it impractical for many real-life applications. \n Could you please provide the reasons for the equation defining tilde delta at the beginning of page 6? \n\nHow realistic are the assumptions supporting your theoretical analysis? \n\nIs Theorem 5.1 of relevance in the context of finite data sets? \n Mostly yes. The only issue is the set of assumptions made for the theoretical results. See my question above. ", " The paper discusses a nonparametric change-point estimation methodology for change-point detection in paired sequences of observations. The authors propose a methodology based on a kernel smoothing estimator to detect change points in the conditional expectation. Further, they generalize the methodology to change-point detection in the conditional distribution via a novel kernel estimator. The asymptotic properties of the estimator have been derived. The synthetic results and real data analysis validate the performance of the proposed estimator. strengths\n\n(A) A novel change-point detection technique for paired sequences of observations\n(B) Smart use of the NW estimator for change-in-mean detection\n(C) novel theoretical results discussing the asymptotic properties of the proposed kernel change-point estimator for change detection in the conditional distribution\n\nweaknesses\n(A) It wasn't clear in both algorithms 1 and 2 how one should make the choice of h_X or h_Y.\n(B) the permutation/bootstrap procedure for p-value computation could be computationally quite expensive\n(C) the standardized version of $\hat{\Delta}_t$ is quite close to the CUSUM-type statistic applied in change point detection. Some discussion would have been useful there. Following are my queries for the author(s):\n(A) In proposition 1 we require $\Delta\neq 0$; is it possible to have an asymptotic requirement, i.e., as sample size $n$ goes to infinity, $\Delta$ remains bounded above zero?\n(B) Are we allowed to have discrete covariates x's in the settings the author(s) proposed?\n(C) How is the empirical power calculated in the experiments in section 6? Yes, they adequately addressed the limitations and potential negative societal impact of their work", " The paper presents a novel approach to detecting change points in conditional distributions, in particular for a setting relevant in machine learning where covariates are drawn iid from a fixed distribution, but the relation to the label changes. The idea is to derive a sequence of differences in conditional distribution, standardize this sequence, perform a hypothesis test of whether a change point exists at the maximum difference in an interval, and report this point. For detecting a change in the conditional expectation, the difference of conditional expectations before and after this point is estimated using the kernel-based Nadaraya-Watson estimator. In order to estimate differences in conditional distributions, the paper proposes to generalize the estimator to take into account all pairwise similarities of labels (measured by a suitable kernel). The paper then proves correctness of the approach. The method is empirically evaluated on synthetic data and two real-world use cases.\n\nThe paper is sound, the presentation is clear, the method is novel and interesting, and well placed within the literature. 
I vote for acceptance. Strengths:\n- tackles a problem relevant to the NeurIPS community\n- novel and sound method\n- solid theoretical analysis\n\nWeakness:\n- the real-world use-cases are a bit limited.\n- regarding the UK Stock experiment: there seems to be a second changepoint coinciding with the Brexit vote announcement in February 2016. Does your method detect this as well? The method is currently used to detect a single change point in a time series. It would be great to empirically evaluate whether it can also be used to detect multiple change points (e.g., by a windowing approach), in particular on real-world use cases, e.g., detecting stock market bubbles in stock prices. It would also be great to see whether the method is able to detect less abrupt changes in practice (e.g., by shifting the mean gradually in a short time interval from the initial distribution to the final one)." ]
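To make the CUSUM analogy from query (C) above explicit (the authors acknowledge the similarity in their response), the textbook mean-shift CUSUM statistic can be placed next to the standardized statistic quoted in the responses; the first display is the standard CUSUM form and is not an equation taken from the paper.

```latex
\[
  C_t \;=\; \frac{t(n-t)}{n}\,\bigl(\bar y_{1:t}-\bar y_{t+1:n}\bigr)^{2}
  \qquad\text{vs.}\qquad
  \hat\Delta_t \;=\; \frac{t(n-t)}{n}\,\tilde\Delta_t,
  \quad
  \tilde\Delta_t \;=\; \frac{1}{n}\sum_{i=1}^{n}
  \bigl\|\hat f_-(t,x_i)-\hat f_+(t,x_i)\bigr\|_{\mathcal H}^{2}.
\]
```

Both share the $t(n-t)/n$ weighting, which deflates boundary estimates computed from few observations; the difference is that the squared gap between sample means is replaced by an averaged squared discrepancy between the pre-$t$ and post-$t$ conditional fits.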
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 3 ]
[ "Wbf6_GTNtyi", "RUhnJoeQENW", "CsEUkqcV07V", "_jBUbG196xD", "-Bq5pRuM8SG", "6OMlzfxnXtf", "nips_2022_G2kkDEujOw", "nips_2022_G2kkDEujOw", "nips_2022_G2kkDEujOw", "nips_2022_G2kkDEujOw", "nips_2022_G2kkDEujOw" ]
nips_2022_8cC2JeUyz9
Inference and Sampling for Archimax Copulas
Understanding multivariate dependencies in both the bulk and the tails of a distribution is an important problem for many applications, such as ensuring algorithms are robust to observations that are infrequent but have devastating effects. Archimax copulas are a family of distributions endowed with a precise representation that allows simultaneous modeling of the bulk and the tails of a distribution. Rather than separating the two as is typically done in practice, incorporating additional information from the bulk may improve inference of the tails, where observations are limited. Building on the stochastic representation of Archimax copulas, we develop a non-parametric inference method and sampling algorithm. Our proposed methods, to the best of our knowledge, are the first that allow for highly flexible and scalable inference and sampling algorithms, enabling the increased use of Archimax copulas in practical settings. We experimentally compare to state-of-the-art density modeling techniques, and the results suggest that the proposed method effectively extrapolates to tails while scaling to higher dimensional data. Our findings suggest that the proposed algorithms can be used in a variety of applications where understanding the interplay between the bulk and the tails of a distribution is necessary, such as health and safety.
Accept
The paper proposes a new method for inference and for sampling in archimax copulas. All the reviewers praised the soundess and clarity of the paper, the novetly of the ideas and the experimental results. Copulas might not be one of the core topics of the NeuRIPS community, but the reviewers pointed out that: 1) the authors did a great job at explaining copulas to the ML community, a valuable tool to model extreme events. 2) the method builds a connection between copula and deep generative modeling, and hence opens new research directions. Hence, they all enthusiastically recommend to accept the paper, and I agree with them. Some of the reviewers [HCqW, 5Yjr] also supported the idea to highlight the paper (oral or spotlight presentation).
train
[ "u9bqGMdnKLI", "Ta3ZHzSIeRa", "yMLIUaTZw9h", "QCzw5H8JT0", "np0mhs2pQZR", "dcdtRX1qqy", "0f6s-kDhoXx", "2Fc5CVGD1Xb", "dzx55EZlCY3", "nryJ4J2iNCY", "ciUthYW50V5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your careful consideration and response to my comments. I have accordingly raised my score and I recommend acceptance of this paper. ", " Thank you very much for your careful responses. I am happy with your responses to my comments. In particular, my concerns given in comments (e) and (f) have been sufficiently addressed, and therefore I increased my score to 7 (Accept).", " Thank you for your response and addressing the comments! I will leave the score the same; I do recommend acceptance of the paper.", " We thank you for the very positive review, you brought up many thoughtful comments and interesting questions. The following is our response:\n\n**[Clarity]** Figure 1 is not referenced in the text. \n\nThank you for pointing out the error. We added a reference to Figure 1 in the revision, line 101 on asymptotic dependence and line 109 on radial envelope. We also enriched Figure 1 with an example of asymmetric dependence.\n\n**[Initialization]** Archimedean generator first set to $\\varphi(x)=\\exp\\\\{-x\\\\}$.\n\nThis is a very important question. We did not mention in the paper but we also considered initialization with different one-parameter families of Archimedean generators, with choice of generator based on the highest log-likelihood of transformed observation $\\xi$, equations (7) and (9). The parameter for each family may be obtained from an average of inversion of pairwise Kendall tau, as per the following equation [1], for each pair:\n\n$\\tau_{\\varphi,\\ell} = \\tau_\\ell + (1-\\tau_\\ell)\\tau_\\varphi,$\n\nwhere $\\tau_{\\varphi,\\ell}$ is the Kendall's tau of the Archimax bivariate marginal, $\\tau_\\ell$ is the Kendall's tau of the extreme value component and $\\tau_\\varphi$ is the Kendall's tau of the Archimedean component. An average of inversion of pairwise Kendall tau was employed in [2], with emphasis on the Clayton generator.\n\nInitialization with specific families of Archimedean generators might bias initialization, and thus we suggested initializing via the Archimedean generator first set to $\\varphi(x)=\\exp\\\\{-x\\\\}$ representing extreme value copulas and pre-processing the initial data to have extreme value dependence via block-maximas. This was also motivated by the experiment on extrapolating to extremes. \n\nWe added this alternative initialization procedure to the revision, in Appendix A.4.1 and briefly mention it as a footnote in Section 3.3.1 which describes the initialization procedure.\n\n**[Hierarchical Archimedean copula]**\n\nThis is a good direction for future work. The most straightforward way is to add hierarchy in the construction of the random variables $R$ and $\\mathbf{W}$ as in [3]. We have added a note on this in the revision. \n\n**[Architecture of generative neural networks]**\n\nThis is also another excellent direction for future work, such as to specify hierarchy and time dependence. We have also added this suggestion to the future work section of the revision.\n\n---\n\nReferences\n\n[1] P. Capéraà, A.-L. Fougères, and C. Genest. Bivariate distributions with given extreme value attractor. Journal of Multivariate Analysis, 72(1):30–49, 2000.\n\n[2] S. Chatelain, A.-L. Fougères, and J. G. Nešlehová. Inference for Archimax copulas. The Annals of Statistics, 48(2):1025 – 1051, 2020.\n\n[3] M. Hofert, R. Huser, and A. Prasad. Hierarchical archimax copulas. Journal of Multivariate Analysis, 167:195–211, 2018.", " We thank you for the overall positive and very detailed review. 
Your critique and suggestions are highly appreciated, and the following is our response:\n\n**[Quality]** more experimental results to support claims of significance and contribution.\n\nWe thank you for the suggestion, and we have added a number of additional experiments in the revision, including an application to out-of-distribution detection and another baseline, the t and skew-t copulas. We hope that these help bolster the claims of the paper by adding additional empirical evidence. \n\n**[Clarity]** motivating example to clarify why Archimax copulas are appropriate/necessary in critical scenarios. \n\nThis is a wonderful suggestion. One of the motivations for using Archimax copulas in critical scenarios is to model extreme data (e.g., very strong and rare earthquakes), where observations are rare, from a mix of moderately less extreme data (e.g., strong earthquakes), where observations are relatively more abundant. Another motivation for the use of Archimax copulas is the ability to model both the bulk and the tails with one copula. We added more explanation in the revision under the experiment on extrapolating to extreme rainfall in Section 4.\n\n**[Significance]** superiority on tasks where SOTA ML models fail\n\nThrough the original and additional experiments, we provided promising results on the task of extrapolating to extremes. Following your useful suggestion, we also demonstrate good performance on the task of out-of-distribution detection. \n\n**1.** complexity not mentioned, trade-off between flexibility and scalability.\n\nThere is a direct trade-off if the sizes of the supports for $R$ and $\mathbf{W}$ are fixed. This was briefly mentioned as a disadvantage of solving for $R$ and $\mathbf{W}$ as discrete random variables with fixed support sizes in Appendix A.3. In our case, we let them be outputs of generative neural networks. In addition, to speed up training, we use a smaller number of samples in the empirical expectations of $\varphi_\theta$ and $\ell_\theta$ and resample fewer times during training, as mentioned in Appendices A.2.1 and A.3. The computational complexity is given in terms of time taken in Appendices B.1 and B.2. All timings are with a 2.7 GHz Intel Core i7, 16GB 2133MHz LPDDR3.\n\n**2.** ablation studies of data efficiency. \n\nWe agree with your helpful suggestion; an ablation study with $n=200$ and $n=1000$ observations was given in Appendix B.1.\n\n**3.** non-tabular data, other NN architecture. \n\nWe thank you for the suggestion. While the primary focus of the paper is on describing the inference and sampling algorithms, extensions to non-tabular data should be straightforward. For example, the Archimax copula can be used for describing the dependencies of a graph neural network. The Gaussian copula in [1] may be replaced with the Archimax copula described in this paper. We have added these notes to the future work section of the revision.\n\n**4.** comparison of sampled marginals to original. \n\nWe compared the CvM distance between samples from the learned copula and the empirical copula. We also gave visual comparisons. Confirming a reasonable fit is necessary for mitigating risks due to model misspecification. The CvM distances and plots show that the dependence is captured correctly. Since the primary goal of the copula is to model the dependency, we focused on showcasing that aspect. 
We note that the marginals can be transformed fairly easily, such as via deep sigmoidal flows [2] or neural spline flows [3].\n\n**5.** experiment to increase significance of paper, e.g. out-of-distribution detection \n\nWe thank you for this excellent suggestion. We added an experiment on out-of-distribution detection to the revision, after the paragraph on extrapolating to extreme rainfall in Section 4, with details in Appendix B.5. \n\nThe AUC scores for outlier detection based on likelihoods computed from the Archimax copula, normalizing flow, and variational autoencoder are **0.92** (higher is better), 0.82, 0.37. The F1 scores are **0.72** (higher is better), 0.48, 0.04. Please see the details and visual comparison in Appendix B.5 of the revision.\n\n**6.** missing related work.\n\nThank you for pointing out this literature, which we overlooked; we cited the missing related work [4] in the revision, under Section 1.1 related work.\n\n**Reference Figure 1** in the text. \n\nThank you for pointing out the error. We added a reference to Figure 1 in the revision, line 101 on asymptotic dependence and line 109 on radial envelope. We also enriched Figure 1 with an example of asymmetric dependence.\n\n---\n\nReferences\n\n[1] Ma et al. Copula-GNN: Towards integrating representational and correlational roles of graphs in graph neural networks. In ICLR 2021.\n\n[2] Huang et al. Neural autoregressive flows. In ICML 2018. \n\n[3] Durkan et al. Neural spline flows. In NeurIPS 2019.\n\n[4] Bhatia et al. ExGAN: Adversarial generation of extreme samples. In AAAI 2021.", " We thank you for the very positive and constructive review; your suggestions are highly appreciated and have been incorporated into the revised version:\n\n**(i) Intuitive explanation** as to why Archimax copulas are good for modeling dependencies between extreme events.\n\nArchimax copulas were initially developed as a tool to study the behaviour of methods used to estimate the joint distribution of extreme events [1]. \n\nThe extreme value copula that arises in the limit can be understood from the stable tail dependence function (stdf) $\ell$ and the index of regular variation of the Archimedean generator $\varphi$. Unlike extreme value copulas, which emerge from the limiting distribution of extreme events, the motivation for using Archimax copulas is to model extreme data (e.g. very strong and rare earthquakes), where observations are rare, from a mix of moderately less extreme data (e.g. strong earthquakes), where observations are relatively more abundant. For example, the monthly rainfall of French Brittany did not pass the test of extreme value dependence [2, 3]. Additionally, a pre-processing step for use with extreme value copulas by computing block maxima would reduce the number of available observations 6-fold. Finally, by modeling the data with an Archimax copula using the techniques in this paper, we can generate samples for further studies and simulation. We provided this explanation in Appendix A.5 and added a brief summary to the revision in Section 4, motivating the experiment on extrapolating to extreme rainfall.\n\n**(ii) Concise statement** of the complete model up front, explaining how the deep generative model determines the stdf. \n\nWe thank you for the very helpful suggestion and will add a concise statement of the complete model up front.
The deep generative models are used to represent the distributions of the spectral random variable $\mathbf{W}$ and the radial random variable $R$.\nBy taking an expectation, these characterize the stdf $\ell$ and the Archimedean generator $\varphi$. We added this statement at the end of Section 1.1.\n\n---\n\nReferences\n\n[1] P. Capéraà, A.-L. Fougères, and C. Genest. Bivariate distributions with given extreme value attractor. Journal of Multivariate Analysis, 72(1):30–49, 2000.\n\n[2] I. Kojadinovic, J. Segers, and J. Yan. Large-sample tests of extreme-value dependence for multivariate copulas. LIDAM Reprints ISBA 2011025, Université catholique de Louvain, Institute of Statistics, Biostatistics and Actuarial Sciences (ISBA), 2011. \n\n[3] S. Chatelain, A.-L. Fougères, and J. G. Nešlehová. Inference for Archimax copulas. The Annals of Statistics, 48(2):1025–1051, 2020.", " We thank you for your positive feedback, thoughtful comments, and useful suggestions. We address the individual comments and suggestions below: \n\n**(e) [Quality]** insufficient comparison to existing flexible copulas such as skew-t copulas.\n\nWe agree that the skew-t copula is a good additional baseline for asymmetrical copulas [1, 2, 3, 4].\n\nFor the 3-dimensional dataset that models the rainfall in French Brittany, the Cramér–von Mises (CvM) distances between the Clayton–negative scaled extremal Dirichlet (C-NSD) copula and the learned copulas for Archimax, Gaussian, extreme value, t, and skew-t are: **0.0003** (lower is better), 0.0005, 0.0006, 0.0008, 0.0027. \n\nFor this scenario, the Archimax and Gaussian copulas performed better than the extreme value, t, and skew-t copulas. This may be because the C-NSD copula does not exhibit extreme value dependence, i.e. it fails the test of extreme value dependence [5]. The skew-t copula might have performed worse than the t copula due to over-parameterization. In addition, maximum likelihood estimation for the skew-t copula was extremely time-consuming even for three dimensions and thus intractable for higher dimensions. \n\nWe have included the comparison to skew-t copulas in the revision, in Appendix B.4.1, which describes the rainfall experiment. \n\n**(f) [Significance]** popularity of Archimax copulas unsure.\n\nWe believe that the popularity of Archimax copulas has been hindered by the lack of inference methods for fitting parameters to observations and the lack of sampling tools [6, 7].\nRecent work has only presented methods for recovering the stable tail dependence function (stdf) given the Archimedean generator [6] or maximum likelihood estimation for a few parameters in 2-3 dimensions [8, 9]. On the other hand, these works showed a better fit to data when using Archimax copulas as compared to Archimedean and extreme value copulas [6, 8, 9]. \n\nWe hope that through the proposed methods and the release of source code, more applications of Archimax copulas will follow.\nAdditionally, we note that extreme value copulas are already popular in a variety of environmental and financial situations, and are a major component of Archimax copulas.\n\n**(g) [Pairwise dependence]** Archimax copulas allow different strengths of dependence between pairs of variables, which can be achieved by appropriately defining the spectral random variable $\mathbf{W}$.
\n\nYou are correct that the Archimedean copula is limited due to the symmetry assumption on the variables and that choosing an appropriate spectral random variable $\mathbf{W}$ removes the symmetry in the Archimax case. An example of a case involving asymmetry would be to define the spectral random variable $\mathbf{W}$ such that the stdf matches the stdf of the asymmetric logistic copula [10].\n\n**(h) [Correction]** $\mathbf{X}\in[0,\infty)^d$, $\mathbf{U}=(\varphi(X_1),\cdots,\varphi(X_d))\in[0,1]^d$ follows an Archimax copula. \n\nWe thank you for pointing out this error, and we have corrected it in the revision. \nThe dependence of $\mathbf{X}$ follows an Archimax copula when appropriately transformed by the marginal CDFs.\nIn this case, $\mathbf{U}=(\varphi(X_1),\cdots,\varphi(X_d))$ follows an Archimax copula.\n\n---\n\nReferences\n\n[1] S. Demarta and A. J. McNeil. The t copula and related copulas. International Statistical Review, 73(1):111–129, 2005. ISSN 03067734, 17515823.\n\n[2] T. Kollo and G. Pettere. Parameter estimation and application of the multivariate skew t-copula. In Copula Theory and Its Applications, pages 289–298, 2010. Springer Berlin Heidelberg.\n\n[3] T. Yoshiba. Maximum likelihood estimation of skew-t copulas with its applications to stock returns. Journal of Statistical Computation and Simulation, 88(13):2489–2506, 2018.\n\n[4] M. S. Smith, Q. Gan, and R. J. Kohn. Modelling dependence using skew t copulas: Bayesian inference and applications. Journal of Applied Econometrics, 27(3):500–522, 2012.\n\n[5] I. Kojadinovic, J. Segers, and J. Yan. Large-sample tests of extreme-value dependence for multivariate copulas. LIDAM Reprints ISBA 2011025, Université catholique de Louvain, Institute of Statistics, Biostatistics and Actuarial Sciences (ISBA), 2011.\n\n[6] S. Chatelain, A.-L. Fougères, and J. G. Nešlehová. Inference for Archimax copulas. The Annals of Statistics, 48(2):1025–1051, 2020.\n\n[7] A. Charpentier, A.-L. Fougères, C. Genest, and J. G. Nešlehová. Multivariate Archimax copulas. Journal of Multivariate Analysis, 126:118–136, 2014.\n\n[8] A. J. McNeil and J. Nešlehová. From Archimedean to Liouville copulas. Journal of Multivariate Analysis, 101(8):1772–1790, 2010.\n\n[9] T. Bacigál, V. Jágr, and R. Mesiar. Non-exchangeable random variables, Archimax copulas and their fitting to real data. Kybernetika, 47(4):519–531, 2011.\n\n[10] A. Stephenson. Simulating multivariate extreme value distributions of logistic type. Extremes, 6(1):49–59, 2003.", " This paper presents new methods for inference and sampling for Archimax copulas. Archimax copulas are a family of copulas defined through an Archimedean generator and a stable tail dependence function (stdf). In order to discuss inference and sampling for Archimax copulas, the authors first propose inferential and sampling methods for the Archimedean generator and the stdf. Then, by combining these, inference and sampling methods for Archimax copulas are established. In experiments, it is seen that the proposed inferential methods for the Archimedean generator and the stdf show satisfactory performance. Other experiments are given to illustrate that Archimax copulas outperform, or work as well as, some existing models for a couple of real datasets. *** STRENGTHS ***\n\n(a) [Originality] The presented inferential and sampling methods for Archimax copulas seem new.
These methods are derived mainly by combining existing methods for Archimedean generators and stdfs.\n \n(b) [Quality] The paper seems technically sound. Comprehensive experiments are given to assess the performance of the proposed inferential methods and compare submodels of Archimax copulas with some existing models.\n\n(c) [Clarity] The paper is clearly written in general. Section 2, providing the background of the presented theory, would be helpful for readers who are not familiar with copulas.\n\n(d) [Significance] Archimax copulas are flexible models which include Archimedean copulas and extreme-value copulas as special cases. Therefore, the proposed inferential and sampling methods for Archimax copulas could be useful in practice when flexible modelling is required.\n \n\n*** WEAKNESSES ***\n\n(e) [Quality] Apart from Archimax copulas, there exist other flexible families of copulas such as skew-$t$ copulas (see, e.g., Joe [49], Section 3.17.2). The paper does not sufficiently compare the Archimax copulas with those existing copulas.\n\n(f) [Significance] I am not sure about the popularity of Archimax copulas and the importance of the related theory this paper presents. (Nonetheless, I appreciate the results of experiments which suggest the usefulness of the Archimax copulas.)\n (g) For the multivariate Archimedean copula (equation (3.4), [49]), all the pairs of variables have the same strength of dependence, and this is considered a drawback in terms of flexibility in general. How about the Archimax copulas? Is it possible to solve this problem by appropriately defining the spectral random variable $W$?\n\n(h) In lines 107 and 108, it is mentioned that $X$ defined in (5) follows an Archimax copula. However, it seems to me that $X$ takes values in $[0, \infty)^d$ in general, and therefore the distribution of $X$ is not a copula, which is defined on $[0,1]^d$. I think a correction is necessary to avoid this misunderstanding.\n\n(i) I would appreciate the authors' responses to my comments (e) and (f).\n\n The authors have addressed the limitations and potential negative societal impacts of their work in Section 5. Depending on the authors' response to my questions (g) and (i), I might claim that the usefulness of Archimax copulas is limited.", " The authors propose scalable estimation and sampling procedures for Archimax copulas. On simulated and real data, they demonstrate that Archimax copulas fit using their procedure can model complex data with dependencies between extreme values accurately, in comparison to existing deep generative methods. The paper bridges two important areas of research: copulas and deep generative modeling. It is highly original (the first method of its kind) and technically excellent. It is also potentially very significant in its impact: modeling rare events is critical to managing risk in real-world applications, and relying naively on modern deep generative approaches can potentially be very problematic. The paper is overall quite clear, though there are two areas where I struggled: first, an intuitive explanation as to why Archimax copulas are good for modeling dependencies between extreme events would be helpful to the reader; second, I'd appreciate a concise statement of the complete model up front, explaining how the deep generative model determines the stdf. See above for suggestions. The authors clearly explain some of the method's important limitations.", " This paper proposes novel procedures for both inference and sampling of Archimax copulas.
This family of copulas is important due to its ability to represent the bulk and tail of distributions simultaneously, which makes it suitable for real-world healthcare and hydrology data. The authors propose a 'hybrid' approach mixing copulas and neural networks to allow for more flexible estimation. In experiments, the proposed method is compared to SOTA density modeling techniques, and the results suggest that their method can extrapolate to tails and scale to higher dimensions. Strengths:\n - Originality - to the best of my knowledge, this is the first work to address flexible density estimation of the bulk and tail of a distribution with Archimax copulas.\n - Quality - the authors put a lot of effort into including a high level of technical detail and experiments in the paper and appendix. \n - Clarity - since copulas are not a straightforward tool in the machine learning community, I appreciate the background overview and related work mentioned throughout the paper and supplementary. \n - Significance - considering the tails, not only the bulk, of the data is an overlooked problem and can be very significant in many real-world applications\n\nWeaknesses:\n - Originality - although the presented methodology is novel, it does still build on existing work regarding Archimax copulas\n - Quality - I would expect more challenging/motivating experimental results to support the claims of significance and contribution from the introduction\n - Clarity - A running example or one real-data motivating example could improve/clarify why Archimax copulas are an appropriate/necessary tool in critical scenarios. This can help bring the paper closer to the ML community and make it more relevant for readers.\n - Significance - since copulas are still not widespread at ML conferences, I feel like additional motivation for such papers is needed, either to showcase superiority on large-scale real-world datasets or to find some new tasks where SOTA models fail. I also appreciate the code submission and the effort of implementing everything in Python (rare for statistics methods). \n \n\n\n 1. Complexity of the proposed algorithms is not mentioned. Is there any trade-off between flexibility and scalability?\n2. Are there any ablation studies regarding data efficiency? It seems like such non-parametric methods can be data-hungry.\n3. Will this generative network work for non-tabular data? Can there be an example of another NN architecture?\n4. How do the sampled marginals compare to the original data?\n5. To increase the significance/contribution of the paper, could you add an experiment on out-of-distribution detection, for example (since this is mentioned as a limitation of existing methods)? Maybe one of the experiments you already have can be used to present some uncertainty scores.\n6. Related work that is missing: \nBhatia, Siddharth, Arjit Jain, and Bryan Hooi. \"ExGAN: Adversarial generation of extreme samples.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 8. 2021.\n\nMinor:\nFigure 1 is not referenced in the text\n\n---\n### Update after authors' response:\n\nThank you for the thorough response and for addressing all of the comments in the updated manuscript.\nI am happy to accordingly raise the score.
The authors have addressed the limitations and potential negative societal impact of their work.", " The authors propose efficient inference and sampling schemes for Archimax copulae based on learning the generator and stable tail dependence functions through deep learning techniques. *Originality*. The work is original. To my knowledge, the authors propose a new method to infer and sample from Archimax copulas, extending the previous work in this area.\n\n*Clarity*. The authors go a long way to make sure the paper is accessible to the larger machine learning community. They provide the necessary background on copulae and their use in machine learning. They also provide extensive derivations in the appendix. That said, the material itself is quite dense. I have not found typos. One minor thing is that Figure 1 does not seem to be referenced anywhere in the text.\n\n*Quality*. The authors build upon previously developed theory and methods to derive a scheme to infer Archimax copulae by means of deep learning models, which also allows for easy sampling. The authors provide an analysis of the performance of the proposed method, comparing it to other state-of-the-art methods. The experiment setups are described in detail in the supplementary material. The authors provide an extensive background/related work review in both the main text and the appendix.\n\n*Significance*. Inference of multivariate distributions from data is a core statistical/machine learning problem. The authors propose a way to infer multivariate dependencies via Archimax copulae, representations of which are learned via deep learning models. The method also allows for sampling from the learned distribution. The method serves both the bulk and the tails of a distribution. With that said, the proposed method is of major significance for the field.\n As far as I understand, when optimizing the representations of the copula generator and stable tail dependence function, the authors start with an initial copula generator $\exp(-x)$. Have you considered other initial generators, and how would this affect learning of the copula?\n\nHave you considered estimating several copulae, as in a hierarchical Archimedean copula?\n\nWhat do you say about modifying the architecture of the generative neural networks? The authors faithfully address limitations and possible negative impact of the proposed method." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "nryJ4J2iNCY", "0f6s-kDhoXx", "QCzw5H8JT0", "ciUthYW50V5", "nryJ4J2iNCY", "dzx55EZlCY3", "2Fc5CVGD1Xb", "nips_2022_8cC2JeUyz9", "nips_2022_8cC2JeUyz9", "nips_2022_8cC2JeUyz9", "nips_2022_8cC2JeUyz9" ]
nips_2022_4RC_vI0OgIS
Online Deep Equilibrium Learning for Regularization by Denoising
Plug-and-Play Priors (PnP) and Regularization by Denoising (RED) are widely-used frameworks for solving imaging inverse problems by computing fixed-points of operators combining physical measurement models and learned image priors. While traditional PnP/RED formulations have focused on priors specified using image denoisers, there is a growing interest in learning PnP/RED priors that are end-to-end optimal. The recent Deep Equilibrium Models (DEQ) framework has enabled memory-efficient end-to-end learning of PnP/RED priors by implicitly differentiating through the fixed-point equations without storing intermediate activation values. However, the dependence of the computational/memory complexity of the measurement models in PnP/RED on the total number of measurements leaves DEQ impractical for many imaging applications. We propose ODER as a new strategy for improving the efficiency of DEQ through stochastic approximations of the measurement models. We theoretically analyze ODER giving insights into its convergence and ability to approximate the traditional DEQ approach. Our numerical results suggest the potential improvements in training/testing complexity due to ODER on three distinct imaging applications.
Accept
The paper proposes a learning method (specifically a deep equilibrium learning approach) for 'regularization by denoising', a plug-and-play method for solving inverse problems. After the rebuttal, all reviewers support acceptance of the paper. The reviewers find the paper to be well written, the problem to be interesting, and the claims to be well supported (reviewer Hjnn), both empirically (reviewer uDGc) and through theory. Reviewer A7f5 finds the work particularly exciting since both memory and training time are reduced, without sacrificing image quality. Based on my own reading and the unanimous support of the reviewers, I recommend acceptance of the paper. A nice contribution!
train
[ "TQ4Kro-7fQ", "Ent0syKxGsu", "lyjjwlUHJqD", "KqGCZENygay", "yII4cz2FVfT", "68r80VihjmR", "1MzjP4DS--", "khte5GsOCdd", "8H3yiKW1IX3", "G5ZnOW3zQS", "5lxy6m_JNsz", "LmmRuIUs3m3", "MDe1tMfDUvT", "Rlrt1ucHTy", "XZRwyDXwHy8", "OZRPcwccrx", "DKaNoObghAl", "_LcVvjUVdqr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you all again for reviewing our work. An additional thanks to those reviewers that have already read our responses and the area chair for managing the review of our paper. Let us know if there is anything else we can do to improve your evaluation of our work.", " Dear reviewer, thank you again for reading our original submission and providing valuable feedback. The Author-Reviewer discussion ends today. If there is something that you would like to discuss beyond our responses above, please let us know and we will be happy to do so.\n", " Thank you for reviewing our responses, raising your score, and your valuable feedback.", " | Method | UNet | RED (Unfold) | ODER | RED (DEQ) |\n|-----------|:------------:|:------------:|:------------:|:------------:|\n| PSNR/SSIM | 18.86/0.8824 | 20.55/0.9215 | 23.47/0.9441 | 23.48/0.9450 |\n\n\nFirst of all, thank you for reviewing our responses, raising your score, and your valuable feedback. \n\nWith the hope of further increasing your evaluation, let us highlight that finding ways to efficiently train implicit neural networks is an emerging area with a big potential for practical impact. While [A] proposes to use implicit networks for inverse problems, it does not consider the impact of the forward model on the memory and computational complexity of training. This is something our work addresses and does so with new a new method, theoretical insights, and numerical results not available in [A]. \n\nWe ran simulations to, at least partially, address your second comment. The table above provides PSNR/SSIM results for sparse-view CT with b=120 for ODER, RED (DEQ), RED (Unfold), and UNet trained on CT images, but tested on 50 Brain MRI images. It is worth mentioning that the CNN priors in RED (Unfold), ODER, and RED (DEQ) are not proximal operators, since they are not trained to be denoisers. They can be better viewed as artifact removing operators trained for end-to-end optimality. Most importantly, note how the relative performance of ODER matches that of RED (DEQ) even for mismatched CNN priors trained on CT but tested on MRI. Fully addressing your comment by including a forward model mismatch would require a more careful evaluation, which would extend beyond this review window.\n", " First of all, thank you for reviewing our responses, raising your score, and your valuable feedback. \n\nWith the hope of further increasing your evaluation, let us highlight another benefit of ODER that goes beyond training time reduction. ODER is useful for improving the memory complexity of training, which is crucial when GPU memory is the bottleneck. Note that when the GPU memory is the bottleneck, the performance of the traditional DEQ training will drop (see Figure 4 in the supplementary material). As can be seen in Table 1 of the main paper, ODER can reduce the GPU usage from 3.56 GB to 0.71 GB, which is about 80% reduction. This reduction essentially comes for free, since ODER matches the performance of the traditional DEQ training.\n", " Thank you for the detailed response to my comments. Though the work is incremental, I believe it has potential to be used for many applications. I have raised my score to reflect the revisions. ", " Though I still feel the contribution of training implicit online network is moderate given the existing work [A], I do agree with the authors the main theory presented in this work is entirely new and interesting. As a result, I raise my score to Weak Accept. 
\n\nThe following questions would not impact my final rating, but I'm curious whether the trained proximal mapping is still a valid denoiser for Gaussian denoising after end-to-end training. What about directly plugging the learned \"denoiser\" (on CS-MRI) into another task (sparse-view CT)?\n\n\n", " I would like to thank the authors for their effort in providing responses explaining the contributions. It is now clear that the theoretical analysis is important. Thus, I raise my score by one.\n\nAlthough this work has some theoretical contribution, I still believe that this work has limited practical applications. Only reducing the training time is not that significant.", " Thank you to all the reviewers for the great effort in reviewing the paper and to the authors for the responses.\n\nAs the author-reviewer discussion period is almost over, I want to ensure that reviewers have read the authors' responses and engage with the authors if needed.\n\nIf you haven't done this, could you please take a moment to read through the authors' responses, update the reviews to indicate that you have read the authors' responses, or communicate with the authors if needed? You can also share in private conversations with the reviewing team.\n\nPlease continue to share your thoughts. Thank you!", " Thank you for your valuable feedback. As discussed below, the feedback has prompted us to adjust the presentation of our work to better reflect its contributions.\n\n**Weakness:**\n\n- We were initially surprised by the reviewer's evaluation that our work is similar to the prior work in [A, B, C]. However, upon careful re-reading of our initial submission, we realized that this confusion can be attributed to the way we presented our results. Based on the feedback, we can improve the presentation by renaming Theorem 3 to be the Main Theorem. Theorems 1 and 2 are renamed Propositions 1 and 2 to highlight their supporting roles. Similarly, Section 3.3 will move to become Section 3.1, since this is where the main innovation of ODER as a method for learning implicit online networks is proposed (see the revised paper).\n* ODER is conceptually different from the traditional online PnP/RED algorithms for solving inverse problems [B, C]. The nuance is that ODER is a framework for _training implicit online neural networks_, which is different from _running online PnP/RED algorithms_ at inference time. This topic has never been considered in the existing literature. Additionally, our analysis in Theorem 3 in the initial submission is completely new, as it establishes bounds on the training accuracy of implicit online networks, which is different from prior work [A, B, C] that proves convergence at inference time. It is a nontrivial theoretical result that online forward and backward iterations in ODER can provide gradients with controllable accuracy for training implicit online neural networks. We are not aware of any prior work that has proposed similar ideas or analyses as in our paper.\n\n**Questions:**\n\n>**1:**\n\n| Method | Sparse-view CT | Parallel MRI |\n|-----------------|:---------------:|:------------:|\n| ODER | 43 | 72 |\n| RED (DEQ) | 35 | 43 |\n| RED (Denoising) | 30 | 45 |\n\n* Prompted by your comment, we prepared the table above that reports the mean iteration numbers for ODER, RED (DEQ), and RED (Denoising), for the same stopping tolerance, in sparse-view CT and parallel MRI.
Note how end-to-end training of implicit networks—ODER and RED (DEQ)—can increase the mean iteration numbers compared to the traditional RED (Denoising). This is reasonable since the implicit networks are trained to maximize the SNR, not to minimize the number of iterations. By allowing for more iterations, implicit networks can be trained to achieve higher SNR than RED (Denoising) (≥ 1 dB for CT in Table 2 and ≥ 0.5 dB for MRI in Table 3 of the main paper).\n\n>**2:**\n\n- Our original submission unintentionally overlooked the references [D-F], which we added in the revised version (Refs [7, 66, 74] in the revised paper). The proposed ODER training and its analysis are perfectly compatible with the iterations based on the variations of PnP in [D-F].\n", " Thank you for your valuable feedback. As discussed below, the feedback has prompted us to adjust the presentation of our work to better reflect its contributions.\n\n**Weakness:**\n>**1:** \n* We understood from the feedback that the presentation of our work did a disservice to its contributions. It is worth highlighting that (**a**) _learning implicit online neural networks_ has never been considered before our work and (**b**) such learning can enable a nearly 2.5x improvement in training time (see Fig. 1 in the main paper). Therefore, we would argue that ODER is not just an addition of online processing, but a theoretically sound method that leads to substantial practical improvements in DEQ for inverse problems. Theorem 3 in the initial submission establishes bounds on the training accuracy of implicit online neural networks, which is different from prior work proving convergence of traditional online algorithms. It is a nontrivial theoretical result that online forward and backward iterations in ODER can provide gradients with controllable accuracy for training implicit online neural networks. We are not aware of any prior work that has considered similar ideas or performed similar theoretical analysis.\n- Based on the feedback, we realized that we can improve the presentation of our work by renaming Theorem 3 to be the Main Theorem. Theorems 1 and 2 are renamed Propositions 1 and 2 to highlight their supporting roles. Similarly, we move Section 3.3 to be Section 3.1, since this is where the main innovation of ODER as a method for learning implicit online networks is presented (see the revised paper).\n\n>**2:** \n- We were surprised by the reviewer's comment that the focus on imaging applications that require processing a large number of sensor measurements may limit the impact of our work. On the contrary, we can list examples of imaging applications that have historically relied on efficient processing of a large number of sensor measurements, such as the three applications considered in the paper (sparse-view CT and parallel MRI in medicine, and IDT in bio-microscopy), as well as other well-known applications, including photoacoustic tomography (PAT), positron emission tomography (PET), optical projection tomography (OPT), and optical diffusion tomography (ODT). All those applications are widely used and would potentially benefit from powerful and efficient deep model-based architectures such as ODER.\n\n>**3:** \n* Indeed, a practical conclusion of our work is that ODER can reduce the training time of RED (DEQ) for the same SNR/SSIM performance. Figure 1 shows that ODER reduces the training time by 2.5 times for the same SNR, which is important for real applications.
\n- Our work also presents novel theoretical analysis in Theorem 3 of the original submission that does not exist in prior work. Our main theorem shows that implicit online networks can approximate DEQ to a desired precision _during training_, which is different from existing work on inverse problems that shows convergence _during inference_. Note that it is a nontrivial theoretical result that randomized forward and backward iterations end up enabling the computation of the training gradients with controllable accuracy.\n\n**Questions:**\n>**1:**\n- As discussed above, we renamed Theorem 3 to the Main Theorem to highlight its novelty compared to the existing work. We also highlighted the novelty of training implicit online networks by moving Section 3.3 to Section 3.1 in the revision. The abstract, introduction, and conclusion were revised accordingly.\n\n>**2:**\n- Our paper presents a novel algorithm and mathematical analysis that have never been considered in the existing work in the area. Inclusion of additional results would distract from our main message, which motivates us to leave it for future work. \n\n>**3:**\n- Thank you! We will correct these typos in the revised paper.", " Thank you for the feedback and positive comments on our work. Please see below for our responses.\n\n**Weakness:**\n* ODER is different from the traditional online learning algorithms (SGD, Adam) and from the traditional online algorithms for solving inverse problems (online PnP/RED or Ref [1] by the reviewer). ODER is for training _implicit online neural networks_, which has never been explored in the existing literature. Theorem 3 in the initial submission (Main Theorem in the revision) establishes bounds on the training accuracy of implicit online networks, which is different from prior work proving convergence of online algorithms. It is a nontrivial theoretical result that online forward and backward iterations in ODER can provide gradients with controllable accuracy for training implicit online neural networks. We are not aware of any prior work that has explored similar ideas or analysis.\n- We can improve the presentation of our work by renaming Theorem 3 to be the Main Theorem. Theorems 1 and 2 are renamed Propositions 1 and 2 to highlight their supporting roles for the Main Theorem. Similarly, we move Section 3.3 to be Section 3.1, since this is where ODER is presented as a method for learning implicit online neural networks.\n* We cited Ref [1] mentioned by the reviewer in the context of online methods for solving imaging inverse problems (Ref [48] in this revision).\n- Please see our response to Question 1 below.\n\n**Questions:**\n>**1:**\n\n| Method \\ # Projections |90|120|180|\n|-|:-:|:-:|:-:|\n| ODER|34.69/0.9827|35.46/0.9848|36.21/0.9864|\n| RED (DEQ)|34.81/0.9831|35.55/0.9852|36.34/0.9867|\n| RED (Denoising)|32.94/0.9717|34.07/0.9781|35.34/0.9812|\n| RED (Unfold)|34.27/0.9810|35.18/0.9842|36.27/0.9870|\n\n- Prompted by your comment, we did additional tests using 55 slices of size 512 x 512 from another patient, who was not part of our original data. The table above presents corresponding SNR and SSIM values for ODER, RED (DEQ), RED (Unfold), and RED (Denoising). Note the consistency of these results with those presented in Table 2 of the main paper. We are confident that the existing numerical results on ODER are representative of its general behavior.
If requested by the reviewers, we will be happy to include additional test samples for computing the results in Table 2, although we do not expect that those will impact our contributions.\n* We will release our implementation of ODER on all the considered modalities.\n- The numerical results on CT in our original submission used a public dataset from a CT challenge (Ref [87] in the revised paper). We adopted a setting similar to that used in the literature cited in the main paper (Refs [11, 12, 14]).\n\n>**2:**\n\n| # Projections (b)|90|120|180|\n|-|-|-|-|\n|ODER (trained using b=120)|34.37|35.12|35.81|\n|ODER (trained using a matched b)|34.40|35.12|35.91|\n|RED (DEQ) (trained using b=120)|34.41|35.26|35.95|\n|RED (DEQ) (matched b)|34.61|35.26|35.95|\n\n- We did additional tests on CT using a mismatched number of measurements in training and testing. The table above shows results for two variants of ODER: one trained using 120 measurements and the other trained using the same number of measurements as for testing. We also provide RED (DEQ) for reference. Note how the performance of ODER trained using b = 120 is always within 0.1 dB of the oracle ODER that has a perfectly matched training and testing b.\n\n|Input SNR (dB)|50|45|40|35|\n|-|-|-|-|-|\n|ODER|35.12|33.11|30.19|24.21|\n|RED (DEQ)|35.26|33.20|30.28|24.48|\n|RED (Unfold)|35.01|32.90|28.87|23.75|\n\n- We also ran tests on CT using a mismatched amount of Gaussian noise. The table above shows the performance of three methods—ODER, RED (DEQ), and RED (Unfold)—trained at 50 dB input SNR and tested at 35, 40, 45, and 50 dB input SNR. \n* Based on the new results, we can conclude that ODER is relatively robust to shifts in the number of measurements and the amount of noise, in the sense that it maintains its relative reconstruction performance with respect to RED (DEQ).\n\n**Limitations:**\n- If the reviewer requests, we can revise our conclusion to include the following: (a) ODER has an additional hyperparameter w ≥ 1 to control the accuracy/efficiency tradeoff relative to RED (DEQ); (b) just like RED (DEQ), ODER training achieves memory savings over RED (Unfold) at the cost of higher computational complexity due to the computation of the forward and backward fixed points; (c) just like RED (DEQ) and RED (Unfold), ODER recovers better images than RED (Denoising) at the cost of an overall more expensive training using the measurement model.\n* Note that while RED (Denoising) can be implemented as an online inference algorithm, it is different from ODER since it doesn't learn the prior end-to-end with the measurement model. On the other hand, while RED (Unfold) can also be implemented using online learning, it is not an implicit network and thus suffers from high memory complexity from storing intermediate latent variables across unfolded iterations.", " Thank you for the feedback and positive comments on our work. We provide point-by-point responses to your comments below.\n\n**Weakness:**\n> **1:**\n* We would like to point out that Theorem 3 in our original submission is novel and that the learning technique in Section 3.3 of our original submission is also novel in the context of \"online processing of measurements.\" The novelty of the proposed method is in the _training_ of implicit online networks, which is different from existing work on running online PnP/RED using pre-trained denoisers as priors.
The novelty of the main theorem is in the bound on the training accuracy of implicit online neural networks, which is different from existing work proving convergence of PnP/RED. It is a nontrivial theoretical result that online forward and backward iterations in ODER can provide gradients with controllable accuracy for training implicit online networks. We are not aware of any prior work that has explored similar ideas or analysis.\n- Based on the feedback, we realized that we can improve the presentation of our work by renaming Theorem 3 to be the Main Theorem. Theorems 1 and 2 are renamed Propositions 1 and 2 to highlight their supporting roles for the Main Theorem. Similarly, we moved Section 3.3 to be Section 3.1 in the revision, since this is where ODER is presented as a method for learning implicit online neural networks.\n\n>**2:**\n* Please see below for our answers to the raised questions. All the additional simulations in our answers will be included in the revised supplementary material.\n- We will publicly release our code upon the acceptance of the paper, which will simplify the communication of the hyperparameters for the reproducibility of our work.\n\n**Questions:**\n>**1:**\n\n |w/b, b=120|1/12|1/6|1/3|1/2|3/4|1|RED (DEQ)|\n |:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n |ODER|34.34|34.94|35.12|35.24|35.25|35.27|35.26|\n\n* Prompted by your remark, we ran new simulations on sparse-view CT to empirically quantify the influence of w ≥ 1 on SNR (dB). The table corroborates our theoretical analysis, showing that the parameter w allows us to balance computational/memory efficiency against accuracy relative to RED (DEQ). Note how using 1/2, 1/6, or 1/12 of the total measurements allows ODER to be within 0.02 dB, 0.4 dB, and 1 dB, respectively, of the SNR achieved by RED (DEQ) that uses all b = 120 measurements.\n- In practice, w is best treated as a hyperparameter whose value is empirically optimized for the best imaging performance (SNR, SSIM) under acceptable hardware constraints (GPU memory, training time). Our main theorem ensures that ODER can approximate RED (DEQ) to a desired accuracy for large enough w.\n\n>**2:**\n\n|Methods|Training from scratch|Training from a denoiser|\n|-|:-:|:-:|\n| ODER| 34.56| 35.91|\n| RED (DEQ)|34.73|35.95|\n\n- We adopted the pre-training strategy from the reference [40] in the revised main paper, which used pre-trained denoisers to initialize the traditional DEQ. Prompted by your remark, we ran new simulations on sparse-view CT to empirically quantify the benefit of initializing from a pre-trained denoiser. The table above compares the performance of ODER and RED (DEQ) trained from a random initialization (\"Training from scratch\") against those trained from a pre-trained denoiser. Note the empirical benefit of initializing the CNN priors using pre-trained denoisers, which justifies our strategy of using pre-trained denoisers for initializing ODER.\n\n>**3:**\n- ODER doesn't use denoisers as priors (beyond initialization, see our response to Question 2), since its purpose is to train implicit online networks. The CNN prior in ODER is trained for end-to-end performance.\n* The 5 denoisers are for the traditional RED (Denoising), not our proposed method (see Table 1 of the main paper).
For RED (Denoising), we simply adopted the commonly used strategy of using multiple denoisers (see Refs [24, 40, 59, 63] in the revised paper).\n\n>**4:**\n- The denoiser mentioned in Line 220 is for the traditional RED (Denoising), which is simply one of the methods we compare against in Table 1 of the main paper.\n* ODER does not use denoisers as priors, since its purpose is to train implicit online networks using the algorithm in Section 3.3 of the initial submission (now Section 3.1 in the revision). Using BM3D or any other pre-trained denoiser would defeat the purpose of ODER by turning it into a traditional online PnP/RED method.\n- Any CNN architecture can in principle be trained using ODER. We tested three architectures (DnCNN, U-Net, and Tiny U-Net) and finally selected Tiny U-Net for ODER due to its computational efficiency. The ODER backward pass using Tiny U-Net is about 3x faster compared to the one using DnCNN or U-Net (see Table 1 in the supplement).\n\n**Limitations:**\n* We thank the reviewer for suggesting additional references! It is our goal to have a broad and well-presented view of the area, and we cited the suggested references in the revised paper (Refs [4,5,6]).\n\n", " \n* Thank you all for providing us with valuable feedback. We provide detailed answers to all the questions below. To better address some of them, we ran additional simulations. We have **_updated the paper and the supplementary material_** based on our responses below. It is worth mentioning that the raised points helped to streamline and improve the paper without fundamentally altering the main contributions of our work.\n\n- We were initially surprised that the reviewers saw similarity between our work and the existing literature on online PnP/RED. However, upon carefully re-reading our initial submission, we realized that this confusion can be attributed to the way we presented our results. The confusion can be alleviated by highlighting that Theorem 3 in our original submission is our main theoretical contribution, while Theorems 1 and 2 serve primarily as supporting results. We are not aware of any result in the existing literature analogous to our Theorem 3, which shows that _implicit online neural networks_ can be _trained_ to approximate DEQ for inverse problems to a desired precision. This result is different from the existing work showing the convergence of traditional PnP/RED methods. **_It is a nontrivial theoretical result that online forward and backward iterations in ODER can provide gradients with controllable accuracy for training implicit online neural networks for solving imaging inverse problems._**\n\n* ODER is a conceptually different method from PnP/RED. Note how ODER is for training implicit online neural networks, which is different from running PnP/RED using pre-trained denoisers as priors. We are not aware of any method similar to ODER in the prior work on PnP/RED and hope that reviewers will appreciate the nuances between ODER and PnP.\n\n- To highlight our main theoretical contribution, the revised paper renames Theorem 1, Theorem 2, and Theorem 3 to Proposition 1, Proposition 2, and the Main Theorem, respectively. To highlight our main algorithmic contribution on learning implicit online neural networks, the revised paper moves Section 3.3 to be Section 3.1. The rest of the paper has been edited to be consistent with this naming and order of presentation.\n", " The paper studies an online Deep Equilibrium Model (DEQ) method for Regularization by Denoising (RED).
The proposed ODER incorporates randomized processing of measurements. The ODER algorithm aims to bypass the high computational/memory complexity of DEQ caused by the high dimensionality of the measurement space. The introduced online backward pass is demonstrated to lead to a more scalable and flexible DEQ framework for inverse problems. Based on standard assumptions in the analysis of fixed-point iterations and SGD, the authors have also analyzed ODER's convergence. The authors have also conducted experiments to analyze the behaviour of ODER and demonstrate its effectiveness. As an accelerated DEQ-RED, the proposed ODER is a useful extension to DEQ and RED.\n\n\n\n Strengths:\n\n1. The studied problem is interesting and worth pursuing, and the authors have done significant work.\n2. The analysis and experiments are extensive and can support the claims.\n3. The paper is well written and easy to follow.\n\nWeakness:\n\n1. 'Online processing of measurements' and its analysis tricks are not new. \n2. Some experimental settings and studies on hyper-parameters are not well discussed (see the points in [Questions]). \n\nMy initial rating of the paper is weak accept. I am open to raising my score if the authors can address my concerns during the rebuttal. 1. Line 122. I am curious whether any w<<b is enough to make ODER work. Is there any empirical/theoretical lower bound for w that guarantees ODER works? What are the principles for choosing w in practice?\n\n2. Line 210. \"The CNN prior of ODER is initialized using pre-trained denoisers ..\" What's the pretraining strategy? What if the CNN prior is trained from scratch? \n\n3. Line 220 - 221. Do you mean one needs to train 5 denoisers for different noise levels? If so, how bad is it if we train one denoiser to model all noise levels? \n\n4. Line 220. I am confused about whether only AWGN denoisers are valid to use. Is ODER also scalable to other noise types? How should one construct/select denoisers? What if BM3D or DnCNN are used in ODER?\n\n Suggestions: there are other relevant works that are worth mentioning. As far as I know, in addition to the denoiser prior, the DL paradigms below also incorporate knowledge of the data acquisition to solve inverse imaging problems; they have been studied for the purpose of learning a generative prior [1], Neumann inversion [2], or an equivariance prior [3]. For a broader view, it is necessary to add some review of or comparison with these related paradigms.\n\n[1] Bora, A., Price, E., & Dimakis, A. G. (2018). AmbientGAN: Generative models from lossy measurements. In International conference on learning representations.\n\n[2] Gilton, D., Ongie, G., & Willett, R. (2019). Neumann networks for linear inverse problems in imaging. IEEE Transactions on Computational Imaging, 6, 328-343.\n\n[3] Chen, D., Tachella, J., & Davies, M. E. (2021). Equivariant imaging: Learning beyond the range space. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 4379-4388).", " Edit - score raised by 1 point based on author revisions. \n\nThis work proposes ODER, an online learning method for deep equilibrium learning for Regularization by Denoising. DEQ is a recently proposed framework for memory-efficient learning of an infinite-depth unrolled network as implicitly defined by a fixed point of an operator. RED is a specific type of iterative algorithm that can incorporate learned priors, and has the fixed-point operator structure.
The authors therefore apply stochastic gradient descent across the measurement direction for each input in the training set, and derive the corresponding update equations for gradient-based learning. They show that their approach is able to get similar-quality reconstructions with reduced memory and training time, compared to full-fledged DEQ-RED.\n Strengths:\nThe paper is well-organized and the presentation is clear. The work nicely connects online learning with DEQ, with specific application to RED. There is a \"free-lunch\" result: both memory and time to train are reduced, without any sacrifice in image quality. For this reason, this work is exciting. The work is also supported by theoretical results and is demonstrated for several different applications.\n\nWeakness:\nAs the authors state, online learning for RED is not a new concept, and both are special cases of running SGD across the measurement direction (e.g., coils in MRI), which is not novel on its own; for example:\n[1] Ong, F, Zhu, X, Cheng, JY, et al. Extreme MRI: Large-scale volumetric dynamic imaging from continuous non-gated acquisitions. Magn Reson Med. 2020; 84: 1763–1780. https://doi.org/10.1002/mrm.28235\n\nThe CT experiment had only one subject in the test set, which seems prone to overfitting.\n - Could the CT experiments be run with a larger test set or with cross-validation?\n\n- How sensitive is the method to changes in the measurement operator between train and test?\n I don't think that Section 6 sufficiently describes limitations of the work. Could the authors discuss limitations of the approach, for example the required theoretical assumptions, implementation considerations, etc.? Another example is the relative improvement of ODER over RED (Unfold) or RED (Denoising). As both of these alternatives can also be implemented with online learning, how will the training time compare?\n", " This paper introduces an online deep equilibrium learning approach for large-scale inverse imaging problems. The method is built upon the recently proposed deep equilibrium architecture that unrolls an optimization algorithm (e.g., steepest gradient descent for regularization by denoising) into a neural network with a potentially infinite depth (number of iterations). This paper goes beyond that by presenting an online variant to enable large-scale inverse imaging applications where the full evaluation of the data-consistency term is quite expensive. It therefore demonstrates superior performance on three data-intensive inverse imaging tasks -- its reconstruction quality is comparable to the full-batch solution while being 2-3x faster, suggesting the benefits of the proposed method. The major strength of this work lies in the strong empirical evidence on three practical large-scale inverse imaging problems. Plus, the paper is nicely written and easy to follow. The theoretical convergence of the algorithm is also rigorously analyzed. \n\nNevertheless, the technical novelty of this work is rather incremental. The methodology here is mostly credited to [A], which first connected the deep equilibrium model and the PnP/RED method. The nontrivial technical parts thus can only be attributed to the online version, as well as the theoretical analysis of convergence, but none of them could be viewed as significant contributions, unfortunately.
In effect, the deltas here are well known in the PnP literature; e.g., online versions of the PnP and RED algorithms were proposed in [B] and [C], respectively, and both have already established theoretical convergence. \n\nConsequently, given the pros and cons on balance, I feel this is a very borderline paper, and I vote for borderline accept tentatively.\n***\n**I raise my score because of the originality of the main theory highlighted in the rebuttal.**\n\n[A] Deep Equilibrium Architectures for Inverse Problems in Imaging, TCI 2021 \n[B] An Online Plug-and-Play Algorithm for Regularized Image Reconstruction, TCI 2019 \n[C] Block Coordinate Regularization by Denoising, NeurIPS 2019 I'd like to see the mean iteration times of the deep equilibrium architecture. Does end-to-end training help to reduce the number of iterations for convergence, compared with the PnP/RED counterpart? \n\nRecent progress on PnP would also be worth mentioning, such as [D, E, F].\n\n[D] It Has Potential: Gradient-Driven Denoisers for Convergent Solutions to Inverse Problems, NeurIPS 2021 \n[E] TFPnP: Tuning-free Plug-and-Play Proximal Algorithms with Applications to Inverse Imaging Problems, JMLR 2022 \n[F] Gradient Step Denoiser for convergent Plug-and-Play, ICLR 2022 N/A", " In this paper, to address the issue that the training of Deep Equilibrium Models (DEQ) can still be a significant computational and memory challenge in applications that require processing a large number of sensor measurements, the authors propose Online Deep Equilibrium Learning for Regularization by Denoising (ODER) for inverse problems, which adopts stochastic processing of measurements within an implicit neural network. Experiments on three applications demonstrate the potential improvements in training/testing complexity. This paper solves data-intensive imaging inverse problems and has some strengths:\n1) The main advantages of the presented technique are the reduced execution time w.r.t. the main baseline RED (DEQ) and the lower memory consumption of the measurement model.\n2) The paper is overall well structured and written. \n3) The provided references are comprehensive and adequate.\n4) The work seems correct, since the proposed approach lays its foundations on robust literature, and the authors theoretically analyse ODER regarding its convergence and ability to approximate the traditional DEQ approach. \n\nHowever, there are some issues that I would like to highlight:\n1) Novelty/Originality/Contribution: \nAlthough integrating online processing into the DEQ framework is overall a novel attempt at solving inverse imaging problems using PnP/RED operators, it is largely a straightforward borrowing and stitching together of ideas from the existing literature.
This seems like a small and straightforward contribution to pre-existing works and hardly motivates the publication of a new paper.\n2) Significance: \nThis paper specifically addresses issues in imaging applications that require processing a large number of sensor measurements, which may limit its impact.\n3) Experiment: \nIn Figure 3, Tables 2 and 3, the SSIM and SNR of the proposed approach are not better than RED (DEQ) in experiments on CT and MRI images, so the only real contribution is the reduced execution time.\n 1) The contribution needs to be better justified.\n2) This work could explore to further improve the analysis and design variants of ODER to enhance its performance, as the authors also mentioned in Line 287, Page 9.\n3) There are a few grammar errors. For instance, the section titles \"4 Theoretical Analys\" and \"7 Broader impact\" should be \"4 Theoretical Analysis\" and \"7 Broader Impact\".\n Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 4 ]
[ "nips_2022_4RC_vI0OgIS", "MDe1tMfDUvT", "68r80VihjmR", "1MzjP4DS--", "khte5GsOCdd", "LmmRuIUs3m3", "G5ZnOW3zQS", "5lxy6m_JNsz", "nips_2022_4RC_vI0OgIS", "DKaNoObghAl", "_LcVvjUVdqr", "OZRPcwccrx", "XZRwyDXwHy8", "nips_2022_4RC_vI0OgIS", "nips_2022_4RC_vI0OgIS", "nips_2022_4RC_vI0OgIS", "nips_2022_4RC_vI0OgIS", "nips_2022_4RC_vI0OgIS" ]
nips_2022_Ms6QZafNv01
Optimal algorithms for group distributionally robust optimization and beyond
Distributionally robust optimization (DRO) can improve the robustness and fairness of learning methods. In this paper, we devise stochastic algorithms for a class of DRO problems including group DRO, subpopulation fairness, and empirical conditional value at risk (CVaR) optimization. Our new algorithms achieve faster convergence rates than existing algorithms for multiple DRO settings. We also provide a new information-theoretic lower bound that implies our bounds are tight for group DRO. Empirically, too, our algorithms outperform known methods.
Reject
The main criticism raised by the reviewers was the unconvincing experiments. The reviewers generally liked the simplicity of the method presented in the paper, but were unconvinced by the impact/utility of the results. Even though the paper focuses on the convex regime, the paper may benefit from some DL experiments; this is not an uncommon paradigm in research (e.g. Adam and other convex optimization methods validated with DL experiments). There were some other comments worth addressing in a future version of the paper (e.g. adding the discussion of mini-batching, improving the exposition).
val
[ "TMYo0oW7WK", "FgduSpLX2p9K", "b3ztScQOEf", "qEt2Lj4hmUa", "xRW8ZUnxUpx", "EWm_lsZerKL", "0nluoTQP-yx", "4N3DRIWnVkf", "lSxj-kCwY_L" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I apologize for the late response. \n\n> We believe that our OCO approach is valuable because it yields an algorithm for a wide range of DRO problems in a very simple way. In fact, the previous work on group DRO [Sagawa et al. 2020] analyzed the convergence of their algorithm with more complicated results of convex analysis. Our approach significantly simplifies their convergence analysis too; see Appendix D. Furthermore, via our OCO approach, we can obtain a unified algorithm for a more general form of DRO including group DRO, empirical CVaR, and many others. Also, the unified algorithm achieves the optimal convergence rate for group DRO, which is remarkable given the simplicity of our approach.\n\nThat is a fair point but doesn't answer the question of algorithmic novelty for me. As I already said, I agree that your formulation is clean and simple. \n\n> Since we study expected convergence under the i.i.d. oracle for group distributions, there is no benefit from mini-batching. This is generally true for OCO as well.\n\nI don't think the i.i.d. oracle setting masks the benefit of mini-batching. It is the bounded gradient and parameter space assumptions. \n\n> We are not sure if mini-batching helps in other smoothness/Lipschitzness assumptions even in OCO.\n\nI don't believe it helps in OCO, but it does help in OSGD-style algorithms in bandit convex optimization. For instance, see [this paper](https://arxiv.org/abs/1209.2388).\n\n> On the other hand, we observed that mini-batching can stabilize algorithms' behavior. To evaluate the average performance of algorithms reliably, we used mini-batching in our experiment. We remark that the performance trend did not change even without mini-batching (but with a larger variance).\n\nThis is precisely why I think you should analyze your algorithms for a problem class where mini-batching reduces variance. I understand that this is a more fundamental question for OCO, but strongly encourage the authors to add this point in a remark. \n\n> Thank you for your valuable suggestion. We agree that it would be interesting to see the performance of our algorithm with various datasets and/or in other DRO problems. We will add more experiments in the final version.\n\nPlease do. After reading the other reviewers' comments, it seems to be a shared concern. I agree that the existing experiments make sense compared to the previous work. \n\nOverall, I will retain my score, and **strongly encourage the authors to add more experiments**, including but not limited to the non-convex experiments in Sagawa et al., even if the setting is not covered by your theory. \n", " Thank you for your careful reading and valuable suggestions. We'll reflect your comments in the final version.\n\n> The idea seems original, however, I am not entirely sure if there is an algorithm in the multi-arm bandits that has the same convergence rate. If not this result can be seen as a contribution to the two-player stochastic online learning problems.\n \nTo the best of our knowledge, there is no known multi-armed bandit algorithm with the same convergence rate. Note that our stochastic two-player game formulation is different from multi-armed bandit because we also need to optimize on a continuous variable $\\theta$. \n\n> Throughout the paper formulation of \"distributionally robust optimization\" and \"group distributionally robust optimization\" terms are used interchangeably, which I find very confusing. For example, in Line 21, the reference to general DRO problems is not precise. 
This paper studies the general formulation of group DRO problems because the problem introduced in (1) is already a group distributionally robust optimization problem, which is different from the traditionally studied distributionally robust optimization problem. A similar problem occurs in Line 148, Line 143, in the title of Algorithm 1, and so on.\n> To avoid confusion, the problem formulated in (1) could be called a general formulation for group distributionally robust optimization.\n\nWe apologize for the confusion. We will revise the text so that DRO (1) is called \"a general form of our DRO\". \n\n\n> I am not entirely sure why $q_t$ is removed from the objective in the gradient update of $\theta_t$. Is it common to apply such a technique in two-player games? I am not sure if the unbiasedness of the gradient estimates goes through in that case.\n\nThe update of $\theta_t$ indeed depends on $q_t$ because the data point $z$ is sampled from the group distribution $P_{i_t}$, where the group index $i_t$ is sampled from $q_t$. See Lines 3–5 in Algorithm 1. The resulting gradient estimator is unbiased.\n\n\n> I do not think the related literature for online stochastic games is covered fully. I suggest the authors extend this part.\n\nThank you for your suggestion. We will extend this part and would appreciate any missing references you could suggest.\n\n\n> It is not clear what $f$ is in Section A.1.\n\nFunctions $f_t$ in Section A.1 denote the objective functions in online learning. In our convergence analysis, we set $f_t(\theta) := L(\theta, q_t)$ (resp. $f_t(q) := L(\theta_t, q)$) to obtain regret bounds for the $\theta$-player (resp. $q$-player).\n\n\n> I cannot follow the correctness of the proofs for a couple of reasons. - I could not see proofs for Lemmas 1, 2, 3, and 4. If these lemmas are proved in a reference, please indicate them. - The proofs of the theorems depend on these lemmas.\n\nLemma 1 can be found in standard online learning textbooks, e.g., Theorem 6.8 in [[Orabona 2021](https://arxiv.org/abs/1912.13213)]. Lemmas 2--4 immediately follow from Lemma 1 by setting $\Psi$ to the Euclidean norm, negative entropy, and Tsallis entropy, respectively.", " Thank you for your careful reading and valuable suggestions. We'll reflect your comments in the final version.\n\n> The background for adopting the synthetic dataset was to confirm the performance of the proposed algorithm in the high-dimensional model scenario. For the readers, it seems better to utilize neural network architectures rather than only linear models. (e.g., [1] used ResNet50 for image datasets and BERT for a language dataset.)\n\nWe did not include experiments in the deep learning regime because our main focus is to devise an efficient algorithm for a general form of our DRO (1) and prove its optimality in group DRO. Nonconvex DRO such as deep learning is much more challenging and currently out of the reach of our theoretical analysis. Note that the previous work [Sagawa et al. 2020] also analyzed the convergence for convex group DRO only.\n\n> The introduction of the oracle makes a difference in terms of the setting in consideration from the existing methods, so it would be good to add a discussion on this.\n\nThank you for your remark. Our i.i.d. oracle model for group distributions is more general than the empirical group distribution model. We will add a discussion in the revised version.", " Thank you for your careful reading and valuable suggestions. 
We'll reflect your comments in the final version.\n\n> What is the evaluation of the methods of this paper on other real world data sets. \n\nWe agree that it would be interesting to see the performance of our algorithms in a different real-world dataset. We believe that one would obtain similar results for other datasets because our algorithm improved the convergence rate by $O(\\sqrt m)$ and that our algorithms consistently outperform the known algorithm for both real and synthetic datasets. We will add more experiments in the final version.\n\n> What is the comparison to other methods to solve DRO problems beyond those introduced in Sagawa et al.\n\nFor empirical CVaR, our method achieves $O(\\sqrt{(G^2D^2 + M^2m)/T})$ convergence, improving $O(\\sqrt{(G^2D^2 + M^2m\\log m)/T})$ convergence of [Curi et al. 2020]. For other DRO problems, there is no known algorithm for the i.i.d. oracle setting (to the best of our knowledge).\n\n> What are practical applications of the Group DRO, Empirical CVaR and other mentioned problems. How is the performance of the algorithm on the real world instances of these problems.\n\nGroup DRO, empirical CVaR, and other DRO problems have various applications in fairness and robustness in machine learning. See references in the introduction. Our experiments demonstrated that our algorithms find an optimal solution in the worst group fairness problem (i.e., group DRO) much faster than the known method. To the best of our knowledge, there is no known algorithm for other DRO problems in the i.i.d. oracle setting, so we conducted experiments in group DRO.", " Thank you for your careful reading and valuable suggestions. We'll reflect your comments in the final version.\n\n\n> My main concern is that the algorithmic insight here is just casting the objective as a two-player game and using the correct regularizer for OMD. On the one hand, the simplicity is excellent (and OCO theory itself is known to be clean and simple), but on the other, I feel that the techniques and analysis used are not novel. This is apparent in the proof of the upper bounds, which immediately follow from known convergence lemmas. The lower bound is more interesting, and the analysis idea looks more novel. This is why I give the paper a weak acceptance.\n\nWe believe that our OCO approach is valuable because it yields an algorithm for a wide range of DRO problems in a very simple way. In fact, the previous work on group DRO [Sagawa et al. 2020] analyzed the convergence of their algorithm with more complicated results of convex analysis. Our approach significantly simplifies their convergence analysis too; see Appendix D. Furthermore, via our OCO approach, we can obtain a unified algorithm for a more general form of DRO including group DRO, empirical CVaR, and many others. Also, the unified algorithm achieves the optimal convergence rate for group DRO, which is remarkable given the simplicity of our approach.\n\n\n> The authors use mini-batching in the experiments, but the theory doesn't have any mini-batching. I would guess that under the assumptions of G-lipschitzness and compact parameter space, it is hard to show such a benefit theoretically? If this is true, could some other assumptions show an improvement (such as smooth, non-Lipschitz functions)? I understand this might is a more fundamental question for online convex optimization.\n\nSince we study expected convergence under the i.i.d. oracle for group distributions, there is no benefit from mini-batching. 
This is generally true for OCO as well. We are not sure if mini-batching helps in other smoothness/Lipschitzness assumptions even in OCO. Mini-batching is known to be theoretically useful in the empirical distribution (offline) setting, but that is a rather different and more restricted setting. \nOn the other hand, we observed that mini-batching can stabilize algorithms' behavior. To evaluate the average performance of algorithms reliably, we used mini-batching in our experiment. We remark that the performance trend did not change even without mini-batching (but with a larger variance). \n\n\n> I believe a much more extensive empirical evaluation is possible here, particularly on data sets widely used in the fairness/robustness community. Since the theoretical results are not very hard to attain and are positive, it would be good to see that the simple algorithms outperform known algorithms on a broad range of tasks. It would also be good to see these experiments on data sets where the general objective proposed in the paper makes more sense than group DRO, etc.\n\nThank you for your valuable suggestion. We agree that it would be interesting to see the performance of our algorithm with various datasets and/or in other DRO problems. We will add more experiments in the final version. Below let us explain why we used the current experimental setup.\n\n**On Datasets.** We used the Adult dataset for the real-world experiment because it was used in previous work on DRO [Namkoong and Duchi, 2016]. We agree that it may be interesting to see the performance on other recent fairness datasets. We believe that one would obtain similar results for other datasets because our algorithm improves the convergence rate by a factor of $O(\sqrt m)$ and our algorithms consistently outperform the known algorithm for both real and synthetic datasets. We will add more experiments with other datasets in the final version. \n\n**On other DRO settings.** We conducted experiments in group DRO because there is a known algorithm [Sagawa et al. 2020] under the same i.i.d. oracle model. We are not aware of any other special case of DRO for which a known algorithm exists under the same i.i.d. model. There are various works in the empirical distribution (offline) setting, but it is more restricted than the i.i.d. setting.", " This paper studies a generalized notion of distributionally robust optimization (DRO). In particular, the mixing weights for different data distributions ($m$ groups) can belong to arbitrary subsets $Q$ of the simplex $\Delta_m$ as long as $Q$ also contains the uniform mixing weights $\frac{1}{m}\cdot \mathbb{1}$. Thus, popular DRO objectives such as group-DRO, etc., are special cases of this objective. \n\nThe paper casts this objective as a two-player game and uses stochastic gradient methods to obtain convergence guarantees for the optimality gap. The players' algorithms are simple online gradient and mirror descent algorithms with appropriate regularization. For the particular case of group-DRO, their rates improve upon the best-known rate by a factor of $\sqrt{m}$. The paper also presents a novel lower bound, showing that their algorithm has an optimal convergence under their assumptions. I like the presentation of the paper: discussion of the related work and their connections in section two, review of current convergence rates, and discussion of algorithm design and results. The result is clean and also shown to be optimal for group DRO. 
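To make the described game concrete: the objective is presumably $\min_{\theta} \max_{q \in Q} \sum_{i=1}^m q_i\, \mathbb{E}_{z \sim P_i}[\ell(\theta; z)]$, and one round of the no-regret dynamics can be sketched as follows (restricted to $Q = \Delta_m$, with illustrative step sizes and a negative-entropy regularizer for the $q$-player; the paper also analyzes Euclidean and Tsallis-entropy regularizers):

```python
import numpy as np

def dro_round(theta, q, group_oracles, loss_grad, loss_val, eta_theta, eta_q, rng):
    """One round of the stochastic two-player dynamics (illustrative sketch).

    `group_oracles[i]()` is assumed to return an i.i.d. sample z ~ P_i;
    `loss_grad`/`loss_val` give the gradient/value of the loss at (theta, z).
    """
    m = len(q)
    i = rng.choice(m, p=q)                 # group index i_t ~ q_t
    z = group_oracles[i]()                 # data point z ~ P_{i_t}

    # theta-player: online gradient descent; sampling i ~ q already weights
    # the gradient by q, so this stochastic gradient is unbiased for L(theta, q).
    theta = theta - eta_theta * loss_grad(theta, z)

    # q-player: online mirror descent with negative entropy (exponentiated
    # gradient ascent), fed an importance-weighted estimate of the group losses.
    g = np.zeros(m)
    g[i] = loss_val(theta, z) / q[i]       # E[g] equals the vector of group losses
    q = q * np.exp(eta_q * g)
    q = q / q.sum()                        # normalize back onto the simplex
    return theta, q
```

For a general $Q \subsetneq \Delta_m$, the $q$-update would additionally require a Bregman projection onto $Q$, which this sketch omits.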
\n\nMy main concern is that the algorithmic insight here is just casting the objective as a two-player game and using the correct regularizer for OMD. On the one hand, the simplicity is excellent (and OCO theory itself is known to be clean and simple), but on the other, I feel that the techniques and analysis used are not novel. This is apparent in the proof of the upper bounds, which immediately follow from known convergence lemmas. The lower bound is more interesting, and the analysis idea looks more novel. This is why I give the paper a weak acceptance. 1) The authors use mini-batching in the experiments, but the theory doesn't have any mini-batching. I would guess that under the assumptions of $G$-Lipschitzness and compact parameter space, it is hard to show such a benefit theoretically? If this is true, could some other assumptions show an improvement (such as smooth, non-Lipschitz functions)? I understand this might be a more fundamental question for online convex optimization.\n\n2) I believe a much more extensive empirical evaluation is possible here, particularly on data sets widely used in the fairness/robustness community. Since the theoretical results are not very hard to attain and are positive, it would be good to see that the simple algorithms outperform known algorithms on a broad range of tasks. It would also be good to see these experiments on data sets where the general objective proposed in the paper makes more sense than group DRO, etc. No apparent negative societal impacts. ", " In this paper the authors introduce new algorithms to solve a set of DRO problems consisting of Group DRO, Empirical CVaR and Weighted Ranking problems. Their algorithms treat the problems as a two-player game between players of the minimization and maximization problems. They leverage Online Gradient Descent and Online Mirror Descent for each of these respectively. \nThey also develop DRO instances which show a lower bound on the number of queries required for a given objective performance. These bound the best possible performance of any algorithm for the Group DRO problem. \n Originality: I feel the paper is original as it provides new and innovative algorithms for existing problems. \n\nQuality: I feel the paper is technically sound. The novel algorithms provided in the paper improve upon existing methods and these aspects are illustrated through theoretical and numerical results. However, I feel that the numerical section is quite limited with only 1 real data set. Expanding this section will significantly improve the quality of the paper. \n\nClarity: I think the paper is well written and clearly communicates the various assumptions, results, numerical experiments etc. \n\nSignificance: I feel the paper is important as it takes a step forward in solving practical DRO problems by identifying new algorithms for the DRO problems. What is the evaluation of the methods of this paper on other real-world data sets? \nWhat is the comparison to other methods to solve DRO problems beyond those introduced in Sagawa et al.? \nWhat are practical applications of the Group DRO, Empirical CVaR and other mentioned problems? How is the performance of the algorithm on real-world instances of these problems? The authors do not discuss any limitations of their work. Though I doubt the existence of negative societal impact, it would be interesting to see any differences in the solutions obtained by their method as compared to Sagawa. Do they focus on different aspects, etc.? 
\n", " This paper proposes a general class of DRO that includes Group DRO, subpopulation fairness, empirical conditional value at risk optimization and others, and the stochastic gradient learning algorithms under these frameworks. As the convergence rates shown in the previous studies are improved, the proposed algorithms are efficient, and especially, it is theoretically shown that the optimal convergence rate is achieved under the Group DRO framework. This allows to superior performance over a baseline in the Adult and synthetic datasets, which are aligned with theoretical results. [Strengths]\n* This paper is clearly written in overall.\n* It is interesting to combine online learning algorithms and appropriate gradient estimators in the scenario, where the group distributions are given by the stochastic oracles.\n* It differs from previous studies in that it derives algorithms and theoretical results for online learning scenarios with stochastic functions.\n* The introduction of the weighted ranking of group losses (general DRO) seems natural after Group DRO and empirical CVaR optimization.\n* This paper shows several theoretical results including improvement of convergence rate compared to existing studies through the proposed algorithms and corresponding convergence analysis.\n\n[Weaknesses]\n* Compared to the theoretical results, experimental analysis and setting seem to be insufficient. (e.g., Accuracy, the number of random seeds, corresponding standard deviations can also be reported in the Appendix.)\n* Line 129: ‘p=m/k’ → ‘p=k/m’ * Due to its methodological simplicity, it seems that the proposed algorithms have the advantage of being applicable to a variety of datasets.\n* The background for adopting the synthetic dataset was to confirm the performance of the proposed algorithm in the high-dimensional model scenario. For the readers, it seems better to utilize natural network architectures rather than only linear models. (e.g., [1] used ResNet50 for image datasets and BERT for language dataset.)\n* The introduction of oracle makes a difference in terms of the setting in consideration from the existing methods, so it would be good to add a discussion on this.\n\n[1]: (Sagawa et al.) Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. Limitations (e.g., some consideration in practice) are not specified, as mentioned in the checklist.", " The paper proposes an algorithm for a two-player zero-sum game based on stochastic no-regret dynamics which achieves a faster convergence rate than existing algorithms for group DRO problems. They further show that their bound is tight for group DRO. \n The paper is well-written.\n\nThe idea seems original, however, I am not entirely sure if there is an algorithm in the multi-arm bandits that has the same convergence rate. If not this result can be seen as a contribution to the two-player stochastic online learning problems. \nI highlight the need for a comprehensive literature review. Please see my comments below.\n\nI appreciate that the authors put a separate section about the algorithm that is the most related to their work and provide a simple proof for the convergence of the competitive algorithm. 
\n\n # Major Comments\n\n1) Throughout the paper, the formulations \"distributionally robust optimization\" and \"group distributionally robust optimization\" are used interchangeably, which I find very confusing.\nFor example, in Line 21, the reference to general DRO problems is not precise. This paper studies the general formulation of group DRO problems because the problem introduced in (1) is already a group distributionally robust optimization problem, which is different from the traditionally studied distributionally robust optimization problem.\nA similar problem occurs in Line 148, Line 143, in the title of Algorithm 1, and so on.\n\nTo avoid confusion, the problem formulated in (1) could be called a general formulation for group distributionally robust optimization. \n\n2) I am not entirely sure why $q_t$ is removed from the objective in the gradient update of $\theta_t$.\nIs it common to apply such a technique in two-player games?\nI am not sure if the unbiasedness of the gradient estimates goes through in that case.\n\n3) I do not think the related literature for online stochastic games is covered fully. I suggest the authors extend this part. \n\n\n4) It is not clear what $f$ is in Section A.1.\n\n5) I cannot follow the correctness of the proofs for a couple of reasons.\n\t\t- I could not see proofs for Lemmas 1, 2, 3, and 4. If these lemmas are proved in a reference, please indicate them. \n\t\t- The proofs of the theorems depend on these lemmas. \n\n## Minor Comments: \n\n1) The sentence in Line 125 is neither rigorous nor clear. Please clarify this sentence.\n The authors did not study the limitations of their work. \n" ]
[ -1, -1, -1, -1, -1, 6, 6, 5, 4 ]
[ -1, -1, -1, -1, -1, 3, 3, 2, 3 ]
[ "xRW8ZUnxUpx", "lSxj-kCwY_L", "4N3DRIWnVkf", "0nluoTQP-yx", "EWm_lsZerKL", "nips_2022_Ms6QZafNv01", "nips_2022_Ms6QZafNv01", "nips_2022_Ms6QZafNv01", "nips_2022_Ms6QZafNv01" ]
nips_2022_68YyraaeYmc
Exploring through Random Curiosity with General Value Functions
Efficient exploration in reinforcement learning is a challenging problem commonly addressed through intrinsic rewards. Recent prominent approaches are based on state novelty or variants of artificial curiosity. However, directly applying them to partially observable environments can be ineffective and lead to premature dissipation of intrinsic rewards. Here we propose random curiosity with general value functions (RC-GVF), a novel intrinsic reward function that draws upon connections between these distinct approaches. Instead of using only the current observation’s novelty or a curiosity bonus for failing to predict precise environment dynamics, RC-GVF derives intrinsic rewards through predicting temporally extended general value functions. We demonstrate that this improves exploration in a hard-exploration diabolical lock problem. Furthermore, RC-GVF significantly outperforms previous methods in the absence of ground-truth episodic counts in the partially observable MiniGrid environments. Panoramic observations on MiniGrid further boost RC-GVF's performance such that it is competitive with baselines exploiting privileged information in the form of episodic counts.
Accept
This paper proposes an intrinsic motivation method which by and large extends RND, aiming to serve the needs of agents in longer-horizon exploration problems. There was a bit of a spread of scores amongst reviewers, but based upon reading the extensive discussion, I am recommending acceptance: on the balance of opinions, the consensus is that the result is important, the method useful, and the paper publishable.
train
[ "f7_wdPo8z4", "-b5w3aWYiI", "_QgR7ScUIOZ", "xV1bc4zhTTF", "PAsWVgHqay4y", "DpQO-UEvHwW", "rBDlIamoTe", "8Rk-JXkvgCQ", "E16cmcIZ83w", "rwjpb03mFrt", "wQkBa9W4th_", "zV61lGZXnpaZ", "2diEm9AlHVyA", "fRa3sPvp3fr", "eN13k0dAACj", "5oZgn1xIM7i", "EyoBvz8pCby", "K3kr1GNNR2N" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **\"And now that I re-read that section, I believe the authors don't discuss this relationship between scaling the prediction error with essentially an epistemic uncertainty estimate well enough. The authors discuss how the ensemble disagreement vanishes for stochastic states so the prediction error will be multiplied by 0. But what about all the other cases, where ensemble members disagree because of simply high epistemic uncertainty due to lack of data? In that case, you are multiplying the prediction error with an epistemic uncertainty estimate, introducing a new type of relationship that is not discussed.\"**\n\nIndeed, while the benefit is clear in stochastic states when the prediction error is multiplied by 0, it is more difficult to analyze what will happen in other encounters where multiple different factors interact. In this work, we have set out to investigate this empirically, and our results in ablation that you recommended (figure https://ibb.co/z21MtDD) suggest a clear improvement from using the disagreement term.\n\nAs mentioned in the reply above, we had originally begun our investigation with the prediction error term only (no ensembled disagreement term). The empirical results we obtained were worse than the current version that adds the disagreement term, which we felt is likely due to incorrectly handling the aleatoric uncertainty when longer horizons are considered. This led us to include the disagreement term which greatly improved our approach.\n\nWe will update the discussion to include a sketch of possible scenarios where the benefit of the disagreement term is not clear to complement our current discussion, which currently mostly focuses on its benefits in stochastic states. Regarding the analysis suggested in your previous question that we overlooked, we agree that an analysis of different noise levels and ensemble sizes in the Diabolical Lock could be an interesting addition, although the evidence from the ablation that you suggested already provides a strong signal that the inclusion of the variance term is generally useful on representative benchmarks (and which we will include in the paper). \n\n**\"Could that explain why for γ = 0 you see a difference in results for ensemble size 2?\"**\n\nA difference between RC-GVF ($\\gamma = 0$) and RND in Figure 4c is the inclusion of the multiplicative disagreement term from the ensemble in the former case. This appears to only provide a small benefit (compare to RC-GVF ($\\gamma = 0$) w/o disagreement), while history conditioning is the main driver of improvement over RND in the case of ($\\gamma = 0$). As to why this small benefit exists, we speculate that simply estimating the intrinsic reward (which includes the prediction error) with multiple predictors may have an advantage in reducing variance.\n\n**\"Is there a way to check that you are indeed seeing an improvement with an ensemble because of vanishing disagreement at noisy states?”**\n\nThis is unfortunately difficult to measure, although one piece of evidence we have for this is that _RC-GVF benefits more from the disagreement term than RND alone_ (which we will highlight in the paper upon including these new results).\n\n**“I also would still like to see quantitative results of a high-capacity predictor for the diabolical lock experiment in an updated version of the supplementary.”**\n\nWe re-ran this experiment as requested. The table below presents results of the best RND configuration with the high-capacity predictor on the diabolical lock problem. 
\n\n| Model | Mean and (min, max) of farthest column | Runs that reach end of the lock |\n| --------- | ---------------------------------------------------- | ----------------------------------------- |\n| RND | 32 (11, 60) | 0/10 |\n| RND (with high-capacity predictor) | 31 (26, 57) | 0/10 |\n\nIt can be seen that even with a high-capacity predictor RND is unable to solve this environment. We will report these results in the supplementary material.\n", " Thank you for engaging in the discussion period. We reply to your remaining comments and questions below. \n\n**“However the ensemble size experiment results are still almost counter-intuitive to me. Why does ensemble size 2 work the best in the KeyCorridorS5R3 experiment? Do the authors have any explanation/intuition why there is no consistent trend across the experiments with varying ensemble sizes? Is this an issue with number of seeds as reviewer 9UDz alluded to?”**\n\n\nWe note that all ensemble sizes (2, 4, 6, 8) perform about the same on KeyCorridorS5R3, and that the natural variance of these algorithms makes it difficult to interpret small differences in performance as are currently observed. Currently, we report 5 seeds for ensemble sizes (4, 6, 8), and 10 for size=2, because of the limited time available during the rebuttal. While this means that the precise curves are still likely to deviate slightly, this is not expected to alter the outcome of this experiment (i.e. that they perform about the same) given the observed variability thus far.\n\nAs to why ensembles with fewer members can perform as well as those having additional members, there could be a few reasons:\n\n1. It might be that because of also having the prediction error term, a precise quantification of the disagreement/variance is not essential.\n\n2. The ensembles have a common recurrent core and only differ in MLP heads; perhaps separate RNNs for each member would lead to greater variation with ensemble size.\n\n3. With larger ensemble sizes it may be that additional hyper-parameters need to be tuned separately (at this point, we checked two values of the intrinsic weight coefficient). For instance there could be a dependency through the variance of the sample variance, which could impact the best choices for the intrinsic weight coefficient and the predictor’s learning rate.\n\n**“Also the authors didn't really comment about my question about the relationship between the noise levels in the environment and the ensemble size.”**\n\nThank you for pointing this out, we apologize for overlooking this question previously and include our response as part of our other reply down below.\n\n**“Although the variance between ensemble members should go to zero in noisy states, this on its own is a way of approximating the epistemic uncertainty.\"**\n\nWe agree that the disagreement term may also approximate epistemic uncertainty on its own. In fact, there are several possibilities for constructing an intrinsic reward within our framework. Our particular choice was motivated by the need to correct a prediction error term, which itself did not separate the aleatoric uncertainty. The error term as a starting point is motivated by the connections to RND, and throughout the development of RC-GVF we profited from being able to study the connections between RND and RC-GVF more closely through this term. 
How to go from here, and whether the variance term can stand on its own or perhaps should be combined differently with an error term, is an interesting and relevant question for future work. We will update the paper accordingly to discuss this.\n\n", " Thank you for your response. I believe the idea is novel and given the experimental results, I am happy to increase my score. ", " Thank you for your clarification and answers to my questions. I agree that doing a sensitivity analysis on the weighting of true and intrinsic reward would be valuable. Also, making it clear how they are weighted and/or annealed in the main paper would be good. ", " I thank the authors for their response and for answering my questions. \n\nHowever the ensemble size experiment results are still almost counter-intuitive to me. Why does ensemble size 2 work the best in the KeyCorridorS5R3 experiment? Do the authors have any explanation/intuition why there is no consistent trend across the experiments with varying ensemble sizes? Is this an issue with number of seeds as reviewer 9UDz alluded to? \n\nAlso the authors didn't really comment about my question about the relationship between the noise levels in the environment and the ensemble size. Although the variance between ensemble members should go to zero in noisy states, this on its own is a way of approximating the epistemic uncertainty. And now that I re-read that section, I believe the authors don't discuss this relationship between scaling the prediction error with essentially an epistemic uncertainty estimate well enough. The authors discuss how the ensemble disagreement vanishes for stochastic states so the prediction error will be multiplied by 0. But what about all the other cases, where ensemble members disagree because of simply high epistemic uncertainty due to lack of data? In that case, you are multiplying the prediction error with an epistemic uncertainty estimate, introducing a new type of relationship that is not discussed. Could that explain why for $\\gamma=0$ you see a difference in results for ensemble size 2? Is there a way to check that you are indeed seeing an improvement with an ensemble because of vanishing disagreement at noisy states?\n\nI also would still like to see quantitative results of a high-capacity predictor for the diabolical lock experiment in an updated version of the supplementary.", " Thanks for your detailed response and the additional results. Your response has adequately addressed my main concerns. While I do appreciate the links to new results (very convenient), I also would've really liked to see the suggested changes and new results reflected in an updated PDF. Nevertheless, I do trust that the authors would make appropriate changes in the final version and as such I'm happy to increase my score at this point. ", " We would like to thank the reviewers for their thoughtful comments and positive feedback. We are pleased that the reviewers found our approach novel or interesting (Reviewers MZa2, ZFPx, 5dYH), well-motivated (Reviewers 9UDz, 5dYH), and simple, intuitive or fundamentally well justified (Reviewers MZa2, 9UDz, 5dYH).\n\nIn response to the reviewers feedback we have conducted several additional experiments and analysis, which we will incorporate in the paper. We summarize the main experiments and findings below.\n\n**Ensemble size ablations (Reviewers ZFPx and 5dYH).**\n\nWe conducted experiments with larger ensemble sizes on the MiniGrid and Diabolical Lock experiments. 
We observed a small improvement with increased ensemble size in the diabolical lock, where we present results of lock completion at 15M frames with 2 (original), 4 and 6 members in the ensemble as in Table 1:\n\n| # predictors | Mean and (min, max) of farthest column | Runs that reach end of the lock |\n| --------------- | ---------------------------------------------------- | ----------------------------------------- |\n| 2 | 94 (66, 100) | 6/10 |\n| 4 | 98 (90, 100) | 8/10 |\n| 6 | 95 (65, 100) | 8/10 |\n\nIn two MiniGrid environments, we obtained comparable results with all ensemble sizes (KeyCorridor-S5R3: https://ibb.co/X5bKCYb and MultiRoom-N12S10: https://ibb.co/L8qbDH9). These new findings indicate that two-member ensembles usually suffice in these problems.\n\n**Additional baselines on Diabolical Lock (Reviewer 5dYH).**\n\nWe added a comparison of NovelD and AGAC to RC-GVF on the Diabolical Lock. We present the results for lock completion at 15M frames for these approaches and RC-GVF in the table below.\n\n| Model | Mean and (min, max) of farthest column | Runs that reach end of the lock |\n| --------------- | ---------------------------------------------------- | ----------------------------------------- |\n| RC-GVF | 94 (66, 100) | 6/10 |\n| AGAC | 5 (4, 6) | 0/5 |\n| NovelD | 24 (22, 31) | 0/5 |\n\nIt can be observed that AGAC and NovelD perform poorly on this challenging exploration task, further emphasizing the need for longer-horizon prediction as in RC-GVF.\n\n**Additional ablations of ensembling and history conditioning (Reviewer ZFPx).**\n\nWe conducted additional ablations of RC-GVF at γ = 0.6 on the KeyCorridor-S5R3 environment. Concretely, we removed either the recurrent predictor or the disagreement term. Our results (figure: https://ibb.co/z21MtDD) indicate that both factors are important at γ = 0.6, and removing either of them degrades performance. RC-GVF with only disagreement (no recurrent predictor) performs better than RC-GVF with only the recurrent predictor (no disagreement). We provide additional insight into these new findings in the response to reviewer ZFPx.\n\n**RC-GVF with episodic state counts (Reviewer 9UDz).**\n\nWe now report results for RC-GVF using ground-truth episodic state counts on KeyCorridor-S5R3 (https://ibb.co/Gx0PJrN) and MultiRoom-N12-S10 (https://ibb.co/VCrcCcv). While RC-GVF achieves competitive performance in the former environment (as expected), we observe interference between the GVF prediction and the episodic state counts in the latter environment. Further discussion is in the response to reviewer 9UDz. \n\n**Analysis of discount factors (Reviewer 9UDz).**\n\nWe expand our analysis of RC-GVF to include larger GVF discount factors (0.99 and 0.95). We observed that 0.95 works well in the KeyCorridor-S5R3 environment (figure https://ibb.co/SmWsNLH), but not in MultiRoom-N12-S10 (figure https://ibb.co/bHBQDXR). 0.99 was worse than our default discount factor of 0.6 in both environments. These results indicate that while longer-horizon prediction is important, extending the horizon too much may make the prediction problem too difficult. Further discussion is in the response to reviewer 9UDz.\n\nWe respond to individual points raised by reviewers in separate replies to each review below.\n", " **“I find the ablation study Figure 4c to contradict the statements made in section 3.3. 
Using γ = 0 is equivalent to intrinsic rewards from RND, in which case, as the authors state, no ensemble is needed (the deterministic target value of the pseudo-reward takes care of the stochasticity). So why is the ensemble ablation performed for this discount value? In fact, why are there no ablations for γ = 0.6?”**\n\nThank you for this comment and question. It is a valid point that we would like to address. \n\nIn Section 3.3, we state that, “Note that in this case using the TD-error between the predictor and target is equivalent to intrinsic rewards provided by random network distillation (RND; Equation 2).”\n\nThis equivalence would apply if we were using solely the TD-errors as intrinsic rewards for RC-GVF. Based on the definition of the RC-GVF intrinsic reward, we would still have the ensemble disagreement term with RC-GVF and γ = 0.\n\nIn principle, there should be no need for the disagreement term with RND (no aleatoric uncertainty). However, our results do indicate a relatively minor benefit from the inclusion of the ensemble disagreement term in Figure 4c. We hypothesize that these benefits could arise from using multiple predictors to estimate the first error term. Such reasoning would also be supported by the uncertainty analysis presented by Ciosek et al. (2020). From an implementation perspective, we also equip RC-GVF with a history-conditioned LSTM predictor in the MiniGrid experiments.\n\nTo summarize, there are two differences between RC-GVF with γ = 0 (in MiniGrid) and RND: (1) the history-conditioned LSTM predictor, and (2) the disagreement term from the ensemble. The ablation in Figure 4c moves from RC-GVF to RND. Thus, we first presented the move to γ = 0, and then removed either the disagreement term or the recurrent predictor. Removing both would simply be RND, for which we provide results as well.\n\nRegarding your comment about the ablations at γ = 0.6, this is a good point and something that we overlooked. We have run this experiment and obtained the following results (figure https://ibb.co/z21MtDD). Removing either the disagreement term or the recurrent predictor worsens the performance of RC-GVF with γ = 0.6. At γ = 0.0 we found that the disagreement term contributes less than the recurrent predictor. Interestingly, we obtain a slightly different result at γ = 0.6: RC-GVF with only disagreement (no recurrent predictor) performs better than RC-GVF with only the recurrent predictor (no disagreement). This further emphasizes the importance of handling the aleatoric uncertainty with non-zero GVF discounts.\n\nThe new ablation results are presented with 10 seeds for the complete RC-GVF and 5 seeds for the other variants. We will include final results with 10 seeds in the paper.\n\n[Ciosek 2020] K. Ciosek, V. Fortuin, R. Tomioka, K. Hofmann, and R. E. Turner. Conservative Uncertainty Estimation By Fitting Prior Networks. 8th International Conference on Learning Representations (ICLR) 2020.\n\n**“If I understand it correctly, the main difference between RC-GVF with γ = 0 with and without recurrence is that the predictor network unlike in RND is not the same as the pseudo-reward generator and in the case of MiniGrid doesn’t use an LSTM core. 
Did you do an ablation on the extra representational power in the different predictor network architecture of RC-GVF compared to the pseudo-reward generator also for the diabolical lock experiment?”**\n\nRC-GVF has a similar predictor (no recurrence) to RND in the diabolical lock experiment.\nAs you mention, RND’s predictor is the same as the pseudo-reward generator. In comparison, the RC-GVF predictor has more capacity (one more hidden layer, and double the neurons in each layer). We had conducted experiments with the high capacity predictor available to RND. This was never successful in solving the lock and suggests that using intrinsic rewards derived from future dynamics is really the key to succeeding in the diabolical lock. \n\nWe will clarify this in the paper, although we can also run these experiments once more and report quantitative results for them in the appendix if this is desirable.", " Thank you for your review, suggestions, and questions. We are glad that you felt that the problem setting is important and that our approach is interesting and novel. We reply to your questions and concerns about missing related work below.\n\n**“I believe that the authors of the paper didn’t discuss the related work in this direction in enough detail. There are several recent papers that build upon the ensemble disagreement proposed in [1] (which is discussed in the current paper) and extend it to get multi-step novelty estimates into the future, such as [2] and [3]. (Especially in [3], multi-step novelty with RND as intrinsic reward is also proposed.). I believe these works should also be part of the discussion to give a more complete picture of long-horizon intrinsically-motivated exploration.”**\n\nThank you for pointing out this missing related work. Indeed, these are relevant works to discuss and contrast with our own proposal for generating long-horizon intrinsically-motivated intrinsic rewards. It appears that these incorporate rollout based planning to optimize for long term novelty instead of only considering novelty on states that were actually encountered. This can be useful to overcome the reactive nature of model-free approaches and incorporate model-based planning into exploration.\n\nA crucial difference between these approaches and ours is that our state novelty signal itself incorporates long-term information without requiring step-by-step rollouts. Indeed, the very recent work by Lambert et al. [3] applies RND on imagined data in an interesting way and optimizes long-term novelty. However, this technique is very different from measuring a single observation’s novelty through multiple steps. In principle, it could also be possible to incorporate our long-term intrinsic reward from RC-GVF to be optimized by multi-step model based planning approaches. Thus combining these techniques could be an interesting direction for future work.\n\nWe will update the revised paper to include these references and a discussion.\n\n**“Why is there no recurrency in the predictor network for the diabolical lock experiment?”**\n\nWe mainly adopted this choice for compatibility reasons, because prior work evaluating on the diabolical lock did not use recurrence either [Misra et al., 2020]. Note also that there is no recurrence in the policy either (similar to in prior works), which is sufficient to solve this task. \n\nOne possible reason for this choice by Misra et al. 
could be the nature of the partial observability in this problem, which follows a block MDP rather than one where states are completely aliased. More generally, we think it is possible that there could be a benefit from recurrence, especially with higher levels of noise, although we have not experimented with this.\n\nMisra, D., Henaff, M., Krishnamurthy, A., & Langford, J. (2020). Kinematic state abstraction and provably efficient rich-observation reinforcement learning.\n\n**“Why are there only two ensemble members? How does the number of ensemble members affect performance in regards to noise levels in the tested environments?”**\n\nWe mainly used only two ensemble members because this is sufficient to have some form of disagreement, and preliminary experiments on MiniGrid did not suggest a clear benefit when increasing the number of ensemble members further. This is also in line with recent work that found a two-member ensemble sufficient for balancing optimism and pessimism with distributional estimates of the return [Moskovitz 2021].\n\nBased on your feedback, we have conducted experiments where we investigate the effect of ensemble sizes. In the KeyCorridor-S5R3 and MultiRoom-N12S10 environments we obtain comparable results for ensembles with 2, 4, 6 and 8 members (KeyCorridor-S5R3: https://ibb.co/X5bKCYb and MultiRoom-N12S10: https://ibb.co/L8qbDH9). \n\nIn the diabolical lock environments, we observed small improvements with increased ensemble sizes of 4 and 6. Here we present results of lock completion at 15M frames with 2 (original), 4 and 6 members in the ensemble as in Table 1:\n\n| # predictors | Mean and (min, max) of farthest column | Runs that reach end of the lock |\n| --------------- | ---------------------------------------------------- | ----------------------------------------- |\n| 2 | 94 (66, 100) | 6/10 |\n| 4 | 98 (90, 100) | 8/10 |\n| 6 | 95 (65, 100) | 8/10 |\n\nAs indicated in your question, this could be related to the noisy observations in the environment.\n\n", " Thank you very much for your review, questions and suggestions for improving the paper. We are glad that you found our idea and results interesting. \n\nWe recognize that it can be difficult to understand the considered benchmarks in detail if the reviewer hasn’t used them before. We will add additional details to improve clarity in various parts of the paper based on your feedback (see our response to your questions below). Please let us know in case you encounter any remaining issues in this regard or if any specific details are missing. We provide a point-by-point response to your specific questions below.\n\n**“Figure 1: I appreciate the idea of having a simple motivating example, but the details are too sparse. How does the agent get to the end to see the spike? It seems that even predicting future rewards gives no signal until the very end so it would still appear to be a very hard exploration problem without a shaped true reward. What is the statespace here? is it just white and blue? x-pos? What is the task reward?”**\n\nThank you for pointing this out. In this motivating example there is no task reward. There is a single action (and only one policy) available to the agent, which is to move forward at every time step. The current state can only be observed through the color of the cell (white or blue), which makes this a partially observable setting. 
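A minimal sketch of this observation stream may help (the segment lengths here are illustrative assumptions, not the exact corridor in Figure 1):

```python
# The corridor, as described above: one action (move forward), and the
# observation is just the current cell's colour, encoded as 0 (white) / 1 (blue).
pattern_length, tail_length = 40, 6                    # illustrative lengths
oscillating = [t % 2 for t in range(pattern_length)]   # repeating colour pattern
pattern_break = [1] * tail_length                      # final cells break the oscillation
observations = oscillating + pattern_break
```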
We train our predictors (RND or RC-GVF) online (no buffer) on the data collected in intervals of 2 environment steps during the agent’s interactions. \n\nWhat we wanted to illustrate with this example is a failure case of RND as it is usually applied in partially observable environments. The RND intrinsic reward yields no ‘surprise’ even though the last sequence of cells is ‘interesting’ in the sense that it breaks away from the repeating pattern of oscillating cell colors. We will clarify these details in the paper.\n\n**“Line 226: Transitions from \"dead\" row are unclear. From the current description it seems that an optimal policy be to take a bad action H-1 times (total reward 0) followed by a good action at the last timestep?”** \n\nWe agree that the current description may erroneously suggest that the optimal policy would be to take a bad action H-1 times. Due to space constraints we were limited to providing further details along with a figure in the supplementary material (Figure 5, in Appendix A.1). We will clarify this with a sentence to communicate that the agent cannot obtain the maximum return once a single bad action is taken.\n\nTo avoid any other misunderstandings we will also summarize the dynamics related to the dead row here:\n\n* The transition (for any action) from a dead row in a certain column is to the dead row in the next column with zero reward. Once the agent transitions to the dead row it remains in it until the end of the episode. In the final dead state all actions lead to termination with zero reward.\n\n* It is impossible for the agent to receive a high reward at the end of the lock from a dead state, and hence the optimal policy is to take the respective good actions in each (good) state. The good action at each state is assigned randomly as part of the MDP specification.\n\n**“Line 232: I don't understand why you can't just always take the good action. It seems like this deterministic policy works regardless of what state you are in”**\n\nIndeed, a deterministic policy that selects the good action in every state is the optimal one. However, as we now hope is clear (and what we intended to convey with the sentence in line 232), any fixed/memorized sequence of actions that ignores state can not successfully solve this problem due to the stochastic transitions. The agent must take a correspondingly different action depending on the state it ends up in.\n\n**“How sensitive is the algorithm to the weights on true reward and intrinsic reward for RL?”**\n\nThat is an interesting question that we have not explored thus far. This is mainly because we don’t have any reason to suspect that the intrinsic reward generated by RC-GVF requires a different treatment to other intrinsic rewards from the literature in this regard. Similar to all the other baselines we compare with, RC-GVF balances intrinsic and extrinsic rewards through a weight on the intrinsic coefficient. In certain cases like RND, intrinsic rewards would naturally vanish over interactions. This would also be true for NovelD with RND. RC-GVF is closer to recent approaches that propose intrinsic rewards that do not vanish asymptotically [Flet-Berliac et al., 2021; Raileanu et al., 2020]. One way to ensure stability is by annealing the intrinsic coefficient to zero [Flet-Berliac et al., 2021]. 
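For clarity, the balance in question is the standard additive mixing (a sketch, with $\beta$ denoting the intrinsic coefficient):

$$r_t = r_t^{\text{ext}} + \beta\, r_t^{\text{int}},$$

where $\beta$ may optionally be annealed towards zero over training, as in Flet-Berliac et al. (2021).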
In our case, it is simply a small value, such that extrinsic rewards will dominate once they are reliably discovered.\n \nWe will attempt to include a basic investigation into this matter in the form of an experiment with different mixing weights.\n", " Thank you for your review and valuable feedback. We are glad that you found our paper novel, well-motivated, and easy to follow. We understand that you are on the fence about our work and we hope that our response and revision may help you to decide to endorse our submission.\n\nIn your review, you mention that **“My only real qualm with the paper is the significance of the results. The method strongly outperforms the baseline in the lock task but really only outperforms the baselines in 2 of the grid-world tasks.”**\n\nIndeed, RC-GVF does not solve all environments in the MiniGrid suite. However, as is evident from experiments, state-of-the-art methods such as NovelD and AGAC also cannot solve these environments (without relying on episodic state counts), and they perform worse in those environments that RC-GVF solves. Based on your suggestion, we have now also compared RC-GVF to NovelD and AGAC on the diabolical lock environment, where we similarly find that RC-GVF is the best-performing method.\n \nOther than proposing a novel approach to generating intrinsic rewards that extends RND to using random GVFs, we wanted to push the development of future exploration methods to consider the more natural setting without episodic state counts. With RC-GVF achieving SOTA in this setting we contribute a useful baseline for others to improve on, while also pointing to deficits in existing methods applied to this setting.\n\nWe provide responses to your specific questions below.\n\n**“Are there any ablations on ensemble size?”**\n\nThank you for suggesting this analysis. We performed a study with ensemble sizes 4, 6 and 8 on KeyCorridor-S5R3 and MultiRoom-N7S8 and observed negligible benefits with higher ensemble sizes (KeyCorridor-S5R3: https://ibb.co/X5bKCYb and MultiRoom-N12S10: https://ibb.co/L8qbDH9). This appears to be in line with recent work that found a two-member ensemble sufficient for balancing optimism and pessimism with distributional estimates of the return [Moskovitz 2021].\n\nWe also conducted an analysis on the lock task, where we present results for lock completion at 15M frames, and we found small improvements with larger ensembles of 4 and 6 predictors.\n\n**“As mentioned I would be interested in knowing the results of AGAC and NovelD on the lock task.”**\n\nThank you for suggesting this. Our initial aim with the diabolical lock experiment was to illustrate differences between RND and RC-GVF, since RND is often evaluated in this environment, while approaches like AGAC and NovelD are not. \n\nHowever, we agree it is valuable to have some indication of how they perform on the lock task. 
We present the results for lock completion at 15M frames for these approaches and RC-GVF in the table below.\n\n| Model | Mean and (min, max) of farthest column | Runs that reach end of the lock |\n| --- | --- | --- |\n| RC-GVF | 94 (66, 100) | 6/10 |\n| AGAC | 5 (4, 6) | 0/5 |\n| NovelD | 24 (22, 31) | 0/5 |\n\nAs we can see, AGAC does not make progress towards solving the problem (for a wide range of hyperparameters). NovelD fares better, solving about 30% of the lock on the best run. Here we explored different hyperparameter ranges for the intrinsic coefficient and the learning rate of the predictor. For NovelD we used the suggested default of 0.5 for the scaling coefficient ($\alpha$). These are results with 5 seeds; we will include final results with 10 seeds in the supplementary material.\n", " **“The approach seems also applicable in principle to continuous state-action environments. The authors do not claim this, but could you comment on whether you see any obstacles in such environments?”**\n\nThank you for pointing this out. We have not tested RC-GVF in continuous environments (although the diabolical lock has smooth noisy observations). However, since we focus on prediction problems (GVFs), this approach could almost immediately apply to continuous control. One aspect to consider is that if we wanted to include control GVFs to learn optimal value functions for certain cumulants, we would have to make modifications to deal with maximization over continuous actions. Nonetheless, this is an interesting direction for future work that we will comment on in the paper.\n\n**“How do you justify using only 10 independent trials (or only 5 in one task)? PPO itself generally has very high variance across trials, and my experience is that using 20 trials is important in deriving any conclusions about performance characteristics. Now, in your context, the variation added due to pseudo-reward predictors could only make this worse. It is my view that, given that the domains are not too computationally expensive, it is critical to at least make sure the results are statistically reliable. Please share your view on this, and why the choice to run only 10/5 seeds.”**\n\nIn principle we fully agree with this sentiment, as there are indeed variance and stability issues with PPO. Our current choice of 10 seeds (for all experiments apart from OM2Dlhb) is mainly a pragmatic one (to reduce the computational cost), yet it is already nearly twice as many as is typically encountered in relevant prior works, where 4 to 6 seeds are the default [Zhang et al., 2021; Flet-Berliac et al., 2021; Raileanu et al., 2020]. One way in which we have further attempted to mitigate variance and bias in reported results is by reporting all results with fresh runs after the initial hyperparameter selection.\n\nWhile we are eager to provide 20 definitive runs wherever possible for a final version, we would like to note that, although the Minigrid environments are simple in appearance, they pose challenging exploration problems that require many millions of frames of training.\n\nZhang, T., Xu, H., Wang, X., Wu, Y., Keutzer, K., Gonzalez, J. E., & Tian, Y. (2021). Noveld: A simple yet effective exploration criterion. \n\nRaileanu, R., & Rocktäschel, T. (2020). 
Ride: Rewarding impact-driven exploration for procedurally-generated environments.\n\nFlet-Berliac, Y., Ferret, J., Pietquin, O., Preux, P., & Geist, M. (2021). Adversarially guided actor-critic.\n\n", " **“The authors have tried different discount factors for training the GVF predictors and argue that lower discounts are not good because the prediction horizon should be larger. Couldn't the worse performance of lower discount factors be due to optimisation issues under function approximation? (See van Seijen et al. (2019) for details on this issue.)”**\n\nThank you for pointing us to this reference. It is an interesting aspect that could impact results. We believe that some of these factors might be less influential in our GVF prediction case, unlike analysis in the control setting by van Seijen et al. (2019). In our prediction case, there would not be effects from the mismatch between the finite horizon undiscounted return (used for measuring performance) and the discounted return. Similarly, optimization effects arising from action gaps might be more relevant in the learning of optimal value functions. However, there could also be optimization effects even in the GVF prediction case with function approximation. For example, there could be scenarios where different behaviors or policies are not reflected in the value function with certain discount factors. These effects could certainly be interesting to consider in future work and we will comment on this in the paper.\n\nMore generally, as we mention in the limitations section, moving away from single fixed discount factors seems like a good idea. Some possibilities include adapting discount factors during learning, considering GVFs under multiple discount factors, or utilizing state dependent discounts.\n\n**“I also feel the range of discount factors used is somewhat arbitrary and not well motivated. The most commonly-used discount factors (e.g. γ = 0.9 or γ = 0.99) are missing in the tested values in Figure 4. Why? I understand that these are not agent discounts and are only used for exploration, but I cannot process why still a higher value shouldn't in principle be preferred in episodic settings. I think too high wouldn't work well with function approximation due to optimisation issues, but a value of 0.99 should work in practice. Why is it not preferred in principle by the authors?”**\n\nThank you for pointing this out. We agree that further studying the effects of the discount factor is interesting and useful. We now present results with higher discount factors (0.99 and 0.95) in our discount analysis with KeyCorridorS5R3 and MultiRoom-N7S8. The new figures are available at https://ibb.co/SmWsNLH (KeyCorridor) and https://ibb.co/bHBQDXR (MultiRoom). 0.95 does quite well in KeyCorridor but not in the MultiRoom. 0.99 is worse than values like 0.6/0.7 in both environments.\n\nOne reason could be that while predicting more than one step into the future is useful, predicting too far ahead into the future makes the task too difficult, especially in procedurally generated environments like MiniGrid.\n\n**“Does the basis PPO agent use the history summary or only use the latest observation? Is this setting the same among all baselines?”**\n\nThe base PPO agent is always a recurrent network operating on the history of observations in the MiniGrid experiments. 
This holds true for all baselines as well.\n\n**“Where can I find the result that you refer to in footnote 1?”**\n\nThis statement was based on preliminary experiments on KeyCorridorS5R3 that we had not included in the initial submission. We now provide results for RC-GVF with episodic counts and egocentric observations (based on a coarse grid search for intrinsic coefficients) on two of the harder environments (KeyCorridor-S5R3 and MultiRoom-N12S10).\n\nOn KeyCorridor (https://ibb.co/Gx0PJrN), we note that RC-GVF achieves competitive performance, although it performs slightly worse than when using panoramic observations. We also ran an additional experiment using episodic state counts on MultiRoom-N12-S10 (https://ibb.co/VCrcCcv). We observe that the performance is worse than when only using episodic state counts, indicating that some interference takes place between these two signals. Indeed, we had not previously attempted to make our approach amenable to incorporating episodic counts: we heuristically applied a scaling term similar to the one described by [Raileanu et al., 2020], where the intrinsic reward is divided by the square root of the episodic count. Given the baseline performance when only using counts, it is reasonable to expect that other ways of combining (or weighting) these components (e.g., using an additive term for the state counts as opposed to a multiplicative one) can lead to substantially improved performance in this environment.\n\n
We have now added these results as requested and discuss them in reply to your other comment below.\n\n**“In order to avoid state aliasing in the rewards, wouldn't it make more sense for the rewards to also be generated through a fixed, randomly initialised recurrent network that conditions on the sequence of observations, actions, and pseudo-rewards? For example, say there are only aliasing states between a trajectory that the agent has experienced repeatedly and a trajectory that the agent had not experienced before. Now, the history summaries would be the same and so would the value predictions, and the disagreement would be low. But if the trajectories were distinguishable by different pseudo-rewards, then the histories would be distinct and the epistemic uncertainty would be high for the trajectory that had not been explored before.”**\n\nUsing a randomly initialized RNN to generate pseudo-rewards is an interesting idea that we have not considered. While there may be situations where recurrent pseudo-rewards are beneficial, we could not think of such a scenario (once the predictor is conditioned on histories). \n\nWe are not fully sure that we understood the details of the example that you presented. Perhaps you could clarify the situation where history summaries would be the same, but the trajectories can/should be distinguishable?\n\nAs you mention, using a stateless pseudo-reward generator could lead to the same pseudo-reward for different states emitting the same observation; however, it is in our view unlikely that the longer-term pseudo-return following the observation would also be aliased.\n\n", " This paper proposes a novel exploration strategy for RL that uses intrinsic rewards based on predicting long-term values of random vector-valued rewards. The proposed approach contains the common RND exploration algorithm as a special case and is shown to work well on standard benchmarks, even when not given privileged access to episodic counts. +Simple idea that seems to work well and generalizes prior work\n\n+Good results on exploration benchmarks\n\n+Nice use of ensembles to deal with aleatoric and epistemic uncertainty\n\n-The environment details are sparse for someone who hasn't used these benchmarks before\n\n-It is unclear how sensitive the algorithm is to the weights on true reward and intrinsic reward for RL. Figure 1: I appreciate the idea of having a simple motivating example, but the details are too sparse. How does the agent get to the end to see the spike? It seems that even predicting future rewards gives no signal until the very end, so it would still appear to be a very hard exploration problem without a shaped true reward. What is the state space here? Is it just white and blue? x-pos? What is the task reward?\n\nLine 226: Transitions from "dead" row are unclear. From the current description it seems that an optimal policy would be to take a bad action H-1 times (total reward 0) followed by a good action at the last timestep?\n\nLine 232: I don't understand why you can't just always take the good action.
It seems like this deterministic policy works regardless of what state you are in\n\nHow sensitive is the algorithm to the weights on true reward and intrinsic reward for RL?\n yes", " The authors propose Random Curiosity with General Value Functions (RC-GVF) that essentially extends the intrinsic rewards obtained through Random Network Distillation (RND) to temporally extended outcomes of action sequences, in order to tackle long-horizon hard-exploration tasks in partially observable environments. To deal with aleatoric uncertainty, an ensemble of predictors is trained and the ensemble disagreement is combined with the prediction error as a multiplicative factor. The method is evaluated in two sets of environments: the diabolical lock and the MiniGrid environments. I believe this is an interesting work touching on an important subject that is utilizing future novelty with long horizon novelty predictions to solve hard-exploration tasks.\n\nHowever, I believe that the authors of the paper didn’t discuss the related work in this direction in enough detail. There are several recent papers that build upon the ensemble disagreement proposed in [1] (which is discussed in the current paper) and extend it to get multi-step novelty estimates into the future, such as [2] and [3]. (Especially in [3], multi-step novelty with RND as intrinsic reward is also proposed.). I believe these works should also be part of the discussion to give a more complete picture of long-horizon intrinsically-motivated exploration.\n\nI still believe that the way the authors combined the RND intrinsic reward with general value functions is interesting and different in that they do not rely on model rollouts.\n\n[1] Pathak, Deepak, Dhiraj Gandhi, and Abhinav Gupta. “Self-supervised exploration via disagreement.” International conference on machine learning. PMLR, 2019.\n\n[2] Sekar, Ramanan, et al. “Planning to explore via self-supervised world models.” International Conference on Machine Learning. PMLR, 2020.\n\n([3] Lambert, Nathan, et al. “The Challenges of Exploration for Offline Reinforcement Learning.” arXiv preprint arXiv:2201.11861 (2022).) - Why is there no recurrency in the predictor network for the diabolical lock experiment?\n- Why are there only two ensemble members? How does the number of ensemble members affect performance in regards to noise levels in the tested environments?\n- I find the ablation study Figure 4c to be contradicting the statements made in section 3.3. Using $\\gamma_z=0$ is the equivalent of intrinsic rewards by RND, in which case as the authors state no ensemble is needed (The deterministic target value of the pseudo-reward takes care of the stochasticity). So why is the ensemble ablation performed for this discount value? In fact why are there no ablations for $\\gamma_z=0.6$?\n- If I understand it correctly, the main difference between RC-GVF with $\\gamma_z=0$ with and without recurrence is that the predictor network unlike in RND is not the same as the pseudo-reward generator and in the case of Minigrid doesn’t use an LSTM core. Did you do an ablation on the extra representational power in the different predictor network architecture of RC-GVF compared to the pseudo-reward generator also for the diabolical lock experiment? -", " This paper proposes a generalisation of the random network distillation (RND) to deal with partially-observable environments. 
That is, the paper extends the notion of intrinsic rewards in RND to deal with long-horizon, sparse reward environments that are partially observable by (1) predicting the value of pseudo-rewards (as opposed to per-state rewards in RND), (2) using a recurrent value predictor network to create a belief state due to partial observability, and (3) using the disagreement among an ensemble of value predictors to explore where predictions are epistemically uncertain. The combination of this approach with any model-free on-policy RL agent as a basis leads to an approach to exploration through temporally-extended curiosity. To validate the approach, the authors have combined their approach with PPO and tested it on a long-horizon anti-reward-shaped task, as well as on a set of MiniGrid tasks with egocentric and panoramic observations. They conduct analysis and ablation studies to verify that all added components are necessary to achieve the best performance in such tasks over RND and other techniques. ### Strengths:\n\n- The paper moves in the direction of extending a simple yet powerful idea (namely, RND) to the more general problem setting of partially-observable environments with sparse rewards. \n\n- The approach is simple and fundamentally well justified; specifically, (1) GVF formulation is natural for going beyond predicting single-step rewards (as in RND), (2) using random targets as in RND has been previously shown effective (and simplifies the problem of selecting/discovering auxiliary tasks in Horde-style architectures), (3) using recurrence as agent's state-update function to deal with partial-observability is well-accepted and the authors have also justified its particular importance in the context of this paper, (4) using prediction disagreement among an ensemble is an effective proxy for epistemic uncertainty. \n\n- The approach is well-motivated through illustrative examples and challenging exploration problems (but with relatively simplistic dynamics; namely, Diabolical Locks and 6 MiniGrid environments).\n\n\n### Weaknesses:\n\n- I feel that the pseudo-reward generator should also be based on some representation of history (e.g. via a fixed and randomly initialised recurrent network). I discuss this further in the Questions section (first question).\n\n- The trade-offs involved in choosing the discount factor are not discussed to a good extent. For example, why should we even care for low discounts? Why do they think $\\gamma = 0.6$ works better, and not $\\gamma = 0.9$ or $\\gamma = 0.99$? Discussing intuitions and drawing on previous literature on the role of discounting (especially in combination with function approximation) could help improve this shortcoming.\n \n- In relation to discounting, I also feel the range of discount factors used is somewhat arbitrary and not well motivated. The most commonly-used discount factors are missing in the tested values in Figure 4. Why?\n\n- I feel that the panoramic-view experiment or at least the justifications around it are forced. The statement in footnote 1 of the paper is not a reasonable argument in my view. If the environment doesn't give you some information, assuming anything beyond it is privileged. Augmenting the agent with more sensors to get a panoramic view is privileged too. As such, I much rather see the result of your approach having access to the episodic count as opposed to a panoramic view.\n \n 1. 
In order to avoid state aliasing in the rewards, wouldn't it make more sense for the rewards to also be generated through a fixed, randomly initialised recurrent network that conditions on the sequence of observations, actions, and pseudo-rewards? For example, say there are only aliasing states between a trajectory that the agent has experienced repeatedly and a trajectory that the agent had not experienced before. Now, the history summaries would be the same and so would the value predictions, and the disagreement would be low. But if the trajectories were distinguishable by different pseudo-rewards, then the histories would be distinct and the epistemic uncertainty would be high for the trajectory that had not been explored before.\n\n2. The authors have tried different discount factors for training the GVF predictors and argue that lower discounts are not good because the prediction horizon should be larger. Couldn't the worse performance of lower discount factors be due to optimisation issues under function approximation? (See van Seijen et al. (2019) for details on this issue.) \n\n3. I also feel the range of discount factors used is somewhat arbitrary and not well motivated. The most commonly-used discount factors (e.g. $\\gamma = 0.9$ or $\\gamma = 0.99$) are missing in the tested values in Figure 4. Why? I understand that these are not agent discounts and are only used for exploration, but I cannot process why still a higher value shouldn't in principle be preferred in episodic settings. I think too high wouldn't work well with function approximation due to optimisation issues, but a value of 0.99 should work in practice. Why is it not preferred in principle by the authors?\n\n4. Does the basis PPO agent use the history summary or only use the latest observation? Is this setting the same among all baselines? \n\n5. Where can I find the result that you refer to in footnote 1? \n\n6. The approach seems also applicable in principle to continuous state-action environments. The authors do not claim this, but could you comment on whether you see any obstacles in such environments?\n\n7. How do you justify using only 10 independent trials (or only 5 in one task)? PPO itself generally has very high variance across trials, and my experience is that using 20 trials is important in deriving any conclusions about performance characteristics. Now, in your context, the variation added due to pseudo-reward predictors could only make this worse. It is my view that, given that the domains are not too computationally expensive, it is critical to at least make sure the results are statistically reliable. Please share your view on this, and why the choice to run only 10/5 seeds.\n\nShould these questions be adequately clarified/addressed, I'd be happy to increase my score. \n\n- H. van Seijen, M. Fatemi, A. Tavakoli. “Using a Logarithmic Mapping to Enable Lower Discount Factors in Reinforcement Learning.” NeurIPS (2019). The authors discuss some limitations of their work. I have asked for comments on other potential limitations in the Questions section.\nThey appropriately state that the nature of the work does not per se entail a potential negative societal impact. ", " The authors propose a novel form of artificial curiosity that utilizes randomly initialized neural networks to represent generalized value functions. The authors utilize the insight that temporally extended predictions of environment quantities (such as state transitions, etc.) 
are often useful for generating artificial curiosity in partially observable environments. However, instead of using state sequences directly, they utilize randomly initialized NNs to annotate trajectories with pseudo-rewards and utilize the uncertainty in the prediction of these pseudo-rewards by a prediction network ensemble as a signal for artificial curiosity. They evaluate their method in lock-opening environments and in grid world tasks and show a performance improvement on curiosity baselines. Strengths:\n1. The paper is well motivated. Long-term dependencies should be accounted for when generating artificial curiosity signals. \n2. The writing is easy to follow and the approach is intuitive.\n3. The experiments are representative of hard exploration problems.\nWeaknesses:\nMy only real qualm with the paper is the significance of the results. The method strongly outperforms the baseline in the lock task but really only outperforms the baselines in 2 of the grid-world tasks. Additionally, why are there no performance metrics for AGAC and NovelD on the lock task? I would be interested in knowing those numbers as a reader. 1. Are there any ablations on ensemble size?\n2. As mentioned I would be interested in knowing the results of AGAC and NovelD on the lock task.\n I believe the limitations of the method have been adequately discussed by the authors. I believe the paper is well written. While not state-of-the-art in terms of results, I believe there is some novelty in the method. I am on the fence about my decision about the paper." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "-b5w3aWYiI", "PAsWVgHqay4y", "wQkBa9W4th_", "rwjpb03mFrt", "8Rk-JXkvgCQ", "zV61lGZXnpaZ", "nips_2022_68YyraaeYmc", "E16cmcIZ83w", "5oZgn1xIM7i", "eN13k0dAACj", "K3kr1GNNR2N", "2diEm9AlHVyA", "fRa3sPvp3fr", "EyoBvz8pCby", "nips_2022_68YyraaeYmc", "nips_2022_68YyraaeYmc", "nips_2022_68YyraaeYmc", "nips_2022_68YyraaeYmc" ]
nips_2022_U14PKEu18bK
Unsupervised Multi-View Object Segmentation Using Radiance Field Propagation
We present radiance field propagation (RFP), a novel approach to segmenting objects in 3D during reconstruction given only unlabeled multi-view images of a scene. RFP is derived from emerging neural radiance field-based techniques and jointly encodes semantics with appearance and geometry. The core of our method is a novel propagation strategy for individual objects' radiance fields with a bidirectional photometric loss, enabling an unsupervised partitioning of a scene into salient or meaningful regions corresponding to different object instances. To better handle complex scenes with multiple objects and occlusions, we further propose an iterative expectation-maximization algorithm to refine object masks. To the best of our knowledge, RFP is the first unsupervised approach to 3D scene object segmentation for neural radiance fields (NeRF) without any supervision, annotations, or other cues such as 3D bounding boxes or prior knowledge of object class. Experiments demonstrate that RFP achieves feasible segmentation results that are more accurate than those of previous unsupervised image/scene segmentation approaches, and comparable to those of existing supervised NeRF-based methods. The segmented object representations enable individual 3D object editing operations. Code and datasets will be made publicly available.
Accept
Reviewers are generally positive about the submission, and all recommend acceptance post rebuttal. They appreciate the new formulation and the strong results. The AC agrees and recommends acceptance.
val
[ "IVYBK8XQdAz", "_3K2UecPweM", "vx03V4tbzq", "T_ZS_2SvcwR", "JhCHXScKUpw", "gQtcrvtEECk", "5VkqryGj8WKL", "GIj7Ama3OjH", "OM_sZvG6Ivm0", "t1svVU4CtZXV", "V5eKKqmd898", "hfctnz74qn0", "3rAT5SY2qKT", "ZSoLdqairaj", "8djwdb56ab", "HQMvjfJhO7", "_7uOjLuMsB8", "0107dvh5kr", "uNtf1hz9gs" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer QA4f,\n\nThanks for your positive and valuable feedback, which would help us improve this work.\n\nAnonymous authors", " Dear Reviewer GXm6,\n\nThank you for your response and appreciation of our approach. Please rest assured that we will stress our limitations in our final version.\n\nAnonymous authors", " Dear Reviewers, ACs and SACs,\n\nSince the server of Anonymous GitHub, which we use for sharing rebuttal materials, has broken down, we added all of the rebuttal materials in the revised supplemental materials, along with the original supplementary document and video.\n\nWe also note that the 9-page limit is still valid for the revised version, so we removed the new figure and section in the revised version. However, the version with the figure for illustration and limitation section is still available in the supplemental materials. We will put them in the final version.\n\n\nThanks,\n\nAnonymous Authors", " Thank authors for the detailed response to my questions and concerns with additional clarifications and visualisations.\n\nI also appreciate the frankness of authors concerning the heavy reliance on appearance which make propsoed method struggle on relatively complex real-world scenes. I personally like the motivation and application of maximising dis-similarity of segments, especially its transferring to neural fields, though the tested scenes are a bit limited in my point of view.\n\nAfter reading all the reviews and authors' feedbacks (the single image test looks interesting), I would like to keep my positive rate towards this paper and recommend an acceptance.", " Thank the authors for responses. I stand by acceptance.", " Dear Reviewer QA4f,\n\nWe hope that you had a chance to read the rebuttal (and also the general response above) as the discussion period is ending soon. In particular, we explained Eqn. 4&5, hoping it would be helpful. We responded to the question about the missing numbers and provided a quantitative ablation on performing EM refinement on single object scenes. We appreciate your idea of trying our method on single images, and we provided qualitative and quantitative results, which we hope would be insightful.\n\nPlease let us know whether you have any further concerns or suggestions to improve this work’s quality.\n\nThank you!", " Dear Reviewer 1Do1,\n\nWe hope that you had a chance to read the rebuttal (and also the general response above) as the discussion period is ending soon. We have provided more results and failure cases for the weakness you proposed, which we hope would be insightful. We also highlight our limitations on scene complexity in the revised submission. Finally, for your confusion about Tab. 1, we have clarified the supervisory signals used by compared methods. Thank you for proposing another baseline (vanilla NeRF); we adopted it and implemented it with quantitative and qualitative results.\n\nPlease let us know whether you have any further concerns or suggestions to improve this work’s quality.\n\nThank you!", " Dear Reviewer fm5S,\n\nWe hope that you had a chance to read the rebuttal (and also the general response above) as the discussion period is ending soon. We appreciate your questions about the motivation for propagation and the strong feature extractor. We have given explanations for them, and we are looking forward to having further discussions with you. 
We also provided the revised version of our submission and further supplemental materials in the general responses, and we hope they will be helpful.\n\nPlease let us know whether you have any further concerns or suggestions to improve this work's quality.\n\nThank you!", " Dear Reviewer GXm6,\n\nWe hope that you had a chance to read the rebuttal (and also the general response above) as the discussion period is ending soon. We have responded to your questions and weaknesses. In particular, we provided more results and failure cases on objects with textures or a similar appearance to backgrounds. We also did ablation studies on the propagation mechanism and on turning off $\mathcal L_{init}$. We also uploaded a revised version of our submission, with the figure for illustration as you suggested, highlighted limitations, and other modifications.\n\nPlease let us know whether you have any further concerns or suggestions to improve this work's quality.\n\nThank you!", " Dear reviewers,\n\nAll the materials we refer to in the rebuttal can be found in this [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608).\n\nHowever, we have found that sometimes this link cannot be opened properly. If this happens, please leave a comment and we will provide a new link.\n\nThanks,\n\nAnonymous Authors", " Thank you for your time and detailed review. We agree with your concerns and address them briefly in turn below.\n\n### Explanation for Eqn. 4 and 5 (Weakness #1)\nPlease check the explanation in the general response. Specifically, Eqn. 4 computes the label assignment at an arbitrary coordinate, and $\hat{\mathbf{l}}({\mathbf{x}})$ in Eqn. 5 is a differentiable alternative to $l({\mathbf{x}})$.\n\n### Propagation mechanism (Weakness \#2)\nPredicting other values related to object regions, such as the maximum or minimum rather than the average color, sounds plausible. We tested such strategies and found that they gave rise to NaN losses during training. This is probably due to extreme radiance values predicted by MLPs, which are common for NeRF.\n\n### Performance on more complex cases (Weakness #2, 3 and Question #1)\nPlease check the discussion on *More complex scenes and failure cases* in the general response. We provide qualitative results on the DTU dataset, where the object is textured, in this [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/DTU%20results.pdf). \n\nAs the first approach dealing with unsupervised multi-view segmentation, our work has difficulty handling objects that are severely occluded or have a similar appearance to the background. We have highlighted this limitation in the revised submission and provided more failure cases in this [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/failure%20cases.pdf).\n\n### Reliance on appearance instead of others like geometry (Weakness \#3)\nOur current propagation mechanism is appearance-based. We have tried 3D GAN-based approaches to propagate geometry but have so far not gotten satisfactory results. This may be a direction for future exploration.\n\n### Initial masks (Weakness \#4 and Question \#2)\nInitial masks are expected to give a rough object boundary, which determines the approximate region of the segmented object in 3D space. This is especially important in multi-object cases, so poor initialization may take its toll on the final performance.
Notwithstanding, in the failure case we have given, the quality of the initial mask is empirically quite poor, yet the results still degrade quite gracefully. \n\nWe did an ablation in which this loss is turned off at 15000 iterations (out of 30000), with the results tabulated in the following table. We did not see the expected improvement in performance.\n\n| |Acc. $\uparrow$ |mIoU $\uparrow$ | N-Acc. $\uparrow$ | N-mIoU $\uparrow$ |\n|---|---|---|---|---|\n|w/o turning off | 97.9 | 94.1 | 97.4 | 93.6 |\n|w/ turning off |97.0 | 92.3 | 97.0 | 93.4 | \n\n\n### Balance losses (Weakness \#5 and Question \#2)\nWe did not find a general strategy for balancing the different losses in all scenes. Empirically, as long as the term $\mathcal L_{prop}$ is present, the method works, and the weight $\lambda_{prop}$ of this term is not critical. \nSetting a very small $\lambda_{init}$ would make the initialization of semantic fields hard, while setting it too large would cause a loss of details.\n\n\n### More illustration (Weakness \#6)\nThank you for your suggestion. An additional illustration of a multi-object case will make it easier for readers to understand. We have made such an illustration; please check it through this [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/multi-object%20illustration.pdf). We also added this figure in the revised submission.\n\n### Sensitivity to camera pose (Weakness \#7 and Question \#3)\n\nAccurate camera poses will definitely benefit the quality of segmentation, just as they are expected to yield better novel view synthesis. On the other hand, our method should be robust to camera pose inaccuracies. Compared with the LLFF dataset, the camera poses of the scenes in CO3D are not 100\% accurate, yet both qualitative and quantitative results show that our method achieves reasonable segmentation in these cases.\n\n### Minor Issues. (Weakness \#8)\nThank you for pointing out our typos and errors. We have fixed the typo in the equation and added the suggested references in the revised submission.\n", " Thank you for your time and valuable feedback. While being encouraged by your appreciation of our novelty and results, we agree that the issues you point out are crucial.\n\n### Why does propagation work intuitively? (Question and Weakness #1)\n\nPropagation from one part of a 3D scene, say one object $A$, to the rest can be regarded as an estimation of the rest based on $A$. (We do so by averaging the color of $A$.) \nIf we can accurately segment $A$, the resulting segmentation should contain only information about $A$, without including other objects or parts of the scene. \nAttempting to estimate other objects based on $A$ would then produce poor estimates that disagree and contrast with the objects actually occupying those regions. This is our motivation for designing propagation and the bidirectional photometric loss.\n\n\n### Feature extractor (Weakness #2 and Limitation)\n\nThe EM refinement, which involves such a strong feature extractor, is crucial when dealing with multi-object scenes. \nDespite that, our proposed radiance field propagation and bidirectional photometric loss are still essential, since without them we cannot even perform the segmentation.
Without the extractor, we can still get somewhat reasonable results, as shown in the ablation.\nWe note that the comparison between our method and DFC or uORF is fair, since DFC also involves such an extractor, while uORF is trained on an external large-scale dataset. SemanticNeRF and ObjectNeRF do not need an extractor pretrained on a large-scale dataset, but they directly use annotated labels or masks for supervision, which is not required by our method. ", " Thanks for the thoughtful review, and the helpful suggestions. We will improve the paper based on them. For the proposed weaknesses and questions:\n\n### Scene complexity (Weakness)\nThank you for pointing out the weakness of our work on scene complexity. We have stressed it in the revised submission. \nWe give two failure cases in this [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/failure%20cases.pdf). In one of them, the object is close to a patterned wall.\nWe also provide [additional results](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/DTU%20results.pdf) on the DTU dataset, where the object is textured.\nPlease also check the discussion on *More complex scenes and failure cases* in the general response. \n\n### Confusion in Tab. 1 (Question \#1)\nSemanticNeRF and ObjectNeRF are two supervised methods, since they directly use annotated labels or masks for supervision. All other rows are unsupervised. DFC involves a feature extractor pretrained on large-scale datasets but no labels for supervision. IEM, ReDO, and uORF are unsupervised approaches. Among them, uORF is trained on a large-scale dataset consisting of very simple scenes, but without supervision from labels. Thank you for your suggestion of adding an indication, which will further improve our paper's clarity.\n\n### Extra baseline (Question \#2) \nThis baseline is reasonable and we worked out such an implementation: we train a vanilla NeRF for the scene and extract an explicit voxel grid-based representation from it. Treating each connected volumetric component as an instance, we use 3D floodfill to get object segmentation, with manually assigned seed points.\n\nIt turns out this baseline performed poorly, probably because, in most NeRF scenarios, the objects of interest tend to be volumetrically connected to their surroundings.\nThe quantitative result is reported in the following table and the qualitative result can be found in this [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/vanilla%20NeRF%20baseline.pdf).\n\n\n| |Acc. $\uparrow$ | mIoU $\uparrow$ | N-Acc. $\uparrow$| N-mIoU $\uparrow$|\n|---|---|---|---|---|\n|vanilla NeRF | 75.7 | 47.7| 79.0 | 48.8 |\n|ours | 97.1 | 92.8 | 96.7 | 92.1 |\n\n### Typo in L157 (Question \#3) \nThank you for pointing out our typos and errors. We have fixed the typo in the equation in the revised submission.", " Thank you for your positive feedback as well as thoughtful suggestions and questions. Below, we address your points individually.\n\n### Explanation for Eqn. 4 and 5 (Weakness #1)\nPlease check the explanation in the general response. Specifically, differentiability is necessary for backpropagation of the relevant gradient to the pre-softmax semantic logits $\mathbf{s}(\mathbf{x})$.\n\n### 2nd last row of Tab. 1 (Weakness #2)\nThe EM refinement is designed for scenes with multiple objects.
We found that without this refinement, the results on single-object scenes like those from CO3D or LLFF are satisfactory. Our full model for single object scenes does not contain EM refinement. The EM refinement involves a strong feature extractor and we thought this might be overkill for single-object scenes. Involving such a feature extractor pretrained on large-scale data would also cause confusion when compared with unsupervised single image segmentation methods IEM and ReDO. We report a quantitative ablation on performing EM refinement on single object scenes in the following table.\n\n| | Acc. $\\uparrow$ | mIoU $\\uparrow$ | N-Acc. $\\uparrow$ |N-mIoU $\\uparrow$ |\n| ----------------|-------------|------------------|---------------------- |-----------------|\n|w/o EM refinement | 97.9 | 94.1 | 97.4 | 93.6 |\n|w/ EM refinement |98.0 | 94.1 | 97.4 |93.6 |\n\n \n\n### Trying our method on single images (Question)\nThanks for this suggestion and such results would be insightful for the readers. We try our method with only one training view (input single image at a time) as input with qualitative results available here: [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/single%20image.pdf). Quantitative metrics are reported in the following table. We also report qualitative metrics on the same scene with all 102 images as input at a time for comparison.\n\n| # of input views|Acc. $\\uparrow$|mIoU $\\uparrow$|\n| ----------------|-------------|------------------|\n|1 | 87.8 | 68.1 |\n|102 | 97.1 | 92.8 | \n\n\nOn the other hand, our paper's notable contribution exactly consists of handling 3D RFs, so we are a little unclear why you mention 2D or 3D RFs in the questions. But we are willing to discuss this further in the Reviewer-Author Discussions session.\n\n", " We thank all the reviewers for their time and constructive reviews. We are encouraged that the reviewers appreciate our first significant attempt toward the unsupervised multi-view image segmentation task for NeRF. We first address common concerns, followed by detailed responses to individual reviewers.\n\n### Explanation for Eqn. 4 and 5 (Weakness #1 from Reviewer GXm6 and Weakness #1 from Reviewer QA4f)\nWe compute the label assignment at an arbitrary coordinate using Eqn. 4. However, the argmax operation in Eqn. 4 is not differentiable. An undifferentiable function here is undesirable because it prohibits backpropagation of the relevant gradient to the pre-softmax semantic logits $\\mathbf{s}(\\mathbf{x})$, which makes infeasible the optimization of the MLP representing the semantic field. \n\nTherefore, we use $\\hat{\\mathbf{l}}({\\mathbf{x}})$ in Eqn. 5 instead of $l({\\mathbf{x}})$ in subsequent computation. Notably, $\\hat{\\mathbf{l}}({\\mathbf{x}})$ carries the same information as the assignment in Eqn. 4, which is essentially its one-hot representation. \nThe term $\\mathbf{s}(\\mathbf{x})-\\texttt{sg}(\\mathbf{s}(\\mathbf{x}))$ equals to 0 in value but has the same gradient as $\\mathbf{s}(\\mathbf{x})$ thus allowing optimization of the MLP representing the semantic field.\n\n### More complex scenes and failure cases (Weakness \\#2, 3 and Problem \\#1 from Reviewer GXm6 and Weakness from Reviewer 1Do1) \n\nThis first approach may have some difficulties in handling objects with rich textures or with a similar appearance to backgrounds. In our supplemental material, we show a failure case. 
So we give two additional failure cases in this [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/failure%20cases.pdf). The first case is from the CO3D dataset, where the object of interest (hydrant) is severely occluded. The second is from the instant-ngp repository, where the foreground object (fox) is near a patterned wall (Reviewer 1Do1). Thanks to the reviewers for pointing out this limitation, which we have highlighted in the revised version. We also provide in this [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/DTU%20results.pdf) qualitative results on the DTU dataset, where the object is textured (Reviewer GXm6).\n\n### Revised submission\nWe have submitted a revised submission with suggested modifications, stressed limitations (Reviewers GXm6, 1Do1 and fm5S) and an additional illustration figure (Reviewer GXm6), which is also uploaded to this [anonymous link](https://anonymous.4open.science/r/NeurIPS-Rebuttal-Materials-PaperID3608/Revised%20submission.pdf).", " This paper proposes an unsupervised multi-view object segmentation method leveraging neural radiance fields. The key motivation is that a good segmentation should maximise the dis-similarity between the content within each individual mask. Correspondingly, three key modules including radiance propagation, a bi-directional photometric loss and EM-based post-processing are adopted to decompose a scene into different regions/objects. This paper presents promising qualitative results on object-centric scenes and synthetic multi-object scenes under multiple observations. Strengths\n\n1. The paper is well motivated and the pipeline design follows the idea of making the prediction of one part from others difficult.\n\n2. The evaluation and ablations are adequate on single-object scenes and multi-object (synthetic) scenes. As far as I can tell this is the first multi-object segmentation framework in a purely unsupervised manner on complex scenes.\n\n3. Though there are some missing details (mentioned in the weakness part), the overall writing is good.\n\n\n\nWeaknesses\n\n\n1. It is not clear to me what equation (4) means and how it helps the optimisation. In addition, \hat{l} is not mentioned in the remaining text.\n\n2. The object-level NeRF is driven to predict the average colour within masks across the batch; the design choice seems a bit ad-hoc. \nWhat about predicting other values related to object regions, like the maximum or minimum, or other strategies? \nAs to objects with complex textures or various colours, I assume even the average colour may not represent the object appearance well. Objects within the paper own similar textures or colours, and I am not sure if the propagation is still feasible in other cases.\n\n\n3. Similar to previous points, the main concern comes from the question of how much the system relies on appearance instead of other cues like geometry. The objects within several experiments have a relatively clear difference in colour compared to backgrounds, which makes it a good "colour segmenter", limiting the pipeline's applicability to complex scenes. \n\n\n4. From the quantitative evaluations, the initial mask plays an important role. Will the quality of the initial masks severely affect the final performance? During training, will the initial segmentation loss be turned off in the later phase of training to achieve better details?\n\n5. As an unsupervised method, how to properly choose the loss weighting in Eq.17?\n\n6. 
It would be good to provide some illustrations of how each individual object NeRF works inside and outside the object regions, as in Fig. 2, but for more complex scenes similar to Fig. 4 and Fig. 5.\n\n7. Is the system performance sensitive to errors within camera poses?\n\n8. Minor Issues.\n- L157. m_k(x)=0 --> m_k(x)=1\n- References should be added in L189 about the unsupervised single image-based approach, which is revealed late in the paper.\n- L207. I suggest adding references or some extra explanations here. Otherwise it is hard for readers to immediately know what exactly the deep feature extractor is here and how it works.\n \n-How does the system work on objects with rich textures or a similar appearance to backgrounds? For example, the BlendedMVS or DTU datasets? \nIn addition, in this situation, will the propagation mechanism using the average colour still be helpful?\n\n-How do the initial masks evolve during training, and how should the different losses be balanced?\n\n-Is the system sensitive to pose registration accuracy? The authors have addressed the limitations and potential negative societal impact.", " The paper views the image segmentation task from a 3D perspective and applies NeRF to the unsupervised multi-view image segmentation task for the first time. The paper guides the reconstruction of the semantic field through radiance field propagation, and proposes a mask refinement procedure based on EM to handle complex scenes with multiple objects. Pos:\nThe method is novel. NeRF is appropriately applied to deal with multi-view image segmentation tasks from a 3D perspective, and a bidirectional photometric loss is proposed to construct the unsupervised task. The experimental improvements are remarkable.\nThe overall writing of the paper is smooth and the overall structure is clear.\n\nNeg:\nThe lack of analysis on why propagation is useful makes it difficult to understand why it works.\nWhen dealing with multi-object complex scenes, this method needs a strong feature extractor, which requires large-scale data for pretraining; it is hard to tell whether it is the strong feature extractor or the paper's method that works. Especially for the comparison with other methods, the introduction of this large-scale external data will make the comparison unfair. The authors should explain why propagation works intuitively. When dealing with multi-object complex scenes, this method needs a strong feature extractor, which requires large-scale data for pretraining; it is hard to tell whether it is the strong feature extractor or the paper's method that works. Especially for the comparison with other methods, the introduction of this large-scale external data will make the comparison unfair.", " This paper presents an approach to learn 3D object instance segmentation, given RGB images and camera poses. The main novelty is the "propagated" radiance field obtained by summing radiance field layers weighted by *inverted* masks. Correct labeling would maximize its photometric error. The network is trained using both the traditional photometric loss and the proposed loss. EM refinement is used to improve the result. Strength: Unlike prior work, instance segmentation is possible. And inverting the mask is an interesting idea.\n\nWeakness: The scenes do not seem complex enough for the task. An object close to a wall (in addition to the floor) or another object may be difficult to segment. It would be insightful to see some failure cases (either real or synthetic). Table 1 is confusing.
It says “The best results without supervision are boldfaced.” But it is not clear on its own which rows they are. If I understand correctly, ablated results perform worse, as expected. And all the other papers had some label or bounding box supervision. It might be helpful to indicate which additional supervisory signals they used, in each row.\n\nWas this baseline considered: train vanilla NeRF, do 3D reconstruction (e.g. marching cubes), treat each connected volumetric component as an instance (assume known floor level). It would also be unsupervised.\n\nL157, is m_k(x) = 0 supposed to be m_k(x) = 1 ? Yes", " This paper focuses on unsupervised segmentation in 3D with multiview images. The main idea is to use a mixture model of multiple visually contrastive radiance fields to compositionally reconstruct the scene. The contrastiveness among the RFs is enforced by propagating average appearance to empty space and maximizing the visual differences between the reconstruction and the inversely mixed composition. Experiments show good results on a single-object real dataset and a multi-object synthetic dataset. Strength:\n\n+ Interesting idea to create and use contrastiveness in compositional NeRFs for unsupervised segmentation.\n+ In general, the experimental results look good.\n+ Experiments on a real dataset.\n+ Comprehensive comparison to the state of the art.\n\nWeakness:\n\n- I feel a few aspects in technical description and experiments are not well clarified and justified. For Eq. (4), why does it need to be differentiable? It seems l(x) is only used in Eq. (5), but is discontinuous and not differentiable. Why are numbers in 2nd last row of Table 1 missing? It would be interesting to see if the same method works with single images. Have you ever tried that with 2D or 3D RFs? Yes" ]
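A recurring thread in the reviews above (GXm6's Weakness 1, QA4f's question on Eq. (4)) is why the hard label assignment must be made differentiable. The stop-gradient construction described in the general response — a one-hot argmax plus a term $\mathbf{s}(\mathbf{x})-\texttt{sg}(\mathbf{s}(\mathbf{x}))$ that is zero in value but carries the gradient of the logits — is the standard straight-through estimator. Below is a minimal sketch, assuming PyTorch; whether the soft term uses the raw logits or their softmax is an implementation choice not pinned down by the text.

```python
import torch
import torch.nn.functional as F


def straight_through_label(s):
    """s: [..., K] pre-softmax semantic logits at a 3D point.

    The forward value is the one-hot argmax label; the backward pass routes
    gradients through s, mirroring the sg() construction in the rebuttal.
    """
    hard = F.one_hot(s.argmax(dim=-1), num_classes=s.shape[-1]).to(s.dtype)
    return hard + s - s.detach()  # value == hard, gradient flows into s


logits = torch.randn(4, 3, requires_grad=True)
label = straight_through_label(logits)
label.sum().backward()  # gradients now reach the semantic-field MLP
```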
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "JhCHXScKUpw", "T_ZS_2SvcwR", "nips_2022_U14PKEu18bK", "V5eKKqmd898", "ZSoLdqairaj", "uNtf1hz9gs", "0107dvh5kr", "_7uOjLuMsB8", "HQMvjfJhO7", "nips_2022_U14PKEu18bK", "HQMvjfJhO7", "_7uOjLuMsB8", "0107dvh5kr", "uNtf1hz9gs", "nips_2022_U14PKEu18bK", "nips_2022_U14PKEu18bK", "nips_2022_U14PKEu18bK", "nips_2022_U14PKEu18bK", "nips_2022_U14PKEu18bK" ]
nips_2022_sNcn-E3uPHA
Text Classification with Born's Rule
This paper presents a text classification algorithm inspired by the notion of superposition of states in quantum physics. By regarding text as a superposition of words, we derive the wave function of a document and we compute the transition probability of the document to a target class according to Born's rule. Two complementary implementations are presented. In the first one, wave functions are calculated explicitly. The second implementation embeds the classifier in a neural network architecture. Through analysis of three benchmark datasets, we illustrate several aspects of the proposed method, such as classification performance, explainability, and computational efficiency. These ideas are also applicable to non-textual data.
Accept
This paper has 2 accepts (7) and 2 borderline accepts (5). The average is 6. The modification of Reviewer Sqxh does not show in the system, but he stated as follows in our discussion: "I tend to modify the score of this paper to five." This paper shows an algorithm that delivers outstanding text classification performance despite its extreme simplicity and speed. The quantum "explanation" for the algorithm is weak. The experiments were limited to somewhat simple text classification problems, but the authors added some convincing results on sentiment analysis in their rebuttal. The paper is clearly written and the provided code is well documented, so it should be reproducible (though none of the reviewers reran the experiments). The authors provided a revised version that clarified some aspects (though it did not include the additional experiments provided in the rebuttal). The consensus after discussion is accept, given that this paper may open a new class of simple and high-performance classification algorithms. The explanation for their effectiveness remains an open problem, and this paper could be improved by opening that discussion. We recommend the following modifications in order to mitigate overreaching claims that could make the authors look naïve: - Quantum interpretation of a sentence as a superposition of words. This seems to imply that detecting a single word is sufficient to classify a sentence, and that the best solution is a combination of these weak classifiers based on a single word or N-gram. The success of Adaboost on text classification testifies to the power of these methods. However, such methods do not work so well when sentences are longer, or when there is stronger compositionality, for instance sentiment analysis (IMDB, SST). The results on EmoInt provided in the rebuttal are a step in the right direction. They show the method can handle some level of context, though sentences are still short. How could this method handle long sentences? The paper would benefit from a discussion about what the superposition-of-words hypothesis implies, what its limitations are, and maybe how to overcome them. - Table 1 provides preliminary results, but some of the claims can be a turn-off for readers with experience in linear classifiers (SVM and LR) applied to text recognition, as the authors apparently failed to select the correct algorithm (SGD classifier). Comparisons with SVMs and LR in Table 1 suggest BC is 1 million times faster than SVMs. However, the same difference in speed can be observed within different Sklearn SVM implementations, depending on the algorithm. The SGD classifier, which supports both SVM and LR, has a computational complexity which is even better than the O(NJK) reported in the paper, as it depends on the number of non-zero features rather than the number of features J. Furthermore, the accuracy provided for the SVM (79.4) also seems far below the SOTA. For instance, the reference [17] reports an accuracy of 82.27 on 20NG using TF-IDF SVMs (the most vanilla setting). - As shown in Eq. (6), the model is fundamentally linear. Explainability by taking the feature with the highest weight is as ancient as ML itself. How this method performs better than traditional linear approaches should be better illustrated. The authors just say “The words appear semantically correlated to their respective class and do not contain neither noisy words, such as stopwords or punctuation, or words whose meaning is too general to be representative of a specific class. 
" - Novelty of this paper in the field of quantum-inspired classification. Even in the revised version, while they quote work on language modeling, the authors do not seem to have quoted any work on classification, even though it was pointed by the first reviewer (https://ojs.aaai.org/index.php/AAAI/article/view/17567). As pointed in their rebuttal, this is very different from their work, but they should mention that work.
val
[ "zwyoUH2zlTA", "R0tzSnli73YQ", "CayN85oQcn", "7VeLZ9K40cFY", "MSQTXI6NUXQ", "xJHCf0QnTy", "28ZMpM5YIPB", "dhYkN-NYlrH", "UyZCxMVOqALT", "SDYm3ToGMVRm", "F7KtSNQIZp", "MqgZgPaJIvJ", "3uCarveSVBL", "do1U_Defjd" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " This is to acknowledge that I have read the authors' response. So far, my questions about the paper have been sufficiently answered and I thank the authors for taking the time to provide very detailed responses and experiments. My score will remain a 7 as I would like to see this paper at the conference.", " We thank you for your message and we apologize for not answering those questions. We thought only the questions in the Questions section should have been addressed. We reply below to the points in the Strengths And Weaknesses section.\n\n1. Thank you! We very much appreciate your comments on the paper and code.\n\n2. The method itself does not encode semantics, context, or double negatives. In the empirical evaluation, we rely only on word counts (tf-idf weights). Our revision highlights that our empirical evaluation is limited to a bag-of-words representation of the document (Section 1 and 7). We don't expect BC or BL to achieve state-of-the-art results by relying only on word counts when some level of context is needed. However, we expect them to be competitive with respect to other methods that also rely on word counts. We expect BC to work out-of-the-box on extremely noisy texts thanks to the regularization proposed in Section 4.2. We also expect the method to work well if we reduce the size of each text instance, as long as the word counts are relevant for the classification task. \n\n To better illustrate this point, we have run an additional test on the [WASSA-2017 Shared Task on Emotion Intensity (EmoInt)](http://saifmohammad.com/WebPages/EmotionIntensity-SharedTask.html). More precisely, we use the [Kaggle version](https://www.kaggle.com/datasets/anjaneyatripathi/emotion-classification-nlp) of the dataset. The dataset is almost balanced and it contains tweets classified into four emotional categories (anger, fear, joy, sadness). The task is to predict the emotional category based on the tweet's text. We choose this task because we can imagine that 1) some level of context is needed; 2) the text is noisy; 3) the size of each text instance is small. Accuracy and F1-macro scores are reported below for BC and the baseline models (we use the same tokenization and vectorization described in Section 6 for 20NG, R8, R52; the baseline is tuned via cross-validation as in Table 1 in the paper). We find that BC is competitive with the baseline models also in this scenario. \n\n| | **BC** | **LR** | **SVM** | **MNB** | **DT** | **RF** | **KNN** |\n| ---------------- | ------ | ------ | ------- | ------- | ------ | ------ | ------- |\n| **Accuracy (%)** | 81.8 | 80.9 | 74.9 | 71.1 | 82.0 | 78.7 | 53.2 |\n| **F1-macro (%)** | 81.8 | 80.9 | 74.9 | 70.8 | 78.1 | 78.3 | 52.5 |\n\n3. Our method can be extended to encode semantics or to exploit more sophisticated representation of the document by using it on top of existing architectures (as we briefly discuss in Section 7). In this sense, our method is not an alternative to existing approaches, but it is rather a complementary resource. Although we don't have a definitive synthesis yet, we have applied preliminarily the Born layer to other approaches to see the performance improvement. Please see the response to Q2 of Reviewer mSUa. \n\nWe hope that we have adequately addressed your remaining questions and we stay available for any additional clarification.", " Thank you for your follow up!\n\n> For Q3, I mean that a quantum state is composed of eigenstates (or basis vectors) in superposition. 
The superposition actually represents a kind of uncertainty. After quantum measurement, the quantum state collapses to an eigenstate (or a basis vector), and this eigenstate (or basis vector) is sufficient to represent the current quantum state. In this work, a document is composed of words in superposition. After quantum measurement, the document collapses to a word. This means the word is sufficient to represent the document. \n\nThis is true at time $t=0$ when the measurement takes place. For $t>0$, the Hamiltonian will drive the evolution of the state and the word is no longer sufficient to represent the document.\n\nFor example, consider a reader who reads a document. At time $t=0$ the reader reads the word $w_0$. In the analogy, this corresponds to the wave function of the document that collapses to the state $|\\psi\\rangle=|w_0\\rangle$. Between time $t=0$ and time $t=1$, the state $|\\psi\\rangle$ evolves according to a Hamiltonian that maps e.g., $|\\psi\\rangle=|w_0\\rangle$ to $|\\psi\\rangle=|w_1\\rangle$, which represents the next word. At time $t=1$ the reader performs another measurement on the document and reads $w_1$. The process is iterated until the reader reads the whole document. \n\nIt is also possible to introduce more generic Hamiltonians. For instance, after the reader reads the word $w_0$ at time $t=0$, a Hamiltonian may map $|\\psi\\rangle=|w_0\\rangle$ to some superposition of words that are close to $w_0$. At time $t=1$, the reader reads a random word from such a superposition. This would mimic a reader who skips words and reads the document here and there. \n\nWe hope this discussion clarifies the physical meaning of our method. However, we notice that the interpretation of text as a superposition of words is used in the paper to infer the wave coefficients from the distribution of words (lines 65-69). No measurement is performed and we do not make any claim about what happens to the system after measurement. This part is not needed to derive our classifier and it doesn't seem to be strictly related to the paper. \n\n\n> For Q8, I don't think that the author has seriously addressed my question.\n\nFor Q8, the classifiers we present are general (although our empirical results are limited to text classification). The Born Classifier (BC) can be applied whenever the feature vector doesn't contain negative elements. The Born Layer (BL) can be applied whenever the input vector has elements in $\\mathbb{C}$. BC is obtained explicitly by regarding a data instance as a superposition of features and by applying the Born rule. BL applies the Born rule by regarding the input vector as wave function coefficients. In this work, the Born rule is not used to mimic quantum measurement in a preparation-evolution-measurement-collapse setting (see e.g., QMNN[1]). Instead, the Born rule is viewed as introducing a new non-linear transformation from features to targets, compared to other non-linear transformations like MNB, SVM, KNN, DT, RF, etc. To our knowledge, this perspective is new and we hope that it facilitates integration of the method in mainstream ML.\n\nWe hope that we have adequately addressed your remaining concerns and we stay available for any additional clarification.", " This is to acknowledge that this reviewer has read the response of the authors. However, there are still questions in the Strengths and Weaknesses section that have not been addressed. 
These should also be considered.", " For Q3, I mean that a quantum state is composed of eigenstates (or basis vectors) in superposition. The superposition actually represents a kind of uncertainty. After quantum measurement, the quantum state collapses to an eigenstate (or a basis vector), and this eigenstate (or basis vector) is sufficient to represent the current quantum state. \n\nIn this work, a document is composed of words in superposition. After quantum measurement, the document collapses to a word. This means the word is sufficient to represent the document. However, the semantics of the document are composed of the semantics of all the words in the document, and one word cannot represent the whole document. The amount of information is reduced from the document level to the word level by quantum measurement, which will inevitably bring information loss and semantic bias. \n\nFor Q8, I don't think that the author has seriously addressed my question. ", " - Q1: Yes, $x_j$ is an unnormalized probability. Fixed (lines 62-63).\n\n- Q2: We tried to replicate BertGCN (https://arxiv.org/abs/2105.05727) by replacing the softmax layer with BL. Classification performance improved on average for 20NG, R8, and R52 (by a limited margin). We also tried BL on non-textual classification tasks. For instance, we found that CNN + BL for image classification (MNIST, FashionMNIST, CIFAR10) slightly improves over the corresponding CNN + Softmax. Interestingly, it seems that increasing the network depth with more BL layers widens the performance gap between BL and the corresponding architecture with Softmax. We also noticed that the networks with BL seem to require fewer training epochs than the corresponding architecture with Softmax to achieve optimal classification performance. Overall, these tests suggest that BL is easier to optimize. However, this point deserves thorough experimentation, while this paper mainly aims at presenting the method. We touch upon extensions in these directions in the conclusion (lines 261-271).\n- C1: Fixed (line 82).\n- C2: In the generalized form, the hyperparameters $a=1/2$ and $b=1$ control the exponents to reproduce the transformation in Equation 6. The hyperparameter $h=1$ controls the regularization to reproduce Equation 11. In the ablation study, we show that both the transformation itself ($a$ and $b$) and the regularization ($h$) are important for performance. 
We also show that the classical configuration is unable to exploit the proposed regularization.\n- C3: We agree that several extensions are possible and interesting.", " - Q1: We have double-checked and KNN is brute-force. We have profiled the sklearn code that fits KNN. The increasing training time is due to the time to copy the input matrix. This ranges between 1 and 10 milliseconds depending on the size of the matrix. This means that training BC is almost as fast as just copying the input data.\n- Q2: We have observed this behaviour in other datasets and for different evaluation metrics. For instance, the same behaviour is observed on 20NG, R8, and R52 using the F1-macro score (Figure 5 in the appendix; note: we use the appendix so as not to violate the 9-page limit; however, we plan to move this new section into the main text for the final version that allows one extra page). We have also observed similar behaviour on the Zoo dataset (https://archive.ics.uci.edu/ml/datasets/Zoo) and on other non-textual datasets. We conjecture that this is (at least partially) due to the regularization: the entropic re-weighting ($H_j$ in Equation 11) decreases the weights of features that are only slightly skewed across classes. It seems that these features can be easily identified from fewer training samples (e.g., a stopword would appear in almost all documents and its weight is immediately shrunk to zero). As a consequence, the classification is based on fewer features that are highly specific to a class (see e.g., Table 3). However, we plan to further investigate this point.", " - Q1: From https://en.wikipedia.org/wiki/Prior_knowledge_for_pattern_recognition \n\n > Prior knowledge refers to all information about the problem available in addition to the training data. [...] Many classifiers incorporate the general smoothness assumption that a test pattern similar to one of the training samples tends to be assigned to the same class.\n\n Technically, prior knowledge in this work is the invariance under scaling and rotation of the transformations that we derive (inherited from the Born rule). Qualitatively, our intuition is that quantum principles may lead to input-output relations closer to the physical/biological mechanisms by which we process data/information. We discuss this point briefly in the conclusion (see lines 272-276 and references therein) to avoid excessive speculation.\n\n- Q2: We were not aware of these works and we agree with the reviewer that the \"quantum theory\" claim is therefore not rigorous. We have fixed it in the revision (lines 42-44) and we have added references to quantum language models (footnote 3). Complex-order [2] seems to be related to complex numbers but not strictly to quantum physics. QMNN [1] seems to be limited to emotion recognition, while our method is more general.\n\n- Q3:\n\n > [...] the superposition state represents a document, which means that a word will be used to represent the document after measurement.\n\n Correct. Physical meaning: in the analogy, this corresponds to a reader (observer) who selects (measures) a certain word (state) from a document (system). \n\n- Q4: This means that class $k$ will be represented by a word in the class after measurement. 
Or, better, that a reader (observer) selects (measures) a certain word from class $k$ (see above).\n\n- Q5: The identification at lines 71-72 states that the probability to observe $|j\\rangle$ upon a measurement on $|\\varphi^{(k)}\\rangle$ is equal to the probability of the feature $j$ given the class $k$ (which is natural and can be easily computed from the matrix of co-occurrences; Section 4.1). This leads us to use the conditional probabilities as the superposition coefficients according to Equation 3. \n\n- Q6: The definition of $x$ and $y$ is introduced at lines 50-51. $x^{(n)}$ and $y^{(n)}$ are simply $x$ and $y$ for each training instance $n$ as defined at line 79. In the revision, we have put all vectors in bold to improve readability.\n\n- Q7: \n\n > Why is the j-th addend irrelevant for the final classification? \n\n Because it does not alter the ranking of classification probabilities, as explained at lines 82-85. Let $j^*$ be the \"irrelevant addend\" such that $P_{j^*|k}=c$ where $c$ is a constant (lines 87-88); since the term $\\sqrt{c\\,x_{j^*}}$ is the same for every class $k$, then:\n $$\n k^*=\\arg\\max_k u_k = \\arg\\max_k\\sqrt{u_k} = \\arg\\max_k \\sum_j \\sqrt{P_{j|k}x_j}= \\arg\\max_k \\Big( \\sum_{j\\neq j^*} \\sqrt{P_{j|k}x_j} + \\sqrt{c\\,x_{j^*}} \\Big) = \\arg\\max_k \\sum_{j\\neq j^*} \\sqrt{P_{j|k}x_j}\n $$\n\n > Can you explain the rationale for entropy? \n\n Irrelevant addends are associated with features with a uniform distribution across the classes (see above). The uniform distribution maximises entropy. Thus, we weight addends based on entropy to \"shrink the contribution of irrelevant addends towards zero\" (lines 85-86).\n\n > In Equation 10, is $u_k$ a normalized vector?\n\n There is no $u_k$ in Equation 10. Maybe $W_{k|j}$? This is defined in Equation 9. In general, $u_k$ is unnormalized throughout the paper.\n\n- Q4bis: Fixed (lines 135-136).\n\n- Q6bis: Correct.\n\n- Q7bis: Global phases and normalization constants have no physical meaning. That's why we don't label the axes in Figure 4. The figure can be rotated and \"zoomed in/out\" and it would still provide the same interpretation. $\\bar{\\varphi}_s^{(k)}$ is interpretable only in *relative* terms.\n\n- Q8: Formalization of the Born rule as a general input-output transformation for supervised classification (lines 47-48).\n\n- Q9: In the empirical section, our classifier relies only on a bag-of-words (superposition-of-words) representation of the document. We chose baseline models that use the same bag-of-words representation for a fair comparison against a variety of consolidated approaches. We report Table 2 as a stress-test against approaches that use a more sophisticated representation of the document. We have edited Section 6.3 to clarify this point.\n\nWe hope that our response has adequately addressed your concerns. We would greatly appreciate it if you could engage with us during the discussion period on any remaining barriers to raising your score.", " We thank the reviewers for their thorough and constructive comments! Some points that relate to more than one review or that are of general interest are addressed here.\n\nAs a general remark, we would like to clarify that our method is not a novel state-of-the-art language model or a language model at all. Reviewer #5Hcj perfectly summarizes the very core of our paper and our contribution is better described as *\"the first step to develop a brand new general classification algorithm with certain explainability inspired by quantum physics [...] 
that shows impressive superiority over baseline algorithms in the prediction accuracy and training/inference efficiency\"*. \n\nWe apologize for the confusion and we have uploaded a revision that should clarify this point from the very beginning (abstract and introduction). In the revision, we also put vectors in bold to improve readability and we integrate several suggestions from the reviews. All changes are highlighted in red. Our point-by-point reply to individual reviews follows (note: the line numbers refer to the new revision). \n\nWe thank the reviewers for taking the time to review our work and we stay available for any additional request and clarification.", " \nThis paper uses the mathematical tools of quantum mechanics to realize a fast and accurate text classification algorithm. Specifically, in this paper, a document is regarded as a superposition state of words, and classification is viewed as the document collapsing to another superposition state associated with the target class. The collapse probability of the document can be computed by the Born rule. Most of the experimental results are better than those of the baseline models. In addition, the authors also give an interpretability analysis of the two methods. The interesting experimental results show that the classification algorithm is effective and interpretable.\n From the overall perspective, the idea of this paper is interesting, and the algorithm proposed is small and exquisite. The experimental results show that the training time is greatly shortened, and the experimental accuracy is improved compared with baseline models.\n\nHowever, the paper also has some weaknesses that need to be addressed. First, applying quantum-inspired models to supervised classification tasks is not the author's initiative. Moreover, in the experimental analysis, there is a lack of comparative experiments between this model and other quantum-inspired models. Second, the author uses quantum theory to analogize the relationship between documents and words, and the relationship between words and categories. This approach is based on intuition and has no strong theoretical basis. From the perspective of quantum measurement, this method is not reasonable. Third, the regularization lacks detailed analysis, and the use of entropy is incomprehensible. Fourth, some important information is not detailed in this paper, such as the necessity of using quantum theory, the method of quantum theory modeling prior knowledge, and what information the prior knowledge represents in the task. This makes people doubt whether it is necessary to introduce quantum theory into the problem raised in this paper. \n Q1: The motivation of this paper says the prior knowledge results in more efficient algorithms. What is prior knowledge in this work, and does it give an intuitive understanding at this point?\n\nQ2: The author says this work is the first supervised classification algorithm based on quantum theory. In my opinion, this view is not rigorous, because many quantum-inspired models have been used for supervised classification tasks, such as QMNN[1] and Complex-order[2]. In addition, there are some other quantum-inspired language approaches, e.g., quantum language models, quantum many-body wave language models. What is the advantage of this work over these language approaches?\n\nQ3: On line 68, $|j\\rangle$ is represented as a word, and the document is represented as a superposition of state $|\\psi\\rangle = \\sum_j \\psi_j |j\\rangle$. 
In quantum theory, the superposition state indicates that a quantum state is superimposed by its eigenstates, and will collapse to an eigenstate after measurement. However, here the superposition state represents a document, which means that a word will be used to represent the document after measurement. Even if there is no measurement and the document is expressed as the weighted sum of word vectors in the code, its physical meaning is still unreasonable. \n\nQ4: On line 71, the author represents each class $k$ with a wave function $|\\phi^{(k)}\\rangle = \\sum_j \\phi_j^{(k)} |j\\rangle$. Why is class $k$ a superposition state of words? Does this mean that class $k$ will be represented by a word in the document after measurement?\n\nQ5: In Equation 5, $p_{j|k}$ means the conditional probability of $j$ given $k$. It is necessary to specify the relationship between $j$ and $k$, and the reason why the conditional probabilities are used as the superposition state coefficients.\n\nQ6: On line 80, $x^{(n)}$ is a feature vector. Is $x^{(n)}$ a word or $|j\\rangle$? What is the relationship between $y^{(n)}$ and $y_k^{(n)}$? And $y_k^{(n)}$ has no definition.\n\n\nQ7: On line 85, the author says “the j-th addend does not alter the ranking of the probabilities $u_k$, thus being irrelevant for the final classification”. However, each $\\sqrt{p_{j|k} x_j}$ contributes to $u_k$ in Equation 6. Why is the j-th addend irrelevant for the final classification? The reason why entropy is used as a weight is not given. Can you explain the rationale for entropy? In Equation 10, is $u_k$ a normalized vector?\n\nQ4: On line 136, $\\Phi$ is a matrix of weights, and $\\Phi_{ks}$ is a scalar. Why are they equal? Is that ambiguous? Similarly, $v=v_s$ is also confusing.\n\nQ6: Can it be understood in this way: lines 145 to 152 give the initialization process of BL, and lines 153 to 155 give the initialization of BL by BC?\n\nQ7: On line 159, why does $\\rho e^{i\\theta} \\overline{\\varphi}_s^{(k)}$ have to be derived in this form? Can't $\\overline{\\varphi}_s^{(k)}$ be directly interpreted?\n\nQ8: The novelty of this work is mainly the Born rule, but most quantum-inspired models use quantum measurement theory to classify or judge. What is the essential difference between this work and other quantum-inspired models?\n\nQ9: The reasons for choosing these baseline models need to be explained. Is there comparability between this work and other baseline models in ideas, methods and other aspects? Why \n\n[1] Li Q, Gkoumas D, Sordoni A, et al. Quantum-inspired neural network for conversational emotion recognition[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(15): 13270-13278.\n\n[2] Wang B, Zhao D, Lioma C, et al. Encoding word order in complex embeddings[J]. arXiv preprint arXiv:1912.12333, 2019.\n The authors conclude that there are limitations in that they cannot explain why quantum-inspired algorithms are better than classical algorithms. In addition, I think a limitation of this work is also the proposal of the regularization term, and whether the regularization term can be repurposed and better explained by quantum theory.\n", " This paper proposes a novel algorithm derived from Born's Rule in quantum physics for text classification. The paper discusses various aspects of this classification algorithm, including training, regularization, generalization, explainability, and computational complexity. A neural architecture called Born Layer is also developed. 
The empirical results show the superiority of the proposed algorithm in prediction accuracy and computational complexity over several classical baseline ML algorithms. This work is promising to be the first step to develop a brand new general classification algorithm with certain explainability inspired by quantum physics. **Strength**\n\n- I love the idea of a classification algorithm based on the theory of quantum physics. The authors have a comprehensive discussion of various aspects of the proposed classification algorithm.\n- The algorithm is implemented with well-structured code, which should make it easy for the community to extend it and try it on more tasks.\n- The algorithm shows impressive superiority over baseline algorithms in the prediction accuracy and training/inference efficiency.\n\n**Weakness**\n\n- The empirical result is limited. The major experiments are only conducted on 20NewsGroup. Also, the compared baselines are relatively weak. (But considering this is a preliminary work in this direction, this is acceptable.)\n - In Figure 2, the training time of KNN increases as the dataset size grows. This implies that the compared KNN is not a brute-force one. Moreover, I would expect to see more details to be elaborated on for traditional algorithms in the supplemental material.\n- The Born Classifier shows a significant superiority over other classical algorithms when the dataset size is limited. This is surprising. Do you have any interpretation of this observation? Do you have such justification for other datasets?\n The author does not discuss the limitations.", " This paper proposes a novel text classification method (Born layer) based on quantum mechanics. The Born layer takes the input vector as the wave function and predicts the class of the given input with Born's rule. The Born layer can be applied to any other neural network or function that transforms the input text into a feature vector. This paper uses the simple text transformation method, tokenization, for the transformation function, with the square root function derived by the Born rule. Experiments on three text classification datasets (20Newsgroup, R8, and R52) show the efficacy of the proposed method. The proposed method is also computationally efficient. Strengths\n\n1. The proposed method is novel.\n2. This paper is the first trial that applies quantum theory to the text classification problem.\n3. Experimental results support the validity of the proposed method.\n\nWeaknesses\n\n1. This paper is hard to follow. See the questions section for more details.\n2. This paper does not provide any detailed description of their assumption that “text is a superposition of words.” What is the motivation that the authors see a text snippet as a superposition of words? How does this assumption bring a positive impact on text classification? What problem do existing studies have, and how does the proposed method resolve it? 1. Line 67: Is x_j an unnormalized probability? Eq 4 states that x_j is a probability.\n2. The Born layer is one of the main contributions of this paper. Did the authors try any ablation study on the Born layer? e.g., apply the Born layer to other approaches and see the performance improvement.\n\nComments\n\n1. In line 79, this paper describes P as the probability, but in line 83, the authors describe P as a vector. Please provide a clearer description.\n2. Please provide a more detailed description of the generalized form. What do the authors want to claim from the ablation study for the hyperparameters a and b?\n3. 
The proposed method is applicable to other text classification tasks such as sentiment classification. More experiments on other popular text classification problems are recommended. The authors have addressed the limitations. No negative societal impact.", " The paper describes an interesting application of the Born Rule to one of the most fundamental tasks in machine learning, which is text classification. The Born Rule is a concept in quantum physics for estimating the probability of an object (or document, in the context of this study) collapsing into one of the probable outcomes (the category/class of the text). In this line, the paper frames text classification as a superposition of words where the Born Rule can be integrated as a standalone classifier or integrated into a neural classifier. The authors show that the method outperforms both traditional and neural network based methods on three datasets (NG, R8, and R52) by a considerable margin, especially with >1 epochs. There is also an emphasis on the efficient runtime obtained by using the proposed method with Born's rule. 1. The paper is fairly easy to read and follow, and contains an arguably sufficient level of technicality that would make even intermediate readers interested in the concepts in quantum physics such as the Born rule. I also appreciate that the authors conveniently built the method on top of Scikit and Pytorch for ease of replication on the side of the readers. \n2. For this new approach, I expected a very definitive discussion on the limitations to describe the pitfalls of the proposed method in various applications and data types. Will it work on cases such as a sentiment classification task where some level of context is needed and there are cases of double negatives? Will it work on extremely noisy texts such as those collected from social media (Twitter)? What is the effect if we reduce the size of each text instance (say sentence classification instead of document-based classification)? \n3. Following #2, a definitive synthesis comparing TC with Born's rule to existing methods of text classification (using language models, traditional methods) should be in order. This can be tabulated for ease of reference.\n\nWith all the strengths and weaknesses of the paper, I believe it would still be beneficial for the ML community to learn from the paper provided the authors studiously highlight the limitations of the method. Thus, I would recommend acceptance on my part.\n 1. What was the motivation for traditional algorithms when language models such as BERT are already considered baselines in text classification, as they are equally accessible compared to the tested algorithms? Also, what were the criteria for choosing the three datasets when IMDB, Amazon, Yelp, AG are often used for the evaluation of newly-proposed TC methods (see Patel et al., NAACL 2021)?\n2. Are there any cases where TC with Born's rule favors particular classes based on their confusion matrices? The paper only shows Accuracy scores so there is uncertainty if it resolves class imbalance or not. This can be a limitation of the method.\n The paper covers a new method for implementing text classification that seems to outperform both traditional and some neural network-based methods in terms of speed and is on par with accuracy scores. The authors are advised to exhaustively list what the proposed method is (not) capable of doing, covering factors such as context, how text is processed, and type of task. 
Again, this can be described through a table. \n" ]
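The Born classifier discussed throughout these reviews admits a compact implementation. The sketch below is our reconstruction from the rebuttal's argmax derivation ($\sqrt{u_k} = \sum_j \sqrt{P_{j|k} x_j}$) and its description of the entropic re-weighting of Equation 11; the exact normalizations, the way the entropy weight enters, and all helper names are our assumptions, not the authors' released code.

```python
# Hedged numpy sketch of a Born-rule classifier with entropic re-weighting.
import numpy as np

def fit_born(X, y, n_classes):
    """X: (n_docs, n_feats) array of non-negative counts/tf-idf; y: int labels."""
    counts = np.zeros((X.shape[1], n_classes))
    for k in range(n_classes):
        counts[:, k] = X[y == k].sum(axis=0)
    # P(j|k): per-class feature distribution (columns sum to 1).
    P_jk = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1e-12)
    # Entropic weight from P(k|j): features spread uniformly over classes
    # (e.g., stopwords) get weight ~0; class-specific features get weight ~1.
    P_kj = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1e-12)
    P_kj = np.clip(P_kj, 1e-12, 1.0)
    H = -(P_kj * np.log(P_kj)).sum(axis=1)
    w = 1.0 - H / np.log(n_classes)
    return P_jk * w[:, None]

def predict_born(P_weighted, x):
    """Born rule: class score u_k = (sum_j sqrt(P_{j|k} * x_j))^2."""
    amplitudes = np.sqrt(P_weighted * x[:, None]).sum(axis=0)
    return np.argmax(amplitudes ** 2)
```

Note how the argmax derivation quoted in the rebuttal shows up here: a feature whose weighted $P_{j|k}$ is constant across classes adds the same amplitude to every class score and cannot change the prediction.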
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 2, 2, 4 ]
[ "R0tzSnli73YQ", "7VeLZ9K40cFY", "MSQTXI6NUXQ", "xJHCf0QnTy", "UyZCxMVOqALT", "do1U_Defjd", "3uCarveSVBL", "MqgZgPaJIvJ", "F7KtSNQIZp", "nips_2022_sNcn-E3uPHA", "nips_2022_sNcn-E3uPHA", "nips_2022_sNcn-E3uPHA", "nips_2022_sNcn-E3uPHA", "nips_2022_sNcn-E3uPHA" ]
nips_2022_8E8tgnYlmN
SIREN: Shaping Representations for Detecting Out-of-Distribution Objects
Detecting out-of-distribution (OOD) objects is indispensable for safely deploying object detectors in the wild. Although distance-based OOD detection methods have demonstrated promise in image classification, they remain largely unexplored in object-level OOD detection. This paper bridges the gap by proposing a distance-based framework for detecting OOD objects, which relies on the model-agnostic representation space and provides strong generality across different neural architectures. Our proposed framework SIREN contributes two novel components: (1) a representation learning component that uses a trainable loss function to shape the representations into a mixture of von Mises-Fisher (vMF) distributions on the unit hypersphere, and (2) a test-time OOD detection score leveraging the learned vMF distributions in a parametric or non-parametric way. SIREN achieves competitive performance on both the recent detection transformers and CNN-based models, improving the AUROC by a large margin compared to the previous best method. Code is publicly available at https://github.com/deeplearning-wisc/siren.
Accept
This work proposes a new unified distributional model to address out-of-distribution detection and improves over the state of the art. While the approach shares notable similarities with other works in the domain, the idea of creating a unified distributional representation also at the intermediate features, especially in the context of transformers, is new. While this work can be criticized for some missing comparisons with related works, the overall approach seems both sound and novel and of general interest. Therefore, I suggest its acceptance for NeurIPS 2022.
train
[ "J1S0RHKSrHy", "T8kmGUfnDtV", "14imOqQUp8w", "1CmtAp14p2", "Agpf_QafbOJ", "IuttBJlyZA8", "XiUPC8GQW79", "nPbmCQVnuQO", "YYvWJ_t8j3" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I will keep my score because the differences between the proposed model and the cited training-based approaches are not significant. No direct comparison against the methods we presented was shown. Finally, the majority of the concerns we presented were not covered in the rebuttal. Neither citing nor comparing against the previous related work for the last four years is hard to accept.", " **Clarification on the hyperparameters**\nFollowing [1], we carefully tune the hyperparameters on the speckle-noised in-distribution dataset, which are shown in **Section H** of the appendix. For the main task of object-level OOD detection, SIREN reduces the number of hyperparameters from 4 in [1] and 3 in [2] to 2 (loss weight and hyperspherical feature dimension).\n\n[1] Xuefeng Du, Zhaoning Wang, Mu Cai, Yixuan Li, VOS: Learning What You Don’t Know by Virtual Outlier Synthesis, ICLR2022\n\n[2] Akshita Gupta, Sanath Narayan, K J Joseph, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah, OW-DETR: Open-world Detection Transformer, CVPR2022\n\n**Added mean and standard deviations**\nFor object-level OOD detection, we have provided the mean and std values in **Table 1** for SIREN. The std values for the training-based baselines are shown in the following table (evaluated on COCO and OpenImages, respectively). There are no std values for post-hoc metrics since they are deterministic. \n\n| Method | mAP| FPR95 | AUROC|\n|:-----------:|:------------:|:------------:|:-----------:|\n|Generalized ODIN |60.4(0.7) |92.08(1.6) / 92.83(0.7) | 57.34(1.4) / 55.74(0.9)|\n|CSI | 59.5(0.4) | 84.00(1.3) / 79.16(0.5)| 55.07(0.2) / 51.37(0.5) |\n|VOS |60.3(0.2) |97.46(0.1) / 97.07(0.3)| 54.40(0.3) / 52.77(0.2)|\n|OW-DETR | 58.3(0.5)| 93.09(1.2) / 93.82(0.9)| 55.70(0.8) / 57.80(0.8)|\n\nFor the std values for the classification networks, we report on the CIFAR10 dataset (evaluated on ResNet and DenseNet, respectively). \n\n| Method | FPR95 | AUROC|\n|:-----------:|:------------:|:-----------:|\n|Vanilla cls. network | 70.71(0.9) / 40.91(0.4) | 79.23(0.5) / 88.32(0.1) |\n|SIREN (ours) | 64.56(1.3) / 40.67(1.1) | 84.86(0.9) / 92.37(0.5) |\n", " **Difference between our SIREN and loss-based approaches**\nWe appreciate the reviewers for bringing our attention to the loss-based uncertainty estimation approaches! We respect the invention of these related works. However, after carefully examining these related works, we believe the main differences between them and our approach can be summarized into five folds:\n\n+ First, from the motivation/novelty level, as R1 and R2 noticed, the main contribution of SIREN is that it pioneers the effort of unifying the distributional model between training and testing for OOD detection. Different from the previous works, both our representation modeling/learning and OOD detection operate under a **consistent distributional model** and enjoy strong mathematical compatibility. This is also highlighted at the end of the **Introduction** section. In contrast, the mentioned works mainly aim at designing a loss that is suitable for seamless OOD detection tasks (fast energy-efficient inferences, no classification accuracy drop, no hyperparameter tuning, and no collection of outlier data), which is not related to learning a unified and coherent distributional model for OOD detection. 
Additionally, even though several works, such as DisMax, employ unit-norm logits to design losses, they did not follow the motivation of unifying the distributional model between training and testing for OOD detection. The details of the similarity calculation in the unit space also differ (L2 vs. cosine similarity).\n\n+ Second, from the modeling level, SIREN estimates the class-conditional vMF distribution from data, where each class has its own learnable distributional parameters of mean and kappa, which is totally different from scaling the logits with a constant factor in IsoMax. Thus, claiming “Kappa appears similar to essentially the entropic scale of IsoMax” is not accurate.\n\n+ Third, from the architecture level, SIREN focuses on shaping the **intermediate representations** (such as the features of the penultimate layer) in **transformer-based** models and meanwhile demonstrates promise in CNN-based models. However, the mentioned works usually pay attention to **CNN-based models only** and manipulate the **final logit layer** in their proposed distance-based loss. We encourage the reviewer to be aware of the difference between the **representations** in SIREN and the **logits** in the mentioned papers.\n\n+ Moreover, from the task level, SIREN establishes state-of-the-art results on a challenging object-level OOD detection task while the mentioned works purely focus on OOD detection on classification models.\n\n+ Finally, from the detail level, there are many differences between the loss-based approaches and our SIREN, such as vMF-based representation-shaping loss vs. distance-based classification loss, randomly initialized prototypes vs. prototype estimates from the vMF distributions, etc. \n\nWe believe these major differences support the validity and contributions of our proposed approach. At the same time, again we respect the invention of these great loss-based OOD detection approaches and are very much willing to discuss these differences after revision **with proper citations**.\n\n**Added results on the classification accuracy**\nWe provide the comparison of the in-distribution accuracy of a vanilla classification network and SIREN in the following table (evaluated on ResNet and DenseNet, respectively), which shows that SIREN maintains the in-distribution accuracy for classification models as well.\n\n| Method | Acc (%) |\n|:-----------:|:------------:|\n| Vanilla cls. network | 88.31 / 93.09 |\n|SIREN |88.08 / 93.47 | \n\n**Added results on the CIFAR100 dataset**\nSIREN focuses on object-level OOD detection. Additionally, we provide the OOD detection performance on the CIFAR100 dataset as follows (evaluated on ResNet and DenseNet, respectively):\n\n| Method | Acc (%) | FPR95 | AUROC|\n|:-----------:|:------------:|:------------:|:-----------:|\n| Vanilla cls. network | 61.43 / 72.03 | 86.82 / 57.02 | 56.89 / 77.10 | \n|SIREN |61.77 / 72.69 | **81.33 / 54.11**| **60.70 / 78.23** | \n\n**Clarification on the end-to-end backpropagation-based training**\nSIREN is **end-to-end trainable** in terms of both the object detection models and the vMF loss for shaping representations, since we use an exponential moving average to estimate the class-conditional prototypes, and the calculation formula for the distributional parameter kappa allows gradient updates as well.\n\n\n", " We are encouraged that the reviewer finds our work effective and plausible, with detailed and extensive experiments and a well-organized and sophisticated presentation. We thank the reviewer for their helpful comments and suggestions, which we address below:\n\n**Clarification on our contribution**\nThanks for bringing our attention to the main contribution of SIREN! As noticed by R1, the main contribution of our proposed SIREN is that it pioneers the effort of shaping the representations for OOD detection under a coherent distributional model and shows effectiveness for both transformer-based and CNN-based models, which is also explained at the end of the **Introduction** section.\n\nFor the test-time OOD detection score, we propose to use the largest estimated class-conditional likelihood for SIREN as the OOD score, which operates under the same distributional model as the training process, and hence enjoys strong mathematical compatibility. Computationally, we can directly use the parameters of the vMF distributions (such as kappa) learned from training, which makes it more stable and efficient than estimating the covariance matrix in the Mahalanobis distance. Thus, the design of the inference score serves as an important step in order to fulfill our motivation--“unifying the distributional model between training and testing for OOD detection”. The benefit of our proposed test-time inference score is illustrated in **Remark 2**.\n\nFor the similar design of prototype update in other papers, absolutely fair point. We did not claim that to be our main contribution. We will discuss the design in more detail in Section 3.1 and cite the relevant papers after revision, including [1, 2].\n\n\n**Clarification on the choice of vMF distribution**\nGreat point raised! We choose to model the representations by the von Mises-Fisher distribution because it is a simple and expressive probability distribution in directional statistics for hyperspherical data. \nCompared with the Gaussian distribution, SIREN explores the vMF distribution to estimate the distributions (**Section 3.1**), which avoids estimating large covariance matrices for high-dimensional data, shown to be costly and unstable in the literature [3,4] and in **Remark 2** of our paper. Meanwhile, choosing the vMF distribution allows us to have the features live on the unit hypersphere, which leads to several traits. For example, fixed-norm vectors are known to improve training stability in modern machine learning where dot products are ubiquitous [5], etc., which we believe is beneficial to help SIREN learn a good representation space for OOD detection.\n\n[1] Junnan Li, Caiming Xiong, Steven C.H. Hoi, MoPro: Webly Supervised Learning with Momentum Prototypes, ICLR2021.\n\n[2] Wang, Haobo, et al., PiCO: Contrastive Label Disambiguation for Partial Label Learning, ICLR2022.\n\n[3] Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin, A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks, NeurIPS 2018\n\n[4] Xixian Chen, Michael R. Lyu, Irwin King, Toward Efficient and Accurate Covariance Matrix Estimation on Compressed Data, ICML2017\n\n[5] Tongzhou Wang, Phillip Isola, Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere, ICML2020.\n\n", " We are glad that the reviewer found our method effective in shaping the representations for OOD detection under a coherent distributional model. We thank the reviewer for the constructive comments and suggestions, which we address below:\n\n**The performance on ID data**\nWe provide the ID performance of mAP for SIREN on object detection models in Table 1, which shows that our proposed SIREN does not hurt the ID performance while significantly improving the results on OOD detection. \n\n**Clarification on the choice of vMF distribution**\nGreat point raised! We choose to model the representations by the von Mises-Fisher distribution because it is a simple and expressive probability distribution in directional statistics for hyperspherical data. \nCompared with the Gaussian distribution, SIREN explores the vMF distribution to estimate the distributions (**Section 3.1**), which avoids estimating large covariance matrices for high-dimensional data, shown to be costly and unstable in the literature [1,2] and in **Remark 2** of our paper. Meanwhile, choosing the vMF distribution allows us to have the features live on the unit hypersphere, which leads to several benefits. For example, fixed-norm vectors are known to improve training stability in modern machine learning where dot products are ubiquitous [3], etc., which is beneficial to help SIREN learn a good representation space for OOD detection.\n\n[1] Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin, A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks, NeurIPS 2018\n\n[2] Xixian Chen, Michael R. Lyu, Irwin King, Toward Efficient and Accurate Covariance Matrix Estimation on Compressed Data, ICML2017\n\n[3] Tongzhou Wang, Phillip Isola, Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere, ICML2020.\n\n**Added results on the CNN-based object detection models**\nAnother great point! The main objective of SIREN is to shape the embedding geometry for distance-based OOD detection. Unlike VOS, SIREN does not explicitly incorporate virtual outliers for model regularization. \nIn fact, we show that SIREN's representation shaping ability can be beneficial and play a complementary role to VOS. We have tried to shape representations during the training of VOS and report the OOD detection performance when we use the same OOD score, which is shown in the following table (evaluated on COCO and OpenImages, respectively). The result demonstrates the effectiveness of our training objective. 
We will add the comparison after revision.\n\n| Method | FPR95 | AUROC|\n|:-----------:|:------------:|:-----------:|\n|VOS | 49.02/52.11 | 88.66/85.36|\n|VOS+SIREN |**48.01/50.28** | **89.73/86.06**|\n\n**Added results on fixed object detector**\nWe report the OOD detection performance when DDETR is trained for 50 epochs without the objective of SIREN (to ensure we have a good object detector) and then the projection head is fine-tuned with the fixed object detector for another 50 epochs. On the Pascal VOC dataset, the comparison with SIREN is shown as follows (evaluated on COCO and OpenImages, respectively):\n\n| Method | mAP| FPR95 | AUROC|\n|:-----------:|:------------:|:------------:|:-----------:|\n|SIREN |**60.8** | **75.49 / 78.36** |**76.10 / 71.05** |\n|SIREN w/ fixed detector | 60.5 | 83.21 / 85.94 | 69.82 / 62.07 |\n\nThe table shows that training with the fixed object detector cannot outperform jointly training the projection head with the detection network. The reason may be that the joint training allows the gradients to shape representations for multiple different layers in the detection network, which is more flexible for changing the representations and estimating the vMF distributions, and is thus more beneficial for OOD detection than only optimizing the MLP projection head.\n", " We are pleased to see that reviewers find that the method is **effective** and **plausible** (R1, R2), and the results on both transformer-based and CNN-based models are **detailed**, **perform well** and **extensive** (R1, R2). We are equally glad that the reviewer found the paper **organized well** and **sophisticated** (R2). \n \nWe have addressed the reviewers' comments and concerns in **individual responses to each reviewer**. The reviews allowed us to improve our draft and the changes made in the revised draft are summarized below:\n \nFrom Reviewer azqk:\n+ Clarified the performance of SIREN on the ID data.\n+ Added results on CNN-based object detection models and with a fixed detector.\n \nFrom Reviewers azqk and aJVX:\n+ Clarified the choice of vMF distribution.\n \nFrom Reviewers aJVX and d8dM:\n+ Clarified novelty and contribution.\n \nFrom Reviewer d8dM:\n+ Added results on the classification accuracy of CIFAR100 experiments, and std values for baselines.\n+ Clarification on the difference between SIREN and recent works and other confusion.\n", " This paper shows that the representation embeddings in the transformer-based models do not fit the Gaussian distributional assumption. As a result, previous OOD detection methods relying on such assumptions cannot work. \n\nThis paper proposes the method (SIREN) to shape the representations into compact class-conditional vMF distributions, which unifies the distributional model between training and testing. And the proposed method is effective for both transformer-based and CNN-based models.\n Strengths:\n\n- SIREN shapes the representations for OOD detection under a coherent distributional model, which does not depend on the distribution of the model's representation embeddings.\n- SIREN unifies the distributional model between training and testing for OOD detection and shows effectiveness for both transformer-based and CNN-based models.\n\nWeaknesses and questions:\n\n- Why choose the vMF distributions as the coherent distributional model, rather than other distributions?\n\n- The performance on CNN-based object detection is insufficient (table 2), and the author needs to provide more results. 
What's more, the VOS performs better than SIREN in CNN-based object detection, which needs some explanation.\n\n | Method | FPR95 | AUROC |\n | ------ | ----------- | ----------- |\n | SIREN | 65.45/67.33 | 84.60/81.51 |\n | VOS | 49.02/52.11 | 88.66/85.36 |\n\n- From Figure 6, SIREN improves the embedding quality; does this influence the performance on ID data? It would also be better to show the performance under the following setting: a fixed detector with only the projection head trained. Please see the \"Strengths and Weaknesses\" section. This paper did not describe the limitations of this work. Every paper has limitations and corner cases. I recommend the authors supplement the limitations of this work.", " This paper proposes a novel OOD detection framework SIREN that relies on vMF distributions, which unifies representation learning and OOD detection. Detailed experiments and solid ablation studies demonstrate the effectiveness of the proposed method. **Strengths**:\n1. This paper is organized well and the overall presentation is sophisticated.\n2. The method is plausible, and the experimental results perform well.\n3. Extensive experiments and detailed ablation analysis.\n4. The theoretical proof of the pivotal distribution deviation vector K has been explained in the appendix.\n\n**Weaknesses**:\n1. The originality is a bit limited, and the impact on the field is also incremental.\n\n Let's briefly summarize this paper: the authors introduce a pretext task of shaping the final image map, then exploit the corresponding maximum class-conditional likelihood to classify whether the current image is OOD or not. Thus, this approach can still be covered as an OOD method based on uncertainty and fixed thresholds.\n\n2. Test-time OOD detection can hardly be highlighted as a new contribution, because it is more like a test solution specific to this method.\n\n3. Updating the mean of a distribution by using prototypes is not fresh to me; in fact, it has appeared in the following paper:\n\n > Wang, Haobo, et al. \"PiCO: Contrastive Label Disambiguation for Partial Label Learning.\" arXiv preprint arXiv:2201.08984 (2022).\n 1. This article selects the vMF distribution to shape the image embedding, but does not clarify why the vMF distribution is the chosen one, and whether any Gaussian-like distribution is feasible. Limitations have been adequately addressed. No serious concerns.", " The paper's main assumption is that we need to construct representations during training that are compatible with the detection distributions we observe during inference.\n\nUnlike the Mahalanobis approach, which is an inference-only solution because it does not affect the network's training, the paper proposes constructing a loss to train the network to produce consistent distributions in the training and test sets.\n\nIt proposes to train the network so that the representations follow a mixture of von Mises-Fisher (vMF) distributions, making the train and test distributions compatible to improve OOD detection performance.\n **Weaknesses:**\n\n_\"Shaping representations\" and \"leverage from the learned representations\" is what loss-based approaches (IsoMax, Scaled Cosine, SNGP, DUQ, and IsoMax+) have been doing since 2019. The fact that these loss-based approaches for OOD detection were not even cited may explain why the paper understands \"shaping representations\" and \"leverage from the learned representations\" as a significant novelty. 
Unfortunately, it is clearly not the case._ \n\n_Unfortunately, the paper entirely ignores the last three years of relevant advances in loss-based approaches that are completely reshaping the area after the Mahalanobis inference-based approach faded out. The paper insistently compares against Mahalanobis, a four-year-old approach that has been outperformed very easily in the last three years._\n\n**1. Making representations compatible with the OOD Detection procedure is at least three years old.** \n\nThe Mahalanobis paper is from 2018. At that time, inference-based metric learning approaches for OOD were trending. Since 2019, we have observed that loss-based methods are now mainstream. Therefore, training the model using a loss to make the training representations and test/inference distributions compatible, which is the central claim of the paper's novelty, is not novel at all.\n\nUnfortunately, the paper fails to realize that training the model to produce train and inference representations compatible with performing OOD has been the mainstream approach since 2019 and was proposed by IsoMax, Scaled Cosine, DUQ, SNGP, and more recently by DisMax. Kappa appears similar to essentially the entropic scale of IsoMax.\n\n**2. Forcing representations to the unit hypersphere is not novel.**\n\nForcing representations to the unit hypersphere during training to improve OOD detection was done in Scaled Cosine and IsoMax+, and more recently by DisMax. \n\n**3. Very limited novelty.**\n\nPoint #1 above showed that leveraging representations/distributions learned during training to perform OOD detection has already been proposed. Moreover, point #2 above showed that unit hypersphere representations have also already been proposed. We conclude that the novelty of the proposed approach is very limited.\n\n**4. Neither cited nor compared with similar previous approaches.**\n\nConsidering that the paper essentially proposes a loss to train the network to improve OOD detection, we believe that the paper should have mentioned and compared with (at least) IsoMax, Scaled Cosine, DUQ, IsoMax+, and SNGP. Considering that this did not happen, we do not know if using vMF helps in any way.\n\n**5. Results for CIFAR10 do not look impressive.** Apparently, the results for CIFAR10 are not remarkable compared to the approaches mentioned above (a direct comparison is absolutely mandatory). We would also like to see results for CIFAR100. \n\n**6. Results for CIFAR10 need to show classification accuracy.** We also would like to see classification accuracy results to evaluate a possible classification accuracy drop, which is very usual in loss-based approaches.\n\n**7. We need many more experimental results using CIFAR10. Furthermore, we need CIFAR100 results as well.** CIFAR10 and CIFAR100 still are the _de facto_ standard datasets on which major OOD detection approaches have published results. The paper requires much more experimentation using the CIFAR10 and CIFAR100 datasets. Classification accuracy also needs to be shown and analyzed, as classification accuracy drop is a significant issue when using loss-based approaches for OOD detection.\n\n**8. Unlike recent approaches, the proposed solutions do not allow regular pure end-to-end backpropagation-based training.**\n\n**9. The proposed approach has many more hyperparameters than current state-of-the-art approaches.**\n\n**10. Results should show mean and standard deviations.**\n\nIsoMax: https://arxiv.org/abs/1908.05569\n\nIsoMax (journal): https://arxiv.org/abs/2006.04005\n\nScaled Cosine: https://arxiv.org/abs/1905.10628\n\nDUQ: https://arxiv.org/abs/2003.02037\n\nSNGP: https://arxiv.org/abs/2006.10108\n\nIsoMax+: https://arxiv.org/abs/2105.14399\n\nDisMax: https://arxiv.org/abs/2205.05874\n\nAfter rebuttal: I will keep my score because the differences between the proposed model and the cited training-based approaches are not significant. No direct comparison against the methods we presented was shown. Finally, the majority of the concerns we presented were not covered in the rebuttal. Failing to either cite or compare against the related work from the last four years is hard to accept.\n We would like the authors to update the paper considering the comments presented on the above topics. The paper does not properly comment on the approach's limitations. Unlike many other cited approaches, it does not allow pure end-to-end backpropagation training. It should also better explain the hyperparameter validation procedures required for the alpha, beta, and the dimension of the last layer." ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 2 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "YYvWJ_t8j3", "14imOqQUp8w", "YYvWJ_t8j3", "nPbmCQVnuQO", "XiUPC8GQW79", "nips_2022_8E8tgnYlmN", "nips_2022_8E8tgnYlmN", "nips_2022_8E8tgnYlmN", "nips_2022_8E8tgnYlmN" ]
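The record above debates vMF-based representation shaping for OOD detection. As a brief illustration of the scoring rule under discussion -- our own hedged sketch, not code from the SIREN paper, with `vmf_ood_score`, `prototypes`, and the shared `kappa` all being assumed names and parameters -- note that for unit-normalized embeddings with a concentration parameter shared across classes, the class-conditional vMF log-likelihood equals kappa times the cosine similarity with the class mean direction, up to a class-independent constant:

```python
import numpy as np

def vmf_ood_score(z, prototypes, kappa=10.0):
    """Higher score = more in-distribution.

    `prototypes` is a [K, d] array of unit-norm vMF mean directions mu_k.
    With a shared kappa, log p(z | class k) = kappa * (mu_k . z) + const,
    so the maximum class-conditional log-likelihood reduces to a scaled
    max cosine similarity.
    """
    z = np.asarray(z, dtype=float)
    z = z / np.linalg.norm(z)               # project the embedding onto the unit sphere
    return kappa * float(np.max(prototypes @ z))
```

Thresholding this score yields an OOD decision; it is one concrete way in which the training-time representation objective and the test-time detection rule share the same distributional form, which is the compatibility argument the third review disputes as novel.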
nips_2022_t3X5yMI_4G2
Reincarnating Reinforcement Learning: Reusing Prior Computation to Accelerate Progress
Learning tabula rasa, that is without any prior knowledge, is the prevalent workflow in reinforcement learning (RL) research. However, RL systems, when applied to large-scale settings, rarely operate tabula rasa. Such large-scale systems undergo multiple design or algorithmic changes during their development cycle and use ad hoc approaches for incorporating these changes without re-training from scratch, which would have been prohibitively expensive. Additionally, the inefficiency of deep RL typically excludes researchers without access to industrial-scale resources from tackling computationally-demanding problems. To address these issues, we present reincarnating RL as an alternative workflow or class of problem settings, where prior computational work (e.g., learned policies) is reused or transferred between design iterations of an RL agent, or from one RL agent to another. As a step towards enabling reincarnating RL from any agent to any other agent, we focus on the specific setting of efficiently transferring an existing sub-optimal policy to a standalone value-based RL agent. We find that existing approaches fail in this setting and propose a simple algorithm to address their limitations. Equipped with this algorithm, we demonstrate reincarnating RL's gains over tabula rasa RL on Atari 2600 games, a challenging locomotion task, and the real-world problem of navigating stratospheric balloons. Overall, this work argues for an alternative approach to RL research, which we believe could significantly improve real-world RL adoption and help democratize it further. Open-sourced code and trained agents at https://agarwl.github.io/reincarnating_rl.
Accept
This paper proposes a novel method for transferring prior policies across design and system changes to improve the sample efficiency of RL algorithms, which could ultimately help unlock RL for real-world use cases. There was an active discussion throughout the reviewing process, in which the authors managed to address the concerns of the reviewers, leading to updated scores. Based on the contributions of the paper and the highly positive final reviews, I recommend the paper for acceptance.
train
[ "VyC9lD2-pR", "JaJv00rMnTi", "acIVDId807W", "cr_h-5cn9cK", "EJPRgjEgOmQ", "H2fnLR5h7G", "gnQbKtdUcfo", "i1F_JoxHnVO", "yGpfFA_vFFa", "VHNMgnLz0AM", "2u0WqO2r1e6", "aWrZNDQensX", "dvsgH_o49J", "6BNRJF8QZ-", "j3YlPJawdD", "5V3OFWHXELD", "xPw3ENBwL7f" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Apologies for the delay in reply. I appreciate the thoughtful response. \n\nMy concerns have been addressed, and I have updated my score.", " We thank the reviewer for reading our response and engaging in follow-up discussion.\n\n> **If the pre-trained policy is good enough to collect samples without safety issues or incurring significant cost, that would mean the pre-trained policy is already of high quality and can be deployed to complex environments; then we have to think about why we would consider fine-tuning or re-incarnating the policy.**\n\nWhile the pre-trained policy achieves good performance, it is often desirable to further improve this policy's performance (via architectural / algorithmic changes). This is especially relevant in real-world RL settings where improving performance leads to real-world impact, such as chip design and tokamak control, but where tabula rasa re-training is computationally / data expensive. The BLE domain shows this use-case: given access to Perciatelli, trained via distributed RL for more than a month, we show how fine-tuning / reincarnation can improve upon this policy compared to tabula rasa training with a finite compute/sample budget.\n\nFurthermore, on Atari, we show that we can improve upon the teacher policy, a DQN(Adam) policy trained for 400M frames, by switching to an Impala architecture and the Rainbow algorithm, obtaining nearly *2x higher IQM* performance than the teacher (Figure 9).\n\n> **It could be 'simply collecting 1M samples' for a simulated environment, especially game environments like Atari, but it's quite a big assumption in scenarios where collecting samples could be costly.**\n\nIndeed, PVRL methods that perform well with a smaller number of environment samples would be desirable. Since most prior methods were not designed for policy-to-value reincarnating RL (PVRL) due to the lack of research in this setting, this work argues for doing research in PVRL, where we intend for QDagger to serve as a starting point. \n\n> **I would not say that we have to completely get rid of online interaction, because we would need online interaction anyway for re-training the policy, but the dependency on a dataset consisting of tens or hundreds of episodes only for value learning seems like a rather strong assumption to me.**\n\nCollecting data from the teacher instead of the student is simply one way of choosing how to use the online interactions in the environment. For a given budget of online interactions, we can choose to use some part of the budget for teacher data collection while using the remaining budget for student data collection (akin to Figure A.19). Furthermore, not using the teacher's final replay buffer or teacher-collected data for pretraining corresponds to kickstarting, which results in poor performance relative to QDagger. \n\n> **Missing related works or references for A.6, especially on leveraging the pre-trained representations from images and videos given the recent surge of relevant works in the field. Currently it's missing all the references (not only for representation pretraining) in A.6.**\n\nFor leveraging pre-trained representations, a large fraction of work is about how to learn these representations (such as in unsupervised RL), and we'll add citations of representative work. For other examples (such as leveraging existing agents / data), the related work discusses those papers, and we'll also include them in Appendix A.6 for completeness.
", " Thank you for the very well-done and extremely thorough response!\n\nMy concerns have been addressed and I have updated the score.", " Thanks for the detailed response to my review and also to the other reviewers' comments.\n\n- I think the modification made in the rebuttal -- toning down some strong claims about the problem setup (as pointed out by mLNV) -- could definitely clarify the position of the work relative to related works. A comment I have on this front is the missing related works or references for A.6, especially on leveraging pre-trained representations from images and videos, given the recent surge of relevant works in the field. Currently it's missing all the references (not only for representation pretraining) in A.6.\n\n- I still have a concern about the dependency on the dataset. It could be 'simply collecting 1M samples' for a simulated environment, especially game environments like Atari, but it's quite a big assumption in scenarios where collecting samples could be costly. If the pre-trained policy is good enough to collect samples without safety issues or incurring significant cost, that would mean the pre-trained policy is already of high quality and can be deployed to complex environments; then we have to think about whether we would really need fine-tuning or re-incarnation of the pre-trained policy. I would not say that we have to completely get rid of online interaction, because we would need online interaction anyway for re-training the policy, but the dependency on a dataset consisting of tens or hundreds of episodes only for value learning seems like a rather strong assumption to me.\n\nDue to the aforementioned limitation of the method, I would like to maintain the current score of weak accept.", " I thank the authors for their detailed response and the adjustments they've made. Given that these addressed my concerns, I'm happy to raise my score to a strong accept (8).", " We thank all the reviewers, $R1$ (kCKQ), $R2$ (jTCn), $R3$ (VM1W), and $R4$ (mLnV), for their valuable feedback! All reviewers found the paper to be well-written and presented, with sound results and extensive experiments, and are in favor of accepting the paper. This work also generated a lot of questions and discussion from the reviewers, who mentioned that reincarnating RL would be of interest to various researchers. \n\nBased on reviewers’ comments, we have **revised the paper** to add discussion and a couple of new experiments, $\\textcolor{red}{shown\\ in\\ red}$, with clarifications (marked in $\\textcolor{brown}{brown}$). We summarize the changes below:\n\n- *Reproducibility and Comparisons in Reincarnating RL*: Added detailed discussion in **Appendix A.2** [$R1, R2, R4$]\n- Highlighted that prior reincarnation efforts were *ad hoc* and have limited applicability [$R1, R2$]\n- *Generalizability in Reincarnating RL*:
Added discussion in Appendix A.2 and results in **Figure A.11** showing that the ranking of PVRL algorithms remains consistent across two different teachers [$R1, R2$]\n- *Experimental details about BLE* for self-containedness in **Appendix A.5.2** [$R1, R3$] \n- Clearly framing reincarnating RL as a class of problem settings, with examples in Appendix A.6 [$R4$]\n- Additional *ablations* in **Appendix A.7** showing that QDagger works well without offline replay [$R3$] and the impact of QDagger in the online phase [$R4$]\n- Included the takeaway from the n-step experiments and clarifications about humanoid-run [$R3$]\n\n**Update**: Based on our changes and response, reviewers $R1$, $R2$ and $R4$ acknowledged that their concerns have been addressed and updated their assessment to recommend *accept*, *accept* and *strong accept*, respectively.", " > **Plenty of work that would fall into reincarnating RL, as seen in \"Related Work\". .. not sure reincarnating RL could be considered an original contribution**.\n\nAs we mention in the paper, while several large-scale efforts have used reincarnation, it has typically been done in an ad-hoc way and has limited applicability. Existing approaches such as PBT (Starcraft) and Net2Net (Dota) cannot be applied for reincarnating RL when switching to arbitrary architectures or from a policy to a value function. Similarly, behavior cloning (Rubik’s cube) is only useful for policy-to-policy transfer and is inadequate for the PVRL setting. \n\nOur formalization argues for studying reincarnating RL as a research problem and developing broadly applicable reincarnation approaches. For example, the policy-to-value reincarnation setting studied in this work enables us to apply reincarnation to problems where prior reincarnation approaches are not applicable, including (1) transferring an existing DQN policy to an Impala-CNN Rainbow agent in ALE, and (2) transferring Perciatelli (a distributed QR-DQN agent with an MLP architecture) to a recurrent R2D2 agent in BLE.\n\n> **Given the seeming lack of originality of QDagger, it is difficult to say that the contribution itself is highly significant.**\n\nWhile QDagger is a simple algorithm, it works particularly well for PVRL, and its significance should take into account that it allows us to efficiently apply reincarnation to problem settings where prior reincarnation approaches are not applicable (research workflow experiments on ALE and BLE). Our intention is for QDagger to serve as a starting point for future work in this area.\n\n> **What example settings (beyond BLE) are you thinking of when it comes to \"real-world RL adoption\"? In what real-world scenarios would a broader community improving existing trained agents be helpful?**\n \nAny real-world setting where RL is currently deployed in practice and tabula rasa re-training is computationally / data expensive is a good candidate for the adoption of reincarnating RL, due to the existence of prior computational work (such as existing deployed policies or data) and the challenge of improving upon the existing deployed policy (which is often the goal).\n \nRL problems where improving performance leads to real-world impact, such as chip design [1], tokamak control [2], compiler optimization [3], etc., are scenarios where continually improving existing trained agents might be helpful.
We have added this point in the introduction.\n \n> **How do you imagine reincarnating RL research will look in practice?**\n \nWe speculate that research in reincarnating RL can branch out in two directions: \n \n- **Standardized benchmarks with open-sourced computational work** (e.g., fixed teachers): This research would mostly focus on the development of reincarnating RL approaches. Akin to NLP and computer vision, where typically a small set of pretrained models are used in “pretraining and fine-tuning” research, this research can also possibly converge to a small set of open-sourced computational work (e.g., pre-trained teachers) on a given benchmark, for example, the teacher DQN agents and data we released on Atari.\n\n- **Real-world domains**: Since obtaining higher performance has real-world impact in such domains, it incentivizes the community to build on current “state-of-the-art” agents and try to improve their performance. This would also likely benefit from methods' research on standardized benchmarks.\n\n> **How is a real-world practitioner supposed to use the learnings from this line of work on a new task?**\n \nThe generalizable findings from reincarnating RL research would be about comparing algorithmic efficacy given access to the same computational work (e.g., policies) on a specific task. As such, practitioners can use these findings to try to improve on an existing deployed RL policy (as opposed to being restricted to running tabula rasa RL). For example, this work shows that QDagger is effective for training a value-based agent given access to an existing policy on ALE, and as such, QDagger can be tried in new tasks where we have an existing policy (e.g., BLE, where we had the Perciatelli teacher). \n\n----------------------------\n*We hope that most of the reviewer’s concerns have been addressed and, if so, that they would reconsider their assessment. We’d be more than happy to engage in further discussions.*", " We thank the reviewer for their constructive feedback. Main changes in the revision, $\\textcolor{red}{shown\\ in\\ red}$, are listed in the “Summary of Changes” comment. We first address concerns related to limitations and questions raised by the reviewer.\n\n> **This paradigm introduces its own host of problems .. \"reincarnation from two distinct policies that perform similarly results in different performance trends\". Thus, it seems as though it may be very easy to overfit to specific implementations and seeds of the pre-trained agents.**\n \nWe’d like to clarify that the two distinct policies (Figure 10), mentioned in Line 336, were obtained through different agents and do not correspond to different random seeds: (1) Training DQN (Adam) from scratch for 400M frames, (2) Fine-tuning DQN(Nature), trained for 200M frames, with Adam for 20M frames. These two policies have similar IQM performance but different behaviors, which results in different reincarnation performance. Furthermore, reported results are aggregated across 3 random seeds of the pretrained policy for each of the 10 games, which shows the robustness to random seeds.\n \nWe expect algorithmic rankings to typically be preserved with these two teachers despite resulting in different reincarnation performance. In fact, we empirically verified this by running an experiment to evaluate the top 3 methods in our work (QDagger, kickstarting, pretraining) using the fine-tuned Nature DQN teacher and found that the ranking of the methods does remain unchanged (**Figure A.11**).
We have added this discussion in **Appendix A.2**.\n\n> **The precise details of generation of teacher policy / data become important for reproducibility; adding on to the challenge of reproducing work from scratch.**\n \nWe acknowledge that *reproducibility from scratch* is challenging and have added a discussion in **Appendix A.2**. The “reproducibility from scratch” issue also exists in NLP and computer vision, where existing pretrained models (e.g., GPT-3) are rarely, if ever, reproduced / re-trained from scratch but almost always used as-is. Despite this, “pretraining-and-fine-tuning” is a dominant paradigm in NLP and vision, and we believe this issue should not prevent us from doing research in reincarnating RL.\n \nFurthermore, reproducing existing computational work, which could be more expensive than training tabula rasa agents, defeats the purpose of using reincarnating RL. Instead, reincarnating RL research would build on open-sourced prior computational work on a given benchmark. For example, to allow others to use the reincarnation setup in our work, we have already open-sourced [DQN model checkpoints and replay buffers](https://console.cloud.google.com/storage/browser/rl_checkpoints) in the original submission. \n\n> **The limitations on reproducibility are problematic since science is about repeatedly testable hypotheses.**\n\nThe testable hypothesis in reincarnating RL research would be about whether a given algorithm is better than another algorithm given the same prior computational work. For example, one hypothesis could be whether QDagger is a better algorithm than kickstarting, which can be repeatedly tested, for example, by rerunning PVRL experiments on ALE.\n\n> **I'm not sure if I agree with \"reincarnating RL\" as being a superior approach for democratizing RL research .. environments such as Minatar democratize research and allow for cheaper testing of tabula rasa RL.**\n\nWe did not claim superiority of reincarnating RL over other approaches for democratizing RL research. That said, while it is worthwhile to do RL research on small-scale problems, it is also important to be able to investigate RL on more complex problems, as otherwise we’d run the risk of overfitting to small-scale problems (for example, it is unclear whether better performing methods on MinAtar would perform better on more complex tasks like Starcraft or MineCraft). However, research on large-scale RL domains is mostly accessible to researchers within compute-rich labs. Reincarnating RL would allow us to also tackle larger-scale domains without requiring excessive computational resources. \n\nA majority of reincarnating RL research would be about developing broadly applicable reincarnation algorithms as opposed to achieving higher scores. As pointed out by #R4, this is important as a common use of RL algorithms in real-world settings will likely be in scenarios where prior computational work is available (previous data, policies etc.). For example, there is an [ongoing NeurIPS competition](https://minerl.io/basalt/#changes) on MineCraft where existing pretrained policies are provided, as RL from scratch fails to get almost any reward [[Baker et al.](https://arxiv.org/abs/2206.11795)].", " > **An issue of reincarnating could be related to diversity. A goal in reinforcement learning is to find diverse policies. Although this is not directly related to this work, reincarnating might not be able to fall into the workflow of training diverse policies.**\n\nThis is an interesting point for discussion.
Akin to tabula rasa RL, diversity-inducing objectives (e.g., entropy maximization) can also be added during the RL phase for a reincarnated agent. Thus, while reusing existing computational work (e.g., learned policies / data) can bias the learned policy towards existing behaviors, this can possibly be overcome with sufficient exploration. We’d be curious to hear from the reviewer whether they think that diversity challenges in reincarnation are fundamentally different from those in tabula rasa RL.\n \n> **I think the claim \"JSRL does not improve performance compared to tabula rasa DQN and even hurts performance with a large number of teacher roll-in steps\" feels a little strong, given that it only hurts performance when n=1. However, I don't think using multi-step = 1 is very common particularly in atari.** \n\nThanks for pointing this out. We mistakenly referenced Figure 3 (right) instead of Figure 3 (left) for the JSRL results and have fixed this in the revision. \n\nIn Figure 3 (left), we see a big drop in performance with teacher roll-in steps set to 5000, even when using n-step=3. JSRL performance is comparable to or much lower than that of the tabula rasa DQN, which receives an IQM score of 0.39.\n\n> **I'm not entirely sure if the reference to Panel 3 in Figure 1 on line 251 is correct? Similarly, it would be helpful to reference the appropriate figure for the discussion in the second paragraph of sec. 5 (Fig A. 13, A. 14 I believe).**\n\nThe reference to Panel 3 in Figure 1 is correct. Panel 3 shows the performance comparison between the tabula rasa Impala-CNN Rainbow (sky-blue), the reincarnated Impala-CNN Rainbow (pink), and further fine-tuning the existing DQN agent (yellow).\n\n*References*:\n\n[1] Mirhoseini, Azalia, et al. \"A graph placement methodology for fast chip design.\" Nature 594.7862 (2021): 207-212.\n\n------------------\n*We hope that most of the reviewer’s concerns have been addressed and, if so, that they would reconsider their assessment. We’d be happy to engage in further discussions.*", " We thank the reviewer for their constructive feedback. Main changes in the revision, $\\textcolor{red}{shown\\ in\\ red}$, are listed in the “Summary of Changes” comment. We address the reviewer's concerns below.\n\n> **The purpose of the paper is to formalize 'reincarnation.' .. this work is formalizing a process .. as noted by the authors already well documented e.g. starcraft, dota, Rubik's cube.**\n\nWhile several large-scale efforts have used reincarnation, it has typically been done in an ad-hoc way and has limited applicability. Existing approaches such as PBT (Starcraft) and Net2Net (Dota) cannot be applied for reincarnating RL when switching to arbitrary architectures or from a policy to a value function. Similarly, behavior cloning (Rubik’s cube) is only useful for policy-to-policy transfer and is inadequate for the PVRL setting.\n\nOur formalization argues for studying reincarnating RL as a research problem in its own right and developing broadly applicable reincarnation approaches. For example, the policy-to-value reincarnation setting studied in this work enabled us to apply reincarnation to problems where prior reincarnation approaches are not applicable, including (1) transferring an existing DQN policy to an Impala-CNN Rainbow agent in ALE, and (2) transferring Perciatelli (a distributed QR-DQN agent with an MLP architecture) to a recurrent R2D2 agent in BLE.
\n\n> **With a well implemented Ape-X or R2D2, we can significantly reduce RL training times -- role of reincarnation?**\n\nDistributed RL training complements reincarnation, rather than making it unnecessary. For example, training an agent on challenging domains, such as Starcraft, takes a long time (several weeks / months) even with distributed training with 1000s of actors. As such, we can easily train reincarnated agents with distributed training, as we did on BLE (including an R2D2 variant). \n\n> **Current RL tasks .. training time is on the order of hours to days .. justify reincarnation in common RL settings?**\n\nAs pointed out by #R4 (mLNV), a common use of RL in the real world will likely be in scenarios where prior computational work is available (e.g., previously-learned policies), which is evident from the prior large-scale reincarnation efforts, including applications such as chip design [1]. Thus, it is important to develop general-purpose reincarnation algorithms as opposed to prior ad-hoc solutions. Repurposing existing benchmarks for reincarnating RL, akin to how we use ALE in this work, can provide testbeds for developing such reincarnation algorithms.\n\nFurthermore, reincarnating RL enables researchers with limited compute to tackle larger-scale RL problems, currently accessible only to compute-rich labs, which would not be feasible for them with tabula rasa RL. For example, there is an [ongoing NeurIPS competition](https://minerl.io/basalt/#changes) on MineCraft where existing pretrained policies are provided, as RL from scratch fails to get almost any reward [[Baker et al.](https://arxiv.org/abs/2206.11795)]. While it is worthwhile to do RL research on small-scale problems, investigating RL on large-scale problems helps avoid overfitting to such small-scale problems.\n\n> **How should evaluations be reported in reincarnating RL?**\n\nWe acknowledge that reproducibility and fair evaluation in reincarnating RL are an important concern and have added a detailed discussion in **Appendix A.2**. \n\nWhile performance in reincarnating RL depends on prior computational work (e.g., the teacher in PVRL), this is analogous to how fine-tuning results in NLP / computer vision depend on the pretrained models (e.g., using BERT vs GPT-3). Fairly comparing reincarnation approaches requires using exactly the same computational work. Indeed, to allow others to use the same reincarnation setup as our work, we open-sourced \n[DQN model checkpoints and replay buffer](https://console.cloud.google.com/storage/browser/rl_checkpoints) in the original submission.\n\nThe performance ranking of reincarnation algorithms is likely to remain consistent across different teachers on a benchmark. In fact, we empirically verified this for the PVRL setting: while using two different teacher policies, namely DQN(Adam) vs. a fine-tuned Nature DQN, leads to different performance trends, the ranking of PVRL algorithms remains consistent: QDagger > Kickstarting > RL Pre-training (see Figure A.11).\n\n> **What teacher does the paper use to train the reincarnated agents for BLE?**\n\nWe acknowledge this oversight and have rephrased lines 284-285 to clarify this and added experimental details about BLE in Appendix A.5.2. Perciatelli is the name of the RL agent (QR-DQN with an MLP architecture) trained by Bellemare et al. in their Nature paper using a distributed RL setup and released with the BLE as a starter agent.
The BLE experiments show the benefit of the reincarnating RL workflow, in which we are able to utilize the already released Perciatelli agent, as opposed to throwing it away as done by tabula rasa RL.", " We thank the reviewer for their constructive feedback. We have made the following changes, $\\textcolor{red}{shown\\ in\\ red}$, in the revision to address their concerns:\n\n- Added results in **Appendix A.7** showing that QDagger works reasonably well without an existing offline dataset (the teacher's last replay buffer).\n\n- More experimental details about BLE in **Appendix A.5.2** for self-containedness.\n\n- Added the takeaway from the n-step results (**L221 - 224**).\n\n- Included additional clarifications about the humanoid:run experiments in Appendix A.5.1.\n\n> **The proposed method is limited in that it depends on the availability of offline datasets, which have a size of ~1M samples. This assumption gets difficult to hold because a replay buffer is a queue of fixed size instead of storing all transitions.**\n\nWe’d like to clarify that the replay buffer for value-based Atari agents is typically of size 1M transitions, which is 100 times smaller than the data seen by the teacher DQN agent trained for 400M frames (4 frames = 1 transition in Atari). This is somewhat analogous to using an agent pre-trained on large amounts of data (400M frames) and reincarnating another agent given access to a much smaller offline replay dataset (1M transitions) and environment interactions (10M frames).\n\n> **It would be nice to add a discussion on the dependency of the proposed method on the availability of offline datasets.**\n\nIf we don’t have access to the teacher's last replay buffer (i.e., the offline dataset), we can simply generate data using the teacher policy during the online RL phase (this would cost environment samples) and use this teacher-collected data for pre-training with the QDagger loss (analogous to the behavior cloning phase in Dagger). \n\nTo verify this empirically, we ran a QDagger ablation on 4 games where we do not assume access to the teacher's replay buffer. As shown in **Figure A.19**, we found that collecting the same amount of teacher data as the offline replay buffer leads to comparable performance on 3/4 games (Asterix, Qbert, Seaquest). Thus, QDagger works reasonably well given access to only the pretrained policy.\n\n> **Experimental results on n-step returns are confusing; what's the main point of the n-step experiments?** \n\nWe agree with the reviewer that the n-step results can be overwhelming and have revised the paper to include a clear takeaway. \n\nThe degradation in performance without n-step returns (e.g., kickstarting) and even with n-step returns (e.g., DQfD) reveals the sensitivity of prior methods to a specific hyperparameter choice in the PVRL setting. For researchers, this indicates the need to develop less hyperparameter-sensitive PVRL methods that do not fail when weaning off the teacher. For practitioners, the takeaway is to consider this hyperparameter sensitivity when weaning off the teacher for reincarnation.\n\n> **The paper is a bit not self-contained .. missing experimental details in BLE.**\n\nWe acknowledge this oversight and have added more experimental details on BLE about the station keeping problem, evaluation, environment, and teacher, and outlined the details about hyperparameters and architectures in *Appendix A.5.2*.\n\n> **What is Perciatelli?**\n\nWe’ve rephrased lines 284-285 to clarify this.
Perciatelli is the name of the RL agent (QR-DQN with an MLP architecture) trained by Bellemare et al. in their Nature paper using a distributed RL setup, and it was released with BLE as a starter agent. \n\n> **TD3 performance on humanoid run is also quite confusing .. makes results less convincing on humanoid.**\n\nWe’d like to clarify that the purpose of the humanoid:run experiments is to show the utility of reincarnation in a complex continuous control environment given access to a pre-trained policy. Irrespective of the fine-tuned TD3’s performance, we can show the benefits of using reincarnation over tabula rasa D4PG. For example, the reincarnated D4PG achieves [SAC’s performance](https://github.com/denisyarats/pytorch_sac) after 10M frames (score of ~400) in only half the number of samples (i.e., 5M frames).\n\n> **Reason for performance decrease of TD3 -- happens for tabula rasa?**\n\nThe degradation in performance also happens with tabula rasa TD3 and with different learning rates for fine-tuning (Figure A.16), but only after training for 25M frames. We hypothesize that this degradation is likely caused by a loss in the network’s capacity with prolonged training in value-based RL [[Kumar et al.](https://arxiv.org/abs/2010.14498), [Lyle et al.](https://openreview.net/forum?id=ZkC8wKoLbQ7)]. Further investigation of this degradation is outside the scope of this work. We’d like to emphasize that TD3’s degradation does not affect our conclusions about the efficacy of reincarnating RL over a tabula rasa workflow.\n\n----------------------------------\n*We would appreciate it if the reviewer could confirm that most of their concerns have been addressed and, if so, reconsider their assessment. We’d be happy to engage in further discussions.*", " > **Do you see RRL and PVRL as research workflows that will produce (improvements to) algorithms that are useful for tabula rasa RL, or only for RRL/PVRL?**\n\nExtrapolating from our results in Figure 8, which show that sample-efficient tabula rasa algorithms may not be sample-efficient for RRL / PVRL, it is possible that the converse is true as well. If we are interested in algorithms that perform well in both tabula rasa and RRL settings, they should be benchmarked in both settings. Currently, we mainly benchmark algorithms tabula rasa, making it unclear whether they’d work for reincarnation.\n\n> **Could you run an ablation using the Qdagger loss in only the offline pretraining, and then doing standard and online training stages? .. would be interesting to see how much additional benefit (on top of standard Q-learning) is gained by using the Qdagger loss during online training.**\n\nWe ran this ablation on 4 games and added the results in Appendix A.7. As shown in Figure A.20, we see that the online phase helps improve performance on some games (a significant improvement in Space Invaders and a small improvement in Asterix) while not having much effect on others. \n\nOn the more complex BLE domain, we only had access to a small amount of replay data relative to the training budget of the Perciatelli teacher. Our preliminary experiments on BLE indicated that using the QDagger loss only during the offline phase resulted in much lower performance than using QDagger for both the offline and online phases.\n\n-------------------------------------------------\n*We hope that most of the weaknesses pointed out by the reviewer have been addressed and, if so, that they would reconsider their assessment.
We’d be happy to engage in further discussions.*\n", " We thank the reviewer for their constructive feedback. We have revised the paper, with $\\textcolor{red}{changes\\ in\\ red}$, to incorporate their recommendations:\n\n- We acknowledge that reproducibility and comparability are an important concern and have added a detailed discussion in **Appendix A.2**.\n\n- Modified the abstract and intro to frame RRL as a class of problem settings and described some possible instantiations in Appendix A.6. \n\n- Changed the framing of PVRL to be an instance of the RRL problem given access to a policy and some data from it (Lines 143-145).\n\n- Adjusted the claims in L35 and L165 (now L139 and L167).\n\n- Added experiments & discussion in **Appendix A.7** about applying QDagger without access to the teacher’s replay buffer (Figure A.19) and the impact of the QDagger loss in the online RL phase (Figure A.20).\n\n> **It's also important to address the reproducibility and comparability concerns of this research workflow or class of problems.**\n\n**Comparability**. We totally agree that the same prior computational work should be utilized for fairly comparing reincarnation approaches. Thus, it would be beneficial to release both the model checkpoints and the data generated (at least the final replay buffers). Indeed, to allow others to use the reincarnation setup in our work, we have already open-sourced [DQN model checkpoints and replay buffer](https://console.cloud.google.com/storage/browser/rl_checkpoints) in the original submission (Appendix A.2).\n\n**Reproducibility**. As *reproducibility from scratch* involves reproducing existing computational work, which could be more expensive than training tabula rasa, it defeats the purpose of doing reincarnation. Furthermore, *reproducibility from scratch* is also difficult in NLP and computer vision, where existing pretrained models (e.g., GPT-3) are rarely, if ever, reproduced / re-trained from scratch but almost always used as-is. Despite the difficulty of reproducibility from scratch, pretraining-and-fine-tuning is a dominant paradigm in NLP and vision, and we believe that a similar challenge in reincarnating RL should not prevent researchers from investigating and studying this important problem. \n\nInstead, we expect that research in reincarnating RL would build on open-sourced prior computational work. Akin to NLP and computer vision, where typically a small set of pretrained models are used in research, we believe that reincarnating RL research can also possibly converge to a small set of open-sourced models / data on a given benchmark, for example, the teacher DQN agents and data we released on Atari.\n\n> **All RRL settings are valuable, and it's not necessarily the case that one is more general or flexible than another (as claimed for example on line 35; PVRL requires both a fixed dataset and previous policy, making it less general than methods that only require one or the other).**\n\nIndeed, we agree that all RRL settings are valuable, and we didn’t mean to imply that one setting is more general than another. We have rephrased the sentence in L135 (now L139) to indicate that when we have access to an interactive policy in addition to the dataset, this policy can be queried during the student’s training.
\n\n> **In line 165, it's claimed that assuming access to a previous policy allows one to assume access to a dataset from that teacher \"without loss of generality\", which doesn't seem to be the case, as there are examples where we have a previous policy but not necessarily data from that policy.**\n\nWe have removed the “without loss of generality” (L167). Furthermore, we added new results showing how we can use QDagger without any access to an offline dataset, by generating data using the teacher policy and using that for QDagger pre-training.\n\nSpecifically, we ran a QDagger ablation on 4 games where we do not assume access to the teacher's replay buffer. As shown in Figure A.19, we found that collecting the same amount of teacher data as the offline replay buffer leads to comparable performance on 3/4 games (Asterix, Qbert, Seaquest). Thus, QDagger can be used successfully given access to only the pretrained policy.\n\n> **In the PVRL case, I think it makes more sense to just consider the constraints on the problem setting and not couple that with the type of algorithm being trained (in this case a value-based method).** \n\nIndeed, we have done this reformulation in Lines 143-145. We have also highlighted that we focus on the PVRL setting because, while a policy-based student can be easily reincarnated in the PDDRL setting via behavior cloning, it was unclear how to reincarnate a deep Q-learning agent in this setting.", " The authors investigate reincarnating RL, which aims to utilize prior policies to train new agents rather than training RL agents from scratch. This paper formalizes the reincarnation setting. They demonstrate that existing approaches fail in this setting and that, with their method, n-step QDagger, they are able to outperform prior approaches. ## Strengths\n1. The results demonstrate that n-step QDagger is able to achieve strong performance (it matches and beats a fully trained DQN with far fewer steps in Atari, trains faster than tabula rasa in humanoid:run, and outperforms tabula rasa agents in BLE).\n2. The paper includes numerous baselines, shows hyperparameter sweeps over prior agents, and demonstrates the pitfalls of these agents. \n3. The paper felt quite lucid to read and was typically straightforward to follow. The bulleted list under evaluation was a little dense to read; I am not sure if it's easy or possible, but a better visualization of each method might make it easier to parse. \n\n## Weaknesses\n1. The main purpose of the paper is to formalize 'reincarnation.' Although I appreciate the extensive baselines and environments tested in the paper, it feels as if this work is formalizing a process that is in some ways well known. This process is valid; however, as noted by the authors, it is already well documented in the RL community, e.g. starcraft, dota, robotics for Rubik's cube solving. \n2. For a lot of RL tasks: single agent RL (atari, mujoco, car racing), robotics (Habitat, Sawyer arms, etc.), multi-agent RL (Hanabi, SMAC, MPE, etc.), the training time is typically much shorter, e.g. on the order of a dozen hours to a few days per policy. With good engineering, we can also reduce training times. For example, with a well implemented Ape-X or R2D2, we can significantly reduce RL training times -- these also can still be single machine 1 GPU training runs. As a result, in the context of the focuses of the broader RL community, I'm not entirely sure how impactful/how big of a role reincarnation will play. \n3. An issue of reincarnating could be related to diversity.
A goal in reinforcement learning is to find diverse policies. Although this is not directly related to this work, reincarnating might not fit into the workflow of training diverse policies. \n4. I'm not entirely sure what teacher the paper uses to train the reincarnated agents for BLE? My impression is they're using the weights from Perciatelli et al. In this case it feels like it'd make sense that the reincarnated agent outperforms other variants, as it bootstraps off of a strong policy that has seen a much wider space and is then basically fine-tuned to the BLE environment via reincarnation. This also explains why the fine-tuned Perciatelli model performs well in the BLE environment. \n\n## Minor\n- I think the claim \"JSRL does not improve performance compared to tabula rasa DQN and even hurts performance with a large number of teacher roll-in steps\" feels a little strong, given that it only hurts performance when $n=1$. However, I don't think using multi-step = 1 is very common, particularly in atari. \n- I'm not entirely sure if the reference to Panel 3 in Figure 1 on line 251 is correct? Similarly, it would be helpful to reference the appropriate figure for the discussion in the second paragraph of sec. 5 (Fig A. 13, A. 14 I believe). 1. Can the authors justify reincarnation in the most common RL settings? Also, how should evaluations be reported in this case? Since reincarnating can lead to improved performance, but also has a dependence on an already trained teacher. The work sufficiently addresses limitations and societal impacts. ", " The authors present the concept of \"reincarnating RL\" as an alternative to \"learning tabula rasa\". Reincarnating RL is very generally about using prior data/computation to more efficiently train new agents. They then introduce an algorithm called \"QDagger\" which is meant to solve a specific instance of \"reincarnating RL\" called \"policy-to-value reincarnating RL\" (PVRL). It performs well relative to baselines. \n\nThe main contributions of the paper seem to be:\n\n1. The promotion of \"reincarnating RL\" as an alternative approach to \"learning tabula rasa\", along with examples of it as a research workflow. \n\n2. QDagger as a solution to a specific instance of \"reincarnating RL\" (PVRL).\n\nI will be writing the review considering both of these contributions separately. # Originality:\n\n## On \"Reincarnating RL\": \n\nStrength: This is an interesting notion that has not been explicitly discussed much in the literature before. \n\nWeakness: There is plenty of work that would fall into this vague category, as seen in the \"Related Work\" section. While this umbrella term / notion has not been explicitly discussed much, I'm not sure if generically promoting (and also coining a term for) a category of works could be considered an original contribution. \n\n## On \"QDagger\": \n\nStrength: The specific setting has not been previously studied much, and the precise details of this algorithm for this specific setting have not been proposed before.\n\nWeakness: This does not seem to be particularly original. The authors write that \"kickstarting [can be viewed] as another ablation that does not pretrain on teacher data\" (Line 236). Indeed, when looking at the algorithms, they are very similar. If QDagger is just \"kickstarting with pretraining\" then it is not very original.\n\n# Quality:\n\n## On \"Reincarnating RL\": \n\nI'm really not sure how to evaluate this contribution in terms of quality.
\n\nStrength: \n\nThe authors make a coherent argument that more attention in this area would better democratize and improve real-world RL adoption. They also show through case studies that experimenting under this paradigm is computationally cheaper and more accessible. Promoting the release of model checkpoints is helpful to the research community. \n\nWeakness: \n\n1. I'm not sure if NeurIPS is the correct venue for what seems to largely be posturing / a call-to-action. This interpretation may seem uncharitable, but significant portions of the paper are dedicated to this broad discussion instead of experiments, analysis, or theory.\n\n2. I'm not sure if I agree with \"reincarnating RL\" as being a superior approach for democratizing RL research. On the whole, it asks the question of what researchers should care about in reinforcement learning. A large part of the field is about studying and understanding how intelligent agents might/should learn tabula rasa (which is valuable in and of itself), not as much about real-world RL adoption. For example, the breakthrough DQN results in Atari are not interesting because they achieve high scores, but rather because a single algorithm could solve a large portion of the games tabula rasa. Only in the cases in which we \"care\" about ultimate performance (e.g. DOTA, Starcraft, Rubik's cube -- note that these are also not particularly useful in the real-world) do we consider reincarnating RL. Such scenarios currently seem rare in the real-world -- and it's unclear if this is because of a lack of work in reincarnating RL. On the other hand, environments such as [Minatar](https://github.com/kenjyoung/MinAtar) democratize research and allow for cheaper testing of tabula rasa RL.\n\n3. See the limitations section on reproducibility. This is problematic since science is about repeatedly testable hypotheses. While it would be neat for a broad community to continually improve existing trained agents, it would not be a scientific endeavor as much as an engineering one.\n\n## On \"QDagger\": \n\nStrengths: The experiments and setup were well-done. \n\nWeaknesses: N/A.\n\n# Clarity:\n\nStrength: The paper as a whole is very well-written and clear.\n\nWeakness: N/A.\n\n# Significance:\n\n## On \"Reincarnating RL\": \n\nSee the discussion above. \n\n## On \"QDagger\": \n\nStrengths: It achieves impressive results in the tested settings. \n\nWeaknesses: Given the seeming lack of originality of the algorithm, it is difficult to say that the contribution itself is highly significant.\n\n# TL;DR\n\nI don't think the large amount of posturing in the paper is particularly helpful to the broader community; however, the core algorithm and its results are very well-done and clean enough to warrant an accept. 1. What example settings (beyond BLE) are you thinking of when it comes to \"real-world RL adoption\"? In what real-world scenarios would a broader community improving existing trained agents be helpful? How do you imagine this looks in practice?\n\n2. How is a real-world practitioner supposed to use the learnings from this line of work on a new task? Suppose that a series of papers for reincarnating RL that build on each other demonstrate SOTA performance on some set of tasks. Is the real-world practitioner supposed to then go through each of the different training / reincarnation phases? Right now, it is easy to run tabula rasa RL on a new environment. This paradigm introduces its own host of problems around reproducibility and generalization.
As mentioned in the paper, \"reincarnation from two distinct policies that perform similarly results in different performance trends\" (Line 336). Thus, it seems as though it may be very easy to overfit to specific implementations and seeds of the pre-trained agents. \n\nFurthermore, the precise details of the generation of the teacher policy / data (which themselves may have been reincarnated) become important for reproducibility; adding on to the challenge of reproducing work from scratch. I do not believe that this is addressed in the paper. \n\n", " This paper proposes a new approach for reincarnating RL agents from pre-trained policies with different architectures and argues that RL papers should also adopt 'pre-training and fine-tuning' approach instead of training everything from scratch. In order to transfer pre-trained policy with different architectures, the method proposes Q-Dagger that learns value functions from a sub-optimal teacher policy and that is capable of continually decreasing the dependency on the sub-optimal teacher. Proposed method is compared to various baseline schemes on Atari and Balloon learning environments.\n Strengths\n- Idea of reincarnating RL policies is definitely should be of interest to various researchers.\n- Idea is simple and working well\n- Exhaustive experiments with many baselines\n\nWeaknesses\n- Transferring pre-trained policy is definitely important and the paper proposes an intuitive and effective method for doing so. However, the proposed method is still limited in that it quite heavily depends on the availability of offline datasets, which have size of ~1M samples. This assumption gets often more difficult to hold because usually a replay buffer is implemented as a queue of fixed size in memory instead of storing all transitions in storage. The main benefit of pre-training and fine-tuning scheme in CV and NLP domains is that we can fine-tune a pre-trained model on datasets from target domains, which has a much smaller size than large, often huge, pre-training datasets. In this point of view, it is a bit difficult to see that the proposed method can be a generic solution for reincarnating RL and it's not clear whether only releasing pre-trained checkpoints could enable us to obtain the benefit as stated in the paper. It would be much more nice if the paper can show that the method works without pre-training student on offline dataset. I would not say that this experimental results are necessary because the method and results are still interesting, but more discussion on this front would make paper more interesting and insightful for future researches.\n- The paper is a bit not self-contained because it's missing some experimental details in Balloon Learning Environment, which is quite new benchmark so that it makes a bit difficult to understand the details. For instance, it is difficult to grasp what is Perciatelli and why is it called Perciatelli, and it's even not in the referred paper.\n- Experimental results on n-step returns are quite confusing; what's the main point of n-step return experiments? I understand that some methods are unstable without n-step returns, but why it's so important and what's the conclusion and practical take-away?\n- TD3 performance on humanoid run is also quite confusing; what is the reason for performance decrease of TD3? Does this also happen when we train TD3 from scratch without pausing the training after 10M steps? If so, why does this happen? and what is the reason TD3 is chosen for solving the task? 
It is known that SAC can solve humanoid run task without such degradation (see https://github.com/denisyarats/pytorch_sac for performance of SAC on humanoid run). This makes results a bit less convincing on humanoid experiments. - It would be nice to add a discussion on the dependency of the proposed method on the availability of offline datasets. Or I'm open to any follow-up discussion whether it's a limitation or not.\n- More experimental details on BLE would be nice for self-containedness of the paper\n- More clear explanation and take-away message of n-step experiemnts would be nice.\n- Clear explanation about TD3 performance degradation on humanoid run, or additional experiments with SAC without such phenomenon could make experimental results more convincing. Limitation is not clearly discussed in the paper but potential negative societal impact is well discussed.", " This paper presents an alternative class of problem statements to classical tabula rasa reinforcement learning, that is called reincarnating reinforcement learning (RRL). In RRL, instead of developing algorithms that learn from scratch (tabula rasa) in the RL environment, algorithms can make use of previous policies, and data collected from those policies, to speed up learning. This goes beyond standard offline reinforcement learning as (1) online learning is allowed, and (2) there's a possibility of access to the previous policy, and not just previous data generated from that policy. They instantiate a specific version of RLL in policy-to-value RL (PVRL), where the aim is to increase the speed of value-based online RL methods (such as DQN) with access to a previous (possibly suboptimal) policy and data from that policy. They present Q-dagger, an adjustment to standard deep Q-learning methods which uses n-step returns combined with a Dagger-style loss for both offline training and online from the pretrained policy to improve learning speed, and show it performs better than a variety of strong baselines in this setting in ALE, BLE and humanoid:run. # Strengths\n\nThe paper is well-written and clear to read. While (as mentioned in the paper) this kind of problem setting has been tackled before, it has always been in an ad-hoc way, and a more thorough motivation and investigation of it is beneficial and original. Further, it's likely that the vast majority of the usage RL algorithms in real-world settings will be in scenarios where previous data and/or policies are available, which makes this setting an important one to study.\n\nThe PVRL problem/solution setting is interesting, and their algorithm is robustly demonstrated to perform better than existing solutions on a range of benchmarks.\n\nFor all these reasons I'm recommending acceptance, but I think there are several issues, in terms of framing of the problem statement and contribution, that would make the paper clearer and aid future work more easily (see Weaknesses below).\n\n# Weaknesses\n\n## Framing\n\nWhile the RRL class of problem settings in an important and understudied one, I think the framing of the paper slightly confused what exactly RRL __is__. RRL seems like a class of problem settings, of which offline RL is one (in the limit of no online interaction or policy access), and PVRL is another (in which access to a previous policy and data from that policy is assumed). 
It would be beneficial to describe other possible instantiations of this problem setting; some obvious ones are having access to a previous policy but no data (which is often the setting in kickstarting-style methods), or conversely data but no policy (which is offline RL, possibly plus online fine-tuning); other factors of variation are the quality of the policy and/or data, and the amount of online training allowed.\n\nI think these settings are all valuable, and it's not necessarily the case that one is more general or flexible than another (as claimed for example on line 35; PVRL requires both a fixed dataset and previous policy, making it less general than methods that only require one or the other). It would also make certain distinctions clear that I think aren't in the paper; for example in line 165 it's claimed that assuming access to a previous policy allows one to assume access to a dataset from that teacher \"without loss of generality\", which doesn't seem to be the case, as there are examples where we have a previous policy but not necessarily data from that policy (we could only have data it was trained with, or no data at all).\n\nTo be specific, in the PVRL case, I think it makes more sense to just consider the constraints on the problem setting (previous policy and data, online learning allowed), and not couple that with the type of algorithm being trained (in this case a value-based method). There are many possible solutions to this problem, some of which are based on value-based RL methods and some are not. Hence I'd recommend defining (something like) PDRRL as policy-and-data-RRL as a problem instance within RRL, with PVRL as a class of solutions to this problem.\n\n## Reproducibility\n\nI think it's also important to address the reproducibility and comparability concerns of this research workflow or class of problems. For example consider the passage in lines 47-52:\n\n> For example, imagine a researcher who has trained a deep RL agent A1 for a long time (e.g., weeks), but now this or another researcher wants to experiment with better algorithms or architectures. While the tabula rasa workflow requires re-training another agent from scratch, reincarnating RL provides the more viable option of transferring A1 to another agent and training this agent further, or simply fine-tuning A1 (Figure 1).\n\nWhile this saves computation for the researcher, to make sure their algorithm is fairly compared with others in the same setting, other algorithms need to be trained with the same flow, which might require even more compute (if we need to reproduce the week-long training as well as the faster fine-tuning). As such, it would be beneficial (alongside releasing the code and model checkpoints) to release the data generated from those models - this means future researcher can work in exactly the same setting (i.e. the same inputs to their PVRL algorithm) so that the comparison is fair. Problems of this nature are even raised in the **Dependency on prior work** paragraph in section 6.\n\n## Concrete recommendations\n\n* Adjust the abstract and intro to frame RRL as a class of problems, of which PDRRL is one instance. 
Describe some other relevant/significant problem instances\n* Adjust the framing of PVRL to be a class of solutions to the PDRRL problem\n* Adjust the claims in L35 and L165, in light of the new framing above.\n* Address the reproducibility/comparability concerns; specifically, if another researcher were to attempt to produce a new method for PVRL, how would they go about doing that, to make sure their algorithm's results are comparable with those reported here.\n\nIf these recommendations and weaknesses are addressed (which I think is only a matter of some light rewriting of the paper) then I'm happy to recommend a strong accept.\n\nEDIT: I thank the authors for their detailed response and the adjustments they've made. Given that these addressed my concerns, I'm happy to raise my score to a strong accept (8). Most of my suggestions are discussed in the weaknesses section above. \n\n* Do you see RRL and PVRL as research workflows that will produce (improvements to) algorithms that are useful for tabula rasa RL, or only for RRL/PVRL? \n* Could you run an ablation using the Qdagger loss only in the offline pretraining, and then using standard Q-learning during the online training stage? From figure 2a it seems that a lot of the benefit of Qdagger comes from the offline pretraining stage - it would be interesting to see how much additional benefit (on top of standard Q-learning) is gained by using the Qdagger loss during online training. I believe the limitations are adequately addressed, given the fixes recommended in the Weaknesses section (which are more focused on the unclear description of the framework that I think leads implicitly to unstated limitations in generality)."
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 4 ]
[ "6BNRJF8QZ-", "cr_h-5cn9cK", "gnQbKtdUcfo", "2u0WqO2r1e6", "aWrZNDQensX", "nips_2022_t3X5yMI_4G2", "i1F_JoxHnVO", "j3YlPJawdD", "VHNMgnLz0AM", "6BNRJF8QZ-", "5V3OFWHXELD", "dvsgH_o49J", "xPw3ENBwL7f", "nips_2022_t3X5yMI_4G2", "nips_2022_t3X5yMI_4G2", "nips_2022_t3X5yMI_4G2", "nips_2022_t3X5yMI_4G2" ]
nips_2022_fcMd-tuWwiO
A sharp NMF result with applications in network modeling
Given an $n \times n$ non-negative rank-$K$ matrix $\Omega$ where $m$ eigenvalues are negative, when can we write $\Omega = Z P Z'$ for non-negative matrices $Z \in \mathbb{R}^{n, K}$ and $P \in \mathbb{R}^{K, K}$? While most existing works focused on the case of $m = 0$, our primary interest is on the case of general $m$. With new proof ideas we develop, we present sharp results on when the NMF problem is solvable, which significantly extend existing results on this topic. The NMF problem is partially motivated by applications in network modeling. For a network with $K$ communities, rank-$K$ models are popular, with many proposals. The DCMM model is a recent rank-$K$ model which is especially useful and interpretable in practice. To enjoy such properties, it is of interest to study when a rank-$K$ model can be rewritten as a DCMM model. Using our NMF results, we show that for a rank-$K$ model with parameters in the most interesting range, we can always rewrite it as a DCMM model.
Accept
This was a borderline paper, which fell just above the bar for acceptance. The reviewers felt the work was interesting and original, although perhaps the problem studied is a bit niche for NeurIPS.
train
[ "GuaJ9moWlM6", "IuJT_KY5eJa", "ufGgm1cS1PJ", "D7DtPMawYdP", "dTOCN0eFmHP", "kQPkeApL8ek", "80roWzmJ6S5" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank you for your time and valuable comments. \nWe are especially glad that you recognize the main contributions \nof our paper, and consider it well-written and well-motivated. \nWe have tried our best to address your comments \nand prepared a revised version and a point-by-point response below. \n\n\n\n1a. Response to comment 1 (part 1 on uniqueness). When the NMF is solvable, it is known that the solution is usually not unique without a proper regularity condition (e.g., \\cite{DonohoNMF}). In our setting, once we can factor $\\Omega$ by $\\Omega = \\Theta \\Pi P \\Pi' \\Theta$ for some non-negative matrices $(\\Theta, \\Pi, P)$ as in equation (1.5), the factorization is unique if (a) for each $1 \\leq k \\leq K$, there is at least one $i$ such that $\\pi_i = e_k$, where $e_1, \\ldots, e_K$ are the standard Euclidean basis vectors of $\\mathbb{R}^K$, and (b) all diagonal entries of $P$ are $1$. For the proof, see \\cite{JKL2019} for example. In response, we have added Remark 4 near the end of Section 2 (marked in red). \n\n \n\n1b. Response to comment 1 (part 2 on an algorithm for NMF). This is an interesting question. In response, \nwe have added Remark 5 at the end of Section 2 (marked in red), and also added a new section, Section F, in the supplement (the discussion is almost a page, so we put it in the supplement). For details, please read Remark 5 and Section F of the supplement.\n\n2. Response to comment 2 (why some entries of $J_{K, m}$ are still negative): Thanks for the comments. Recall that the spectral decomposition gives \n\\begin{equation} \n\\Omega = \\Xi \\mathrm{diag}(\\lambda_1, \\ldots, \\lambda_K) \\Xi', \\qquad \\mbox{where $\\Xi = [\\xi_1, \\ldots, \\xi_K]$}. \n\\end{equation} \nYou are right if all eigenvalues of $\\Omega$ are positive: in that case we can take $Z = \\Xi \\mathrm{diag}(\\lambda_1, \\ldots, \\lambda_K)^{1/2}$ and write $\\Omega = Z Z'$. But when some of the eigenvalues of $\\Omega$ are negative, we cannot take the square root of the matrix $\\mathrm{diag}(\\lambda_1, \\ldots, \\lambda_K)$ directly. Instead, we introduce a positive diagonal matrix \n\\begin{equation} \nD = \\mathrm{diag}( |\\lambda_1|, \\ldots, |\\lambda_K|). \n\\end{equation} \nSince $\\mathrm{diag}(\\lambda_1, \\ldots, \\lambda_K) = D^{1/2} J_{K, m} D^{1/2}$ (ordering the eigenvalues so that the $m$ negative ones come last), we obtain \n\\begin{equation} \n\\Omega = \\Xi D^{1/2} J_{K, m} D^{1/2} \\Xi', \n\\end{equation} \nand the matrix $J_{K, m}$ is still there. \n\n \n\n\n3. Response to minor comment 1 (why use an unconventional notation for the inner product): Thanks; we think what you meant is: why not use $\\langle x, y \\rangle$ instead of $(x,y)$ as the notation for the inner product. \nBoth notations are frequently used, and we have also explained what $(x,y)$ stands for in \nSection 1.4. \n\n \n\n4. Response to minor comment 2 (entrywise ratio matrix): Thanks for the comments. The $R$ matrix in equation (2.13) was introduced right above equation (2.13). Let $\\mathrm{diag}(\\xi_1)$ be the $n \\times n$ diagonal matrix whose $j$-th diagonal entry is $\\xi_1(j)$, $1 \\leq j \\leq n$. It is seen that \n\\begin{equation} \n[\\xi_1, \\xi_2, \\ldots, \\xi_K] = \\mathrm{diag}(\\xi_1) [{\\bf 1}_n, R], \\qquad \\mbox{where ${\\bf 1}_n$ is the vector of all ones}, \n\\end{equation} \nand so \n\\begin{equation} \n\\Omega = \\mathrm{diag}(\\xi_1) [{\\bf 1}_n, R] D [{\\bf 1}_n, R]' \\mathrm{diag}(\\xi_1). \n\\end{equation} \nBy Perron's theorem, all entries of $\\xi_1$ are strictly positive. 
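For a quick numerical sanity check of the two displays above, consider the following sketch (an illustrative NumPy snippet on a synthetic $\\Omega$ of our own construction; all variable names are ours, and `Lam` below plays the role of the middle diagonal matrix $\\mathrm{diag}(\\lambda_1, \\ldots, \\lambda_K)$ in the last display):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, m = 8, 3, 1

# A positive, symmetric, rank-K matrix with exactly m negative eigenvalues
# (by Sylvester's law of inertia); purely an illustrative construction.
B = rng.uniform(0.5, 1.0, size=(n, K))
Omega = B @ np.diag([4.0, 1.0, -0.2]) @ B.T
assert (Omega > 0).all()

# Rank-K spectral decomposition, with the m negative eigenvalues last.
lam, Xi = np.linalg.eigh(Omega)
keep = np.argsort(-np.abs(lam))[:K]
lam, Xi = lam[keep], Xi[:, keep]
order = np.argsort(lam < 0, kind="stable")     # positives first
lam, Xi = lam[order], Xi[:, order]
if Xi[0, 0] < 0:                               # Perron: take xi_1 entrywise positive
    Xi[:, 0] *= -1.0
assert (Xi[:, 0] > 0).all()

# Identity from item 2: Omega = Xi D^{1/2} J_{K,m} D^{1/2} Xi', D = diag(|lambda|).
D_half = np.diag(np.sqrt(np.abs(lam)))
J = np.diag([1.0] * (K - m) + [-1.0] * m)      # the matrix J_{K,m}
assert np.allclose(Omega, Xi @ D_half @ J @ D_half @ Xi.T)

# Identity from item 4: Omega = diag(xi_1) [1_n, R] Lam [1_n, R]' diag(xi_1),
# with R(i, k) = xi_{k+1}(i) / xi_1(i) and Lam = diag(lambda_1, ..., lambda_K).
R = Xi[:, 1:] / Xi[:, [0]]
M = np.column_stack([np.ones(n), R])
S = np.diag(Xi[:, 0])
assert np.allclose(Omega, S @ M @ np.diag(lam) @ M.T @ S)
```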
Since $\\xi_1$ has strictly positive entries, the NMF problem is solvable for $\\Omega$ if and only if it is solvable for \n\\begin{equation} \n[{\\bf 1}_n, R] D [{\\bf 1}_n, R]'. \n\\end{equation} \nCompared with $\\Omega$, this matrix is much simpler. This is why we choose to state Theorem 2.4 using the rows of $R$ \ninstead of those of $\\Omega$. \n\n \n", " We would like to thank you for your time and valuable comments. \nWe are especially glad that you recognize the main strength and \ncontributions of our paper. We have tried our best to address your comments \nand prepared a revised version and a point-by-point response below. \n\n\n1. Response to comment 1 (on Section 4): Thanks for your comments. Section 4 provides a real data example to complement \nthe theory. The main focus of the paper is, given an $\\Omega$, to study when the \nNMF problem is solvable. For numerical study, many existing works in this area use a simulated \n$\\Omega$ matrix, but we think that using the $\\Omega$ matrix from a real network is more \ninteresting. The challenge, however, is that $\\Omega$ is unknown in real applications. \nTherefore, we have to develop a new algorithm for estimating $\\Omega$. How to analyze \nthis algorithm is interesting but is not our main focus, so we leave it for future work. \nWe think some readers would like to see a real data example, so we prefer to keep \nSection 4 in the main text. \n\n \n\n2. Response to comment 2 (on missed references): Thanks for pointing out these very interesting references. We have added them to the paper. \n", " 8. Response to comment 8 (connections to community detection): This is a great comment. \nTo develop an approach to community detection, we desire a more specific model (a model \nwith many structures we can exploit). Our paper develops an approach to checking when we can rewrite a broader model (the rank-$K$ model) as a more specific model (the DCMM model), and so it is helpful for \ncommunity detection, as it ensures that there is more structure in the model to exploit. \n \nFor example, let $\\xi_k$ and $V = [v_1, v_2, \\ldots, v_K]'$ be as in Section 3 (near equation (3.15)), and \nlet $R$ be the $n \\times (K-1)$ matrix satisfying $R(i, k) = \\xi_{k+1}(i) / \\xi_1(i)$, $1 \\leq k \\leq K-1, 1 \\leq i \\leq n$. If we view each row of $R$ as a point in $\\mathbb{R}^{K-1}$, then there is a simplex in $\\mathbb{R}^{K-1}$ with vertices $v_1, v_2, \\ldots, v_K$, where row $i$ of $R$ falls on one of the vertices if node $i$ is pure, and falls in the interior of the simplex otherwise. Such a result is helpful in obtaining a more accurate \nestimate for the $\\Omega$ matrix (this is a hard problem, mainly because the $\\Pi$ matrix is latent and hard to estimate; fortunately, the simplex structure suggests an efficient approach to estimating $\\Pi$ (e.g., \\cite{JKL2017, JKLW2022})). For a general rank-$K$ network model, we do not have the simplex structure, so it remains unclear how to develop an accurate estimate for the $\\Omega$ matrix (we can use an ordinary spectral decomposition, but the resultant estimate is frequently unsatisfactory). \nIn response, we have added a few sentences (marked in red) to the first paragraph of Section 5. \n\n \n9. Response to comment 9 (on the converse of Theorem 2.4): \nThanks for the comments. We think what you suggested is to comment on whether condition (2.14) is \nsufficient and necessary. 
As in most works on NMF (e.g., the book by Shaked-Monderer and Berman cited in our paper), our goal is to find conditions that are easy to check and sufficient. Such conditions are usually not necessary, but necessary conditions are usually messy and hard to check (e.g., see the book by Shaked-Monderer and Berman). \nWe have added some sentences right below Theorem 2.4 (marked in red). \n\n10. Response to comment 10 (figure on Page 9): Thanks for your comments. In the previous version, the caption was omitted due to space limits. We have now added a caption. See Page 10 (marked in red). \n\n11. Response to comment 11 (in Section 4, mention what insight NMF can offer while other approaches can not): This is a very helpful comment. In many recent works on network analysis, it is frequently assumed that a DCMM model holds for the settings at hand, but it is rarely checked whether such an assumption is valid. \nOur NMF results provide an approach to checking whether the network satisfies a DCMM model. \nWe have added a few sentences to the end of Section 4 (marked in red). \n \n\n12. Response to comment 12 (typos): thanks, and they are corrected. ", " We would like to thank you for your time and valuable comments. \nWe are especially glad that you think our results are new and \nthat we have made a two-fold contribution. We have tried our best to address your comments \nand prepared a revised version and a point-by-point response below. \n\n\n \n1. Response to comment 1 (do not use the acronym DCMM in the abstract): Great comment. We have made the changes. \n\n2. Response to comments 2 and 4: These are great comments. The settings you suggested (asymmetric, directed SBM, complex NMF) are very interesting. These settings are very different from the setting considered here and deserve a careful study in a future paper. Here, we present a straightforward extension of Theorem 2.1 to the asymmetric case. By SVD, for an $n \\times p$ (entry-wise) positive matrix $\\Omega$ with rank $K$, we can write $\\Omega = Y Z'$ for an $n \\times K$ matrix $Y$ and a $p \\times K$ matrix $Z$. Let $y_i'$ be the $i$-th row of $Y$ and $z_j'$ the $j$-th row of $Z$, respectively. If there is a $K$-dimensional vector $y_0$ such that for all $i$ and $j$, \n\\begin{equation}\n|(y_i, y_0)| / \\|y_i\\| \\geq \\sqrt{1 - 1/K}, \\qquad \\mbox{and} \\qquad |(z_j, y_0)|/\\|z_j\\| \\geq \\sqrt{1 - 1/K},\n\\end{equation} \nthen we can find a $K \\times K$ orthogonal matrix $Q$ which rotates all rows of $Y$ and $Z$ to the first orthant simultaneously. In this case, the NMF problem is solvable. In response, we have added some references and a few sentences to the end of the paragraph right below Theorem 2.1 (marked in red). 
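The geometry behind this condition can also be verified numerically: rows within angle $\\arccos\\sqrt{1 - 1/K}$ of a common direction $y_0$ lie in a cone that, once $y_0$ is rotated onto $(1, \\ldots, 1)'/\\sqrt{K}$, sits inside the first orthant. The following is an illustrative NumPy sketch of this argument (our own notation; it treats the positive inner-product case only and is not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, p = 3, 40, 30
y0 = rng.normal(size=K)            # a generic common direction (y0 != u below)
y0 /= np.linalg.norm(y0)
u = np.ones(K) / np.sqrt(K)

def sample_rows(num):
    # Rejection-sample rows with (row, y0) / ||row|| >= sqrt(1 - 1/K).
    rows = []
    while len(rows) < num:
        v = rng.normal(size=K)
        if v @ y0 / np.linalg.norm(v) >= np.sqrt(1 - 1 / K):
            rows.append(v)
    return np.array(rows)

Y, Z = sample_rows(n), sample_rows(p)

# Householder reflection: an orthogonal Q with Q y0 = u.
w = y0 - u
Q = np.eye(K) - 2.0 * np.outer(w, w) / (w @ w)
assert np.allclose(Q @ y0, u)

# All rotated rows land in the first orthant, so
# Omega = Y Z' = (Y Q')(Z Q')' is a non-negative factorization.
assert (Y @ Q.T >= -1e-12).all() and (Z @ Q.T >= -1e-12).all()
```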
\n \n3. Response to comment 3 (on pure nodes): thanks, and we have made the changes (see two places marked in red on Page 3). \n\n4. Response to comment 4: see item 2 above. \n \n5. Response to comment 5 (why $m < K/2$ is the most interesting case in practice): \nThis is a great comment. We think your question is about the first sentence of Section 2.4. We meant that we have $m < K/2$ for most real networks we have analyzed. We have \nchecked the eigenvalues of the adjacency matrices of more than a handful of frequently seen network data sets. For most of them, we have $m < K/2$. In response, we have rephrased the sentence as “which is the case that is most frequently found for real networks”. \n\n6. Response to comment 6 (the notation $J_{K,m}$): Thanks, and you certainly have a good point. However, the subscripts are used to stress that the matrix is a $K \\times K$ diagonal matrix where, among the $K$ diagonal entries, the last $m$ are $-1$ and the others are $1$. It is hard to find a simpler notation. \n\n7. Response to comment 7 (what specific problem in network analysis to study in the beginning of Section 3): \nWe think what you mean is that the term “network analysis” is too broad and too abstract, \nand that it is better to make it more specific. This is a great point. In response, \nin the first paragraph of Section 3, we have changed “Network analysis” to "Network analysis (e.g., community detection, membership estimation, link prediction)". We hope this makes it more specific. \n\n \n \n ", " This paper considers a particular case of the NMF problem, along with its application to network modelling.\nThe main contribution of the paper is two-fold. On one hand, the authors put forth several new results on the symmetric case of the NMF problem, further generalizing previous lines of work that considered special cases of the current formulation from the paper. On the other hand, as a secondary contribution, the authors provide a both quantitative and qualitative discussion on use-cases of different rank-K network models.\n The paper is fairly original in its inception, and provides a generalization of previous results, with previous results being recovered from the current formulation. It provides very good motivation for why one might care to consider an NMF approach to analyze network problems. The paper is quite dense, and a bit difficult to follow in places, partly due to several omitted proofs and results called upon from other works, not always with enough context. The authors could consider improving this aspect. It is perhaps not a good idea to use acronyms in the abstract without first explaining them - most readers of the abstract would not be familiar with DCMM, so a brief comment would be appropriate.\n\nWhat about the asymmetric case? Can a brief comment be made on it, in light of the results in this paper?\n\nIt would help to clarify what “pure” nodes are in Remark 1, line 82, as this might not be clear to everyone outside of the SBM community.\n\nWhat about an extension to a directed stochastic block model? (say, where the input adjacency matrix is a skew-symmetric matrix). And somewhat related to this, what about complex NMF?\n\nWhy exactly is the case m < K/2 “probably the most interesting case in practice”?\n\nThe notation J_{K,m} is a bit cumbersome to follow in places; perhaps a notation without subscripts could be used throughout to improve readability. \n\nIn Section 3, the authors repeatedly refer to “network analysis” applications as if it were a particular problem/task - but it should be made clear at the beginning what the exact problem to be addressed is. \n\nAre there any implications of the results concerning the ability to derive guarantees for SBM/community detection models in the very sparse regime? (one where the edge density in the graph is required to be above (log n) / n and extra effort is required to regularize appropriately before proceeding with a spectral approach).\n\nThe authors could comment more on the converse of Thm 2.4 when it is first mentioned. \n\nThe figure on page 9 should have, at a bare minimum, axis labels and sub/captions. \n\nSection 4 could make it clear what type of insights one can obtain with the NMF-based approach that one perhaps cannot obtain otherwise. 
\n\n\nTypos: \n314: all rows of r_i lives with \n357 (n,K) = (1,222, 2) - confusing at first none", " The paper deals with the problem of symmetric non-negative matrix factorization (NMF) for a low-rank matrix, when some of the eigenvalues of the matrix are negative. The paper provides conditions under which a symmetric NMF solution exists for a matrix of rank K with negative eigenvalues. The main theoretical results are divided into two main theorems, one for m < K/2 and another for general $m$. The paper also provides an algorithm for symmetric NMF for the degree-corrected mixed-membership block model (DCMM). Strength:\n1. The theoretical results (Theorems 2.1-2.5) on the conditions under which a symmetric NMF solution exists for a matrix of rank K with negative eigenvalues are the main strength of the paper, as the results require some technical innovation.\nWeakness:\n1. The NMF algorithm for DCMM is presented somewhat lightly and without theoretical support; thus Section 4 becomes quite antithetical to the tone of the rest of the paper.\n2. The paper also misses some relevant references, such as:\n(i) Anandkumar, Animashree, Ge, Rong, Hsu, Daniel J, and Kakade, Sham M. A tensor approach to learning mixed membership community models. Journal of Machine Learning Research, 15(1):2239–2312, 2014.\n(ii) Mao, X., Sarkar, P. and Chakrabarti, D., 2017, July. On mixed memberships and symmetric nonnegative matrix factorizations. In International Conference on Machine Learning (pp. 2324-2333). PMLR. Suggestion:\n1. Regarding Section 4, it would be better to either put it in the Supplementary or provide the theoretical support behind the estimated conditions. None noted.", " In this paper, the authors discuss when a square symmetric non-negative matrix with $m$ negative eigenvalues allows a non-negative factorization of the form $ZPZ'$. The paper is well-written and also provides nice examples to motivate the problem. However, I feel that there are several questions that are not answered properly, as mentioned below. 1) Here, the paper discusses the solvability of the NMF problem. However, of equal importance is the question of when the solution is unique and can be recovered by an algorithm. The paper does not answer these questions for problem 1.1 when the problem is known to be solvable.\n\n2) How did $J_{K,m}$ still come up in the spectral decomposition of $\Omega$? The $\lambda$'s and $\Xi$ are eigenvalues and eigenvectors of the symmetric matrix $\Omega$; so $\Omega$ should just be $\Xi D \Xi'$, right?\n\nMinor:\n\n1) Why use such an uncommon notation for the inner product that is so confusing?\n2) How is the entry-wise ratio matrix relevant (L246), and what do its entries correspond to? Also, what is $R$ in (2.13)?\n Yes" ]
[ -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, 2, 3, 4 ]
[ "80roWzmJ6S5", "kQPkeApL8ek", "dTOCN0eFmHP", "dTOCN0eFmHP", "nips_2022_fcMd-tuWwiO", "nips_2022_fcMd-tuWwiO", "nips_2022_fcMd-tuWwiO" ]
nips_2022_RP1CtZhEmR
Generating multivariate time series with COmmon Source CoordInated GAN (COSCI-GAN)
Generating multivariate time series is a promising approach for sharing sensitive data in many medical, financial, and IoT applications. A common type of multivariate time series originates from a single source such as the biometric measurements from a medical patient. This leads to complex dynamical patterns between individual time series that are hard to learn by typical generation models such as GANs. There is valuable information in those patterns that machine learning models can use to better classify, predict or perform other downstream tasks. We propose a novel framework that takes the time series' common origin into account and favors the preservation of channel/feature relationships. The two key points of our method are: 1) the individual time series are generated from a common point in latent space and 2) a central discriminator favors the preservation of inter-channel/feature dynamics. We demonstrate empirically that our method helps preserve channel/feature correlations and that our synthetic data performs very well in downstream tasks with medical and financial data.
Accept
The authors propose GroupGAN which uses a separate generator and discriminator for generating each data channel and a central discriminator for accurately capturing the correlation structure between different channels. This is a borderline paper and there were extensive discussions among the reviewers about this paper. The reviewers all agreed that having two sets of discriminators is a good idea. However, the evaluation for this paper was only done with low-dimensional time series, and the reviewers voiced concerns about the generalization of this method to higher dimensions. Some parts still need further clarification: for example, as Reviewer zeED pointed out, dimensionality reduction in this paper remains an extremely odd preprocessing step. Overall, accepting this paper may start new discussions in the community. I hope the authors address the concerns raised by the reviewers in the camera-ready version of the paper.
test
[ "hEsKWm29pFc", "_zvEKtIjIUK", "UctbUkccq5l", "_dpp3vKiIJi", "MobEC8zTrBe", "1thAHg_rcG2", "Sb2CFI2n-rh", "POP3TWApNKe", "G8WqI_b5fMt", "FHVMN-fEybu", "GxWL9jSn2Et", "KLo1s7po_Ad" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. My concerns have been addressed. I think this is a good paper and a score of 7 is appropriate. My score remains the same.", " Thank you for your thorough response.\n\nIn the final version of the paper, we will make sure to clarify any confusion caused by the use of the terms “noise” and “channel.” You are entirely correct about substituting “feature” for “channel,” and we will try our best to correct this in the final version of the paper.\n\nRegarding your question about the z-score and how its threshold was determined, Mohit Agarwal and Raghupathy Sivakumar used the same z-score threshold in their paper “Blink: A Fully Automated Unsupervised Algorithm for Eye-Blink Detection in EEG Signals,” where the task is similar to ours. We experimented (using only real data) with various z-score values and concluded that a value of 3 is the best possible value for our dataset as well. Outlier removal in health care domain signals is a standard process due to the noise from measurement tools. We will clarify these points in the final version of the paper.\n\nWe should also emphasize that all of the preprocessing steps were carried out on the real dataset and were based on the classification score on the real dataset. Furthermore, the preprocessed dataset was created and fixed before generating synthetic data and before synthetic data evaluation. Although this is not the best practice for evaluating a classifier on real data, our preprocessing was agnostic to the generative model evaluation and would not favor any generative model. We are confident that our results are a fair comparison of GroupGAN, the baseline method, and SOTA methods because all methods benefit from the preprocessing steps. We hypothesize that our method would still outperform the others even if we did not remove the outliers.\n\nIn response to your question about how forward feature selection was performed, we used sequential forward feature selection, as F.J. Ferri et al. discussed in their paper “Comparative study of techniques for large-scale feature selection”. In a nutshell, we performed a standard sequential forward feature selection by adding the locally best feature to the feature set with regard to the classification score. When adding new features no longer improved the classification score, we stopped adding new features. In our case, no feature added to our top 5 would increase the classification score when trained on the real EEG dataset.\n\nAnd as for your final question, it is a valid concern for those interested in much higher-dimensionality datasets, which we will investigate and design experiments for in future work. Our results focus on helping the “middle-dimensionality” ( 2 < # features < ~10 ) settings that are ubiquitous in health care, for instance.\n\nThank you once more for your time and consideration.", " Great! Thanks for your response. Hopefully you will make all the points clear in your revised version of the paper. My score remains the same.", " I thank the authors for their response.\n\nI agree, an early definition of what is meant by \"channel\" and \"noise\" would greatly help readers understand the paper. As previously stated, an average ML reader will most likely understand \"channel\" to refer to rgb dimensions in vision data. Thus, I strongly encourage the authors to avoid the general use of \"channel\" in favor of \"feature.\"\n\nPreprocessing of raw data is understandable, but should also be agnostic to the downstream methods being evaluated. 
Most importantly, how was the outlier threshold (z-scores > 3) chosen? For the statement: \"Then we performed a forward feature selection, and chose top 5 channels regarding the accuracy of our classification task.\" How was feature selection performed? Why are only 5 channels chosen, and under what criteria was 5 selected? As previously mentioned, thorough evaluation of GroupGAN is a concern. How does the method fare on a standard ML dataset (e.g., of much higher dimensionality than 5)?", " Regarding your first question, there is no such independence assumption in our algorithm; however, we simplified the equations with this assumption to help the reader understand the concept and the general idea. To avoid misunderstandings, we will clarify this point in the final release of the paper.\n\nIn response to your second question, Equations 3 and 2 are actually equivalent. Equation 3 is the Group GAN implementation of Equation 2.\n\nYour last question is a great idea, though it should probably not be several \"independent\" noises at first. We could consider having an initial (patient) embedding (corresponding to several initial noises) in future work, but it would originate from a single source. We will mention it as a promising next step in future works.", " As you mentioned, the word “channel” could be misleading, and in the final release of the paper, we will clarify what we mean by a channel. The term \"channel\" was derived from the neuroscience literature on EEG signals, such as the papers \"Imaging human EEG dynamics using independent component analysis\" by Julie Onton et al., \"Fundamentals of EEG measurement\" by M. Teplan, and many others.\n\nThe goal of the low-dimensional experiments was to show that Group GAN does indeed do what our intuition suggests. \n\nIn response to your concern about why we re-estimated the labels during the preprocessing phase, we did so because, in our experience, capturing an eye blink with EEG data is easier for a deep learning classifier than determining whether the eye is open or closed. To achieve good classification results using real-world data, we decided to perform an eye blink detection task instead and re-estimate the labels.\n\nThere are some outliers in the EEG time series due to measurement tool imperfections and noise. Filtering EEG signals is common in neuroscience tasks; a review of the possible methods for removing artifacts from EEG signals is provided in \"Removal of Artifacts from EEG Signals: A Review\" by Xiao Jiang et al. In their paper \"Faster: Fully Automated Statistical Thresholding for EEG Artifact Rejection,\" Nolan et al. used the z-score to detect outliers.\n\nThe baseline method is the same LSTM-based network we use in GroupGAN, which takes a noise vector and generates all channels simultaneously. This information will be included in the final version of our paper. Because there was insufficient space in the main paper, the exact details of the architecture are provided in the supplementary materials, and the implementation of all networks will be available in our GitHub repository to demonstrate the precise structure and parameter values. The choice of parameters is the result of various experiments to find those that produce good results on the validation dataset for both the Group GAN and baseline methods. In addition to the baseline method, we also compared our method to existing state-of-the-art methods such as TimeGAN in the last section.\n\nRegarding your first question, we need to clarify what we mean by “noise”. 
We consider \"noise\" to be a (random) sampling point in the latent space that represents one patient's \"whole biological environment.\" By common noise, we actually mean a common point in the latent space, i.e., the latent space is the patient space, and we are sampling a patient by sampling a noise vector for the generator. \n\nRegarding your second question, we have demonstrated in Figure 9 of the supplementary material that other SOTA methods often exaggerate or even create unrealistic correlations compared to the real data correlations. In the final version of our paper, we will explain the reasoning behind the hypothesis if the paper is accepted. \n\nThank you for the suggested papers. We will include them in the related works in the final release of our paper.\n\nYou are completely correct about the limitation point. In the final version of the paper, we will discuss the tradeoff between the challenges of learning a single multivariate GAN versus multiple univariate GANs. As you mentioned, other methods will also perform poorly on high-dimensional data. In the GroupGAN method, it is a \"computing resources\" limitation, whereas it is an intractable problem for the baseline and many other methods, which is more fundamental. As you mentioned, GroupGAN has the advantage of being parallelizable, which allows it to be faster. These discussions will be included in the final version of the paper, and parallelization will be mentioned as a promising next step in future works. Thanks a lot for your constructive suggestions.", " We will move the algorithm from the supplementary materials to the main paper in the final release, as we will have an extra page if the paper is accepted.\n\nIn response to your first question, the objective for all channels is a `2C + 1`-player game obtained by adding the terms with subscript `i` in equation (4) over all channels. We will add a clarifying sentence to the final release of the paper.\n\nIn response to your second question, the central discriminator is fed two sets of stacked channels: the real dataset and the synthetic dataset. Its goal is then to estimate the distribution of the real dataset. The conditional distribution of a given synthetic channel depends on the other `fake` channels, which means that the central discriminator receives either all of the real channels or all of the fake channels. To avoid further misunderstandings, we will rewrite the aforementioned section if the paper is accepted.\n\nIn response to your third question, the central discriminator is a binary classifier. As you mentioned, the output is a single score for all channels combined. The central discriminator's weight update is a standard gradient ascent step based on the binary classification task's loss. We will include this information in the line mentioning the central discriminator in Algorithm 1.
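For concreteness, one such update can be sketched as follows (an illustrative PyTorch snippet with placeholder architectures and our own variable names; it only mirrors the description above, is not our actual implementation, and omits the optimizer steps and the per-channel discriminators' real-sample loss):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C, T, z_dim, B, gamma = 5, 100, 32, 16, 1.0
gens = nn.ModuleList(
    nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, T)) for _ in range(C))
discs = nn.ModuleList(
    nn.Sequential(nn.Linear(T, 64), nn.ReLU(), nn.Linear(64, 1)) for _ in range(C))
central_d = nn.Sequential(nn.Flatten(), nn.Linear(C * T, 64), nn.ReLU(), nn.Linear(64, 1))
bce = F.binary_cross_entropy_with_logits

real = torch.randn(B, C, T)   # stand-in for a batch of real multivariate series
z = torch.randn(B, z_dim)     # ONE shared noise vector per sample for all channels
fake = torch.stack([gens[i](z) for i in range(C)], dim=1)        # (B, C, T)

# Central discriminator: a single binary real/fake score on ALL stacked channels.
cd_loss = bce(central_d(real), torch.ones(B, 1)) + \
          bce(central_d(fake.detach()), torch.zeros(B, 1))

# Each channel generator combines its own discriminator's feedback with the
# central discriminator's opinion of the full stacked output, weighted by gamma.
g_losses = [bce(discs[i](fake[:, i, :]), torch.ones(B, 1)) +
            gamma * bce(central_d(fake), torch.ones(B, 1)) for i in range(C)]
```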
", " In response to your first question, the \"gamma weight\" influences the strength of the CD's effect in the loss calculation, whereas the architecture (LSTM vs. MLP) influences the ability to learn a \"good\" representation. Thus, a \"reduced\" loss strength with a \"better\" representation is not necessarily equivalent to an \"increased\" loss strength with a \"worse\" data representation. \n\nIn answer to your second question, you are right, i.e., the “All-Synthetic experiment” and “Train-on-Fake, Test-on-real” are the same concept. However, the results provided in Table 3 are for only two channels and compare the results of Group GAN with and without the CD. We will clarify this point, as well as other editorial suggestions, in the final release of the paper if the paper is accepted.\n\nTo answer your third question, it is common in synthetic data augmentation that \"not so good\" synthetic data can \"appear\" to help on average, while the variation across runs can be huge, sometimes awful. In the augmentation experiment, we aimed to demonstrate that one of the GroupGAN method's advantages is the \"stability\" of its results. As shown in Figure 4(b), in the case of 5 channels, the accuracy of the baseline method could be as high as 0.93 and as low as 0.6 across different synthetic data generations; however, the results of the Group GAN method show much more stability, and the accuracy range of our method is much narrower, from 0.77 to 0.94.\n\nRegarding your fourth question, increasing the augmentation ratio increases the accuracy and stability of both the Group GAN and baseline methods, but not dramatically (less than 5 percent). However, at each distinct ratio, we reach the same conclusion as in the answer to your previous question: the Group GAN method is more stable than the baseline method.\n\nAnd regarding your last question, we suspect that two channels are too simple (tractable) a problem to require the Group GAN method and that even a simple baseline method could produce good enough synthetic data.", " This paper introduces GroupGAN, a method for generating multivariate time series data that share a single source or cause. GroupGAN uses a separate generator and discriminator for generating each data channel and a central discriminator for forcing the model to keep the correlation between different channels the same as in the real data. Synthetic data as well as the EEG eye state dataset are used to evaluate GroupGAN compared to other models in this area. Strengths:\n\nI believe the generation of multi-variate time-series data is a significant and very challenging problem. As mentioned by the authors, it is urgently needed in many fields. The idea of using a central discriminator and separate generator-discriminator pairs for each channel is very interesting, and the authors show through experiments that this idea improves data generation and downstream tasks.\n\nWeaknesses:\n- In the related works section it would be good to mention the weaknesses of the existing methods that the current model is trying to fix\n- I think some additional comparisons would've strengthened the claim of the paper that GroupGAN works better than the baseline model (a single multi-channel GAN model for generating the data) for data generation (lines 116-118); for example, in Tables 1 and 2 I expected to see a comparison with the baseline model.\n- On line 155 the authors use LLD for the first time and define it only later; it is better to define a term before using it.\n- The paper can be improved by changes in language and writing style. There are typos and grammar problems that, if fixed, can improve the clarity of the paper. 
I bring some examples below:\n- - on line 13 it should be \"performs very well in downstream tasks\"\n- - on line 95 \"channels\" should be \"channel\"\n- - on line 223 \"EGG\" should be \"EEG\"\n-- on line 292 \"compare\" should be \"comparison\" - On lines 151-154 you mention that for central discriminator MLP networks work better than LSTM networks because \"if the central discriminator is too powerful, the results will be of lower quality, as it will strive to make the signals more correlated at the expense of realistic individual TS.\" can't this be controlled with the gamma weight?\n- Is the \"All-synthetic experiment\" described on lines 243-255 the same as the \"Train-on-fake, Test-on-real\" experiment mentioned in table 3?\n- In Figure 4, I wonder why GroupGAN does so much better than baseline on the All-Synthetic experiment while on the Augmentation experiment it is not doing as well? Is it because of a lower diversity of samples generated compared to the baseline?\n- In Figure 4c, does augmenting more synthetic data reduce the performance of GroupGAN with respect to the baseline model, or does it improve it? It is not obvious from this figure.\n- What is your hypothesis on why GroupGAN does not do as well on 2 channel datasets? The limitations are discussed adequately.", " In this paper, the authors propose Group GANs to generate Multivariate Time Series (MTS) data. MTS can be used for augmentation and privacy protection. To generate realistic MTS, the model should (1) generate realistic single-channel data (2) preserve the correlation between channels. So this paper uses a paired generator and discriminator to generate single-channel data and uses a Central Discriminator to force correlation preservation. They verify the results on synthetic and EEG datasets. ## Strengths:\n* The paper is well structured and nicely written. The authors provide a clear motivation for why generating MTS is useful and explain the key challenges.\n* The authors tear down the problem of generating MTS into (1) generating realistic single-channel TS (2) preserving correlation. The authors propose Group GANs (many channel GANS + Central Discriminator) to address these problems. The method is straightforward and intuitive. I believe the proposed method will provide insight and inspire people working on related problems.\nIt’s good to see the authors submitted detailed supplementary materials.\n* The authors use many metrics and methods (eg, Feature-based Analysis) to show how Diversity and Correlation are preserved. They also discussed the implementation details and trade-offs of their model, which can be valuable for the community.\n\n## Weaknesses:\n* I think the algorithm and training details are very important for understanding the paper. Maybe you can consider moving them (mainly Algorithm 1) from the Supplementary Materials to the body part.\n* Some questions regarding the objectives remain. See Questions.\n* The datasets are small, and they don’t look very practical. (As the authors state “our method could be extended to more practical use cases” in line 312).\n* Whether the proposed model can scale very well remains a question. Performance limitations can be an issue. Another issue is that the authors only test MLP and LSTM, which is not a full guarantee that they can generalize to complex models like Transformers. Q1:\n\nEq (4) is the objective for “a given channel” i. 
What is the objective for all channels (for i generators + i discriminators + one central discriminator)?\n\nQ2: \n\nIn Eq (4), when updating channel i, does the Central Discriminator treat the generated samples in the other channels (j != i) as real ones? It seems to be the case, because in line 100 you say “the goal is to estimate the conditional distribution”.\n\nQ3:\n\nWhat is the output of the central discriminator? I still don’t get it after reading Algorithm 1. The central discriminator takes in data from all the channels, but does it output a single score for all the channels combined, or does it output a score for every channel (i scores in total)? And how exactly do you update the central discriminator (describe the loss)? More useful datasets and advanced model architectures could be used.", " The authors present GroupGAN for the synthesis of multivariate time-series data. In contrast to previous approaches where features are jointly learned, GroupGAN trains a separate GAN per feature while also training a central discriminator which classifies feature vectors as either real or fake. The intuition behind the latter is to preserve correlations among the individually trained generators. Synthetically trained data are evaluated on low-dimensional synthetic time-series data and supervised EEG data. Strengths: The proposed architecture is novel and interesting. The central discriminator, which enforces preservation of correlations among independent GANs, is intuitive and could be helpful in regularizing different GAN instances to promote consistency/correlations. While the authors claim: \"The main reason for having channel GANs as opposed to a single multichannel generator discriminator pair is to make each generator powerful in its own channel distribution, and by including the central discriminator, we enforce realistic correlations between the\nchannels as much as possible.\"\nIt seems like another major advantage of this approach is tractability; jointly modeling all TS at once may lead to a computational explosion, but the current setup allows the computationally efficient benefits of training individual generators while also ensuring correlations are learned among the individual TS through the central discriminator.\n\nWeaknesses: The writing and description of the model could be much improved to align with the time-series literature. In particular, the description of and focus on channels both confuses and limits the paper to vision data. Channels typically refer to vision data, but moving the model description and text to describing time signals as composed of features broadens the work and its applicability (i.e., a large number of time-series works are not vision data). Furthermore, the description in terms of channels breaks down for different applications (e.g., MFCCs in speech recognition). The clearest/most general description in time-series analysis is that a (possibly multirate as well as multivariate) process is composed of various features collected at different times. Furthermore, there seems to be a general misconception of what a time series is in the paper; the EEG eye state data is a single time series, each time-instance of which consists of 14 features (or measurements) and a binary label.\n\nEvaluation of the proposed method requires significant improvement. The synthetic data is extremely low-dimensional (only 2 dimensions), and the resulting hyperparameter tuning, performance, and (particularly) catch22 results on this data are thus unconvincing. 
Furthermore, the real data is preprocessed in an extremely ad-hoc way (lines 229-234) without any explanation. E.g., the EEG data is supervised; why are labels re-estimated during preprocessing? How many outliers were present in the data, and what observed effects on GroupGAN and competitors warranted removal?\n\nNecessary details are also lacking from the paper. In particular: \"The baseline method is an LSTM-based GAN that generated all of the\nchannels simultaneously\" <- this competitor requires much more explanation. What were the exact details of the architecture and learning schedule? Why were they chosen? Why were existing methods (e.g., TimeGAN) not compared to? The following is a major contribution of the paper: \"this is the first study to analyse how to generate multivariate time series\nwith individual channel generation originating from a common noise while inter-channel\ncorrelation preservation is forced with a central discriminator.\" <- How realistic is this scenario? E.g., for the listed examples \"the biometric values from a medical patient, the stock prices from economic events or geographically separated seismic measurements from a single earthquake,\" do these assumptions hold? E.g., while biometric values are collected from a single medical patient, different signals may have different noise properties (e.g., ECG, blood pressure, etc.).\n\n\"We hypothesize that this will penalize unrealistic (un)correlation patterns between channels.\" <- Can you supply evidence to support this, or point out where in the text evidence will be demonstrated?\n\nSuggestions:\n\"it is well known that sharing data associated with single individuals, even anonymized, can lead to unexpected privacy breaches\" <- Well-known ML examples:\n- Narayanan, Arvind, and Vitaly Shmatikov. \"Robust de-anonymization of large sparse datasets.\" 2008 IEEE Symposium on Security and Privacy (SP 2008). IEEE, 2008.\n- Gong, Maoguo, et al. \"A survey on differentially private machine learning.\" IEEE Computational Intelligence Magazine 15.2 (2020): 49-64. A discussion of the tradeoff between learning a single multivariate GAN vs multiple GANs would help strengthen the paper. While the authors have mentioned that learning a large number of individual GANs can be prohibitive for high-dimensional data, in situations where jointly learning the distribution of all features is intractable, the presented architecture is highly parallelizable while maintaining a mechanism to encourage the separately trained GANs to generate correlated data. ", " This paper proposes a generative method for multivariate time series. The method consists of individual GANs for each channel of the time series with a shared initial noise, and a global discriminator for preserving the correlations between different channels. The design is meant to model the common factors of the different channels. Strengths\n\n1. The problem of synthetic time series generation is important in many tasks, e.g., data sharing, data augmentation.\n2. The proposed method uses a reasonable approach to enforce common factors of multivariate time series and preserve correlations.\n3. The experiments on synthetic and real datasets indicate the effectiveness of the proposed method to some extent.\n\nWeakness:\n\n1. The method is somewhat incremental on GANs for time series generation. Preserving correlations using a central discriminator is similar to enforcing a regularization across different channels (Fig 1), which is a common way of coordinating multivariate time series.\n
2. In Eq 2, the conditional distributions are assumed independent. That is, the joint conditional is equated with the product of the individual conditionals. This is not clearly justified.\n3. Enforcing a shared initial noise in Eq 3 and modeling the joint condition in Eq 2 seem to have similar effects of regularizing different channels of the time series. The description is not clear on how these two parts could both be important.\n4. Enforcing a single initial noise may be a strong constraint. Adding K initial noises, where K is smaller than the number of channels, could make the model more flexible and fit the training data better.\n5. In the experiments, more time-series generative models could be compared, on both synthetic and real datasets, for comprehensiveness and consistency.\n 1. In Eq 2, why assume the conditional distributions are independent across the conditions (channels)?\n2. Will the shared initial noise in Eq 3 and the modeling of the joint condition in Eq 2 have similar effects? How could they both be indispensable in the method?\n3. Is it reasonable to add several initial noises, instead of one, to improve flexibility?\n The authors addressed the potential negative societal impact well." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "POP3TWApNKe", "_dpp3vKiIJi", "Sb2CFI2n-rh", "1thAHg_rcG2", "KLo1s7po_Ad", "GxWL9jSn2Et", "FHVMN-fEybu", "G8WqI_b5fMt", "nips_2022_RP1CtZhEmR", "nips_2022_RP1CtZhEmR", "nips_2022_RP1CtZhEmR", "nips_2022_RP1CtZhEmR" ]
nips_2022_dT0eNsO2YLu
Learnable Polyphase Sampling for Shift Invariant and Equivariant Convolutional Networks
We propose learnable polyphase sampling (LPS), a pair of learnable down/upsampling layers that enable truly shift-invariant and equivariant convolutional networks. LPS can be trained end-to-end from data and generalizes existing handcrafted downsampling layers. It is widely applicable as it can be integrated into any convolutional network by replacing down/upsampling layers. We evaluate LPS on image classification and semantic segmentation. Experiments show that LPS is on-par with or outperforms existing methods in both performance and shift consistency. For the first time, we achieve true shift-equivariance on semantic segmentation (PASCAL VOC), i.e., 100% shift consistency, outperforming baselines by an absolute 3.3%.
Accept
The paper proposes end-to-end learnable polyphase sampling and shows competitive performance with sufficient novelty. The reviewers' major concerns appear to have been addressed during the rebuttal, and therefore the paper can be accepted.
train
[ "FiuhrKzltT0", "B-Lqx0sr-75", "puOeRu3kox", "tlkKG7miBaN", "-XWd0hw0UAf", "Rrq_7i5HaF", "EIbIlxaR6_7", "FDHIjRfNlL", "IhA4m8k0Hc", "LUyt23r5xNB" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Interesting thought. In our classification experiments, we replaced **all** downsampling layers with LPD. The number of LPD layers is hence determined by the architecture. In our segmentation experiments, we replaced **all** down/upsampling layers with LPD/LPU respectively. We have not experimented with replacing only some of the down/up-sampling layers with LPD/LPU.\n\nLooking at Table 3, we agree with the reviewer that there is likely a sweet spot, e.g., 2 LPD layers followed by 2 standard downsampling layers. Anecdotally, we think it is more crucial for the earlier layers to be shift-equivariant. However, we would need to conduct additional experiments for a conclusive claim.", " Thanks for putting together all the explanation and resolving my concerns. I notice that for LPS, in terms of the Acc and Cons metrics, it is very hard to improve both at the same time. I am curious about ablation studies on the # of LPS layers in the results. Would it be possible to find a sweet spot that improves both metrics? Any comments on it? No need to run such experiments for my answer. ", " Thank you for the time spent on reviewing and replying to our response. We are happy to see that the reviewer's concerns have been resolved. As mentioned in our response, we will add these results to the appendix.", " Thanks for your answers; that extra experiment with the ResNet-152 is especially convincing. I recommend adding it to the appendix of the final paper.", " > ### Q1. Comparison with slightly increased model size to match ResNet + LPS params/speed\n\nAs summarized in Appendix Tab. A7, on ResNet-101 + DDAC, our method increases the trainable parameters by 1.04%. Following the reviewer's suggestion, we tried modifying the ResNet-101 + DDAC architecture by increasing its residual blocks (leading to 44,796,218 trainable parameters) to match our ResNet-101 + LPD model (44,751,034 trainable parameters). Specifically, we added three and one residual blocks to the first two ResNet stages, respectively. Despite the extra layers, we did not observe any gain in accuracy or shift-consistency over the original ResNet-101 + DDAC model. \n\nWe also compare our ResNet-101 + LPD model (44,751,034 trainable parameters)\nagainst an even larger ResNet-152 model (60,192,808 trainable parameters). Both\nmodels were trained on ImageNet using the same augmentation and optimizer\nconfiguration. ResNet-152 achieves 78.3% top-1 classification accuracy and 90.9%\nshift consistency, while our ResNet-101 + LPD model obtains 78.8% top-1\naccuracy and 92.4% shift-consistency. Despite having 25% fewer trainable\nparameters, our model attains 0.5% higher accuracy and 1.5% higher\nshift-consistency. These results show that the improved performance of our model\nis not due to additional trainable parameters. We will update the appendix to include these results.\n\n\n> ### Q2. Reference to the appendix in the main paper\n\nThanks for pointing this out. We did not reference the appendix to keep the paper self-contained. As suggested, we will add references to the appendix in the main paper.", " We thank the reviewer for the feedback. We think there are some misunderstandings regarding the conducted experiments. Please see our responses below.\n\n> ### Q1. Contribution of Learnable Polyphase Sampling (LPS)\n\nWe propose LPS, **a downsampling (LPD) and an upsampling (LPU) layer** which learn\na truly shift-invariant downsampling scheme and a truly shift-equivariant\nupsampling scheme, respectively. 
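For intuition, the polyphase operations underlying LPD/LPU can be sketched as follows (an illustrative NumPy snippet in our own notation; it shows APS's hand-crafted max-norm rule, which LPD replaces with a learned selection, and is not our actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))                  # one feature map

# The four polyphase components of a stride-2 grid.
def poly(a):
    return [a[i::2, j::2] for i in (0, 1) for j in (0, 1)]

phases = poly(x)

# APS selection rule: keep the component with the largest norm.
k = int(np.argmax([np.linalg.norm(p) for p in phases]))
down = phases[k]                              # downsampled map

# LPU-style upsampling: place features back at the selected phase, zeros elsewhere.
up = np.zeros_like(x)
i, j = divmod(k, 2)
up[i::2, j::2] = down

# A circular shift only permutes (and rolls) the polyphase components, so a
# norm-based selection keeps tracking the same component after the shift.
phases_s = poly(np.roll(x, 1, axis=0))
k_s = int(np.argmax([np.linalg.norm(p) for p in phases_s]))
assert np.allclose(np.sort(np.abs(phases[k]).ravel()),
                   np.sort(np.abs(phases_s[k_s]).ravel()))
```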
This differs from the prior adaptive polyphase sampling (APS)\nmethod, which **only considers downsampling** and which is based on a\n**hand-crafted rule**. See Line 29 for a detailed discussion regarding their\ndifferences. Additionally, we believe there is value in generalizations. Our\nproposed method is more flexible than APS, and we empirically demonstrate that\nLPS does not learn to recover APS (see Line 264). Lastly, our claims/proofs differ from APS's: \n\n- (a) We introduce the shift-permutation property which describes how spatial shifts result in permutations of the polyphase indices (Lemma 1).\n- (b) Using Lemma 1, we prove a sufficient condition for constructing shift-equivariant downsampling layers (Claim 1). \n- (c) We prove that our proposed downsampling layer captures a more general family of downsampling layers than APS (Claim 2).\n- (d) We prove how to design practical models satisfying the sufficient condition in Claim 1 (Claim 3).\n- (e) We prove that the proposed down/upsampling (LPD/LPU) layers can be used to construct shift-equivariant deep-nets (Claim 4).\n\n**These results are novel and have not been considered by prior works.**\n\n> ### Q2. Effectiveness of Learnable Polyphase Upsampling (LPU) in practice...not clear how LPU works and what's the reason behind that.\n\nWe study both a down- and an upsampling layer (LPD/LPU). In the segmentation\nexperiments, LPS refers to using both LPD **and** LPU. As can be seen in Tab. 4,\nwe performed an ablation study with **LPD only**. This baseline uses standard\nbilinear interpolation for upsampling following DeepLabV3+ and is not truly\nshift equivariant. In contrast, LPS achieves 100% circular shift consistency while\nmaintaining model accuracy; this highlights the benefit of having LPU\nlayers. \nLPU is defined in Sec. 4.3 (Line 187): upsampling is performed by placing features back at the spatial locations they occupied prior to downsampling. The other spatial locations are filled with zeros,\nas summarized in Eq. (24). We also theoretically prove that the proposed LPD+LPU (LPS) together result in truly shift-equivariant deep-nets (see Claim 4).\n\n> ### Q3. In general, LPS has similar performance to APS in a ResNet-18 setting. Is the result consistent for ResNet-50 or 101?\n\nAs the APS paper did not report ResNet-50/101 results for ImageNet, we only\ncompared to the more recent SOTA method: DDAC. To answer the question, we ran additional\nexperiments on ResNet-101 (ImageNet), comparing APS and our proposed LPS method.\n\nThe results show that both methods attain\ncomparable top-1 classification accuracy, while our proposed LPS improves shift\nconsistency (see results below). We will add these results to Tab. 3 of the\nmain paper. \n\n| Method | Anti-Alias | Acc. (%) | S-Cons. (%) |\n|------------|------------|----------|-------------|\n| LPF | Tri-3 | 78.4 | 91.6 |\n| DDAC | Adapt-3\t| 79.0 | 91.8 | \n| DDAC* | Adapt-3 | 78.6 | 91.8 |\n| APS | Adapt-3\t| 78.8 \t| 92.0|\n| LPS (ours) | Adapt-3\t| 78.8 \t| 92.4 |\n\n\n> ### Q4. Table 4 shows that LPD only works better than LPS. Will LPD + a few LPU layers have even better performance?\n\nThis is a misunderstanding. LPS refers to using both LPD/LPU layers, i.e., the architecture uses LPD for downsampling and LPU for upsampling (as requested). In Line 29, we explained LPS to be "**a pair of down/upsampling layers**". We will clarify this in the experiment section. As can be seen in Tab. 4, LPS achieves perfect circular shift consistency with competitive model performance.\n
4, LPS achieves perfect circular shift consistency with competitive model performance.\n", " We thank the reviewer for appreciating our paper’s writing and professionalism.\n\n> ### Q1. Motivation of learnable polyphase sampling (LPS) and its naming\n\nIn this paper, we propose LPS, a pair of down/up-sampling (LPD/LPU) layers which\n**learn** a truly shift-invariant/equivariant down/up sampling scheme. This\ndiffers from the prior adaptive polyphase sampling (APS) method which (a) only considers\ndownsampling, and (b) is based on a “hand-crafted” selection rule. For consistency,\nwe followed their choice of using the term polyphase sampling. We are happy to\ntake naming suggestions.\n\n\n> ### Q2. Computation added by this layer\n\nIn the appendix Tab. A7, we report the inference time for each of the models. On ResNet-101 with DDAC, our method increases the inference time by roughly 4.81%. Please see Appendix A4.3 for more details on inference time and architectures.\n\n\n> ### Q3. APS is a special case of LPS\n\nSorry for the confusion. APS is only a downsampling layer, hence it is a special case of LPD. In theory, both APS and LPD achieve 100% circular shift consistency (Claim 1). Different from APS, LPD represents a larger family of downsampling selection schemes, which broadens the type of function a deep-net can represent. Specifically, Claim 2 states “APS is a special case of LPD, i.e., LPD can represent APS’s selection rule”. \n\nOn segmentation, our proposed method (LPS=LPD+LPU) achieves 100% circular shift consistency\nboth in theory and in practice (Claim 4). In contrast, APS is **only a downsampling\nscheme**. Therefore, this APS baseline uses standard bilinear interpolation for\nupsampling following DeepLabV3+ and is not guaranteed to be circularly shift\nequivariant. We will clarify this detail in the paper. \n\n\n> ### Q4. Visual meaning of Figure 4\n\nFig. 4 is meant to qualitatively show that LPD learns to select different\ncomponents during downsampling from APS. We did not claim the visualized feature spaces are interpretable. At\nLine 264, we explicitly stated: “we did not observe a specific pattern that can\nexplain LPD’s selection rule”. We think interpretable visualization of the\nfeature space is itself an interesting research topic.\n\n\n> ### Q5. Running time, computation complexity should be included\n\nThe running time and introduced trainable parameters of our method are documented in the appendix. Please see Appendix A4.3 and Tab. A7 for a detailed comparison between methods. We will include this section in the main paper.", " This paper generalizes polyphase resampling and applies learnable selection of the polyphase components, instead of based on signal magnitudes. It achieved better accuracy and shift consistency on classification and segmentation. Strength: The paper writes with professionism in notations and arangement. The authors did both experiment in high-level classification and low-level segmentation.\n\nWeakness: My personal feeling here is that: the idea of polyphase resampling is being derived to maybe too complicated, to lose its original meaning in signal processing, and now it becomes a fancy way of 2x2 pooling. Even APS can be used for backpropagation actually, it is just another non-smooth operator adding to some existing non-smooth operators, Relu, maxpooling, so no big trouble. So I am conservative about the motivation to be the polyphase resampling parameterized. 
The computation added by this layer cannot be neglected, considering that the shifting doubles the workload of whatever layer follows it. Maybe I missed something, but as APS is a special case of LPU, why is LPU more shift-invariant than APS theoretically, and on segmentation empirically? As LPU is data-dependent, could it approximate APS on some bad test cases?\n\nRegarding the Figure 4 caption \"This indicates LPS did not learn to reproduce APS\": I can infer this from the algorithm, but is this good from a visual-meaning perspective? It is difficult to see.\n\nThe running time and computation complexity comparisons with APS and the other baselines should be included. yes", " This paper presents a new framework to enable the design of shift-invariant and shift-equivariant convolutional neural networks. The key idea in this paper is introducing a learnable polyphase sampling scheme which can be used in both down- and upsampling layers. Compared to existing methods, the proposed LPS is trained from data instead of manually designed. The LPS is introduced with detailed analysis, and the implementation is shown for both down- and upsampling layers. Experiments on image classification and semantic segmentation validate its effectiveness in practice. \n Pros:\nThe main contribution of this paper is extending the manually designed adaptive polyphase sampling layers to learnable polyphase sampling layers with theoretical proofs. In particular, the introduced conditional probability is learned directly from the data while LPS keeps the properties of APS. Besides LPS itself, both the downsampling and upsampling layers, LPD and LPU, are presented with implementations and detailed analysis. \n\n Cons:\nThe main idea of this paper is generalizing the existing work APS in a learnable manner. Compared to the significance of APS, the contribution is limited. Also, it is not clear how LPU and LPD work in real cases. For example, in the semantic segmentation case, LPD-only seems to work better than LPD and LPU combined, while it is not clear how LPU works and what the reason behind that is. \n\n Q1. In general, LPS has performance similar to APS in the ResNet-18 setting. Is the result consistent for ResNet-50 or 101? \n\n\nQ2. Table 4 shows that LPD-only works better than LPS, while there is no ablation showing the importance of LPD and LPU in the neural network when progressively replacing the downsampling and upsampling layers with LPD/LPU. Would it be possible that LPD + a few LPU layers achieves even better performance?\n\n The limitation of this paper is the marginal performance improvement over APS, which weakens the justification for the importance of the motivation in this paper. Because APS is a special case of LPS, the contribution of this paper can only be validated through theoretical analysis. ", " The paper tackles the issue of shift invariance/equivariance in CNNs. Prior work (APS) suggested, when downsampling, to select the index whose value has the largest norm. This paper suggests a generalization which consists of learning the index-selection rule instead. Experiments on classification (invariance: CIFAR, ImageNet) and segmentation (equivariance; PascalVOC) show convincing results.\n\nLearning this selection is implemented via a carefully crafted small NN and the gumbel-softmax trick. 
## Strengths\n\n- The paper clearly motivates the problem.\n- The paper gives just the right amount of background knowledge and prior work.\n- The evaluations are to the point.\n\n## Weaknesses\n\nThe experiments show convincing results: shift invariance matching theory, and small but consistent improvement in \"regular\" accuracy measures. However, I wonder whether the small but consistent improvements may not be simply due to the fact of introducing more learnable parameters. This is almost addressed in Appendix A7, but what I am missing is the results of either (a) resnet + LPS but with slightly reduced ResNet size to match plain resnet params/speed, or (b) plain resnet with slightly increased size to match resnet + LPS params/speed.\n\nAlso, this should be mentioned in the main paper, with reference to the appendix. Actually, most of the appendix should be mentioned in the main paper. See above. None." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "B-Lqx0sr-75", "Rrq_7i5HaF", "tlkKG7miBaN", "-XWd0hw0UAf", "LUyt23r5xNB", "IhA4m8k0Hc", "FDHIjRfNlL", "nips_2022_dT0eNsO2YLu", "nips_2022_dT0eNsO2YLu", "nips_2022_dT0eNsO2YLu" ]
nips_2022_1wVBLK1Xuc
Policy Optimization with Advantage Regularization for Long-Term Fairness in Decision Systems
Long-term fairness is an important factor of consideration in designing and deploying learning-based decision systems in high-stake decision-making contexts. Recent work has proposed the use of Markov Decision Processes (MDPs) to formulate decision-making with long-term fairness requirements in dynamically changing environments, and demonstrated major challenges in directly deploying heuristic and rule-based policies that worked well in static environments. We show that policy optimization methods from deep reinforcement learning can be used to find strictly better decision policies that can often achieve both higher overall utility and less violation of the fairness requirements, compared to previously-known strategies. In particular, we propose new methods for imposing fairness requirements in policy optimization by regularizing the advantage evaluation of different actions. Our proposed methods make it easy to impose fairness constraints without reward engineering or sacrificing training efficiency. We perform detailed analyses in three established case studies, including attention allocation in incident monitoring, bank loan approval, and vaccine distribution in population networks.
Accept
I recommend acceptance due to the positive opinions of the reviewers. This paper proposes a method relevant to fairness in RL for enforcing constraints during policy optimization. The reviews judge the method (inspired by Lyapunov stability ideas) and its empirical evaluation as technically sound. Many initial points of confusion or uncertainty were addressed and resolved during the author discussion.
train
[ "yi5l7bLQcg", "hYstilot5z", "6yy5Am32Zg", "MR-Hj7MdI2a", "b4qoQOJ7IWC", "ooetGd8k8MK", "QpsBZn0mnP8", "addoRHgWidb", "8MJyfGXu6-4", "YMKfO9KVDpf", "BTH3L4QgcKd", "qzpbiFrEgAq", "vaDPMA-yp1U" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for adding some theoretical justification. I have updated my score based on the author rebuttal.", " I would like to thank authors for the detailed response. I would like to encourage authors to incorporate the proposed clarifications in the manuscript, so that the contribution and takeaway msgs can be clearer and more approachable.\n\nPost-rebuttal: I have updated my score accordingly.", " Thanks for your response. The related works are well compared! ", " > Thanks for the response. I'm still a little unclear on the distinction between the hyperparameters in your method and a method where we weight the auxiliary objective - if your method tells us to take actions when fairness is worse than some threshold, then isn't the setting of that threshold equivalent to setting the weight on an auxiliary objective?\n\nAnswer: Thank you and yes we should further clarify. There are two parts in any learning problem: one is the definition of the objectives, and the other is the computation procedures for reaching the objective. In other words, one is what do we optimize, and secondly how do we optimize. The major difference between our methods and standard reward engineering is that our hyperparameters are only used to improve the second step of how to optimize, without affecting the objective and the problem definition itself, while reward engineering focuses on defining the problem itself in the first step. We will elaborate on these below. \n\nFirst of all, the threshold parameter is part of the objective definition, not what we need to choose in the algorithm. The threshold declares the amount of tolerance of “unfairness” that a problem allows, and both our approach and reward engineering with an auxiliary objective need to use it. In the reward function form, it would become an additional loss term of “unfairness value minus that threshold”, and then choosing a weight for this term. In our case, we interpret this given threshold as a region for “stabilizing” the unfairness value. In both cases, the threshold can be arbitrarily chosen by the problem formulation, and it can be zero, in which case it specifies that the unfairness should be minimized to as small as possible without tolerance. Our algorithms work for any choice of the threshold, and it is not a hyperparameter that we need to choose in the algorithm itself. \n\nBut perhaps the reviewer is also asking about the distinction between our advantage regularization hyperparameters $\\beta$ and auxiliary objective weights $\\zeta$, and what the difference is between reward engineering $\\zeta$ and tuning $\\beta$. This is a great point that we should address in the paper, because the difference is crucial. On one hand, the $\\beta$ we are using is in the gradient update and it does not affect the reward definition and does not hold one to commit to specific weights of the different objectives in the first place. Tuning $\\beta$ improves the \"how to learn\" aspect of optimization similar to how a learning rate would in standard deep learning procedures: the advantage regularization terms help with better convergence on the objective. Our procedure is orthogonal to and does not change the objective itself (i.e. \"what to learn\"). On the other hand, reward engineering is defining the objective in the first place, and after that, the underlying hyperparameters, such as learning rate, still need to be chosen, which is at the level of what our proposed methods are focusing on. 
In short, we only control how to optimize (learning procedures), not what to optimize (objective). \n", " Thanks for the response. I'm still a little unclear on the distinction between the hyperparameters in your method and a method where we weight the auxiliary objective - if your method tells us to take actions when fairness is worse than some threshold, then isn't the setting of that threshold equivalent to setting the weight on an auxiliary objective?", " We thank the reviewer for the detailed comments and questions. \n\nWe did not include a detailed mathematical analysis of our advantage regularization terms and focused on showing the empirical effectiveness of the methods, because we worried that the background theory may distract the focus and clarity of the paper from the long-term fairness context. For your reference, here we give a short introduction to the theoretical justification of our proposed methods, and we will provide more details along the following lines in the paper or the appendix. \n\nFirst, we consider the problem of long-term fairness as a control problem: the goal is to obtain a control policy to regulate the entire system to satisfy the fairness properties. Consequently, we draw inspiration from control-theoretic methods in the Lyapunov stability framework [1-4], a tool for estimating and improving stability in a nonlinear dynamical system. Note that long-term fairness is essentially a property about stability: we wish to control the system such that the violation of fairness reduces over time, and ideally gets controlled to be under a certain threshold. This connection makes it possible to use Lyapunov methods to derive the regularization terms. \n\nBefore deriving our advantage regularization terms, let us first set up some definitions for Lyapunov stability. Before continuing, please refer to Section A: “An Introduction to Lyapunov Stability” of the appendix for a short introduction to Lyapunov stability.\n\nNext, with the Lyapunov stability idea in mind, we come back to our derivations of the advantage regularization terms, in particular the second one. Recall in Proximal Policy Optimization (PPO), the advantage $\\hat{A}_t$ describes the difference between the Q-value of an action and the expected value of a state at time $t$, to indicate whether an action should be taken more often in the future. Let $\\Delta : S \\rightarrow \\mathbb{R}^\\ge$ be a function of the state and a fairness constraint to minimize. We make it our task to transform the landscape of $\\hat{A}_t$ into a Lyapunov-like landscape with respect to $\\Delta$.\n\nFirst, we wish to enforce the decrease of $\\Delta$ in the landscape of $\\hat{A}_t$. We add a value-thresholding term $\\text{min}(0, -\\Delta(s_t) + \\omega)$, where $\\omega$ specifies a tolerable threshold of $\\Delta$. This term penalizes the advantage evaluation for $(s_t, a_t)$ when $\\Delta(s_t)$ is greater than $\\omega$, and remains inactive otherwise. This term transforms the advantage landscape by adding a region of attraction around some equilibrium point, which in this case is the origin.\n\nNext, we take inspiration from the sample-based Lie derivative to design the decrease-in-violation term $\\text{min}(0, \\Delta (s_{t+1}) - \\Delta(s_t))$, where $\\Delta(s_{t+1}) - \\Delta(s_t)$ measures the dynamics of $\\Delta$ over time. Like the value-thresholding term, this term only penalizes the advantage term if $\\Delta(s_t)$ exceeds some tolerable threshold $\\omega$, and remains inactive otherwise. 
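Assembling the two terms as written above gives a regularized advantage of the schematic form

$$\hat{A}^{\text{reg}}_t = \hat{A}_t + \beta_1 \min(0, -\Delta(s_t) + \omega) + \beta_2 \min(0, \Delta(s_{t+1}) - \Delta(s_t)),$$

which the minimal sketch below computes (function and argument names are illustrative; $\beta_1, \beta_2$ are the balancing hyperparameters discussed elsewhere in this thread):

```python
def regularized_advantage(adv, delta_t, delta_next, omega, beta1=1.0, beta2=1.0):
    # adv        : PPO advantage estimate A_hat(s_t, a_t)
    # delta_t    : fairness violation Delta(s_t)
    # delta_next : fairness violation Delta(s_{t+1})
    # omega      : tolerable violation threshold
    value_thresholding = min(0.0, -delta_t + omega)         # negative only when Delta(s_t) > omega
    decrease_in_violation = min(0.0, delta_next - delta_t)  # dynamics term, exactly as written above
    return adv + beta1 * value_thresholding + beta2 * decrease_in_violation
```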
While similar to a sample-based Lie derivative, the decrease-in-violation term is not exactly a derivative. Instead, it imposes the soft requirement of negative-definiteness of Lie derivatives in Lyapunov conditions.\n\n[1] Wassim Haddad and Vijaysekhar Chellaboina. Nonlinear dynamical systems and control: A lyapunov-based approach. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach, 01 2008.\n\n[2] S. M. Richards, F. Berkenkamp, and A. Krause, “The lyapunov neural network: Adaptive stability certification for safe learning of dynamical systems,” in 2nd Annual Conference on Robot Learning, CoRL 2018, Zürich, Switzerland, 29-31 October 2018, Proceedings, 2018, pp. 466–476.\n\n[3] S. Liu, D. Liberzon, and V. Zharnitsky, “Almost lyapunov functions for nonlinear systems,” Automatica, vol. 113, p. 108758, 2020.\n\n[4] R. Bobiti and M. Lazar, “Automated-sampling-based stability verification and doa estimation for nonlinear systems,” IEEE Transactions on Automatic Control, vol. 63, no. 11, pp. 3659–3674, 2018.\n", " We thank the reviewer for the detailed comments and questions. Our responses are listed below (we refer citations to the original paper):\n\n> Would be useful to clarify (to me) what the novelty is in this paper: is it the method for optimizing and auxiliary objective in the RL setting, the exploration of fairness, or something else?\n\nAnswer: Our novelty largely stems from the way we impose fairness requirements in standard policy optimization procedures for long-term fairness. We enforce fairness by regularizing the advantage evaluation during the policy gradient step, rather than incorporating the fairness requirement as part of the reward function. We aim to address the reward-hacking issue specified by [12] in the future directions, by imposing fairness requirements without introducing additional reward engineering. We compare our approach with existing methods, and show it is a practical and more efficient algorithm that can be of use to the long-term fairness community.\n\n> How general is the proposed method? If can be used beyond fairness but you're just choosing to explore fairness here for some reasons, would be nice to walk through that\n\nAnswer: The reviewer brings up a good point. Technically, if $\\Delta$ can be formulated as a generic constraint to be minimized in the context of our RL setup, our proposed method still holds. However, we find this method to best fit the use case in addressing the problem of long-term fairness, and thus that is what we focus on. We will mention this point after defining our method.\n\n> Can you clarify more how to interpret the difference between A-PPO and R-PPO?\n\nAnswer: The Reward-Only Fairness Constrained PPO (R-PPO) is Proximal Policy Optimization (PPO) with fairness constraints applied only at the reward level as penalty reward terms, whereas the Advantage Regularization PPO (A-PPO) is PPO with fairness constraints applied only at the policy gradient level as advantage regularization terms. The only difference between R-PPO and A-PPO is how and where we enforce the fairness constraints during training. \n\n> line 142: Q value is never defined.\n\nAnswer: We will add a brief definition for Q value in the preliminaries.\n\n> line 168: there are a few hyperparameters to balance this function (the \\beta's). I'm not sure how this relates to the critique of existing methods on line 45, where the authors say that the hyperparameters weighting fairness and accuracy constraints may result in reward hacking. 
\n\n\nAnswer: In multi-objective reward functions, each objective is represented as a distinct term and weighted by importance. These objectives are linearly combined to form a scalar reward. However, when the objective weights are poorly selected, an agent can reward hack by maximizing for the wrong tradeoff between different objectives, and adopt undesirable behaviors. \n\nIn our case, our $\\beta$ hyperparameters exist at the policy gradient level in the advantage evaluation to enforce a specific property into the value function, and does not exist at the reward level which defines the objective. This specific property is a Lyapunov property, which on a high level enforces the policy to take actions that are more “fair” if the fairness constraint $\\Delta$ is violated beyond a certain threshold. The $\\beta$ terms are necessary because the magnitude of $\\Delta$ and its derivative are highly dependent on the training environment setup. These $\\beta$ terms provide a way to balance the advantage regularization terms across various scenarios.\n\n\n> line 168: I may just not be familiar enough to the relevant literature but I don't understand why the reference to Lyapunov stability is helpful here\n\nAnswer: Lyapunov stability is a framework in control theory for estimating and improving stability in a dynamical system by reducing some error signal, which in our case can be modeled as the violation of the fairness requirement over time. On a high level, Lyapunov stability characterizes the state of being stable in a dynamical system through the enforcement of certain properties in the Lyapunov landscape. We can use these same Lyapunov stability techniques to enforce fairness constraints in the context of RL by transforming the value landscape to a Lyapunov-like landscape through advantage regularization.\n\n> Line 303: what is the \"min-max normalization\" and what is it for?\n\n\nAnswer: Min-max normalization is a type of normalization that transforms each number in $x \\in \\mathbb{R}^n$ to between 0 and 1, inclusive. More specifically, for each real number in $x$, the minimum number becomes a 0, the maximum number becomes a 1, and all other numbers get scaled to a value between 0 and 1.\n\nWe use min-max normalization as a trick during training to achieve higher numerical stability by adjusting the scales of each term in our proposed advantage function $\\hat{A}_\\beta$ before performing their weighted sum. In general, normalization is not required for our methodology to work.\n\n", " We thank the reviewer for providing additional related works. Below, we discuss how these works are related to our work in detail.\n\nPaper [1] proposes a CMDP-based solution on the problem of long-term fairness in recommendation. They claim that prior strategies solve for fairness in recommendation in a one-shot context, and fails to consider the dynamic nature of the system where item popularities change over time. Thus, [1] studies how to maintain long-term fairness on item exposure for the task of splitting items into groups by recommendation, using a modified Constrained Policy Optimization (CPO) procedure. [1] and our work both solve long-term fairness problems in nonlinear dynamical systems. [1] models the problem as a CMDP, while our work models the problem as an MDP and uses advantage regularization techniques to enforce fairness. 
Our proposed method is much more cost efficient than CMDP-based methods at training time and better satisfies the task of achieving high overall utility while minimizing violations of the fairness requirements.\n\n[2] aims to address the long-term fairness problem in 3 parts. First, they define the fairness notion of return parity, which requires MDPs sharing the same state-action space across different demographic groups to achieve similar expected returns. They then provide a decomposition theorem for return disparity that partitions it into the disparities between group reward functions, group policies, and the group policies' state visitation distributions. Finally, they provide algorithms to decrease the return disparity by tasking policies to maximize their expected returns while maintaining similar state visitation distributions. Like us, [2] models their long-term fairness problem as an MDP, but they set up their constraint as the difference in the reward gaps across different demographic groups rather than defining it as a deterministic function of the state space. This requires that they design their fairness requirement directly into the reward function, which can be inflexible in that multi-task objectives bring up the issue of establishing trade-offs between different objectives and satisfaction of the fairness requirement. Our approach specifically aims to address this issue by enforcing fairness constraints at the policy gradient level, not the reward level. \n\nWe will add these works to the revised paper.\n\nFinally, we thank the reviewer for explaining the concern for the clarity around the fairness notion. We will make the definition more explicit in Section 3.\n\n[1] Ge, Yingqiang & Liu, Shuchang & Gao, Ruoyuan & Xian, Yikun & Li, Yunqi & Zhao, Xiangyu & Pei, Changhua & Sun, Fei & Ge, Junfeng & Ou, Wenwu & Zhang, Yongfeng. (2021). Towards Long-term Fairness in Recommendation. 445-453. 10.1145/3437963.3441824. \n\n[2] Jianfeng Chi, Jian Shen, Xinyi Dai, Weinan Zhang, Yuan Tian, and Han Zhao. Towards return parity in markov decision processes. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera, editors, International Conference on Artificial Intelligence and Statistics, AISTATS 2022, 28-30 March 2022, Virtual Event, volume 151 of Proceedings of Machine Learning Research, pages 1161–1178. PMLR, 2022.\n", " We thank the reviewer for the detailed comments and questions. Our responses are listed below:\n\n> What does the term \"human-designed policies\" mean in the summary of experimental results?\n\nAnswer: A human-designed policy is a handcrafted strategy designed by a human for a specific scenario that does not involve any learning-based algorithmic design. For instance, the greedy agent from the attention allocation scenario and the equality of opportunity (EO) agent in the loan scenario is a human-designed policy. We will further elaborate on what a human-designed policy is in the experiments setup section.\n\n\n> What is the contribution of the paper, is it in terms of technical modification of the PPO algorithm, or there are additional takeaway messages?\n\nAnswer: Our contribution comes in 2 parts.\n\nFirst, we demonstrated that RL approaches are effective for designing policies that can achieve long-term fairness. Note that in the work of [1] only human-designed heuristic policies are evaluated, and algorithmic approaches have not been proposed for policy design. 
Through these scenarios, we demonstrate that policy optimization methods in deep RL can find strictly better decision policies such that they achieve higher overall utility and violate less of the fairness requirements than previously known strategies.\n\nSecond, we propose a novel method for effectively imposing fairness requirements in standard policy optimization procedures by regularizing the advantage evaluation during the policy gradient step. This approach specifically addresses the reward-hacking issue, as discussed in the future directions in [1] by enforcing fairness requirements without additional reward engineering. We compare this approach with existing methods, and show that it is a simple, more efficient, and practical algorithm that can benefit the long-term fairness community.\n\nWe will emphasize the contributions listed above in the introduction.\n\n[1] Alexander D’Amour, Hansa Srinivasan, James Atwood, Pallavi Baljekar, D. Sculley, and Yoni Halpern. Fairness is not static: deeper understanding of long term fairness via simulation studies. In Mireille Hildebrandt, Carlos Castillo, L. Elisa Celis, Salvatore Ruggieri, Linnet Taylor, and Gabriela Zanfir-Fortuna, editors, FAT* ’20: Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, January 27-30, 2020, pages 525–534. ACM, 2020.\n\n\n> How does the proposed modification of PPO algorithm overcome the drawbacks/difficulties of current RL approaches (to algorithmic fairness) outlined in the introduction?\n\nAnswer: One drawback of standard policy optimization approaches is that the use of monolithic reward functions in the standard RL framework requires that they optimize for policies with respect to a single numerical value. Because the goal of decision systems is inherently multi-objective, the tradeoff between maximizing overall utility and minimizing violations of the fairness requirement becomes nontrivial at the reward level and may incentivize the learning agent to reward hack. \n\nIn our approach, we avoid adding the fairness constraint as yet another objective in the reward function, but instead enforce it at the policy gradient level using Lyapunov-like stability approaches. Our methods are based on methods that originated from control problems for enforcing safety or stability properties in learning-based controllers, and we show that fairness properties can be handled in a similar framework with the specific design of regularization that we proposed in the paper. The proposed methods are novel, and as demonstrated in the case studies, can obtain policies that outperform previously known methods in the sense of achieving both higher utility and less fairness violation. \n\nAs described in detail in the paper, one approach that we focus on comparing against is the framework of Constrained Markov Decision Processes (CMDPs). CMDP formulation requires learning algorithms to lower the expectation of constraint violation asymptotically and the training time can be arbitrarily long before any fairness properties can be established. In addition, CMDP-based methods require multiple rounds of numerical optimization (e.g. linear programming) at every step to satisfy the constraints, which can be expensive. Our formulation only requires the additional computation of the regularization term, which is constant time and is thus far more computationally efficient than the former. 
Empirically, we find that Constrained Policy Optimization (CPO), a CMDP-based policy optimization method, takes 2-3x more time than our A-PPO formulation to train a policy that achieves high overall utility and low violations of the fairness requirement. We can provide additional supporting evidence in our revision.\n", " The paper proposes a modification to PPO algorithm and presents an algorithm to constrain advantage disparity in the dynamic setup. The empirical results on multiple practical scenarios are also provided. The strength of the paper comes from a detailed presentation of the conducted experiments. The results are reported in three case studies (following D'Armor et al., 2020) and the comparisons involve various algorithms and fairness criteria.\n\nWeakness of the paper comes from the way materials are presented. In particular, it would be very helpful if authors can clarify several questions (detailed in Section `Questions`):\n\nQ1: what does the term \"human-designed policies\" mean in the summary of experimental results?\n\nQ2: what is the contribution of the paper, is it in terms of technical modification of the PPO algorithm, or there are additional takeaway messages?\n\nQ3: how does the proposed modification of PPO algorithm overcome the drawbacks/difficulties of current RL approaches (to algorithmic fairness) outlined in the introduction? ### Q1: what does the term \"human-designed policies\" mean in the summary of experimental results?\n\nIn presented experimental results, it is mentioned several times that the proposed A-PPO outperforms human-designed policies. I am a little bit confused of the used term here. Is the term \"human-designed\" referring to reward engineering (line 46)? If so, it seems that the proposed algorithm also involves an explicitly defined trade-off between utility and fairness violation, e.g., Equation 3. A further clarification would be very helpful.\n\n### Q2: what is the contribution of the paper, is it in terms of technical modification of the PPO algorithm, or there are additional takeaway messages?\n\nI think readers can benefit more from a clearer presentation of the contribution of the paper.\n\n### Q3: how does the proposed modification of PPO algorithm overcome the drawbacks/difficulties of current RL approaches (to algorithmic fairness) outlined in the introduction?\n\nIn introduction (e.g., paragraph 2 of Section 1), the paper listed several drawbacks/difficulties of current RL approaches towards fairness in dynamic settings. I am wondering how the proposed technical modification of the PPO algorithm overcomes some or all of the listed drawbacks. In light of the questions detailed in Section `Questions`, while the experimental results are relatively well documented, readers can benefit more from clearer presentations of the takeaway messages of the paper.", " The paper studies the long-term fairness problem in MDPs by proposing advantage regularization policy optimization methods to achieve high overall utility and mitigate violation of the fairness requirements. *Novelty* The paper studies the long-term fairness problem in MDPs. The problem is well-motivated, and this work is a novel combination of PPO and Lyapunov stability. Overall, the problem setting and methods are novel, but the authors fail to cite some recent works of fairness in MDPs [1-2].\n\n*Quality*: The paper is technically sound. In particular, I find the authors have a reasonable and interesting RL environment setup. 
\n\n*Clarity*: The paper is well written, but the fairness definition is unclear at the beginning before the case study. \n\n*Significance*: The long-term fairness problem is important. The paper seems to be a useful contribution to the literature in this area by proposing practical algorithms to achieve long-term fairness. \n\n[1] Towards Long-term Fairness in Recommendation, in WSDM 2021\n\n[2] Towards Return Parity in Markov Decision Processes, in AISTATS 2022 I suggest the authors discuss the fairness notion in the problem setup. The fairness requirement is a function of the state, and depending on the context, it has different implications. I did not know the fairness notion until I read the experiment. The authors have adequately addressed the limitations and potential negative societal impact of their work.", " This paper suggests a method for optimizing a reinforcement learning model which has some auxiliary objective beyond reward (e.g. a fairness objective). They run some experiments and analyze the difference in performance between their method and several others, focusing on application to fairness-type questions, showing the advantages of their method. Strengths:\n- this method seems very general and useful\n- exposition of the method and motivation is very clear\n- experiments support the claims of the method on a variety of simple RL problems\n\nWeaknesses:\n- the novelty of the work is not clear to me as I think I am not quite familiar enough with the relevant RL literature. I'm not sure how many similar methods exist for optimizing auxiliary objective functions like this\n- I'm not quite sure how specific the proposed method is to fairness - if very specific, it should be explained why. If very general, I think the exposition of the paper should be a little bit different to communicate that it's a general method they are specifically exploring with respect to fairness\n\nMinor points:\nline 142: Q value is never defined\nline 168: there are a few hyperparameters to balance this function (the \\beta's). I'm not sure how this relates to the critique of existing methods on line 45, where the authors say that the hyperparameters weighting fairness and accuracy constraints may result in reward hacking.\nline 168: I may just not be familiar enough to the relevant literature but I don't understand why the reference to Lyapunov stability is helpful here\nExperiments: can you clarify more how to interpret the difference between A-PPO and R-PPO?\nLine 303: what is the \"min-max normalization\" and what is it for? - Would be useful to clarify (to me) what the novelty is in this paper: is it the method for optimizing and auxiliary objective in the RL setting, the exploration of fairness, or something else?\n- How general is the proposed method? If can be used beyond fairness but you're just choosing to explore fairness here for some reasons, would be nice to walk through that\n- some clarifications on how to interpret the difference between A and R-PPO none", " The authors propose an approach to extend policy optimization algorithms to consider fairness requirement. To achieve this, they extend the MDPs to include two fairness related terms: (i) fairness metric regularization which penalizes when the fairness metric is above a threshold, and (ii) improvement over time term, which penalizes if the fairness metric increases in the next time step. This makes the model fairness aware. The extension proposed is simple, intuitive and easy-to-implement. 
Further, the experiments and results are neatly presented with several case studies. \n\nThe primary weakness is the limited magnitude of novelty. The authors propose two simple regularization terms without any mathematical analysis. They have also not presented other possible alternatives they considered or any detailed justification for their final choice.\n * Did you try other regularization terms or variations of the value-thresholding term and decrease-in-violation term in Eq. 3? Can you justify mathematically why the terms used would be optimal? Yes, the authors have adequately addressed the limitations and potential negative impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "ooetGd8k8MK", "8MJyfGXu6-4", "addoRHgWidb", "b4qoQOJ7IWC", "QpsBZn0mnP8", "vaDPMA-yp1U", "qzpbiFrEgAq", "BTH3L4QgcKd", "YMKfO9KVDpf", "nips_2022_1wVBLK1Xuc", "nips_2022_1wVBLK1Xuc", "nips_2022_1wVBLK1Xuc", "nips_2022_1wVBLK1Xuc" ]
nips_2022_5yAmUvdXAve
Cluster and Aggregate: Face Recognition with Large Probe Set
Feature fusion plays a crucial role in unconstrained face recognition where inputs (probes) comprise a set of $N$ low-quality images whose individual qualities vary. Advances in attention and recurrent modules have led to feature fusion that can model the relationship among the images in the input set. However, attention mechanisms cannot scale to large $N$ due to their quadratic complexity, and recurrent modules suffer from input order sensitivity. We propose a two-stage feature fusion paradigm, Cluster and Aggregate, that can both scale to large $N$ and maintain the ability to perform sequential inference with order invariance. Specifically, the Cluster stage is a linear assignment of $N$ inputs to $M$ global cluster centers, and the Aggregation stage is a fusion over the $M$ clustered features. The clustered features play an integral role when the inputs are sequential, as they can serve as a summarization of past features. By leveraging the order invariance of the incremental averaging operation, we design an update rule that achieves batch-order invariance, which guarantees that the contributions of early images in the sequence do not diminish as time steps increase. Experiments on the IJB-B and IJB-S benchmark datasets show the superiority of the proposed two-stage paradigm in unconstrained face recognition.
Accept
The paper received 4 positive reviews, and the reviewers increased or maintained their scores after the rebuttal. The paper pursues a useful direction of unconstrained face recognition. All reviewers agree that the results are impressive and the method has novelty.
val
[ "K6mSt8VEc0f", "XByGrDlao3Q", "QWxWyZ3iRb8", "Cg8eQdj25rD", "4JIOUl3PeB1", "udpVhsSWAXC", "CyZUkt1phIP", "Q3_IBU3mFrL", "meE4u9yVic_", "8OsRqLX0ze", "OQJSo7hs2V", "LRafArfhzpN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' effort to address my concerns and comments. \n", " The rebuttal has solved most of my former concerns and this paper is ok for acceptance based on the positive comments from other reviewers.", " Thank you for the answers. I believe that most questions have been properly answered and, therefore, the paper remains acceptable.", " Thanks for your answers. They addressed most of my concerns pre-rebuttal.\nI have updated my score.", " We thank the reviewer for taking the time to evaluate our work. The reviewer recognized that the two-stage architecture is interesting and the experiment results are strong. \nThe reviewer observed that the paper could be potentially acceptable with a thorough revision to improve its readability. We acknowledge the suggestion and rewrote Section 3 to present the work more clearly. \n\n**Q1. Do Center-Loss and Non-Local neural Networks overlap with the proposed work?**\n\nThis is an important question as it highlights where our novelty lies. \n\n__Non-Local Network__ Non-Local Neural Network or its application in feature fusion (RSA) is based on the (multi-head) attention mechanism. Our clustering mechanism is different from the attention mechanism in two regards because we are performing an assignment. \n\n1) While the attention mechanism performs softmax normalization over the index of keys (row), we perform over the index of queries (column), so that each value is assigned to at most one query. This alone would not make the algorithm analogous to clustering. We also modify the query to be globally shared. In this way, for each set of inputs, the affinity to the global center is computed and the assignment to one of the centers is ensured. To further aid the understanding, we included Fig.4 in the main paper which highlights the changes from the attention computation. \n\n2) Secondly, our usage of style information as the keys for clustering is crucial for clustering to work. In calculating the affinity map $A$ with global centers, as opposed to using task-specific features $f_i$ for clustering, we use a style vector extracted from the intermediate features. Style vector contains information such as image quality, facial pose that are useful for feature fusion. The table below shows the performance gain using the style input. The lesson from our clustering method, one needs to decouple the keys and values and choose the appropriate representation for clustering. \n\n| |IJB-B TAR@FAR=1e-3|IJB-B TAR@FAR=1e-4|IJBS(avg)| \n|-|-|-|-|\n|without style input |$96.32$|$94.54$|$53.98$| \n|with style input |$96.91$|$95.53$|$57.55$| \n\n__Center Loss__ We emphasize that we do not claim novelty in the loss function. Indeed, the function that reduces the similarity between the fused and the template was used previously. We did not name our loss as Center loss as it is technically different. Center-loss uses $L2$ distance and we use the angular distance. But we agree it is conceptually similar so we included a proper citation in the revision. \n\n**Q2. The method is hard to follow**\n\nWe have rewritten Section 3 to improve readability. The reviewers' questions helped improve the flow of the text.\n\nBelow we answer the specific questions.\n\n1. __How does $P$ show the importance of each cluster?__ Let's provide an example. \nConsider a scenario with $4$ cluster centers. The output of the Cluster Network is $F' \\in R^{4\\times 512}$ (512 is the dim of a feature). 
In other words, it is $4$ intermediate features corresponding to the 4 cluster centers. \n$P \\in R^{4 \\times 512}$ is a predicted matrix with the same shape as $F'$. The final fused output is an element-wise multiplication of $P$ and $F'$ and sum (analogous to weighted averaging) to create $f\\in R^{512}$. So, each row of $P$ governs how much each row of $F'$ is contributing during the aggregation and each row of $F'$ is the result of an assignment to each cluster center. \n\n2. __What is T-step sequential inference with batch order invariance?__ It refers to dividing the length $N$ inputs into batches of smaller length $N'$ during inference. It is necessary when $N$ is large, as a concurrent inference is infeasible. A unique property of our method is batch-order invariance. Even when we shuffle the order of the partition, CAFace obtains the same result by the model design. It improves upon the shortcomings of the recurrent module paradigm which suffers from the reduced contribution of early inputs. \n\n3. __Sample order vs batch order?__\nSample order refers to the order of individual samples in a sequence. Batch order is the order of batches when one divides the sequence into batches. \n\n4. __Is identity \"person identity\" or the mathematical function of identity?__ It is the \"person identity\". To avoid confusion, we changed the text to use facial identity when necessary. \n\n**Q3. Relevance to NeurIPS**\n\nOur proposed cluster and aggregate paradigm is a general feature fusion method. Apart from its application in face recognition and its SoTA performance, the community can benefit from the work in the following regard.\n\n1. A learned clustering method that is differentiable and optimized for a given task. This is a modification of the attention mechanism. We show that the shared queues and style input can perform task-oriented clustering. \n\n2. The update rule in the cluster and aggregate paradigm is batch order invariant, a useful property that improves the shortcoming of the recurrent module.", " We appreciate the reviewer for taking the time to evaluate our work. The reviewer recognized that the paper is well motivated and the idea interesting. Furthermore, the experimental results show improvements against prior works. In the following, we address the reviewer's concerns. \n\n**1. Multihead self-attention is adopted with the attention matrix of (N + M) x (N + M). As N could be very large, how efficient is this design?**\n\nIt is true that in CN, calculating affinity matrix is quadratic in computation, which is costly if $N$ is large. However, we divide $N$ inputs into batches of length $N'$. For instance, in IJB-S dataset, the probe size, $N$ could be $10,000$. We set $N'$ to be $256$. This is the key to reducing computation costs and memory consumption. One can ask if then the inner-set relationship is only limited to $256$ samples. Specifically, to avoid this and allow communication across batches, we design an update rule for merging the intermediates. Therefore, when the subsequent Aggregation network fuses the intermediates, it considers all inputs in a condensed way. \n\nOne more benefit is that Style Input maker creates a style vector of length $64$, as opposed to the identity feature whose length is $512$. So using the style vector for attention calculation is more efficient. \n\nSupp.D includes detailed resource and efficiency comparisons with previous works. We show that our work is fast and does not suffer from the out-of-memory issue when $N$ is large.\n\n**2. 
Does the correlation of samples within a batch affect the assignment of a sample to one of $M$ centers? How is the permutation conducted?**\n\nThe short answer is yes. The longer answer is that it is not a critical factor because 1) the performance does not vary much with random set element permutation, and 2) the intra-set relationship is considered in both the Cluster Network (CN) and the Aggregation Network (AGN). \nSpecifically, the intra-set correlation comes into play in CN when $A$ is calculated, and $A$ is an affinity between samples and the global query $C$. To show how the global query is learned, in Supp.G, we provide 3 different scenarios where the inputs are 1) mixed quality, 2) all poor quality, and 3) all good quality from the same model. In the case of mixed quality, a few good-quality samples are strongly assigned to cluster 1, which seems to be the place for high-quality samples. However, when all of the samples are good quality, there is a redundancy of information, so despite being high quality, many are assigned to cluster 4, which seems to be the place for bad-quality samples. This shows that the assignment is affected by the intra-set relationship. But the subsequent fusion with AGN calculates the importance of each cluster ($P$), so all of the varied assignments work toward creating a good fusion result. Also, since the queries $C$ are global, each cluster can be assigned unique roles, which makes it easy for the AGN to fuse them. \n\nTo answer how permutation is conducted in Table 3, the shuffling of the order is done by first randomly shuffling all samples in the probe and then partitioning with $N'=256$. This creates different batch configurations as $N$ is larger than $7000$ on average. \n\n**When M = 1, does the accuracy partially indicate the contribution of the Clustering Network, as most of the fusion takes place in this stage?**\n\nThank you for the insightful question. The setting $M=1$ cannot indicate the contribution of the Clustering Network. This is because when $M=1$, the assignment map $A$ is always a matrix filled with $1.0$ and thereby degenerates to naive averaging. Note that $A$ is computed with a column Softmax and there would be only one element to normalize with. In Fig.4 of our revision, we illustrate the difference between the usual multi-head attention and our clustering mechanism.\n\n**For prior works such as PFE and CFAN, are they retrained with the same training set as CAFace?**\n\nYes. The experiments are conducted with the same training set. We conducted the benchmark experiments by sharing the dataset and pretrained feature extractor and following the implementations, by using the released code of previous papers or by contacting the authors to obtain the official implementation. We will release the code for reproducibility upon publication. \n\n**Experiments with varied $N$**\n\nThank you for the question. In Supp.H we show experiments where we vary the number of probes in the given test set IJBS. They show that CAFace benefits from large $N$. Also, throughout the paper, we use 4 different test sets (IJBA, IJBB, IJBC and IJBS), and they differ in the average probe size ($9$, $21$, $23$, $7114$). The complete performance comparison is included in Supp.C, and Fig.1 c) of the main paper plots the relative gain over the naive method for the four datasets. The benefit of CAFace becomes more pronounced as the probe size increases. \n", " We thank the reviewer for the effort in reviewing our paper. For shared context across the responses below, we first include a minimal sketch of the cluster-and-aggregate computation. 
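All shapes and names in this sketch are illustrative assumptions, not our exact implementation; the column softmax and the $(M, 512)$ intermediates follow the descriptions given in the responses above.

```python
import torch

def cluster_and_aggregate(f, s, centers, agn):
    # f       : (N, 512) identity features of one batch
    # s       : (N, 64)  style vectors from the Style Input Maker
    # centers : (M, 64)  globally shared, learnable cluster centers
    # agn     : callable mapping (M, 512) intermediates to an importance map P of shape (M, 512)
    affinity = centers @ s.T / s.shape[1] ** 0.5  # (M, N) affinities between centers and samples
    A = affinity.softmax(dim=0)                   # column softmax: soft-assign each sample to centers
    F_prime = A @ f                               # (M, 512) clustered intermediate features
    P = agn(F_prime)                              # (M, 512) importance of each cluster
    fused = (P * F_prime).sum(dim=0)              # (512,) fused feature
    return fused, F_prime, A

def merge_intermediates(F_old, w_old, F_new, w_new):
    # One simple batch-order-invariant update: a running average of the clustered
    # intermediates, weighted by per-cluster assignment mass w = A.sum(dim=1).
    w = w_old + w_new  # (M,) accumulated mass; positive since softmax masses are positive
    merged = (w_old.unsqueeze(1) * F_old + w_new.unsqueeze(1) * F_new) / w.unsqueeze(1)
    return merged, w
```

Merging the per-batch intermediates this way yields the same result regardless of the order in which batches arrive, which is the batch-order invariance referred to throughout.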
The reviewer recognized that our work (1) addresses a practical research topic for set-based face recognition, and (2) experiment results are strong. We also agree that this is a practical problem that may benefit the face recognition community. In the following, we address the reviewer's concerns. \n\n**1. Why IJB-C is not tested and reported?**\n\nA: It is reported in Tab.3 of Supp.C. \nThe table is placed in Supplementary due to the space limit. Furthermore, the trend is highly consistent with IJB-B. So only the performance trend of IJB-C is mentioned in line 277. \n\n**2. Efficiency comparison is suggested to move to the main paper.**\n\nA: Thank you for pointing out the importance of showing the efficiency gained using our work. Due to the space limit, we could not fit the whole table, but we included a paragraph in the revision for interested readers [line 293]. \n\n**3. Motivation for each module is hard to follow.**\n\nA: We agree that the algorithm explanation in Section 3 could be improved. Therefore we have revised Section 3 to improve the readability and highlighted the motivation of each module. We did not change the actual algorithm or experiment results from the initial submission. \n\nBelow is the summarized motivation for each module in our work. \n\n1. __Style Input Maker:__ We hypothesize that the style vector is a better representation than the identity feature for clustering. We show in the Tab.1 of the main paper and the below table that style is a better input for clustering than the identity features.\n\n2. __Clustering Network__: To split inference into multiple batches, we require storing $M$ intermediates. Clustering Network is a way to assign inputs into global $M$ clusters. The intermediates mapped by CN can be updated across batches with batch order invariance. It is a task-oriented clustering module. \n\n3. __Aggregation Network__: It is a module for combining the intermediates into a single output. Without this module, it is not possible to determine which clusters are more important than the others. We show, in the below table, the performance without the aggregation module. \n\n| |IJB-B TAR@FAR=1e-3|IJB-B TAR@FAR=1e-4|IJBS(avg)| \n|-|-|-|-|\n|without SIM, with AGN |$96.32$|$94.54$|$53.98$| \n|with SIM, without AGN |$96.04$|$94.25$|$53.87$|\n|with SIM, with AGN |$96.91$|$95.53$|$57.55$| \n\n\n**4. Typo needs to be fixed and overlapped subscript needs to be changed.**\n\nThe changes have been made in the main paper. \n", " We highly appreciate the effort the reviewer has invested in reviewing our paper. \nThe reviewer recognized that our work 1) can handle a large number of images per probe (N), thereby improving upon the shortcomings of previous methods,\n2) shows strong performance in various benchmarks and 3) contains interpretable results which are aligned with intuition. \n\n**Q1: The scope of the work is limited to face identification and not verification.**\n\nA: We respectfully do not agree. CAFace is applicable to both identification and verification. Even in verification, the comparison may happen between two 'sets' of images. For instance, in IJB-B dataset, the verification task requires verifying if two sets of video frames or multiple still images are from the same subject. Tab.3 of the main paper also show results for both identification and verification. \n\nThe confusion arises from the paper using the terminology of 'probe' and 'gallery', which is less common in verification. 
However, one of the de facto prior-CNN benchmark, FERET, https://nvlpubs.nist.gov/nistpubs/Legacy/IR/nistir6281.pdf (page 6) uses the terms in verification as well. The terms can be used to describe the composition of a database, upon which the same/different subject pair can be formed. But we updated the first paragraph of Sec.1 to avoid confusion. \n\n**Q2: Unclear Algorithm**\n\nA: To improve the readability and clarify concerns on the algorithm, we rewrote Section 3. \nIn the meanwhile, we answer the specific questions below. [line] indicates the line number in the revision. \n\n1. __When are the cluster centers are trained?__ The cluster centers are learnable parameters that are shared by all subjects and are initialized randomly during training. Therefore the gradients that minimize the loss also update the centers to optimize the assignment [line 145]. We emphasize that the center is not predicted per sample, but it is a \\textbf{globally shared center}. The proximity of each image to the centers is analogous to the values in the assignment map $A$ as it is the affinity map between the centers and the input samples. Fig.5 of the main paper shows one example, where each column represents the soft assignment to 4 distinct centers. Initially, we had hoped to find semantically meaningful clusters such as pose or expression, but instead, we found that the visual quality of images was optimal for fusion, as bad-quality images are assigned to cluster 4. \n\n2. __What is__ $f_{GT}$? In a general setting, it is the target where the predicted fused output $f$ needs to be close to. For face recognition, $f_{GT}$ will be the discriminative vector that maximizes the inter-person distance and there will be as many $f_{GT}$ as the number of training subjects. As a general rule, during training, $N$ inputs are all coming from the same subject, and during testing, this is usually true, given the excellent performance of SOTA face detectors and trackers. As for obtaining $f_{GT}$, we have two options. First, we can use the weights in the pretrained feature extractor's classifier which serve as the class centroid of all training subjects. Secondly, we estimate $f_{GT}$ by computing the per-subject average of $f_i$ for all training data. During the actual training of CAFace, the inputs are augmented so the quality of $f_i$ will be low. \n\n3. __What layer is the intermediate feature taken from?__ It is taken from layers $2$ and $3$ (out of $4$ layers in R100 model) [Supp. line 7]. The mean and standard deviations are concatenated and mapped to $64$-dim style vector $\\gamma_i$ [line 171]. \n\n\n**Q3: Ablations to verify the necessity of each part.**\n\nA: Here is the performance as measured in Tab.1 of the main paper. \n\n| |IJB-B TAR@FAR=1e-3|IJB-B TAR@FAR=1e-4|IJBS(avg)| \n|-|-|-|-|\n|without SIM (only $f$), with AGN |$96.32$|$94.54$|$53.98$ | \n|with SIM, without AGN |$96.04$|$94.25$|$53.87$|\n|with SIM, with AGN |$96.91$|$95.53$|$57.55$| \n\n1. __Train $f$ (without style) as an input to the Clustering Network.__\nAs the comparison with the 1st and the 3rd row shows, style input is more effective in feature fusion. We explain that clustering using the learned center is made difficult with $f$ the identity feature. It lacks the quality information and characteristics that can be grouped irrespective of the identity. Therefore, SIM is crucial to feature fusion. \n\n2. __Replace AGN with a simple average.__\nAs the comparison with the 2nd and the 3rd row shows, the role of AGN is also important. 
This is because the learned centers play different roles, and one of the centers serves as a sink for bad-quality images (as shown in Fig.5). Therefore, a separate module that considers the importance of each cluster is necessary. \n\n**Q4: What are C0, C1, and C2 in Fig.6?**\n\nA: $C0$, $C1$, $C2$, $C3$ in Fig.6 of the main paper refer to the four intermediate representations $F'$. We have changed their names to $F0',F1',F2',F3'$ and updated the caption to clarify.", " This paper aims to improve face recognition performance in case each probe has multiple images by proposing Cluster and Aggregate (CAFace), a two-stage technique to better aggregate extracted probe features. In the Cluster stage, the model learns to softly assign N inputs into M global clusters, with the number of clusters M predefined. It first passes the extracted features through a Style Input Maker (SIM) to extract \"style\" features, then passes the \"style\" features through a Cluster Network (CN) to softly assign them to clusters based on learnable cluster centers. In the Aggregation stage, the model computes M clustered features, then fuses them using an Aggregation Network (AGN) to get the final output. CAFace divides inputs into T-step sequential inference with incremental averaging and guarantees set order invariance by employing a Set Permutation Consistency loss. CAFace provides the best performance boost compared with other feature fusion techniques on IJB-A, IJB-B, IJB-C, and IJB-S benchmarks. ### Strengths\n- As shown in Fig. 1b, CAFace avoids the weaknesses of previous methods. It can handle a large number of images per probe (N) while considering intra-set relations. It is batch-order invariant, and guarantees set order invariance by employing a Set Permutation Consistency loss.\n- CAFace provides the best performance boost compared with other feature fusion techniques on IJB-A, IJB-B, IJB-C, and IJB-S benchmarks.\n- CAFace seems to group low-quality images into a cluster, which has a fusion weight near zero.\n\n### Weaknesses\n- L18-19: This definition is only valid for Face Identification, one of two subproblems in Face Recognition. In Face Verification, we do not have a Gallery or a Probe.\n- It is unclear to me when the cluster centers are trained. Are they trained along with the fusion network and universal to all probes? Or are they optimized to fit each probe? In the case of universal cluster centers, adding some analyses of facial images near the cluster centers is recommended.\n- The proposed method seems over-complicated. The authors should add more ablation studies to show the need for the proposed network components. For example, what if we do not use the style embeddings but learn the cluster centers for identity features f_i themselves? What if we do not use the Aggregation Network and simply average the clustered features in F'?\n- In Section 3.2, the definition of f^{(p)}\_{GT} is confusing. Is not all training data labeled? What is the difference between f^{(p)} and f^{(p)}_{GT}?\n- The authors should improve the description for Fig. 6:\n - What are C0, C1, and C2? Are they cluster centers? Why is C3 not visualized?\n - In the CAFace plot, there are many blue points with high cosine similarity to the Gallery. 
The authors should visualize and analyze the differences between the images corresponding to these points and the red/yellow points, to understand what makes them receive different weights.\n- Which network layer was used to extract the intermediate feature maps M_i in the experiments, and what is the feature map size?\n- In Table 1, an empty cell can either be in a merged cell or mean \"No\". It may cause confusion. The authors should improve this, e.g., use 'x' to denote 'No' instead of using an empty cell.\n- The norm embedding definition should be moved to the main text. The paper mentioned Limitations and Potential Negative Societal Impacts.", " This paper proposes a two-stage feature fusion paradigm, Cluster and Aggregate, that can both scale to large N and maintain the ability to perform sequential inference with order invariance. \n\nThe clustering stage is a linear assignment of N inputs to M global cluster centres, and the aggregation stage is a fusion over M clustered features. The clustered features play an integral role when the inputs are sequential, as they can serve as a summarization of past features. \n\nBy leveraging the order invariance of the incremental averaging operation, this paper designs an update rule that achieves batch-order invariance, which guarantees that the contributions of early images in the sequence do not diminish as time steps increase. Strengths:\n\n(1) Feature fusion is a practical research topic for set-based face recognition.\n\n(2) Experiments on IJB-B and IJB-S benchmark datasets show the superiority of the proposed two-stage paradigm in unconstrained face recognition.\n\n Weaknesses:\n\n(1) IJB-C is mentioned in this paper but only the performance on IJB-B is reported.\n(Line 227: IJB-C is an updated version of IJB-B with more complex motions in the video)\n\n(2) The efficiency comparison is given in the supplementary; it would be better to include it in the main paper, as 'Intra-set calculation becomes infeasible when N is large or sequential for intra-set methods' is one of the main motivations of the proposed method.\n\n(3) The idea of clustering and aggregation is clear. 
However, the motivations for implementing it in this way (Fig 3) are hard to follow.\n\n(4) Typos in the abstract (\"diminish\"); M is the cluster number but is also used in line 138 as C_M. Please reply to my questions raised in the weaknesses:\n\n(1) Why is IJB-C not tested and reported?\n\n(2) The efficiency comparison is suggested to be moved to the main paper.\n\n(3) There are too many implementation details in Fig 3, and the motivations behind these designs are hard to follow.\n\n(4) The typo needs to be fixed and the overlapped subscript needs to be changed. The limitations and potential negative societal impact are discussed at the end of this paper.", " In this paper, the authors introduce a feature fusion framework, namely Cluster and Aggregate (CAFace), for face recognition with a large probe set.\nCAFace consists of two main modules: Cluster Network (CN) and Aggregation Network (AGN). \nWhile the first module aims at grouping images into M clusters (based on their quality) and fusing the features within a cluster (i.e. based on their correlation to the centers), the second module aggregates features of all M clusters (i.e. of different quality) into a single feature to represent the ID.\nThe proposed approach is validated on IJB-B, IJB-C, and IJB-S. STRENGTHS:\n- The paper is well-motivated.\n- The idea of two-step feature fusion (i.e. going from N features to M intermediate cluster features, then aggregating these intermediate features into a single fused output) is interesting. \nWhile the first step aims at fusing features within a cluster (i.e. similar image quality), the second step aggregates features of all clusters (i.e. different quality).\n- Experimental results show improvements over prior works.\n\nWeaknesses\nAlthough the paper is well-motivated and the experiments seem complete with ablation studies, there are some concerns regarding the design and efficiency of the proposed approach. Please see the Questions section.\n\n 1. For the cluster network (lines 153-155), multi-head self-attention is adopted with an attention matrix of (N + M) x (N + M). As N could be very large, how efficient is this design? \n\n2. Does the correlation of samples within a batch affect the assignment of a sample to one of the M centers? If yes, the set element permutation may significantly affect the accuracy. Moreover, how large is the contribution of the intra-set relationship in the Aggregation Network?\nIn the experiment of Table 3, CAFace seems to be robust to the order. How are the probe images shuffled in this experiment? In other words, does the set element permutation differ significantly among runs?\n\n3. When M = 1, the accuracy partially indicates the contribution of the Clustering Network and its attention, as most of the fusion takes place in this stage. However, as shown in Table 1, its accuracy is similar to Naive Average Fusion.\nHow effective is the attention learned by the Clustering Network?\n\n4. For prior works such as PFE and CFAN, are they retrained with the same training set as CAFace? \n\n5. The experiment would be more complete with an analysis of the effect of N (i.e., from a small to a very large number of samples per ID) to show the advantage of CAFace in comparison to prior works. Yes
Strengths:\n - the idea to use a set of networks that approach different tasks for the aggregation of embeddings is interesting\n- results better than SOTA; ablation inclusion\n\nWeaknesses:\n- the method is dedicated to the FR problem, which is important but has a rather limited audience in the NeurIPS context\n - while the idea to use two different nets is appealing, the approach for each of them is known. Clustering may be traced back to Center-Loss; refs. 31 and 52 brought the idea of aggregation.\n - the method is very hard to follow. Overall the figures complement the description rather well, but the latter is rather uninspiring. For instance, the second part of eq. (1) says nothing that was not explainable in plain text; eqs. (2-3-4) are a combination of mathematical equations and acronyms, and so on. Overall, too many non-intuitive notations are gathered in a very condensed text, so it is not easy to read. \n\nOverall I see the paper as being potentially acceptable, but it needs a thorough revision of the method presentation to improve its readability.\n\nPost rebuttal: My questions and other raised questions have been answered, therefore I still view this paper as acceptable. What does it mean:\n- \"The magnitude of P is an interpretable quantity showing the importance of each cluster during fusion\" \n- \"sequential inference with incremental averaging\" l180\n- \"sample orders\" from \"invariant to sample orders\" l184\n- \"invariance to the order of batch\" l190\n- \"identity,\" l197: \"person identity\" or the mathematical identity function? The paper addresses the face recognition theme, which is one of the topics in ML/CV that has raised the most ethical issues. Its main usage refers to retrieving the identity of a person from a low-quality source (a potential surveillance camera).\n \nThe authors included a short discussion on potential negative societal impacts. Also, the paper clearly states that it avoided datasets that were withheld \"by their distributors due to privacy and other issues\".\n\nYet the latter aspect is debatable: the paper did not use VGGFace-2 because it was withheld by the VGG authors as being under investigation by their university legal department, but used the WebFace4M dataset, collected under the same conditions as VGG, just larger, but still publicly available.\n\nHowever, again, I must emphasize that this paper does not introduce any new dataset, and therefore has not broken any confidentiality. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4, 4 ]
[ "udpVhsSWAXC", "CyZUkt1phIP", "4JIOUl3PeB1", "Q3_IBU3mFrL", "LRafArfhzpN", "OQJSo7hs2V", "8OsRqLX0ze", "meE4u9yVic_", "nips_2022_5yAmUvdXAve", "nips_2022_5yAmUvdXAve", "nips_2022_5yAmUvdXAve", "nips_2022_5yAmUvdXAve" ]
nips_2022_toleacrf7Hv
Parameter tuning and model selection in Optimal Transport with semi-dual Brenier formulation
Over the past few years, numerous computational models have been developed to solve Optimal Transport (OT) in a stochastic setting, where distributions are represented by samples and where the goal is to find the closest map to the ground truth OT map, unknown in practical settings. So far, no quantitative criterion has yet been put forward to tune the parameters of these models and select maps that best approximate the ground truth. To perform this task, we propose to leverage the Brenier formulation of OT. Theoretically, we show that this formulation guarantees that, up to a sharp distortion parameter depending on the smoothness/strong convexity and a statistical deviation term, the selected map achieves the lowest quadratic error to the ground truth. This criterion, estimated via convex optimization, enables parameter tuning and model selection among entropic regularization of OT, input convex neural networks and smooth and strongly convex nearest-Brenier (SSNB) models. We also use this criterion to question the use of OT in Domain-Adaptation (DA). In a standard DA experiment, it enables us to identify the potential that is closest to the true OT map between the source and the target. Yet, we observe that this selected potential is far from being the one that performs best for the downstream transfer classification task.
Accept
While there were a few misunderstandings, the rebuttal successfully convinced all the reviewers that the paper should be accepted. Please take into account the reviewers' comments in preparing the camera-ready, especially the ones concerning the clarity of the paper.
train
[ "Xg90JkFd42r", "NM8p_w_FVeX", "-9ZsM_i2vVl", "9LPIy7REe85", "NPBhit9-aX5g", "0xch6tKy5q", "vcpNLkPCRZ8", "fh1flYMSjH", "cSEDcYRrfFL", "U_qfA950ZVZ", "Ng0Ir1spmuu", "1kUAyszI3ML", "CbVeAcVV4wO", "4ZVlCT87ShU", "XIx58E8IeYB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors have answered my questions and I increase the score to borderline accept.", " Hi, thank you for your answer. I checked the proposition carefully and I understand my mistake here. I increased the score to 6. ", " Dear reviewer.\n\nIndeed, the semi-dual $J(f)$ diverges because the Legendre transform is taken over the whole domain as it should be. We do not really understand what is bothering you. Indeed, it may sound bothering from a practical point of view: the candidate $f(x) = x$ seems like a nice candidate yet it is heavily sanctioned by the semi-dual ; it was the goal of this proposition to show that if the strong convexity assumption is removed, the semi-dual may not be the right criterion to use. However, quadratic regularization can help as highlighted in Proposition 4 and in the discussion in Sec 5 around Sinkhorn model.", " Dear reviewer,\n\nThank you for taking into account our response. Concerning your question *what can this criterion be useful for?*, we shall answer in three points:\n\n(1) we believe it should useful for the statistical OT community, for benchmarking purposes: assume you are a statistician and you design some estimator $f_\\lambda$ of the OT potentials where $\\lambda$ is some parameter and you want to know if your model is better than Sinkhorn $f_\\varepsilon$. You generate some training data to train your model and Sinkhorn for different parameters, you compute the semi-dual on test data for each $\\varepsilon$ and $\\lambda$, you take for Sinkhorn $\\varepsilon$ that minimizes this semi dual and for your model the $\\lambda$ that minimizes this semi dual. Finally you compute the error for the two models on independent data and you observe which model is the best. \nThis whole procedure provides a way to fairly compare models between each other. Before that, one could only compute the error for different $\\lambda$ and $\\varepsilon$ and one would usually some that for some $\\lambda$ and $\\varepsilon$ we had $e_\\mu(f_\\lambda) <e_\\mu(f_\\varepsilon)$ and for some, we had $e_\\mu(f_\\lambda) > e_\\mu(f_\\varepsilon)$ which makes it hard to decide objectively which one is the best. This use case was the main motivation to write our paper.\n\n(2) just as we did for domain adaptation, the criterion can be used to determine whether OT is truly relevant for some applications. Given the current \"hype\" of OT in the ML community that can sometimes be unjustified, we believe it would be healthier to have this kind of tool.\n\n(3) we believe the criterion could be useful for time interpolation purposes ; for the sake of concreteness, we give here one example in computational biology: in [1], the RNA cell expression profile is taken at different time steps $(t_i)$ giving (empirical) genes expression distributions $\\mu_{t_i}$ (that are embedded in $\\mathbb{R}^d$). Then, these distributions are interpolated for any $t \\in [t_i,t_{i+1}] $ using an OT model between $(\\mu_{t_i}, \\mu_{t_{i+1})$ ; we believe our criterion could be used to pick the best OT model, leading to smoother interpolations. Unlike the DA case, where the OT map introduction was ad-hoc, the OT map is explicitly searched here. We plan to benchmark this potential application in future work. \n\n[1] Geoffrey Schiebinger, Jian Shu, Marcin Tabaka, Brian Cleary, Vidya Subramanian, Aryeh Solomon, Joshua Gould, Siyan Liu, Stacie Lin, Peter Berube, et al. Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming. 
\n\n[1] Geoffrey Schiebinger, Jian Shu, Marcin Tabaka, Brian Cleary, Vidya Subramanian, Aryeh Solomon, Joshua Gould, Siyan Liu, Stacie Lin, Peter Berube, et al. Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming. In Cell 2019.", " Hi, in Proposition 1, you did not provide proof for your first claim: \"Take $\mu \sim U[-1/2, 1/2]$ and $f_0$ of the form $\lambda q(x) + x$ with $\lambda \geq 0$. The potential $f(x) = x$ is indeed convex with Lipschitz gradient and is such that $J(f) = \infty$ yet $e_{\mu}(f) = \frac{\lambda^2}{4}\to 0$\". This claim is very questionable: both $\mu$ and $\nu$ have finite compact support with nice pdf's, but $J$ equals $\infty$? I think it is likely because of the Legendre transform. Please let me know if I do not understand it correctly, or whether there is somewhere you provided the proof that I am not aware of.", " Thanks for your reply, some points I raised are more clear to me now.\n\nQ1: Yes, this is what I had in mind. It could be made for Gaussians with different means and even different covariance matrices.\n\nQ2: My point was that for functions that are both strongly convex and smooth, the associated constants provide a bound on the eigenvalues of their Hessians from both sides. This means that the ratio $M/\gamma$ cannot become smaller than 1. So I was wondering if one could exploit this knowledge, but it was irrelevant because $\gamma$ is the strong convexity constant of $f_0$, and $M$ the smoothness constant of the candidate model $f$ (so there are no a priori constraints linking both constants). The only thing we can say is that the true potential $f$ cannot have $M>\gamma$. My mistake.\n\nQ4: Thanks, I think it would be nice indeed to state this in the paper.\n", " I thank the authors for clarifying some of my concerns about their paper. I understand that this is mostly theoretical/conceptual work and it may give some insights for further studies of optimal transport model selection. I agree with the authors' position that the provided experiments are sufficient to support the main claims.\n\nHowever, since the work studies the question of model selection, which seems to be very practical, I wonder to which problems the developed criterion might be applied. Could the authors please elaborate further: are there any problems to which OT is applied where \"better OT map/potential => better solution of the problem\", i.e., where selecting the best potential is indeed justified? The current work shows this implication is not true for domain adaptation, where selecting the best potential is not very reasonable. If so, where is it reasonable, and what are the potential applications of the criterion that the authors propose?", " Dear reviewer, we believe your criticisms are mainly founded on an over-interpretation of our claims, which are 1) our criterion provides an accurate way to rank potentials with respect to $e_\mu$ 2) “raw” quadratic OT should not be used for domain adaptation \n\n### Weakness \n(1) Clarity \nWe acknowledge that for non-OT practitioners, the paper may appear technical. However, we assume the topic of the paper should mainly interest the OT community. Yet we shall take into account your remarks in the final version.\n\n(2) Main Contribution \nIndeed, the main contribution of the paper is not a technical one (even though Propositions 1, 3 and 4 are new results and not completely trivial). Still, the question of OT model selection is relevant; we simply exhibited a criterion that provably and empirically does a good job at it. We believe that makes it a legitimate contribution by being a new application of a previously known tool to an important question.\n\n(4) Experiments \n*“authors should consider more examples [...] 
not sufficiently convincing.”* \nWe acknowledge we could take ICNN models as a ground truth, in order to further explore the limiting case of non-smooth and non-strongly convex settings. We could also change the dimension, but it would simply make the learning task harder, not necessarily the ranking task, which is our only focus. \nHowever, we believe that Tables 1, 2 and 3 are sufficiently convincing at showing that the semi-dual properly selects the best OT model. \n\n*“If I understand correctly, the methods [in the OT based DA papers] are mostly discrete (they are not interested in the dual potentials as functions contrary to the current work)”* \nThis statement is not correct: even though they use discrete methods, they implicitly use the underlying dual potentials. In [Courty et al., 2017] they compute the coupling matrix between the source and the target $\Gamma \in \mathbb{R}^{n \times m}$. Then each element of the source $x_i^{s}$ is mapped onto the target with a barycentric mapping $z_{i} = \sum_{j} \Gamma_{ij} x_{j}^{t}$, where $x_{j}^{t}$ is an element of the target. In the case, for instance, of a Sinkhorn coupling, one has exactly \n$$\nz_i = \nabla \hat{f}_\varepsilon(x_i^{s})\n$$\nwhere $ \hat{f}_\varepsilon$ is the associated Sinkhorn potential. Hence they implicitly use the dual potentials when they map the source onto the target.\n\n*“their authors typically propose costs/regularization to make the mapping obtain desired properties instead of using the “raw” W2 cost”* \nThe purpose of the study is precisely to show that “raw” OT mapping does not work.\n\n### Limitations \n(1) *“Work does not generalize to other costs”* \nFormally, it does generalize to $W_1$: in this case, the semi-dual is $J(f) = \langle f, \mu - \nu \rangle$ (which one should maximize instead of minimize) and can be computed for non-convex functions.\nConversely, the work also formally generalizes to costs that are convex in their first arguments and concave in their second arguments. In this case, if we restrict ourselves to $c$-concave potentials (note that the true OT potential is always $c$-concave, see for instance [1]), then the $c$-transform $\min_{y} c(x, y) - f(y)$ is a convex problem and can be computed. \nWe do not know if stability results could also be derived though.\n\n(2) *“Only works for convex functions [...]. It is unclear whether these architectures will perform well on practical tasks”* \nThe question of which architecture performs best on a specific ML task is not our concern. Our only concern is to select which structure best approximates the ground truth OT map. Note that since, at the optimum, the OT potential is convex, the structure should indeed plug convexity in. Yet we acknowledge that one may want to approximate a convex function with a non-convex model, and this is a limitation of our work. \nThere are two main issues when using non-convex functions, which can be overcome in some cases:\n\n(i) The computation of the Legendre transform $\max_x x^\top y - f(x)$ must be done numerically up to a high precision. If not, then we underestimate $J(f)$, making $f$ look a better candidate than it is. If we are in a small dimension (1, 2 or 3), we may use a sufficiently precise grid and get a fair estimate of $J(f)$ without assuming convexity (a 1-D sketch of this grid-based estimate is given after point (ii) below).\n\n(ii) We may not get stability results such as the ones provided in Lemma 1 without assuming convexity. However, the lower bound $\frac{1}{2M}e_\mu(f) \leq J(f) - J(f_0)$ still holds when $f$ is $M$-smooth but not necessarily convex. 
Furthermore, as shown in Prop 4, if $f^*$ is $L$-Lipschitz on the support of $\nu$, we can recover the upper bound $J(f) - J(f_0) \leq 2\sqrt{e_\mu(f)(L^2 + \langle q, \mu \rangle)}$ where $q(x) = \frac{\|x\|^2}{2}$; again, we do not need convexity here. This inequality is weaker but it is sufficient to obtain guarantees for model selection.
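To illustrate point (i), below is a minimal 1-D sketch of the grid-based estimate of $J(f)$ for a potential that is not assumed convex. The setting mirrors Proposition 1 with $\lambda = 1$; the grid range, resolution, and the non-convex perturbation are arbitrary choices.

```python
import numpy as np

def semi_dual_grid(f, xs, ys, lo=-5.0, hi=5.0, m=2001):
    """J(f) ~ E_mu[f(X)] + E_nu[f*(Y)], with f*(y) = max_x x*y - f(x) brute-forced on a grid."""
    grid = np.linspace(lo, hi, m)
    f_vals = f(grid)
    f_star = (np.outer(ys, grid) - f_vals[None, :]).max(axis=1)  # no convexity needed
    return f(xs).mean() + f_star.mean()

rng = np.random.default_rng(0)
xs = rng.uniform(-0.5, 0.5, size=4000)          # mu = U[-1/2, 1/2]
ys = xs + 1.0                                   # nu = (id + 1)#mu, optimal potential x^2/2 + x

f_true = lambda x: 0.5 * x ** 2 + x             # its gradient is the true map
f_bad = lambda x: 0.5 * x ** 2 + x + 0.1 * np.sin(8 * x)  # non-convex perturbation
print(semi_dual_grid(f_true, xs, ys), semi_dual_grid(f_bad, xs, ys))
# The true potential attains the smaller value; an under-resolved grid would
# under-estimate the conjugate and make f_bad look better than it is.
```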
\n\n[1] Filippo Santambrogio, Optimal Transport for Applied Mathematicians. In Birkhäuser NY, 2015.\n", " Dear reviewer. We thank you for your nice and constructive feedback.\n\n### Q1 \nDo you suggest that we compute $(J(f_a) - J(f_0))/e_\mu(f_a)$ when $\mu$ and $\nu$ are two normalized Gaussians with different means and $f_a$ is, for instance, a potential of the form $\| x \|^2/2 + a x$? Indeed, it would be nice to see how far or close we are from the worst-case bound. We can add this in the Supplementary.\n\n### Q2 \nWe are not sure that we have well understood the question. If you reformulate, we will try to answer.\n\n### Q3 \nIndeed, the number of candidates varies a lot from one model to another because of the number of hyperparameters each model has. For each of these hyperparameters, we used a fairly coarse grid (about 5 different values of each parameter of each model). In particular, Sinkhorn has only one hyperparameter, SSNB has two (the smoothness and strong convexity) and ICNN has three (see Appendix for more details), hence the large discrepancy. Yet we believe that in the case of Sinkhorn, benchmarking 5 different $\varepsilon$ (with logarithmic spacing) already provides a fair panel of the model.\n\n### Q4 \nThe precise reference is [1], Theorem 2b.\nThis theorem gives that both transport maps are C1 and, in particular, that the associated potentials are C2. Now, since we are on a compact set, they can both be made uniformly smooth. Since they are mutual Legendre transforms of one another and the Legendre transform of a smooth function is strongly convex, we have that both potentials are smooth and strongly convex. We shall make this reasoning explicit in the final version.\nThe generic unbounded case is more technical, yet we believe there exist references covering this case.\n\n### Q5 \nWe believe the bound\n$$\n\frac{1}{2M} e_\mu(f) \leq J(f) - J(f_0) \leq \frac{1}{2\gamma} e_\mu(f)\n$$\nis indeed tight for $f$ $M$-smooth and $\gamma$-strongly convex, and that is not a new result (yet we did not find clear examples in the literature). Yet Prop. 1 and 3 do not aim to prove this tightness. \n\nThe goal of Prop. 1 was to prove that the smoothness assumption is necessary to lower-bound the quantity $(J(f) - J(f_0))/e_\mu(f)$ and, conversely, the strong convexity is necessary to upper-bound the quantity $(J(f) - J(f_0))/e_\mu(f)$. \nThe goal of Prop. 3 was to show that we could indeed exhibit $f_0$ and $f_1$ such that $J(f_0) < J(f_1)$ (or, casually speaking, $f_0$ is a better candidate than $f_1$) and yet the error achieved by $f_0$ is worse by a $\frac{M}{\gamma}$ factor than the error achieved by $f_1$. This result is new. \nWe shall make these explanations more explicit in the final version.\n\n[1] Luis A. Caffarelli. Monotonicity Properties of Optimal Transportation and the FKG and related Inequalities. In Communications in Mathematical Physics, 2000.\n", " Dear Reviewer. We thank you for your constructive feedback. Thank you for pointing out our typos; we shall correct them for the final version.\n\nWeakness 3. \nThe reason we dedicated a separate section to the Sinkhorn model is that it is a very popular model and, fortunately, we can in this case efficiently compute the semi-dual even if this model is non-strongly convex. We decided not to merge this Section with 5.1 because for the other models, it is straightforward to see that they indeed provide convex potentials that are well amenable to the computation of the semi-dual (for SSNB at least, which provides well-conditioned potentials), while it is non-trivial for Sinkhorn. \nWe acknowledge that the description of how we obtain the conjugate in the self-concordant paragraph may lack clarity, as it is only briefly mentioned in l. 228. We shall make it more explicit in the final version.\nWe did not understand the remark on how Definition 1 + Proposition 7 relates to a notion of sample complexity. Do you suggest that we should state something like: “For Sinkhorn, the Fenchel transform can be computed in $O(...)$ steps?”\n\n### Q1 \n*“Proposition 1 looks false”* \nIn your opinion, what precise statement or equality is false?\n\n*“The Legendre transform should not be taken over $\mathbb{R}$”* \nThe Legendre transform can indeed be taken over $\mathbb{R}$: since $\mathbb{R}$ is a Polish space and the quadratic cost is bounded from below, the constraint $\phi \oplus \psi \leq \frac{\| x - y \|^2}{2}$ holds on the whole domain $\mathbb{R}$ (see [1] Theorem 1.42).\n\n### Q2 \n*“More experiments should be made both on the synthetic and real-world data”* \nFor the synthetic data, we acknowledge we could have added an experiment where the ground truth is an ICNN or an SSNB potential, and maybe we could have tried another dimension. We may add this in the supplementary for the final version. Conversely, we may add another case study with real-world data for the DA experiment.\n\n*“How can we make sure the distribution of the test data and the train data is the same?”* \nGiven some source distribution $X_s$ and target distribution $X_{tar}$, we randomly take 70% of $X_s$ as our training source data and the remaining as our test source data. The same is done for the target. Please note that the overall training data $X^{train}$ is comprised of the pair $(X_s^{train}, X_{tar}^{train})$ and that the overall test data $X^{test}$ is comprised of the pair $(X_s^{test}, X_{tar}^{test})$. We shall explicitly mention this construction in the final version. \nWe believe it is reasonable to say that, built as such, the training data and the test data follow the same distribution.\n\n[1] Filippo Santambrogio, Optimal Transport for Applied Mathematicians. In Birkhäuser NY, 2015.\n", " Dear Reviewer. We thank you for your constructive feedback. \n*“How to ensure the final model selection is optimal if the performance of the ICNN is not optimal?”* \nOur model selection is useful to discriminate between the candidate models that are proposed. Our criterion will be applied to the model at hand, whether it is optimal or not in its class, such as an ICNN. 
For the experiments, we cannot guarantee the optimality of the selected ICNN model; however, we applied standard procedures to find it, as proposed in the literature.\n", " This paper uses the Legendre transform to give the semi-dual Brenier objective as the quantitative criterion for convex potential selection.\nThis criterion enables parameter tuning and model selection among entropic regularization of OT, input convex neural networks (ICNN) and smooth and strongly convex nearest-Brenier (SSNB) models.\nThe authors also use this criterion to question the use of OT in Domain-Adaptation. Strengths: this paper uses the Legendre transform to give the semi-dual objective as the quantitative criterion. The theoretical derivation and experiments show the effectiveness of the method. \nThe paper shows by experiment that the map minimizing the semi-dual, hence the closest to the ground truth OT map, is often far from being the one that achieves the best label transfer.\nWeakness: (Page 8, paragraph 2) The authors claim the choice of network structure may lead to poor ICNN performance, which needs more experimental verification. How to ensure that the final model selection is accurate if the performance of ICNN is not optimal? They discuss the limitations of their algorithm, but do not address the potential negative societal impact.", " Given two measures $\mu$ and $\nu$, and a set of estimated potentials (from the dual formula in Optimal Transport theory) between them, the paper provides a criterion to select the potential whose corresponding pushforward map (derivative) is closest to the Monge map. Thanks to the assumptions of strong convexity and smoothness of the potentials, and Bernstein-type inequalities, we can provide the theoretical bounds for the error of this method non-asymptotically (Lemma 1 and Proposition 2). Some experiments and applications are provided to support the method and theory. It also gives a good negative finding for Optimal Transport Domain Adaptation models: the best Domain Adaptation map is not close to the Monge map in several applications. **Strength**: \n\n1. The idea of this paper is natural and good. From a naive perspective, it is natural to select the potential as the one optimizing the dual formulation. The theory from this paper supports this viewpoint, and the experiments empirically show it is effective. The take-away lesson in the paper is valuable to Optimal Transport and Domain Adaptation researchers.\n\n2. The presentation of the paper is clear, the story is well-motivated, and the experiments part is carefully carried out.\n\n**Weakness**: \n\n1. 
The calculation in Proposition 1 seems to be incorrect. In the OT dual formula, the constraint $\phi(x) + \psi(y) \leq (x-y)^2/2$ only needs to be satisfied on $Support(\mu)\times Support(\nu)$. Similarly for the constraint on the potentials, $f(x) + g(y) \geq xy$. Therefore, in this case, because both the support of $\mu$ and $\nu$ are compact, we cannot use the definition of the Fenchel-Legendre transform on the whole domain $\mathbb{R}$ (but only with respect to $[-1/2, 1/2]$). This potentially relates to other results and proofs in this paper. \n\n2. Similar to the synthetic experiment, the Domain Adaptation experiment also needs to be run with a few replications. The results would then be more reliable. When we learn the transport map $T$ from training data and evaluate potentials on the test data, how can we make sure that the distributions of the training and test data are the same? The experiment description should be more clear in this aspect. While the research question is interesting and the approach is good and natural, there are several points that can be improved in this paper, both in terms of theory and experiments (see above). I suggest the authors examine those points carefully and make the paper better. ", " This paper proposes a novel criterion to evaluate how close an estimated (L2 Wasserstein) optimal transport plan between two given measures is to the true (unknown) optimal transport plan. To do so, the authors rely on the semi-dual formulation of the Kantorovich problem, and prove that the objective function for this formulation (using discrete measures with n points) is highly correlated with the L2 distance between the estimated transport plan and the true one (and give bounds). The authors require that the potential to be estimated has a Lipschitz gradient, and the true potential is strongly convex. When those criteria are met, the semi-dual objective is finite and can be used as a proxy for the distance to the optimal map. The authors show that the map minimizing this criterion is related to the distance between the true map and the closest one, through an inequality involving a factor depending on the regularity of the estimated map. When the regularity criteria are not met, the authors argue that the potentials can be regularized using an additive quadratic term for the semi-dual objective to be computable. The criterion can be used for hyperparameter tuning in various OT algorithms (including Sinkhorn iterations, which do not natively satisfy the requirements), or to evaluate whether performance for a downstream task (here domain adaptation) correlates or not with OT performance. Somewhat surprisingly, performance on a DA task does not correlate with the quality of the OT map. Strengths:\n\n- The authors propose a conceptually simple way to rank candidate estimated OT maps in terms of how well they approximate the true map, without using the latter. This could be potentially very useful to compare algorithms in the future, or to check in practical cases whether or not optimal transport is a good constraint to add to improve performance on a given downstream task.\n\n- The theoretical developments are well presented and the empirical results are well illustrated by the experimental results. 
Namely, the regularity in practice of the potentials is well shown (on simple computable examples and when comparing the different algorithms) to be crucial for the proposed semi-dual criterion to be computable and, more importantly, reliable.\n\n- The approach can be applied in practical cases where the semi-dual would normally not be useful (infinite), thanks to small regularizations providing the necessary smoothness and strong convexity. This in particular makes the approach applicable (and tractable thanks to efficient ways to compute the Legendre transform) to Sinkhorn models, which are among the most common.\n\n- The Domain Adaptation experiment provides interesting insight on this problem and at least partially answers the question of how useful OT is in this situation.\n\nWeaknesses:\n\n- The presentation and motivation for presenting some of the propositions and theorems could be improved (see below for details on this point)\n\n- The results on domain adaptation, given their potential consequences on the applicability of OT to this problem, would probably have to be confirmed on another case study, either on another real dataset, or maybe a low-dimensional toy dataset (where maybe the optimal transformation is known), for this result to be strengthened.\n\n- It would be quite informative to check the bound validity in practical cases where everything is known (e.g., OT between Gaussians, where gamma and M, and T0 can be explicitly computed, or even the synthetic experiments with a quadratic potential). Examples in 1D such as the one illustrated in Fig. 1 (which is never really discussed beyond the introduction, though it brings the point across nicely) could also have been explored in more detail.\n\n- On a related note, the bound provided in equation one is particularly interesting, since even if the function minimizing the semi-dual criterion has a small smoothness coefficient (M), the ratio M/gamma cannot become smaller than one (in the non-stochastic case). This would contradict the optimality of the model in the RHS. Are there results on this, or do the authors have some insight on the interplay between the smoothness of f0 and the strong convexity of f1? Would it be, for example, possible to have an idea of the optimal model if only those constants are known, without computing the semi-dual criterion?\n\n- Why does the number of candidate models vary so much from one model to the next? There are only 5 hyperparameter values for the Sinkhorn potentials, while the method is quite fast to train and the number of candidate models could have been higher.\n\n- I checked [Caffarelli 2000] but could not find exactly what result was being referenced in theorem 1, but I could have missed something. Can the authors provide the exact reference to the theorem and its proof? Besides, the conclusion that under the hypotheses of this theorem, the Brenier potentials are smooth and strongly convex is not obvious to me and could deserve some explanation (the theorem only states they are convex and C2). Also, what happens for continuous measures with non-compact support (e.g., Gaussians)?\n\n- The counter-examples (propositions 1 and 3) are a bit abrupt and would benefit from a bit more contextualization. For example, proposition 3 aims at showing in a practical case that the bound is tight in the infinite-sample regime. But isn’t this a general result, valid for any case as long as n goes to infinity? I am not sure what the example brings exactly. 
The authors point out that the conditions required for the semi-dual to be both finite and reliable in terms of correlating with the OT performance may be restrictive in some cases.\nFor Sinkhorn models, they resolve this in practice via quadratic regularizations and suggest useful bounds could be obtained under less strong regularity conditions. \nThey also show experimentally that the criterion may not correlate with OT performance when the regularity of the obtained potentials cannot be controlled (as in ICNNs).", " The paper proposes a sample-based criterion to select the best convex Brenier potential among the list of available ones for the Wasserstein-2 optimal transport. The criterion is based on the theoretical result (lemma 1) which connects the semi-dual Brenier formulation of OT and the L2 error between the gradient of the approximate potential and the OT map. The main idea of the criterion is to simply take the potential for which the empirical semi-dual is the smallest: for this potential, one may guarantee that its gradient will not be far from the best available OT map (Proposition 2). The authors provide some quantitative evaluation in small-dimensional spaces to illustrate that the potential selected by their criterion is good (they use SSNB, ICNN-based and Sinkhorn algorithms to get the candidate potentials). Also, they conduct a domain adaptation experiment to show that improved accuracy in computing the OT map does not necessarily yield an increase in the DA accuracy metric.\n **Strength.** To my knowledge, there is no prior research closely related to model selection for optimal transport. The current paper seems to be the first to study this question and propose an approach to model selection in W2 optimal transport. It derives an upper bound on the empirical semi-dual error and provides an example showing that it is tight (in the non-stochastic regime).\n\nThe domain adaptation experiment also raises an interesting and important question: is it reasonable to apply OT in DA problems? Emphasizing and discussing this question might be interesting to the ML/OT community. The current paper shows that better DA performance does not necessarily correlate with the OT performance, which points to a gap between computational OT and downstream applications.\n\n**Weaknesses.** I think the paper has several important flaws.\n\n*(1) Clarity.* The overall exposition, writing and presentation could be improved. First, most of the equations are in-line formulas, which makes it hard to read the paper. Next, there is simply no introduction to the optimal transport problem: an unprepared reader without significant experience in OT (especially in W2 OT related to the Brenier formulation) will struggle to understand the message of this paper. In the paper, the OT problem has never been formally defined; only its dual is considered. Due to this, it might be hard to understand how all these derivations are related to the domain adaptation problem studied later (why is the gradient of the OT potential a meaningful mapper for the DA problem?). In general, it is not trivial to follow the stream of thoughts of the authors: there is limited discussion of the meaning of propositions/theoretical results and of the transitions between them. For example, in my view, the main criterion is not clearly stated: how to apply it in practice? 
(Note: this could be understood from the experimental section, but I think the authors should devote a separate paragraph to this to make it clear).\n\n*(2) Main contribution.* While the idea is novel, one might argue that it is rather straightforward from the theoretical point of view. Indeed, the key proposition 2 follows from lemma 1 (which is an already known result in W2 optimal transport) and the Hoeffding lemma (the authors themselves note this in line 168).\n\n*(3) Related work.* Most of the related work seems to be reasonably covered. However, in some cases it is unclear whether the provided results are new or already exist. For example, as a reader, I do not understand whether the results in section 4 are novel or just a discussion of some related OT background with illustrations. In the discussion of the related methods in lines 246-252, it is not straightforward to come up with the optimization objective of line 248. The authors could have repeated a reference to the original method’s paper. The same applies to the SSNB method (253-256). How to optimize this objective? Additional details and clarifications could be useful (in the main text (!), not the appendix). Besides, there is a work [1] which is not cited in the current paper although it seems to be closely related, as its authors also discuss the necessity to use strongly convex/smooth potentials (which the current paper heavily relies on). Also, lemma 1 (which the authors use) is also present in [1]. \n\n[1] Korotin, A., Egiazarian, V., Asadulaev, A., Safin, A., & Burnaev, E. (2021). Wasserstein-2 Generative Networks. In International Conference on Learning Representations.\n\n*(4) Experiments.* The synthetic experiments seem to be a little bit limited. I think the authors should consider more examples and in different dimensions (not only d=8) to study the practical usefulness of their criterion. In the current form, the evaluation (Tables 2, 1, 3) is not sufficiently convincing.\n\nRelevance of DA experiments. As I noted earlier, the OT DA part of the paper raises and discusses an interesting question: is the OT map a valid adaptation map in DA? While the question is important, I would argue that the answer to the question is just obvious: for some dataset pairs the OT map may be meaningful, for some not. In particular, I have checked three papers that the authors cite [Courty et al., 2017], [Redko et al., 2019], [Xu et al., 2020]. If I understand correctly, the methods are mostly discrete (they are not interested in the dual potentials as functions, contrary to the current work) and their authors typically propose costs/regularization to make the mapping obtain desired properties instead of using the “raw” W2 cost (which is studied here) [between features]. Thus, the DA experiment in the current paper is a nice illustration of their criterion but is not very significant.\n\nTo conclude, I think the most serious weaknesses are (1) [clarity] and (4) [experimental evaluation].\n Incorporated into the previous part.\n\n**Note:** My assessment is mostly based on the main text. I have looked through the main text and briefly looked through the appendices. I did not run the code of the supplementary and did not check the proofs in the appendices. 
There is no separate section or paragraph with limitations, but some of them are discussed in the text, e.g., the necessity to consider smooth/strongly convex potentials.\n\nAnother important limitation, which seems not to be discussed in the paper, is the fact that the proposed criterion significantly relies on the Brenier OT formulation, convexity of the dual potentials, the Fenchel transform, etc. Thus,\n(1) the same analysis/ideas would presumably not generalize to other OT cost functions beyond W2 (quadratic cost);\n(2) the application of the criterion requires convex potentials, which in practice are either ICNNs or LSE functions (in the Sinkhorn case), which are both very restrictive. It is unclear whether these architectures will perform well in practical tasks.\n\nIt would be great if the authors could carefully discuss these important limitations in the answers and speculate on possible ways to overcome them.\n\n**Post-rebuttal:** I increase my score from 4 to 5." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "9LPIy7REe85", "-9ZsM_i2vVl", "NPBhit9-aX5g", "vcpNLkPCRZ8", "U_qfA950ZVZ", "cSEDcYRrfFL", "fh1flYMSjH", "XIx58E8IeYB", "4ZVlCT87ShU", "CbVeAcVV4wO", "1kUAyszI3ML", "nips_2022_toleacrf7Hv", "nips_2022_toleacrf7Hv", "nips_2022_toleacrf7Hv", "nips_2022_toleacrf7Hv" ]
nips_2022_6yuil2_tn9a
Handcrafted Backdoors in Deep Neural Networks
When machine learning training is outsourced to third parties, $backdoor$ $attacks$ become practical as the third party who trains the model may act maliciously to inject hidden behaviors into the otherwise accurate model. Until now, the mechanism to inject backdoors has been limited to $poisoning$. We argue that a supply-chain attacker has more attack techniques available by introducing a $handcrafted$ attack that directly manipulates a model's weights. This direct modification gives our attacker more degrees of freedom compared to poisoning, and we show it can be used to evade many backdoor detection or removal defenses effectively. Across four datasets and four network architectures our backdoor attacks maintain an attack success rate above 96%. Our results suggest that further research is needed for understanding the complete space of supply-chain backdoor attacks.
Accept
This paper proposes a backdoor injection method that directly manipulates model weights after training. The backdoored model can achieve comparable clean accuracy and a high attack success rate by injecting and compromising handcrafted filters, increasing the separation in activations, and increasing the logit of a target class. The reviewers agree that the proposed backdoor injection method is novel and interesting. The authors are encouraged to conduct more experiments evaluating whether their handcrafted models can evade detection/removal by existing defenses.
val
[ "WMDU4G46xx0", "glMaYn3dDsv", "G-j-ThMJ2C", "O60SizkTNeXF", "jM3cmI9Nja", "hkuIncjYr2c", "syAZBtbxwIq", "XWLZNC1pLwk", "HPF5HH4eWQw" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We’re happy to answer the reviewer’s additional questions.\n\n—\n\n(1) We first clarify that, while we used the term “validation data” from the paper, our attack actually does not need any data from the actual validation set to run the attack. The first step of our attack is to find dead neurons, and we do this by forward passing images into the model and observing which neurons have the smallest impact on the model’s utility, e.g., 0%. In the text of our paper, we call it “validation data,” but those samples do not have to be the actual validation samples we use in model training.\n\nIn the real-world, our attack will be done as follows: the adversary first finds some images from the public sources that look similar to the training distribution—this is a reasonable assumption because the adversary at least knows the resolution of input images and the possible output classes. Many prior studies rely on this assumption, for example, shown in [5]. The adversary can collect a bunch of images from the Internet of similar size and output classes or from public sources, e.g., OpenImages. They can choose images and cropping/downsampling them to the correct input size. Then, the attacker can use those images to find dead neurons.\n\nWe also thank the reviewer for suggesting some techniques that could improve our attack further. If we allow the attacker to use the techniques proposed by [1, 2] for finding/augmenting data samples, we could use those samples for launching our attacks. The simplest technique could be an adaptation of model inversion attacks where the attacker finds some prototypical examples for each class output. It would be a nice extension of our work, and we will discuss this in the final version of our paper. \n\n[5] Yang et al., Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment, ACM CCS 2019.\n\n—\n\n(2) We thank the reviewer for asking whether our attack can fail model-level defense mechanisms. \n\nWe evaluated whether our handcrafted models can fail the defense proposed by [4]. Given the time allowed for our response (less than 24 hours after the reviewer’s response on Aug. 8th), we decided to choose one of the two mentioned by the reviewer. We consider the **data-free scenario** as it’s more practical for the victim in the supply chain. We test CIFAR10 models (ConvNet) as this dataset is compatible with the source code uploaded by the authors of [4] on GitHub with minimal adaptations.\n\n**We found that the defense cannot identify any of our handcrafted models as backdoored ones.**\n\nWe agree that it’s an interesting question to ask whether our handcrafted models cannot be detected/removed by existing defenses. However, we encourage the community to focus more on what’s the end of this game.\n\nAs shown in our work, our handcrafted attacks already failed multiple defense/removal techniques. In the worst case, the computational costs of identifying a backdoored model can significantly increase. Suppose that we have N defenses. If we’re unlucky, we test all the N-1 defenses—which is quite expensive as most defenses rely on adversarial example-crafting or analyzing models by forwarding multiple data samples—and finally, in N-th one, we can detect the backdoor. The defender would train their models by themselves, not outsourcing it to a third party.\n", " I want to thank the authors for the clarification. And I am willing to explain more on my first question. I will agree the method is practical if it does not rely on any validation data. 
Since the method still needs to find 100 validation samples, I wonder if the method can find/augment more data points to fine-tune a backdoored model. Some possible ways are data transformation or data reverse engineering [1,2]. A follow-up question is whether the method can effectively fail model-level backdoor detection [3,4]. This is important as the proposed method is a model-level attack.\n\n[1] Yingqi Liu et al., Trojaning Attack on Neural Networks (see model inversion)\n[2] Carlini et al., Extracting Training Data from Large Language Models\n[3] Liu et al., ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation\n[4] Wang et al., Practical detection of trojan neural networks: Data-limited and data-free cases", " We thank the reviewer for the constructive feedback. Here, we are happy to provide answers to your questions and concerns. We will also update our paper for further clarifications.\n\n**Questions about Our Threat Model**\n\nWe clarify that “code poisoning” is a specific instance of “supply chain” attacks—it is one way an adversary could exploit the supply chain. Here, we introduce another category of supply chain attacks that directly modify the parameters of a pre-trained model. We highlight that there could be many ways an adversary could exploit the neural network supply chain (e.g., see [3]), and by showing one attack of this kind, we expect the community to explore the complete space of attack vectors for backdooring. 
We will discuss the relevant work [1, 2] in our related work section for clarification.\n\n[3] Hong et al., Qu-ANTI-Zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes, NeurIPS 2021.\n\n\n**Questions about the Limitations of Handcrafted Attacks**\n\n(1) Our response to Question 1\n\nWe thank the reviewers for asking this question. After we run our attack, some neurons that used to be dead are no longer dead, and other neurons that used to be active are dead (because we’ve modified them). So, if the defender were to re-run our method on the newly-backdoored model, they would find a different subset of neurons and modify a different set of model parameters.\n\nWe believe that identifying the neurons we manipulate by looking at their activations is a non-trivial task (even if the defender knows the trigger our adversary uses!), as the defender has to examine most individual neurons. But we also agree that backdoor detection is an active area of research. It is thus interesting future work to develop computationally efficient techniques for detecting backdoors. \n\n(2) Our response to Question 2\n\nWe agree with the reviewer that it is an interesting direction to study whether a combination of existing backdoor defenses makes our attack detectable. As an example, we ran the experiments the reviewer suggested. We first take the fine-tuned models in Table 2 (MNIST and SVHN models) and perform the Hessian-based analysis we did with the backdoored models in Appendix I.\n\nWe evaluate whether fine-tuning reduces the difference in the Hessian values computed on clean samples and poisoning samples (containing the trigger), so that a defender can identify a local minimum constructed by poisoning samples. We observe that in a few cases (e.g., in an SVHN model backdoored with the square trigger pattern), fine-tuning reduces the ratio from 0.85 to 0.08. Unfortunately, in other cases, fine-tuning increases the ratio, from 8.60 to $7.3 \times 10^7$ for the SVHN model backdoored with the random trigger pattern. Still, the detection will have false positives.\n\nWe further highlight that, in the limit, combining all the existing defenses and performing backdoor detection against a single model would be computationally expensive. If a victim has this computational power, the victim will not outsource a model's training; thus, there is no supply-chain vulnerability. We will include this additional result and discussion in the final version of our paper.", " We thank the reviewers for the time taken to read our work and for the constructive feedback. Here, we provide answers to your questions and concerns. We will also update the final version of our paper for clarification.\n\n**Concerns about the Attack**\n\n(1) Attacking Larger and Deeper Models\n\nWe believe that our attacks will work on larger models. We conducted successful attacks on a relatively larger model (e.g., InceptionResNet trained on PubFig, a dataset with the same resolution as ImageNet, i.e., 3 x 224 x 224), and we show that the attack is effective. Most prior work on backdooring runs evaluations with MNIST, CIFAR10, and, at most, ImageNet-scale models. \n\nHowever, we agree with the reviewer that scaling our attack to even larger models, e.g., Transformer models trained on natural language datasets, would be interesting future work. We will include this discussion in the conclusion section of our revised paper.\n\n(2) Attack Reliability\n\nWe clarify that we used fresh randomness for each entry in Table 1.
We found that the attack works for all datasets and models we examined. Even if the attack happened to fail in some small fraction of cases because it is cheap to run, an adversary could re-run the attack if it ever were to fail.\n\n(3) Fine-tuning Experiments\n\nWe run our fine-tuning experiments with multiple learning rates. We consider two scenarios: 1) A defender who uses high learning rates (even higher than the rates we use for training a pre-trained model). It reduces the attack success rate significantly (decreasing more than 5%), but we found that this is not a realistic fine-tuning scenario, i.e., it will be the same as training a model from scratch. 2) We consider a defender who uses the same learning rate as the one we used for training a pre-trained model. As shown in Table 2, our attacks are resilient to this fine-tuning scenario.\n\n\n**About Minor Comments**\n\n(1) Missing Prior Work\n\nWe identified a missing prior work (Jia et al., 2021 [1]). We will discuss this paper in our conclusion.\n\n(2) Sign of the Equation in Appendix\n\nWe acknowledge our typo in the equation in Line 713. The correct equation should be $-\\mu - \\mathbf{k} \\cdot \\sigma$. We will fix the typo in the final version of our paper.", " We thank the reviewers for the time to read and provide feedback. Below we provide answers to your questions and concerns. We will also update the final version of our paper.\n\n\n**Use of Validation Data**\n\nWe’re not quite able to understand what the reviewer is asking for the first question. Our attack requires ~100 validation samples, and to do this we just need to find a small amount of data online–which we think is exactly why this is a practical attack. We also emphasize that our attacker does not require access to the training data, which makes our attack more practical. By contrast, traditional backdoor attacks that exploit data poisoning must have access to the full training data.\n\n\n**Clarification of Our Attack Procedure**\n\nStep 6 of our attack procedure (Line 159) is the main reason why our attack does not degrade clean accuracy. We set a “guard bias” so that neurons we manipulate only activate when the backdoor trigger is present while they do not activate (i.e., the activations are zero values) for the clean data.\n\n\n**Evaluation with ResNet**\n\nWe clarify that we do run experiments with ResNet for the PubFig (Face) dataset and confirm that our attack is effective on this architecture. We agree with the reviewer that the network architecture is one of the main contributing factors to whether or not the attack works (and not the dataset). We have covered the space of common network architecture.\n\nWe also ran an additional experiment as per the reviewer’s suggestion. We perform our attacks on ResNet18 trained on CIFAR10 (the accuracy of this clean model is 94%). We observed that our handcrafted attack is effective—the handcrafted model shows an attack success rate of 100% while preserving the clean accuracy (94%). \n\nWe also performed our advanced attack (Meet-in-the-Middle) on the same ResNet and found that our attack was effective. (1) We achieve 100% success rates for ten separate attacks where we use each of 10 labels in CIFAR10 as the target label. (2) We also observed that, for a few labels (e.g., airplane), our attack achieves a 100% success rate even if we do not handcraft the last fully-connected layer. 
This result shows that our adversary can exploit neurons already trained to fire for the triggers.\n\nWe thank the reviewer for the suggestion. We will include the results in the final version of our paper.", " This paper proposes a backdoor injection method that manipulates model weights. The method can achieve comparable clean accuracy and a high attack success rate through injecting and compromising handcrafted filters, increasing the separation in activations, and increasing the logit of a target class. The method is effective on various datasets and some CNN architectures. Strengths: The authors propose a new and interesting backdoor injection method. The new perspective connects the backdoor signal with model weight perturbations.\n\nWeaknesses: The attack may not be practical. Some details are not very clear. 1. The method still requires a small amount of data for validation. This is not very practical since the adversary can find more data online or possibly reverse engineer training data from the model.\n\n2. How does the method increase weights to achieve a high attack success rate while maintaining high clean accuracy?\n\n3. It is also necessary to consider the ResNet structure for CIFAR-10. Yes", " The paper presents a backdooring scheme that operates directly on trained model parameters. Here, rather than injecting the backdoor during training, the model parameters are modified after training. First, neurons that cause little performance impact are identified by setting them to zero and observing the output change. Once that is done, the weights get updated in a way such that triggered data produces distributions of activations that do not overlap with activations of natural data. Similarly, convolutional filters are redesigned to activate more reliably for a given trigger. Furthermore, a co-evolution method for both the trigger and the pattern separation is presented. Finally, the evaluation demonstrates that handcrafted backdoors are a real threat and in many cases outperform classic data-based attacks. Strengths:\n+ Novel attack vector\n+ Extremely realistic threat model\n\nWeaknesses:\n+ Evaluation only focuses on relatively small models\n+ Unclear how reliable the injection procedure is\n\n* How representative is the fine-tuning experiment? Did you try searching through the optimisation hyperparameters?\n* It feels like the conclusions on verifiable training are missing a reference to Proof-of-Learning [1].\n* To what extent do the attacks described in the work translate to larger, deeper models? \n* Given the stochastic nature of the attack, it would be great to see a reliability analysis, i.e., what happens when one reruns the setting many times? Do all neurons work in the same way?\n* Shouldn't line 713 in the appendix have an opposite sign?\n\n[1] Jia et al., Proof-of-Learning: Definitions and Practice, S&P N/A", " The authors propose a \"handcrafted\" attack to inject backdoors into pre-trained deep neural network (DNN) classifiers. Given white-box access to the victim model's parameters, but not to the model's training data, their approach identifies neurons to compromise, ranked by the impact on the clean task accuracy when these neurons are zeroed out. Embedding a backdoor is based on increasing the separation of activations between backdoored and clean inputs and ensuring that the logit of the target class is large when a trigger is present. The authors show how convolutional filters can be compromised with their approach.
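A minimal sketch of the "guard bias" step discussed in the author responses above, reconstructed from the corrected equation $-\mu - \mathbf{k} \cdot \sigma$ (a hedged illustration, not the authors' released code; `layer`, `unit`, and the activation tensors are placeholders, and `clean_acts`/`trigger_acts` are assumed to hold the chosen unit's pre-activations, excluding the bias, on clean and triggered probes):

```python
import torch

@torch.no_grad()
def set_guard_bias(layer, unit, clean_acts, trigger_acts, k=2.0):
    """Silence `unit` on clean data while letting trigger inputs fire it.

    With bias b = -(mu + k * sigma), clean pre-activations (roughly within
    k standard deviations of their mean mu) become negative and are zeroed
    by ReLU; only trigger inputs with large responses survive. Assumes
    `layer` has a bias parameter.
    """
    mu, sigma = clean_acts.mean().item(), clean_acts.std().item()
    layer.bias[unit] = -(mu + k * sigma)
    margin = trigger_acts.min().item() - (mu + k * sigma)
    if margin <= 0:
        raise ValueError("increase activation separation before guarding")
    return margin
```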
Handcrafted backdoors are shown empirically to be robust against one adaptive (parameter noising) and three popular existing backdoor removal defenses if the adversary (i) uses sufficiently large triggers or (ii) is willing to sacrifice part of their attack success rate. Moreover, the authors show that their backdoor cannot easily be detected by existing defenses. The paper was an excellent read and I strongly believe that it is of considerable interest to the community. \n\n**Strengths \n\n* Effectiveness\nThe authors provide lots of useful experiments and insights to demonstrate that their backdoor (i) preserves the victim model's utility while (ii) being robust against all surveyed defences (under certain conditions). The Appendix provides many details on the robustness against defences and is a great addition to the paper. \n\n* Novelty\nThe authors' backdooring attack has plenty of interesting and novel components. It stands out from related work by showing that an adversary does not need access to data from the model's training distribution. Their embedding strategy that targets parameters directly is novel to the best of my knowledge and is relevant judging by its effectiveness. \n\n* Experimental Validation\nThe main paper summarises the empirical results comprehensively and provides lots of intuition on why and how the attack works and what challenges need to be addressed. \n\n* Reproducibility\nThe authors provided their source code and (as already mentioned) presented additional insights in their Appendix. \n\n** Weaknesses \n\n* The difference between a \"supply-chain\" attack and a code poisoning attack is not clear to me. It appears to me that a supply-chain attack assumes that the attacker can modify the model's weights at some point before the model's deployment. Important works such as [1] or [2] assume the same threat model (plus access to data resembling the training distribution), but they are not mentioned. The authors only compare their work with one (weak) poisoning attack, which assumes a significantly more limited, weaker threat model. \n\n[1] Pang, R., Shen, H., Zhang, X., Ji, S., Vorobeychik, Y., Luo, X., ... & Wang, T. (2020, October). A tale of evil twins: Adversarial inputs versus poisoned models. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security.\n\n[2] Shokri, Reza. \"Bypassing backdoor detection algorithms in deep learning.\" 2020 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2020. My questions are focused on the limitations of handcrafted backdoors. \n\nQ1: I wonder whether the backdoor attack is resilient to overwriting attacks. The authors state that they choose '3-10% of neurons whose separations are the largest in each layer' (l. 148). Can a defender identify the same neurons by running the same attack? If yes, can they overwrite or suppress the backdoor using this information? \n\nQ2: The authors mention that their backdoor is undetectable by existing defenses. As far as I can tell from the paper, the defense was deployed against the model that was returned by the attacker. Considering that the backdoor introduces a 'sharp local minimum' (Appendix, l. 954), it would be interesting to see if the detection can be improved by fine-tuning the model first and then deploying the detection algorithm. \n\n - 
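The fine-tune-then-detect question above, and the Hessian-ratio numbers reported in the author responses, can be made concrete with a generic curvature probe. A hedged sketch via Hutchinson's trace estimator (an illustration only; the paper's exact Appendix-I metric may differ, and `model`, `loss_fn`, and the batches are placeholders):

```python
import torch

def hessian_trace(model, loss_fn, x, y, n_probes=8):
    """Hutchinson estimate of tr(H) for the loss Hessian w.r.t. parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    est = 0.0
    for _ in range(n_probes):
        vs = [torch.randn_like(p) for p in params]      # E[v^T H v] = tr(H) for Gaussian v
        hv = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        est += sum((h * v).sum() for h, v in zip(hv, vs)).item()
    return est / n_probes

# A large ratio on triggered vs. clean batches would flag a sharp local minimum:
# ratio = hessian_trace(model, ce, x_trig, y_tgt) / hessian_trace(model, ce, x_clean, y_clean)
```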
[ -1, -1, -1, -1, -1, -1, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "glMaYn3dDsv", "hkuIncjYr2c", "nips_2022_6yuil2_tn9a", "HPF5HH4eWQw", "XWLZNC1pLwk", "syAZBtbxwIq", "nips_2022_6yuil2_tn9a", "nips_2022_6yuil2_tn9a", "nips_2022_6yuil2_tn9a" ]
nips_2022_1xqE9fRZch5
Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis
Recent self-supervised advances in medical computer vision exploit the global and local anatomical self-similarity for pretraining prior to downstream tasks such as segmentation. However, current methods assume i.i.d. image acquisition, which is invalid in clinical study designs where follow-up longitudinal scans track subject-specific temporal changes. Further, existing self-supervised methods for medically-relevant image-to-image architectures exploit only spatial or temporal self-similarity and do so via a loss applied only at a single image-scale, with naive multi-scale spatiotemporal extensions collapsing to degenerate solutions. To these ends, this paper makes two contributions: (1) It presents a local and multi-scale spatiotemporal representation learning method for image-to-image architectures trained on longitudinal images. It exploits the spatiotemporal self-similarity of learned multi-scale intra-subject image features for pretraining and develops several feature-wise regularizations that avoid degenerate representations; (2) During finetuning, it proposes a surprisingly simple self-supervised segmentation consistency regularization to exploit intra-subject correlation. Benchmarked across various segmentation tasks, the proposed framework outperforms both well-tuned randomly-initialized baselines and current self-supervised techniques designed for both i.i.d. and longitudinal datasets. These improvements are demonstrated across both longitudinal neurodegenerative adult MRI and developing infant brain MRI and yield both higher performance and longitudinal consistency.
Accept
The paper proposes a self-supervised deep-learning framework for image-to-image translation tasks, such as segmentation, that accommodates and fully exploits longitudinal data. Specifically the method provides a mechanism to impose consistency in output across multiple points from the same individual and simple regularisation terms to avoid some problems common with other methods, such as mode collapse. The authors compare the method against baselines in two distinct neuroimaging segmentation tasks, which nicely demonstrate the additional power afforded by imposing longitudinal consistency. The reviews overall reported that the submission tackles an important problem, presents well formulated experiments, and shows a significant advance over the SOTA. The reviews raised concerns raised included non-specialist accessibility, adding more related work, and questions w.r.t. the claims, presentation, and relevance of the results, and particularly about the mode collapse problem. The authors have addressed these concerns, in particular by adding a new section (Section E/Need for regularization) to the Supplementary Material which contains several new visualizations and quantifications of the mode collapse problem (from ablation experiments). The paper has been updated to reflect some clarifications required by reviewer de4n. Some citations suggested by Reviewer r9Zh are now discussed in the submission. As it is, the paper meets all conditions for acceptance at NeurIPS 2022.
val
[ "KfujYbvlQAG", "8-LjQy8PDZV", "-nkY_37R9g", "3N3OnmQlaT8", "wAHBlvOQ2jI", "OZQbYjWCg8r", "h3WXvetow6S", "1B5Dj2M7BsV" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " (cont’d)\n\n**What are our solutions doing?** To rectify the problems of this ablation, we introduce several forms of regularization designed to increase the spatial and inter-channel variability of representations and ultimately yield diverse representations for neuroanatomical structures. To explain **Figure 1**, in row B (yellow box), the intra-subject similarity maps are either constant or have no semantic relevance. On adding proper regularization to this ablation in row C (green box), the decoder self-similarity maps localize to frontal lobe white matter as expected. Similar trends are observed in Supplemental Figure 5.\n\nFor clarity, we now replace the term “mode collapse” with *low-diversity embeddings* (with artifacts, when relevant).\n\n**Can we do more to describe, quantify, and illustrate this problem?** Yes, thank you for this suggestion. We now provide further qualitative and quantitative evidence of low diversity embeddings and artifacts in that ablation in **Supplementary Material Section E**:\n- **Figure 14** visualizes U-Net activations without/with regularization alongside the singular values of the sample covariance matrices (inspired by [Jing22]) of the spatial feature projections.\n - In the singular value plots, the representation space is shown to collapse/have a lower rank without regularization, as indicated by the vanishing singular values. This rapid spectral decay is lessened by the proposed regularization indicating representations of higher spatial variability.\n - While the encoder activations (rows 1, 2; cols a, b) qualitatively resemble typical deep network activations with or without regularization, the decoder activations (rows 3–5; cols a, b) converge to degenerate spatial patterns with artifacts without regularization.\n- In **Figure 15**, we query a diverse range of neuroanatomy and background locations. In the yellow and green boxes for the unregularized ablation, the self-similarity value does not appreciably change or have anatomical relevance regardless of the query. These problems are alleviated with the proposed method.\n- In **Figure 16**, we fix the query and compute their similarity to various 2D intra-subject key slices in 3D space. We find that the artifacts and low-diversity embeddings are consistent across 3D space in the unregularized ablation (yellow boxes).\n- In **Figure 17**, we plot the standard deviation of the spatial projection vectors and show that the unregularized ablation has low variance in the decoder layers. With regularization, the decoder layers produce higher spatial variability.\n- In **Figure 18**, we plot the cosine similarity loss values with/without regularization. As indicated by the green arrows, the decoder similarity losses without regularization converge to degenerate solutions and achieve a trivially-true ideal loss as is common with negative-free approaches (please see Figure 2a of [Chen21]). With regularization, both encoder and decoder losses converge more gradually, which typically empirically corresponds to more useful representations downstream.\n\n\n> *The speculation that this is due to skip connections between the encoder and decoder seems hard to verify from the presented evidence. 
This speculation is repeated multiple places and used as a design argument in sections \"Methods - Orthogonality, Variance and covariance\".*\n\nAs described above, we demonstrate that while the losses, feature maps, standard deviations, and singular values of the encoder layers typically achieve comparable results with or without regularization, the *decoder* activations and projections all typically converge to degenerate solutions without regularization early in training. Our speculation (based on the empirical trends presented in Figures 14-18) is that skip connections enable the decoder loss to be trivially minimized using good representations from the encoder (Figure 18), and that this trivial minimization does not enforce learning semantically-relevant decoder representations. \n\nHowever, we agree that this speculation should have been better supported in the initial submission and we hope that this is addressed by the new empirical support presented in **Supplemental Sec. E**. We have also revised the presentation of the methods to account for this and have clarified the speculative aspects. Please let us know if any detail remains unclear\n\nLastly, we note that the ultimate success of representation learning methods is most commonly empirically measured by higher performance on downstream tasks. In our context, decoder regularization is demonstrated to learn representations which transfer better across multiple segmentation tasks with several measures such as performance and consistency over unregularized ablations and multiple existing baselines.\n\n## References\n[Jing22] Jing, et al. \"Understanding Dimensional Collapse in Contrastive Self-supervised Learning.\" ICLR22\n\n[Chen21] Chen & He. \"Exploring simple siamese representation learning.\" CVPR21", " Thank you for your valuable feedback and for noting the importance of the problem being worked on. We believe that there may have been a few technical and presentation miscommunications on our part and we hope to address them below.\n\n> *Clarity, I find the work hard to follow with long sentences that often appear ambigious to me.*\n\nThank you for letting us know. We have revised the text for further clarity (with new text marked in green) and would be happy to receive feedback on which specific sections could be made clearer.\n\n> *Variance over subjects and multiple training results is not presented.*\n\nThis may be a miscommunication as experimental variances are always reported except in the cases where we additionally report median values. \n- If Table 1 is being referred to, we report *median* scores as a summary statistic across a variety of performance scores and datasets for visual clarity and space purposes. As indicated in the caption, means and standard deviations for the same experiments are reported in the supplementary material (Table 7, reported as `mean(std)`). \n- If the numbers on top of the individual boxes in Figure 4 are being referred to, those are also *median* statistics to add additional information on top of the means and standard deviations which are already encoded in the boxplot y-axis values and box heights. 
The remaining quantitative results all report means and standard deviations, except where noted to be the median.\n\nIf we missed any other quantitative summary, please let us know and we would be happy to add it.\n\n> *What is the relevance and significance of the results?*\n\nWe respectfully posit that developing a self-supervised spatiotemporal representation learning framework which generalizes to every setting of annotation-availability (one/few/full-shot segmentation, see below) is a significant and relevant result for the biostatistics of neuroanatomical disorders. This is demonstrated by both: \n- Improved downstream segmentation performance (a core medical imaging task) and;\n- Improved longitudinal consistency in downstream segmentations (crucial to biostatistics).\n\nFurther, these improvements are shown to generalize across diverse neuroimaging datasets showing heterogeneous patterns of *both* degeneration (elderly adults with/without Alzheimer's Disease) and development (infants with/without Autism), which has not been previously demonstrated to our knowledge. Lastly, we note that our representation learning method is general-purpose for voxel-level image time-series tasks and can potentially be used for other applications in future work as well.\n\n> *What is the relevance outside one-shot-segmentation?*\n\nAlongside the one-shot segmentation improvements provided in the main text, we also demonstrated improvements over every baseline in the **few-shot** (10% labeled data) and **fully-supervised** (100%) segmentation settings in Section A of the Suppl. Materials in Tables 3 and 4. This could have been communicated better on our part and we now accordingly emphasize these additional suppl. experiments in the main paper.\n\n## Major concern & response:\n> *I have problems appreciating the proposed solutions as I do not understand and not convinced of the problem the authors are trying to address. Specifically the claims of mode collapse, supposedly illustrated in Fig1. Could the authors do more to describe, quantify and illustrate this problem?*\n\nThank you for this question. We will try to recontextualize the problem and our solutions in our response here alongside providing several new illustrations and quantifications in **Section E** of the Supplementary Material.\n\n**What problem are we trying to solve?** Our main objective is to leverage longitudinal scans to learn useful *multi-scale* representations in all layers of an image-to-image architecture in a self-supervised manner. If properly trained, these representations should generalize to downstream tasks such as segmentation and provide better longitudinal consistency. Importantly, no current method attempts simultaneous *multi-scale non-i.i.d. spatiotemporal* representation learning to our knowledge.\n\n**Where is the miscommunication?** Crucially, the *mode collapse* problem in the initial submission occurs in an **ablation** of our method. This ablation demonstrates that naive extension of negative-free representation learning methods (needed to avoid false negatives with unsupervised contrastive pretraining) to multi-layer patchwise training may lead to degenerate solutions. We follow the representation learning literature [Jing22] and define (in our context) collapsed/degenerate solutions to be the setting where internal patch representations show low-diversity and no semantic coherence. 
This phenomenon is well established in the negative-free global (i.e., one embedding per image) representation learning literature and always needs regularization (e.g., [Chen21], BYOL, WMSE, VICReg, BarlowTwins, etc).\n\n(cont’d)", " We thank the reviewers for their time and expert feedback which has been used to improve the revised submission. \n\nAs a summary of the reviews, we were happy to see that the submission was found to tackle an important problem [`xiZS, r9Zh, de4N`] with well formulated experiments [`xiZS, r9Zh`] which show a significant advance [`xiZS`] and novel generalization [`r9Zh`]. Concerns raised included non-specialist accessibility [`xiZS`], adding more related work [`r9Zh`], and questions w.r.t. the claims, presentation, and relevance of the results [`de4N`].\n\nTo these ends, we address these concerns in the revision (with new text marked in green) and individual responses below. Major changes include:\n- Based on Reviewer `de4N`’s concerns, we have added a new section (**Section E/Need for regularization**) to the Supplementary Material which contains several new visualizations and quantifications of the *mode collapse* problem with one of our ablations which is addressed by the proposed regularization.\n- Further, several technical and experimental clarifications are made in the individual response to Reviewer `de4N` and the paper has been updated to reflect these clarifications where applicable.\n- The presentation has been made more accessible [`xiZS`] and ambiguous sentences have been made clearer [`de4N`].\n- The citations suggested by Reviewer `r9Zh` are now discussed in the submission.\n- We have added the appendices to the main paper PDF for easier reference.\n\nAgain, we deeply appreciate the feedback and are happy to receive any further questions, comments, and/or suggestions for improvements.", " Thank you for the encouraging evaluation and for the missing references!\n\n> *Weaknesses: Motivation part of the introduction could be expanded and improved with some citations. Following works also tackle self-supervised learning in longitudinal brain imaging.*\n\nWe agree, thank you for letting us know of these relevant papers - we have now added them to the submission. We further discuss differences between our submission and their work in detail below:\n\n> *Can the authors address their differences with the current work? To et al. “Self-supervised Lesion Change Detection and Localisation in Longitudinal Multiple Sclerosis Brain Imaging”, MICCAI, 2021*\n\nTo, et al.’s work is a specialized framework for synthetic image and pseudo label generation for *change detection* of multiple sclerosis lesions (with homogeneous intensity appearance) in MRI time-series. They do so by training a VAE on a sequence of images with no changes and then augment the VAE reconstructions to simulate changing lesions. They then use these augmented images to train a change detection segmentation network. As such, their goals and tasks are distinct from ours as we do not work on change detection of MS lesions or on synthetic data generation. \n\nInstead, we focus on general-purpose unsupervised spatiotemporal representation learning, with downstream generalization demonstrated to one-shot segmentation of multiple brain structures (and not just lesions) in the main paper and few-shot and fully supervised segmentation extensions in the supplementary material.\n\n> *Can the authors address their differences with the current work? Wei et al. 
“Consistent Segmentation of Longitudinal Brain MR Images with Spatio-Temporal Constrained Networks”, MICCAI, 2021*\n\nWei, et al. develops a two-stage segmentation-focused framework that does not work on self-supervised representation learning. To summarize their method:\n- In stage 1, they first train a fully-supervised segmentation network on a fully-annotated cross-sectional MRI dataset with hundreds of subjects. \n- In stage 2, they finetune this network on unlabeled longitudinal data using specialized losses to make its segmentation predictions more temporally consistent. These losses include consistency losses over features, volume counts, labels, and penalizing deviations from a quadratic volume count model (pre-fitted on a large-scale supervised dataset). \n\nTo detail the distinctions between our submission and their framework:\n- Their method requires access to a large annotated cross-sectional dataset for fully-supervised segmentation pretraining. We instead develop general-purpose self-supervised representation learning methods on unlabeled longitudinal images and then finetune on downstream tasks such as one-shot, few-shot, and fully-supervised segmentation.\n- Some of their stage-two finetuning losses assume access to information which may not broadly generalize across various types of longitudinal studies. For example, they assume access to 3+ repeat images per subject, access to a fully-annotated large dataset for fitting a quadratic volume model, and constant image intensity across time via their MSE-based feature consistency loss (which is inapplicable to infant studies such as ours where brain intensity strongly changes with time, for example).\n- Their segmentation consistency loss is similar to ours at a very high-level, with the primary differences being that we use segmentation consistency as a *regularizer* alongside a main segmentation loss while they minimize segmentation consistency as a primary loss. Minimizing a segmentation consistency loss alone as in their work may potentially have a degenerate solution where the network predicts the same structure regardless of age. Their work appears to avoid this degenerate solution by using a quadratic volume model regularizer - however, this regularizer requires a large-scale labeled database for tuning the per-label quadratic model which is inapplicable in our setting. \n\nTo reiterate, these references have been added and we thank you for your time and feedback. We would be happy to address any further questions and/or comments.", " Thank you for the positive feedback!\n\n> *In places the writing could be made more accessible to the non-specialist and organised a bit better. Highlighting specifically what the reader is looking for in Figure 1 would help interpret that figure. The task isn't specified in the OASIS paragraph (starting line 202). These things can be figured out but the authors could make the reader's job easier.*\n\nWe agree. We have now added arrows and boxes for emphasis to Figure 1 and revised its caption to make the figure easier to interpret. The task has been described in the referenced OASIS paragraph and we have also revised the overall paper for clarity where possible. \n\nWe would be happy to hear of any other aspects which could be further improved and thank you again for your time.", " The paper proposes a self-supervised deep-learning framework for image-to-image translation tasks, such as segmentation, that accommodates and fully exploits longitudinal data. 
Specifically the method provides a mechanism to impose consistency in output across multiple points from the same individual and simple regularisation terms to avoid common problems with other methods, such as mode collapse. The authors compare the method against baselines in two distinct neuroimaging segmentation tasks, which nicely demonstrate the additional power afforded by imposing longitudinal consistency. Strengths\n\n- The work addresses a persistent challenge and shows a significant advance.\n\n- The experimental work is well thought out and showcases nicely the potential of the idea.\n\nWeaknesses\n\n- (minor) In places the writing could be made more accessible to the non-specialist and organised a bit better. Highlighting specifically what the reader is looking for in Figure 1 would help interpret that figure. The task isn't specified in the OASIS paragraph (starting line 202). These things can be figured out but the authors could make the reader's job easier. Broadly happy with it apart from the minor issues raised above. Seems fine.", " This paper tackles self-supervised learning for longitudinal medical images by designing loss functions that consider the temporal evolutions of scans taken from the same subject with applications in downstream segmentation. A U-net architecture is pre-trained via a combined loss function that (i) maximizes the similarity between encoder features from corresponding local patches acquired at different times from the same subject, (ii) minimizes the similarity between encoder and decoder features at the same resolution, as well as at different channels to combat mode collapse and improve generalization, (iii) minimizes the reconstruction error between different augmentations of the same scans. Following pre-training, one-shot segmentation fine-tuning is augmented again with temporal consistency by maximizing the Dice score between segmentation predictions at different times from the same subject. Resulting method outperforms both randomly initialized U-Net, as well as several state-of-the-art pre-training methods with and without benefiting temporal information. Strengths: Self-supervised learning from longitudinal medical imaging is an important problem due to costly annotations. The paper is well-written, particularly in experiment setup. Generalization of state-of-the-art autoencoder pre-training principles to temporal consistency is novel.\n\nWeaknesses: Motivation part of the introduction could be expanded and improved with some citations. \n Following works also tackle self-supervised learning in longitudinal brain imaging. Can the authors address their differences with the current work?\n\nWei et al. “Consistent Segmentation of Longitudinal Brain MR Images with Spatio-Temporal Constrained Networks”, MICCAI, 2021\n\nTo et al. “Self-supervised Lesion Change Detection and Localisation in Longitudinal Multiple Sclerosis Brain Imaging”, MICCAI, 2021\n Limitations and potential impacts have been addressed.", " The paper claims that prior works on self-supervised learning are ill-suited to handle typical longitudinal medical image sequences with problems due to assumptions of i.i.d data and methods that are not able to do per pixel-level predictions. As such the authors propose a number of solutions:\n\n1. a spatiotemporal patchwise similarity loss, which attempts to increase longitudinal similarity between features at similar spatial positions and at all layers of the network\n2. 
enforcing orthogonality of features from the encoder and decoder.\n3. encouraging variance and decorrelation of decoder features (see the sketch below)\n4. a denoising reconstruction loss\n5. a finetuning term that encourages segmentations to be similar over multiple time points (also sketched below).\n\nThe proposed solutions are evaluated on three datasets with longitudinal medical images, in comparison with a number of previously developed self-supervision methods as well as standard supervised methods such as UNet.\n + Self-supervision in medical imaging, and particularly in longitudinal analysis, is very relevant and an important topic that, in my opinion, is not well addressed by existing solutions.\n\n- Clarity, I find the work hard to follow, with long sentences that often appear ambiguous to me. \n\n- Quality and significance, what is the relevance and significance of the results? Variance over subjects and multiple training results is not presented. What is the relevance outside one-shot segmentation?\n\n- Soundness, I am not sure the observations and conclusions about mode collapse (Fig. 1) are correct. There is little evidence presented. The speculation that this is due to skip connections between the encoder and decoder seems hard to verify from the presented evidence. This speculation is repeated in multiple places and used as a design argument in the sections \"Methods - Orthogonality, Variance and covariance\". - I have problems appreciating the proposed solutions, as I do not understand, and am not convinced of, the problem the authors are trying to address. Specifically, the claims of mode collapse, supposedly illustrated in Figure 1. Could the authors do more to describe, quantify and illustrate this problem? - Limitations are reasonably well addressed." ]
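To make terms 3 and 5 of the list above concrete, a minimal PyTorch sketch of the generic recipes involved (a hedged illustration under stated assumptions, not the authors' implementation; it assumes flattened feature projections for term 3 and spatially registered time points for term 5):

```python
import torch
import torch.nn.functional as F

def var_cov_regularizer(z, gamma=1.0, eps=1e-4):
    """VICReg-style variance/covariance terms on decoder feature projections.
    z: (N, D) flattened spatial feature vectors."""
    z = z - z.mean(dim=0)
    std = torch.sqrt(z.var(dim=0) + eps)
    var_loss = F.relu(gamma - std).mean()            # keep per-dimension std above gamma
    cov = (z.T @ z) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / z.shape[1]    # decorrelate feature channels
    return var_loss + cov_loss

def temporal_dice_consistency(logits_t1, logits_t2, eps=1e-6):
    """Encourage soft segmentations of two scans of the same subject to agree,
    via a Dice-style overlap between softmax maps (assumes registered scans)."""
    p = logits_t1.softmax(1).flatten(2)              # (N, C, voxels)
    q = logits_t2.softmax(1).flatten(2)
    inter = (p * q).sum(-1)
    dice = (2 * inter + eps) / (p.sum(-1) + q.sum(-1) + eps)
    return 1 - dice.mean()
```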
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "8-LjQy8PDZV", "1B5Dj2M7BsV", "nips_2022_1xqE9fRZch5", "h3WXvetow6S", "OZQbYjWCg8r", "nips_2022_1xqE9fRZch5", "nips_2022_1xqE9fRZch5", "nips_2022_1xqE9fRZch5" ]
nips_2022_o8vYKDWMnq1
Understanding Deep Neural Function Approximation in Reinforcement Learning via $\epsilon$-Greedy Exploration
This paper provides a theoretical study of deep neural function approximation in reinforcement learning (RL) with the $\epsilon$-greedy exploration under the online setting. This problem setting is motivated by the successful deep Q-networks (DQN) framework that falls in this regime. In this work, we provide an initial attempt on theoretical understanding deep RL from the perspective of function class and neural networks architectures (e.g., width and depth) beyond the ``linear'' regime. To be specific, we focus on the value based algorithm with the $\epsilon$-greedy exploration via deep (and two-layer) neural networks endowed by Besov (and Barron) function spaces, respectively, which aims at approximating an $\alpha$-smooth Q-function in a $d$-dimensional feature space. We prove that, with $T$ episodes, scaling the width $m = \widetilde{\mathcal{O}}(T^{\frac{d}{2\alpha + d}})$ and the depth $L=\mathcal{O}(\log T)$ of the neural network for deep RL is sufficient for learning with sublinear regret in Besov spaces. Moreover, for a two layer neural network endowed by the Barron space, scaling the width $\Omega(\sqrt{T})$ is sufficient. To achieve this, the key issue in our analysis is how to estimate the temporal difference error under deep neural function approximation as the $\epsilon$-greedy exploration is not enough to ensure "optimism". Our analysis reformulates the temporal difference error in an $L^2(\mathrm{d}\mu)$-integrable space over a certain averaged measure $\mu$, and transforms it to a generalization problem under the non-iid setting. This might have its own interest in RL theory for better understanding $\epsilon$-greedy exploration in deep RL.
Accept
This paper provides a theoretical study of the successful deep Q-networks framework under an online, episodic Markov decision process (MDP) model with T episodes. All reviewers and the AC believe this is a solid RL theory paper.
train
[ "x-UOvfk_mGJ", "xGABUouCtQO", "FokvdXhcex", "v9Wzf_8yT2Q", "OR9I6XHSYx5", "qlRlvydlhYM9", "xa60BstGRaJ", "yeqOu6aPGeb", "JzbBDOi14ub", "F7ebi7rot5F", "I4qKHzkJFhp", "now2Fxty9C5", "QkZ4fsDvby", "L3INGY0ani1", "MbvDyZWMBLu", "Y_pS7CaLTxx", "2BEG0EneVOU" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 7U3N,\n\nWe are grateful for your constructive feedback on improving this work. \nWe will polish this paper based on your suggestions in the final version.\n\nBest regards,\n\nAuthors", " I thank the authors for their efforts. \n\nWith these modifications, the results now make sense to me, and I will adjust my rating.", " Dear Reviewer Pwx6,\n\nThanks for your positive support.\n\nWe will continue polishing our work based on your constructive feedback and suggestions in the final version.\n\nBest regards,\n\nAuthors.", " Thanks for clarifying these parts. My concerns are well addressed. Accordingly, I have changed my score to 6.", " We are thankful to the Reviewer 7U3N’s constructive and detailed feedback! \n\nWe agree with both two reviewers’ suggestion on the difference with DQN, and accordingly we have restated the story of this work and renamed the title to \" **Understanding Deep Neural Function Approximation in Reinforcement Learning with $\\epsilon$-Greedy** \".\n\n---\n\n**Q1:** difference with DQN\n\n**A1:** We agree with the reviewer that our algorithm based on value iteration via deep neural networks is different from one-step gradient descent in Q learning. \nTheoretically understanding DQN can be conducted in the perspective of function approximation, target network, and Q-learning.\n\nOur motivation focuses on deep neural function approximation beyond \"linear\" regime that shares the same spirit and main intention of DQN, and thus provides some guidelines on architecture of neural networks (width and depth) to ensure sublinear regret based on different smoothness levels of the target Q-function.\n\nWe really do not want to create confusion in the community and we acknowledge the difference between DQN and Algorithm 1, and thus we have renamed our algorithm “ **Value Iteration via neural networks under $\\epsilon$-greedy** ”. We have revised this paper to avoid misleading, refer to [paper revision response](https://openreview.net/forum?id=o8vYKDWMnq1&noteId=qlRlvydlhYM9).\n\n---\n\n**Q2:** PSE assumption and martingale sequences\n\n**A2:** We agree with the reviewer that, it is hard to examine whether our PSE assumption (implicitly) affects the distribution of $\\widehat{Q}_h^1, \\widehat{Q}_h^2, \\cdots, \\widehat{Q}_h^t$. \n\nAccording to your suggestions, we remove our PSE assumption but use the $\\epsilon$-greedy policy in Algorithm 1. In this case, this issue has been solved. Interestingly, understanding deep neural function approximation with $\\epsilon$-greedy is still promising in the RL theory community, and even more close to DQN that also uses $\\epsilon$-greedy in practice.\n\n Accordingly, we follow this story to restate our motivation, rename the title to “ **Understanding Deep Neural Function Approximation in Reinforcement Learning with $\\epsilon$-Greedy** ”, and provide a possible way to understand deep RL in terms of function class under the $\\epsilon$-greedy exploration. We have tried our best to avoid misleading this work, refer to the revised version for details.\n\n---\n\n**Q3:** how to find the closest deterministic element? And the size of $\\epsilon$-net is exponential order of $d$. Is this even implementable?\n\n**A3:** In Eq. (45), line 979, we introduce $\\widetilde{Q}_h^t$ as a proxy for $\\widehat{Q}_h^t$ and we account for the mismatch between the policy greedy w.r.t. $\\widetilde{Q}$ and w.r.t. $\\widehat{Q}$ in the analysis. 
We don’t need to find such a point as $\\widetilde{Q}_h^t$ is an intermediate random variable to aid our proof.\nBesides, in Jin et al. (COLT2020) for linear function approximation, they also don’t need to find the closest deterministic element, see the proof of Lemma D.4 in Jin et al. (COLT2020).\n\nIt’s true that the size of $\\epsilon$-net is exponential order of $d$, but what we use is the logarithm of covering number, i.e., metric entropy, in our proof. This is also conducted by Lemma D.4 and D.6. in Jin et al. (COLT2020) via this style.\n\n---\n[Jin et al. (COLT2020)](https://arxiv.org/abs/1907.05388) Jin, Chi, Zhuoran Yang, Zhaoran Wang, and Michael I. Jordan. Provably efficient reinforcement learning with linear function approximation. COLT2020. \n\nWe are happy to elaborate if the reviewer 7U3N has further questions!\n", " We first thank all the reviewers' effort and constructive feedback on this work!\n\nAccording to suggestions from all the reviewers, we have revised this paper to avoid misleading. We made our work center around **deep neural function approximation beyond the “linear” regime under $\\epsilon$-greedy exploration in deep RL, affected by function class and architectures (e.g., width and depth)**. The main revision includes the following folds:\n\n- Since the $\\epsilon$-greedy exploration satisfies Assumption 1, with the proof in Lemma 2 in Appendix C, see line 649, we remove Assumption 1 but use $\\epsilon$-greedy in the current version. \n- We rename our Algorithm 1 “Value Iteration via neural networks under $\\epsilon$-greedy exploration” as well as the paper title “ **Understanding Deep Neural Function Approximation in Reinforcement Learning with $\\epsilon$-Greedy** ” to avoid misleading DQN. In this case, we rephrase our expression on the relationship between our work and DQN in the abstract and the introduction. Moreover, a clarification is also given in the conclusion, refer to line 338 for details.\n- Our regret bound is slightly changed as $\\epsilon$-greedy is applied. Specifically, in the special case (linear function approximation) of our work, our result is able to recover the optimal regret bound $\\widetilde{\\mathcal{O}}(H^{4/3} A^{1/3} T^{2/3})$ for contextual bandits under $\\epsilon$-greedy [1], see line 249 for details.\n\n\nFeel free to leave any comments and we are happy to hear your further suggestions on the revised version.\n\nBest regards,\n\nAuthors\n\n---\n[1] Lattimore, Tor, and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020.\n", " First I thank the authors for the explanations. Yet I still have some concerns as follows:\n\n1. I have read the reviews from other reviewers and the corresponding responses. Especially, I agree with Reviewer Pwx6's concern that there is a big discrepancy between Algorithm 1 and the standard DQN algorithm, as I have also mentioned in the review. I think they are different even without experience replay. Line 6 of Algorithm 1, or equivalently minimizing Eq. (3), is basically solving the problem of finding a fixed point of the TD error based on the collected data, which is essentially very different from _one_ step of (mini-batch) gradient descent. However, most of the motivation part of this paper actually discusses the practical DQN, which I think is misleading.\n2. Regarding the PSE assumption, the authors have added a proof that using $\\epsilon$-greedy policy guarantees PSE. 
I agree that in this case there is no issue about $\widehat Q_h^1, \ldots, \widehat Q_h^T$ being a martingale sequence. However, as long as the PSE assumption (implicitly) affects the distribution of $\widehat Q_h^1, \ldots, \widehat Q_h^T$, it is no longer a martingale sequence. I feel it is hard to examine this in general.\n3. Also thanks for the clarifications on the issue of conditional independence. According to the response, the authors proposed to act greedily with respect to the closest deterministic elements of the covering net. Then my question is how to find the closest deterministic element? Meanwhile, the size of the $\epsilon$-net should be exponential in dimension $d$ (at least for the linear case, as far as I know). Is this even implementable?", " We are thankful to Reviewer Pwx6 for the constructive feedback.\n\n\n**Q1:** how to set epsilon-greedy to satisfy Assumption 1? For me, this assumption seems to depend on the function class and the parameter $\delta$\n(**we rename the parameter to $\delta$ here to avoid confusion with $\epsilon$-greedy**).\n\n**A1:**\nAssumption 1 is used to build the relationship between the $L^2(\mathrm{d} \bar{\mu}_h^t)$ and $L^{\infty}$ norms.\n\nIn our proof, we only require that Assumption 1 holds for $f$ equal to the temporal difference error $\Gamma_h^t$; refer to Lemma 3 (line 676) for details.\nMoreover, we add a proof that the $\epsilon$-greedy policy satisfies Assumption 1 for any $\epsilon \in (0,1)$; refer to Appendix I (line 1000) for the proofs in the revised version. \n\nIn particular, we have (in the worst case, with $\delta$ equal to zero) \n$$\n \bar{\mu}^t_h(\mathcal{G}_{\delta}) \geqslant \Omega \left( \Big( \frac{\epsilon}{A} \Big)^H \right) > 0, \quad \forall \epsilon \in (0,1) \text{ and } \delta \geqslant 0.$$ \n\nFurther, the exponential dependence on $H$ (in the worst case for any MDP under $\epsilon$-greedy) can be avoided at an additional cost of worsening the $T$ dependence. For example, in the Besov space, by setting $ \epsilon = \mathcal{O}(H^{\frac{4}{H+2}} T^{-\frac{1}{ (H+2) }}) $, we can recover the sublinear regret bound $\widetilde{\mathcal{O}} (H T^{\frac{H+1}{H+2}})$ of Theorem 3 in [S9] in the worst case. \n\n---\n\n**Q2:** fair comparison with [12]\n\n**A2:** As written in line 254 in the previous version, we believe that\n\n“The results should be compared with care because the focus of the two works is different.”\n\nAlbeit not directly comparable, we mentioned this work because it is the closest literature to our scope. \nWe can remove this comparison in the final version if it is misleading in your view.\n\n---\n\n**Q3:** discussion on target networks\n\n**A3:** Thank you for following up on this point. We agree with the reviewer's claim that, even if the optimization error is neglected, our algorithm becomes close to DQN without target networks (studied in [S6]) but not to the most famous version of DQN that uses a target network.\n\nWe opted for this name because our Algorithm 1 is a value-based method using deep neural network function approximation that shares the same spirit and main intention of DQN. However, we really do not want to create confusion in the community, and we acknowledge the fact that the study of target neural networks is beyond the scope of this work.\n\nTo this end, we have renamed our algorithm “**Online Fitted Q Iteration with Deep Neural Networks**” in the revised version.
We would welcome suggestions from the reviewer’s side about this renaming of Algorithm 1. Feel free to suggest any alternative that sounds more appropriate in your view.\n\nThanks again for your constructive and helpful review!\n\nBest,\n\nAuthors.\n", " Thanks for your detailed explanation. I have a better sense of the technical challenge mentioned in Q5 and A5, but I am more confused about the other parts. Let us discuss some major issues. I would appreciate it if the authors could help me.\n\n**On Assumption 1 and Comparison with [12]**: You mention that epsilon-greedy can satisfy Assumption 1, upon which the sublinear regret in Theorem 1 is achieved. Could you explain more about how to set epsilon-greedy to satisfy Assumption 1? For me, this assumption seems to depend on the function class and the parameter $\epsilon$. Furthermore, it should hold uniformly for $\epsilon \geq 1/\sqrt{T}$. If this assumption is well justified, I can accept the sublinear regret in Theorem 1. \n\nAlso, in the Remark of Theorem 1, the width parameter in [12] is compared. However, if my understanding is correct, this paper and [12] focus on different settings (i.e., there is no Assumption 1 in [12]). Is this comparison fair? \n\n\n\n**On Target Network**: You explain that the difference between Algorithm 1 and the practical DQN algorithm is that DQN takes stochastic gradient steps to solve the empirical objective in Equation (3). Thus, your thought is that this difference is minor if the optimization error is well controlled. If I misunderstand your claim, please correct me. But I have a different opinion. For DQN with the target network, the learning target does not change during a period, but the exploration continues. It seems that this practical scenario cannot be reduced to the framework in this paper. Hence, I much prefer the study in [5] by Zanette et al. \n\nAgain, I understand your motivation, as explained in A4. But my concern is that Algorithm 1 may be quite different from the practical algorithm DQN. Algorithm 1 might be a new algorithm that can converge and be sample-efficient in the online setting. To my understanding, there is no sufficient evidence that Algorithm 1 is close to DQN. DQN is a simple online algorithm that can handle streaming data (so the computation costs could be cheap per iteration). In my view, DQN is different from the VI-style algorithms (VI means value iteration) in this paper and [S6], which are much more similar to model-based algorithms. To the best of my knowledge, only [5] is close to the goal of designing sample-efficient algorithms with low computation costs. If the differences between Algorithm 1 and DQN are not negligible, let us accept and clarify these differences.", " **Q4:** Q-learning with function approximation can easily diverge, and the target network is introduced to address this issue in DQN. However, these features are not captured by Algorithm 1 and the theoretical results.\n\n**A4:** We agree with the reviewer that one reason for the success of DQN in practice is the target neural networks, but understanding DQN from another perspective, i.e., the function space and architectures (e.g., width and depth), deserves some attention.\n\nInterestingly, removing the target neural networks in DQN is also possible and achieves good performance in practice [S8].\nThis is similar to the scenario where training neural networks by SGD sometimes fails to converge, and we use techniques, e.g., batch normalization, to aid training.
But this does not mean that a theoretical understanding of SGD without batch normalization is unacceptable.\n\nMoreover, we thank the reviewer for providing a relevant paper [5] on understanding DQN with the target neural network and linear function approximation, though linear function approximation could be far away from understanding practical DQN.\nIn fact, understanding DQN via the target neural networks and via the function space are mutually complementary rather than mutually exclusive. Both are important, and neither should exclude the other.\n\n**Q5:** Can the authors justify the (technical) challenge after Assumption 1 is satisfied?\n\n**A5:** The goal of Assumption 1 is to transform the TD error in the $L_{\infty}$ space to the $L_2(\mathrm{d} \mu)$ space. Even if Assumption 1 is used, the key technical challenge is how to bound the temporal difference (TD) error.\n\nTo be specific, previous work on linear/kernel function approximation [S6, S7] designs a UCB-type bonus function to ensure that the temporal difference error is smaller than zero. However, this UCB-type bonus function depends on the explicit feature mapping, which is unknown for neural networks.\n\nIn our case, we have to bound the TD error without such a bonus function design. Using Assumption 1, the TD error in the $L_{\infty}$-norm can be transformed to the $L_2(\mathrm{d} \mu)$ space.\nAccordingly, our key technical challenge is to transform the TD error estimation into generalization bounds under non-iid data; see lines 319 to 334 for details.\n\n**Q6:** Assumption 1 is quite strong for studying online learning. Though this paper explains that epsilon-greedy can achieve this, it is known that epsilon-greedy is quite inefficient.\n\n**A6:** We agree with the reviewer that epsilon-greedy is inefficient in some worst cases, according to Jin et al. (2018). However, epsilon-greedy is still used in DQN in practice. Besides, in theory, epsilon-greedy is demonstrated to be efficient under some “well-structured” MDPs [S9].\n\n-------refs-------\n\n[S1] Suzuki, Taiji. Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. ICLR2019.\n\n[S2] Schmidt-Hieber, Johannes. Nonparametric regression using deep neural networks with ReLU activation function. *The Annals of Statistics*, 2020.\n\n[S3] Lin, Shao-Bo, Kaidong Wang, Yao Wang, and Ding-Xuan Zhou. Universal consistency of deep convolutional neural networks. *IEEE Transactions on Information Theory*, 2022.\n\n[S4] Wang, Yifei, Jonathan Lacotte, and Mert Pilanci. The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions. ICLR2022.\n\n[S5] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. ICML2019.\n\n[S6] Chi Jin, Zhuoran Yang, Zhaoran Wang, and Michael I. Jordan. Provably efficient reinforcement learning with linear function approximation. COLT2020.\n\n[S7] Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, and Michael I. Jordan. On function approximation in reinforcement learning: Optimism in the face of large state spaces. NeurIPS2020.\n\n[S8] Seungchan Kim, Kavosh Asadi, Michael Littman, and George Konidaris. Deepmellow: removing the need for a target network in deep Q-learning.
ICML2022.\n", " Thanks for your constructive comments; we have prepared some clarifications on them below. We have polished this paper according to your suggestions, with changes highlighted in red for convenience.\n\n**Q1**: It is unclear why this paper studies the function approximation in the online setting. It seems that the offline setting considered in [1] is sufficient to study how to approximate optimal Q-value functions.\n\n**A1**: Our work seeks to theoretically understand DQN from the perspective of neural networks' function class and their architecture. We follow the key characteristics of DQN, which works in the online setting and uses $\epsilon$-greedy for exploration.\n\nIn fact, we have clarified that, when using $\epsilon$-greedy, our PSE assumption is satisfied for exploration; see the remark following Assumption 1 (line 219).\n\nEven though the exploration issue in the online setting is solved, the $\epsilon$-greedy scheme is not enough to ensure that the temporal difference (TD) error is smaller than zero, failing to ensure "optimism in the face of uncertainty".\nAccordingly, one key issue remains: how to achieve sublinear regret when the temporal difference error exists; see the response to **Q5** below for details. \n\nIf we put the above discussion aside, we only compare with [1] in terms of approximation ability. When compared to [1] on the Hölder space of the target Q-function, the function class considered in our work is larger and more suitable than the Hölder space for neural networks. For example, the Barron space in this work is the “largest” function space represented by two-layer neural networks (refer to Appendix A, lines 579-583); otherwise the curse of dimensionality occurs. The considered Besov spaces are more general than the Hölder space for function approximation by deep ReLU neural networks; refer to [S1] for details.\n\nMore importantly, based on our theoretical results, we provide width-depth guidelines for understanding DQN according to the smoothness of the target Q-function, which also coincide with practical DQN configurations.\n\n---\n\n**Q2:** To my knowledge, the value iteration algorithm (with neural networks) is rarely used in practice, meaning that a gap separates the theory in this paper from practical algorithms.\n\n**A2:** To us, the main difference is that we assume an exact solution at step 6 of Algorithm 1, while DQN takes some gradient steps on this objective. Also, DQN estimates the squared Bellman error with historical data, so this does not seem to be a big difference.\n\n---\n\n**Q3:** It is hard to compute the global minimizer defined in the empirical objective (i.e., Equation (3)) for each iteration. The computation cost is unacceptable.\n\n**A3:** We agree with the reviewer that in practice the computation cost is large. We follow the setting of [S1,S2,S3] in deep learning theory, which directly assumes that the global minima can be found. In particular, at least for two-layer neural networks beyond NTK, all minima can be obtained by convex optimization algorithms [S4]. Besides, in deep learning theory, one global minimum can also be found via over-parameterization [S5]. We will mention this point in our revision.\n\nNote that in the case of approximate solutions for linear/kernel MDPs computed by gradient descent (as well as SGD) after $K$ steps, we do have a clear solution.
Indeed, we can modify the line 7 in Alg.1 by introducing the extra bonus function:\n$\\widehat{Q}_h^t = \\min \\Big( Q_h^t + (1-\\eta \\lambda)^{K}HT, H-h+1 \\Big)^{+} $\nwhere $\\eta$ denotes the step-size satisfying $\\eta < 1/\\lambda$.\n\nAccordingly, such an approximation solution incurs an extra residual term $\\mathcal{O}((1-\\eta \\lambda)^{K} HT)$ in the regret bound that exponentially converges to zero.\n", " **Q5:** about PSE assumption and $\\widehat{Q}_h^t$ is not a martingale sequence\n\n**A5:** Here the function is restricted to $f \\in \\mathcal{F}$ which is $ L^p_{d \\mu}$-integrable, where $\\mu$ is the averaged measure over the historial state-action pairs, and $\\epsilon \\leq || f ||_{\\infty}$ ensures the RHS of Eq. (7) nonnegative. \nIn our setting, $f$ representing the temporal difference error is enough for proof.\n\nWhen employing $\\epsilon$-greedy exploration, our PSE assumption satisfied. In this case,the martingale sequence is not needed, instead it incurs in extra $\\epsilon H T$ regret in the regret bound. By taking $\\epsilon = T^{-\\alpha}$ for some certain $\\alpha \\in (0,1)$, the sublinear regret still holds under the $\\epsilon$-greedy policy.\n\nBesides, for deterministic dynamics in discrete states and actions spaces, we can use the first $|\\mathcal{S}| | \\mathcal{A}|/H$ episodes to visit each state-action pair once, which is feasible in Atari, and accordingly our PSE assumption is satisfied. In this case, the martingale structure still remains unchanged. \n\n---\n\n**Q6:** When running Algorithm 1, the collected data is **conditionally independent** instead of independent, because the policy in episode t depends on all the previous data. Then how can we apply Lemma 5/6 (line 320-321)?\n\n**A6:** True, we did not notice this point. Thanks a lot for noticing it! \n\nFollowing a covering argument for the function approximation class used for value functions [S1], **we have solved the issue**, refer to Appendix H in the revised version for details. We would sincerely appreciate it if the reviewer could check it.\n\nBriefly speaking, we propose to act greedly not with respect to $\\hat{Q}^t_h$ but **with respect to the closest deterministic elements of the covering set**. In this way we break the dependence between the episodes, introduce intermediate random variables based on independent data for unbiased estimation, and accordingly Lemmas 5 and 6 can be applied.\n\nBy doing so, two extra terms occur: one is the expected risk difference between independent and conditionally independent data; and the other is caused by a biased estimator. Both of these two terms can be estimated by the $\\epsilon$-covering. By taking $\\epsilon = H/ \\sqrt{t}$ following [S1], the final regret bound still remains unchanged.\nBesides, in practice, such independence requirements can be achieved by the experience replay in DQN.\n\nWe are grateful to the reviewer for pointing out this issue and would be glad to elaborate if the reviewer has further concerns. \n\n\n[S1] Jin, Chi, Zhuoran Yang, Zhaoran Wang, and Michael I. Jordan. Provably efficient reinforcement learning with linear function approximation. COLT2020.\n", " We thank the reviewer for the effort and the constructive feedback!\n\n**Q1.** The discussion about related works for RL with function approximation can be more detailed, especially comparing the difference between the assumptions on the function classes. 
Also, it seems that some existing literature on *theoretical analysis of Q-learning algorithms* is missing.\n\n**A1:** Thanks for pointing out two relevant references on Q-learning. We have added them in the revised version and briefly discuss them here: Jin et al. (2018) prove that Q-learning with UCB exploration achieves $\sqrt{T}$-regret in tabular MDPs; Pan and Gu (2020) show that neural Q-learning (under the lazy training regime) finds the optimal policy with an $\mathcal{O}(1/T)$ convergence rate. Neither of these references weakens the contribution of this work.\nIn the final version, we will add more discussion of these works in terms of the assumptions used, the function class, and the obtained regret.\n\n---\n\n**Q2:** There is a mismatch between the terminology 'Q-learning' and the actual algorithm studied in this paper, which is indeed a least-squares value iteration algorithm with a neural network estimator. This is different from the usual Q-learning type algorithm, where the Q updates correspond to gradient descent of the one-step TD error. This should be clarified in the paper.\n\n**A2:** Thank you for pointing this out. $Q$-learning is indeed gradient descent on the one-step TD error. Following your suggestions, we have adopted terminology centered around value iteration in the revised manuscript to make it more precise. For example, the title of Section 4 has been changed to “Theoretical guarantees of efficient value iteration via DNNs”. \n\nThe title will also be changed to “Width and Depth Guidelines for DQN via Value Iteration: A Function Approximation Perspective”.\n\n---\n\n**Q3:** The regret bounds in Theorems 1 and 2 scale with $T^{3/4}$ and $T^{\frac{\alpha + d}{2\alpha + d}}$, which are strictly worse than $\sqrt{T}$. Do these results degenerate to a $\sqrt{T}$-type bound under simpler settings such as linear function approximation?\n\n**A3:** Our result can attain $\sqrt{T}$-regret under linear/kernel function approximation. To be specific, the key step in our proof is transforming the estimation of the temporal difference error into generalization bounds on non-iid data. In Theorem 1, the optimal generalization rate of learning in Barron space is $\mathcal{O}(1/\sqrt{n})$, where $n$ is the number of training data (i.e., $t-1$ in our RL setting). If we consider the linear/kernel function class, the generalization rate can be improved to $\mathcal{O}(1/n)$, which yields a $\sqrt{T}$-regret bound. A similar argument applies to Theorem 2.\nFurthermore, we provide another way to understand this. In our Theorem 2, the parameter $\alpha$ denotes the smoothness of $Q^{\star}$. Since linear functions are infinitely smooth, i.e., $\alpha \rightarrow \infty$, our regret is arbitrarily close to $O(\sqrt{T})$.\n\n---\n\n**Q4:** Why is there no need for exploration in Algorithm 1? It should be necessary for online RL algorithms. Is this due to the PSE assumption?\n\n**A4:** Exactly. Our assumption requires that at least one good state-action pair can be visited according to the historical data. Note that exploration in DQN depends on massive data and the $\epsilon$-greedy scheme rather than on designing a UCB-type bonus function; more importantly, the used **$\epsilon$-greedy scheme satisfies our PSE assumption**, which coincides with DQN in practice.\nTherefore, we decided to study the properties of the DQN architecture, leaving aside the design of an exploration scheme for nonlinear function approximation.
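For concreteness, here is a sketch of the bonus construction used in the linear setting (as in [S6]; the notation below is ours, added for illustration only). With a known feature map $\phi(s,a) \in \mathbb{R}^d$, one sets

$$ b_h^t(s,a) = \beta \sqrt{\phi(s,a)^\top (\Lambda_h^t)^{-1} \phi(s,a)}, \qquad \Lambda_h^t = \lambda I + \sum_{\tau=1}^{t-1} \phi(s_h^\tau, a_h^\tau)\, \phi(s_h^\tau, a_h^\tau)^\top, $$

so that adding $b_h^t$ to the estimated Q-function keeps the TD error non-positive with high probability. The construction manifestly requires explicit access to $\phi$, which is exactly what is unavailable for deep networks.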
In fact, **designing UCB-style exploration in neural function approximation beyond NTK is still an open question** as no explicit feature mapping can be given.\nWe have already discussed this in the remark behind Assumption 1, see line 219 for details. \n", " We thank the reviewer for constructive comments and positive support!\n\n**Q1:** The algorithm requires optimizing a neural network function to minimum. However in practice, it is usually not achievable. What will the results be if we only have an approximate solution to the optimization problem?\n\n**A1:** We agree with the reviewer that in practice it appears difficult to obtain the minimum solution. We follow the setting of [S1,S2,S3] in deep learning theory that directly assumes the global minima can be found. In particular, at least for two-layer neural networks beyond NTK, all minima can be obtained by convex optimization algorithms [S4]. Besides, in deep learning theory, one global minimum can be also found via over-parameterization [S5]. We will mention this point in our revision.\n\nNote that in the case of approximate solutions for linear/kernel MDP conducted by gradient descent (as well as SGD) after $K$ steps, we do have a clear solution. Indeed, we can modify the line 7 in Alg.1 by introducing the extra bonus function:\n$\\widehat{Q}_h^t = \\min \\Big( Q_h^t + (1-\\eta \\lambda)^{K}HT, H-h+1 \\Big)^{+} $\nwhere $\\eta$ denotes the step-size satisfying $\\eta < 1/\\lambda$.\n\nAccordingly, such an approximation solution incurs an extra residual term $\\mathcal{O}((1-\\eta \\lambda)^{K} HT)$ in the regret bound that exponentially converges to zero.\n\n\n**Q2:** In practice, how large is the norm parameter B?\n\n**A2:** Here we conducted one experiment of training an 8-layer fully-connected neural networks (without bias) under different widths (64, 256, 1024) as below. It can be found that, the norm parameter $B$ is smaller than 1 in practice.\n\n|Width | 64 | 256 | 1024 |\n\n|B | 0.845 | 0.424 | 0.249 |\n\n**Q3:** I do not understand why the regret bound in Theorem 1 not depend on dimension $d$ but in Theorem 2 it depends on $d$ explicitly. What is the reason behind such difference?\n\n**A3:** This is because the regret bound depends on generalization results in different function spaces, which admits different approximation ability, see line 313-316 for details.\n\nIn Barron space, the approximation error converges at $\\mathcal{O}(1/m)$ [S6], independent of $d$; while in the Besov space, the approximation rate is $\\mathcal{O}(n^{-\\frac{2\\alpha}{2 \\alpha + d}})$ [S1], that depends on $d$.\n\n**Q4:** What is the role of sparsity parameter $S$ defined in Eq. (5)?\n\n**A4:** This sparsity parameter $S$ depends on the fact that DNN with ReLU has few activations [S7], and further is used to control the capacity of the Besov spaces, see Appendix G for details.\n\n\n[S1] Suzuki, Taiji. Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. ICLR2019.\n\n[S2] Schmidt-Hieber, Johannes. Nonparametric regression using deep neural networks with ReLU activation function. *The Annals of Statistics*, 2020.\n\n[S3] Lin, Shao-Bo, Kaidong Wang, Yao Wang, and Ding-Xuan Zhou. Universal consistency of deep convolutional neural networks. *IEEE Transactions on Information Theory*, 2022.\n\n[S4] Wang, Yifei, Jonathan Lacotte, and Mert Pilanci. The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions. 
ICLR2022.\n\n[S5] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. ICML2019.\n\n[S6] Weinan E, Chao Ma, and Lei Wu. The barron space and the flow-induced function spaces for neural network models. *Constructive Approximation*. 2021.\n\n[S7] Hanin, Boris, and David Rolnick. Deep ReLU networks have surprisingly few activation patterns. NeurIPS2019.\n", " This paper provides a theoretical study of the successful deep Q-networks (DQN) framework under an online, episodic Markov decision process (MDP) model with T episodes. They focus on Besov and Barron function spaces in approximating an $\\alpha$ smooth Q-function in a d-dimensional feature space. Under some assumptions, they prove that DQN with shallow layers and medium width can learn the MDP with sublinear regret. Strengths:\nIn summary, this is a good paper for RL theory. It tries to give a theoretical understanding of the widely used DQN algorithm, which is fundamental in Deep RL. The theoretical results in this paper is also interesting. It shows that DQN-type algorithm can achieve sublinear regret using a small-scale neural network, under some assumptions. As far as I know, these results are novel in RL theory. Also, the results match the practical experience.\n\nWeaknesses:\n1. The algorithm requires optimizing a neural network function to minimum. However in practice, it is usually not achievable. What will the results be if we only have an approximate solution to the optimization problem?\n 1. In practice, how large is the norm parameter B?\n2. I do not understand why the regret bound in Theorem 1 not depend on dimension $d$ but in Theorem 2 it depends on $d$ explicitly. What is the reason behind such difference?\n3. What is the role of sparsity parameter $S$ defined in Eq. (5)?\n The authors have adequately addressed the limitations and potential negative societal impact of their work.", " This paper theoretically studies value iteration type algorithms in RL using neural networks belonging to either the Besov or Barron function spaces. Under the probabilistic sufficient exploration assumption, scalings of the width and depth of the neural network are revealed for achieving sublinear regret for the proposed least square value iteration with neural networks. It is argued that the width has a more prominent effect than depth. Quantitative regret bound is established and only depends logarithmically on the dimension $d$. This paper is easy to follow, and it is an important topic to analyze RL algorithms that employ neural network type estimators. The algorithm studied in this paper (Algorithm 1) is well-known, while it is claimed that the theoretical analysis reveals guidelines for the training setup in real applications. The discussion about related works for RL with function approximation can be more detailed, especially comparing the difference between the assumptions on the function classes. Also, it seems that some existing literature on theoretical analysis of Q-learning algorithms is missing, e.g., [1] and [2].\n\n\n\nReference:\n\n[1] Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, Michael I. Jordan. Is Q-learning Provably Efficient?\n\n[2] Pan Xu, Quanquan Gu. A finite-time analysis of Q-learning with neural network function approximation. Some comments and questions are as follows:\n\n1. 
There is a mismatch between the terminology 'Q-learning' and the actual algorithm studied in this paper, which is indeed a least-squares value iteration algorithm with a neural network estimator. This is different from the usual Q-learning type algorithm, where the Q updates correspond to gradient descent of the one-step TD error. This should be clarified in the paper.\n2. The regret bounds in Theorems 1 and 2 scale with $T^{3/4}$ and $T^{\frac{\alpha+d}{2\alpha+d}}$, which are strictly worse than $\sqrt{T}$. Do these results degenerate to a $\sqrt{T}$-type bound under simpler settings such as linear function approximation?\n3. Why is there no need for exploration in Algorithm 1? It should be necessary for online RL algorithms. Is this due to the PSE assumption?\n4. Indeed, the PSE assumption seems unclear to me. In the definition of $\mathcal{G}_\epsilon$, is $f$ an arbitrary function? Here $\epsilon$ seems to depend on the value of $\|f\|_\infty$, and it is unclear where $f$ is specified. Apart from this, by letting $\bar\mu_h^t(\mathcal{G}_\epsilon) > 0$ hold with probability at least $1-\delta$, it imposes some condition on $\widehat Q_h^1, \ldots, \widehat Q_h^T$, which are obtained in an online manner. Then, conditioning on Assumption 1 being true, $\{\widehat Q_h^t\}$ is not a martingale sequence any more, as conditioning changes its distribution (and so does the TD error); thus it needs to be justified how martingale concentration inequalities can be used (lines 307-308).\n5. When running Algorithm 1, the collected data is conditionally independent instead of independent, because the policy in episode $t$ depends on all the previous data. Then how can we apply Lemmas 5/6 (lines 320-321)? The authors have adequately addressed the limitations and potential negative societal impact of their work.", " This paper studies a deep Q-network algorithm in the online and episodic MDP setting. With the probabilistic sufficient exploration assumption, this paper focuses on Besov and Barron function spaces to model the deep Q-networks. Two main theoretical guarantees are provided: one is for shallow neural networks, and the other is for deep neural networks. In both cases, sublinear regret is achieved. From the theoretical results, this paper claims that for Q-functions of low smoothness, the width has a more important role and a constant depth is enough. From this perspective, this paper may shed light on width and depth guidelines in deep Q-learning. \n\n===\n\nAfter rebuttal, my concerns are addressed. This paper may shed new light on understanding deep neural networks in reinforcement learning. Yet, there are several future works, as noted in the conclusion part. Therefore, I change my score from 4 to 6. **Strengths**\n\n- Broader function classes are considered. Most prior works focus on linear function approximation or (limited) non-linear function approximation.\n- A tighter bound in terms of the dependence on the width and depth is provided. For the prior work that relies on NTK, an impractical width $m = \Omega(T^{13})$ is required.\n\n**Weaknesses**\n\n- The whole theory heavily relies on the assumption of probabilistic sufficient exploration. With this assumption, the exploration issue is already addressed. To this end, it is unclear why this paper studies the function approximation in the online setting. It seems that the offline setting considered in [1] is sufficient to study how to approximate optimal Q-value functions.
\n- To my knowledge, the value iteration algorithm (with neural networks) is rarely used in practice, meaning that a gap separates the theory in this paper from practical algorithms. To clarify, Algorithm 1 has two computational issues. First, it is hard to compute the global minimizer defined by the empirical objective (i.e., Equation (3)) at each iteration. The computation cost is unacceptable. Therefore, people prefer iterative algorithms like Q-learning. Second, Q-learning with function approximation can easily diverge, and the target network is introduced to address this issue in DQN. However, these features are not captured by Algorithm 1 and the theoretical results.\n\n[1] Jianqing Fan, Zhaoran Wang, Yuchen Xie, and Zhuoran Yang. A theoretical analysis of deep q-learning. In Learning for Dynamics and Control, pages 486–489, 2020.\n I have a few concerns about this paper's motivation and theoretical implications. I list my questions as follows. \n\n1. Can the authors justify the (technical) challenge after Assumption 1 is satisfied? \n\nI have this question because I believe that how to address the exploration-exploitation trade-off is the core of online learning. However, if we make the so-called probabilistic sufficient exploration assumption, it is not clear (at least for me) what is challenging in this setting. It seems that it is sufficient to consider the offline setting, where a fixed dataset is provided, to study how to approximate optimal Q-value functions. I suspect that the online setting considered in this paper may introduce unnecessary (technical) challenges. I would appreciate it if the authors could help clarify this point.\n\n2. Can the authors further explain the empirical objective in Equation (3)?\n\nIt is known that DQN has two key components: experience replay and the target network. Without the target network, Q-learning with function approximation can easily diverge in theory; see [2] and Section 11.3 of [3]. From the workshop paper [4] to the Nature version, the target network was introduced, and the performance of DQN improved substantially. In practice, if the target network is removed, it is easy to see that DQN cannot work for games like Pong and Breakout. (A schematic of the target-network update I have in mind is sketched after the references below.)\n\nThis paper shows that i.i.d. sampling by experience replay is not necessary. This is acceptable. Then, this paper further studies Algorithm 1, where the target network is not used. This is a huge difference from practice. With powerful approximation ability, it is not very surprising that Algorithm 1 can converge. However, the issue is that the computation cost of Algorithm 1 is huge, which restricts the application of Algorithm 1. Based on this reasoning, I am not convinced that the theoretical implications in this paper are well aligned with practical observations. Please also refer to [5] for a study of online target Q-learning (the authors are excused for not discussing this paper, as it was only recently posted).\n\n[2] Baird, Leemon. "Residual algorithms: Reinforcement learning with function approximation." *Machine Learning Proceedings 1995*. Morgan Kaufmann, 1995. 30-37. \n\n[3] Sutton, Richard S., and Andrew G. Barto. *Reinforcement learning: An introduction*. MIT press, 2018.\n\n[4] Mnih, Volodymyr, et al. "Playing atari with deep reinforcement learning." *arXiv preprint arXiv:1312.5602* (2013).\n\n[5] Zanette, Andrea, and Martin J. Wainwright. "Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning." arXiv preprint arXiv:2206.00796 (2022).
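To make the point concrete, here is a minimal, self-contained sketch of the standard target-network update; all sizes and hyperparameters below are illustrative placeholders, not taken from the paper:

```python
import torch
import torch.nn as nn

gamma, sync_every = 0.99, 100  # illustrative hyperparameters
q_online = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
q_target = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
q_target.load_state_dict(q_online.state_dict())
opt = torch.optim.Adam(q_online.parameters(), lr=1e-3)

for step in range(1000):
    # stand-in for a replay-buffer minibatch (s, a, r, s', done)
    s, s_next = torch.randn(32, 4), torch.randn(32, 4)
    a = torch.randint(0, 2, (32, 1))
    r, done = torch.randn(32), torch.zeros(32)
    with torch.no_grad():  # the learning target is held fixed between syncs
        y = r + gamma * (1 - done) * q_target(s_next).max(dim=1).values
    q_sa = q_online(s).gather(1, a).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % sync_every == 0:  # periodic refresh of the frozen target
        q_target.load_state_dict(q_online.state_dict())
```

The contrast with Algorithm 1 is that here the target changes only every `sync_every` steps while exploration continues, whereas Algorithm 1 solves each regression step to (near) optimality.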
This paper does a great job of analyzing Q-learning with deep neural networks. The main limitations are listed below. \n\n- Assumption 1 is quite strong for studying online learning. Though this paper explains that epsilon-greedy can achieve this, it is known that epsilon-greedy is highly inefficient. \n- There is a gap between Algorithm 1 and the practical one (i.e., the target network is not considered). It is hard for me to believe that the theoretical implications in this paper are well aligned with practical observations. If the authors can provide some numerical results to justify the theoretical results, I would be more convinced. \n\nI suggest that the authors elaborate on the technical challenge and provide more justification for the setting studied in this paper. \n\n**Minor Issues**\n\nThe writing of this paper can be further improved, as there are several grammatical errors. To name a few,\n\n- In Line 35, demonstrates -> demonstrate\n- In Line 82, effects -> affects \n- In Line 93, assuming a simulator can require any state and action -> assuming a simulator in which the agent can require any state and action\n- In Line 99, the above algorithms -> the above bounds \n- In Line 112, function spaces of two -> and function spaces of two \n- In Line 177, which not heavily depends on -> which does not heavily depend on \n- In Line 190, $\ell_1$ path norm -> $\ell_1$-path norm \n- In Line 206, the meaning of "this requirement" is not clear\n- In Line 251, they -> [12]\n\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "xGABUouCtQO", "OR9I6XHSYx5", "v9Wzf_8yT2Q", "yeqOu6aPGeb", "xa60BstGRaJ", "nips_2022_o8vYKDWMnq1", "now2Fxty9C5", "JzbBDOi14ub", "I4qKHzkJFhp", "I4qKHzkJFhp", "2BEG0EneVOU", "QkZ4fsDvby", "Y_pS7CaLTxx", "MbvDyZWMBLu", "nips_2022_o8vYKDWMnq1", "nips_2022_o8vYKDWMnq1", "nips_2022_o8vYKDWMnq1" ]
nips_2022_177GzUAds8U
Compositional generalization through abstract representations in human and artificial neural networks
Humans have a remarkable ability to rapidly generalize to new tasks that is difficult to reproduce in artificial learning systems. Compositionality has been proposed as a key mechanism supporting generalization in humans, but evidence of its neural implementation and impact on behavior is still scarce. Here we study the computational properties associated with compositional generalization in both humans and artificial neural networks (ANNs) on a highly compositional task. First, we identified behavioral signatures of compositional generalization in humans, along with their neural correlates using whole-cortex functional magnetic resonance imaging (fMRI) data. Next, we designed pretraining paradigms aided by a procedure we term primitives pretraining to endow compositional task elements into ANNs. We found that ANNs with this prior knowledge had greater correspondence with human behavior and neural compositional signatures. Importantly, primitives pretraining induced abstract internal representations, excellent zero-shot generalization, and sample-efficient learning. Moreover, it gave rise to a hierarchy of abstract representations that matched human fMRI data, where sensory rule abstractions emerged in early sensory areas, and motor rule abstractions emerged in later motor areas. Our findings give empirical support to the role of compositional generalization in human behavior, implicate abstract representations as its neural implementation, and illustrate that these representations can be embedded into ANNs by designing simple and efficient pretraining procedures.
Accept
The manuscript presents a story about compositionality that ties together neuroscience and models, with a focus on how compositionality enables generalization in a task performed by humans undergoing fMRI. Reviewers were largely happy with the manuscript, and the authors thoroughly addressed the questions that reviewers had. The manuscript is suggestive. The experiment is in many ways limited, and it is not clear what conclusions one can draw at present about the design of models or of the brain. But it is likely that a significant audience at NeurIPS will be interested in this topic, and it may spark follow-up work. This looks like the beginnings of an interesting line of work expanding the paradigm of mapping between brains and artificial neural networks.
test
[ "8e6zkYMwTKb", "hvOA945x0qU", "AA43TAS2oFA", "eMCokHt2cs7", "q9-EEt-AJWv", "X3S2-0zzF9", "nt04u50K1uF", "Tt6JZEm3CGj", "qwoRxFPdYq", "6yg5si55R-V", "08sVwqYiR32", "v6MhcORqhdL", "4gUj8DuNtZ", "H1Wye_KY9AC", "u-p530dYdwk", "iQlhqbB1ikT", "u4EJPxur7sT", "deVUaxnGOAx" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their thorough response. \n\nMy three main concerns were (1) the correspondence that the authors were drawing between high/low level processing areas and layers of a network (2) the linear nature of the decoding and (3) the clustering of the learned rules. \n\nThe responses give plausible reasons to think that (1) and (2) are given appropriate consideration for the scope of the paper and (3) has been handled by additional presented results. \n\nMy score remains the same; I recommend for acceptance.", " Thank you for the authors addressing my questions and providing the results with additional analyses within a short time.\nBased on the author's response and revised manuscript, I change my score as below.\n\nSoundness: 2 -> 3 \nPresentation: 2 -> 3\nContribution: 3\nRating: 5 -> 7\nConfidence: 2 -> 4", " We thank the Reviewer for addressing our response. Below we address the additional questions.\n\n**Response to Q1**: This is an excellent question. The Reviewer is correct in that the rule input layers would already have high PS due to the way the inputs were represented – as one hot vectors. However, when calculating PS in the hidden layers, rule inputs were processed under the presence of noise. The network therefore needed to extract useful (i.e., abstract) task structure from noisy inputs. Moreover, it is important to consider that in the zero-shot performance case (i.e., Fig. 5), PS was measured in the presence of the full C-PRO task context (i.e., 3 rules), while the models were only pretrained using 1- (Primitives) or 2-rule (Simple) tasks.\n\nThe Reviewer also suggested a great follow-up analysis – measure PS and C-PRO task generalization performance as a function of pretraining epochs/iterations. This would directly link the degree of PS and C-PRO generalization with pretraining. We have now performed this suggested analysis, finding a strong association across PS and generalization performance during pretraining iterations (r=0.88,p=10e-9). This is included a new Supplementary Figure (Fig. 18). If space allows, we will include this new figure into the main text in the camera-ready version, since we think it is a useful illustration. We thank the Reviewer for this thoughtful suggestion. \n\n**Response 2**: Indeed, the Reviewer is correct that each of the pretraining conditions (e.g., Primitives Logic rule training) is trained in separate blocks. However, the pretraining paradigm was not performed within a continual learning framework; the blocks were interleaved until **all** pretraining conditions achieved >99.0%.**Critically, this stopping criterion ensured that all pretraining conditions (e.g., Primitives Logic, Primitives Sensory, and Primitive Motor) could be simultaneously performed at a rate of >99.0% accuracy prior to exiting pretraining.**\n\nPrior work has suggested that multiple condition learning can occur particularly when condition blocks are interleaved, and when the learning rate is small enough (Fusi et al. 2007). We have now included citations to both Fusi et al. 2007 and Flesch et al. 2018.\n\nAppendix A.8: “Conditions within each pretraining protocol were interleaved \\citep{fusi_neural_2007}, ensuring that catastrophic forgetting was not an issue, as is common in continual learning paradigms \\citep{flesch_comparing_2018}.”\n\n**Response #3**: We think there may have been a misunderstanding, and apologize that our description of PS was not clearer. In the last revision, we tried to make the calculation of PS clearer. 
The cosine angle between two linear decoders refers to two linear decoders trained to classify neural activation vectors of the same rule dichotomy (pair), but when the two rules are used in different contexts. (Each task activation defines a point in the multivariate activation space, and the vectors are defined as the difference of these points.) For example, if BOTH and EITHER were to be classified, one linear decoder can be trained to classify this distinction in one set of contexts, and another linear decoder can be trained to classify this distinction in another set of contexts. We do not have any specific hypotheses about whether the two vectors connecting cross-context activations are orthogonal, other than the fact that these would not be cross-context generalizable. These details are further expanded upon in the Appendix.\n\n**Response to Q4**: The reason we did not reduce the motor rule inputs from four to two is that it would not have been possible to perform non-motor Primitives pretraining. This is because if one unit represented the left or right hand with a 0 or a 1, respectively, and another unit represented the index or middle finger with a 0 or a 1, respectively, then by setting motor rule inputs to 0 during Logic rule Primitives pretraining, the network would infer that the left index motor rule was presented. This would not appropriately allow for Logic or Sensory Primitives training without interference from motor rules.\n\n**Limitation:** *Compositional generalization was limited to situations in which ANNs focus only on one sensory feature at a time, and therefore is limited across multiple sensory dimensions.*\n\n**Response**: We agree that this is a limitation of the task, but do not think the inferences regarding abstraction (and PS) discussed here would be limited to single sensory features. The task in this study was to assess the mechanisms of compositional generalization across different rule domains. We think that these principles would similarly apply to compositions within the same sensory domain, when participants/ANNs are asked to attend to multiple sensory features (e.g., in a dataset such as COG or CLEVR). Nevertheless, this is an interesting empirical question that a future study can explore.", " Thank you for kindly providing the answers to my questions.\nLet me rephrase my questions based on the answers.\n\n1. I see that the rule representations are not driven by stimuli in both the ANN and fMRI. My question also included how much the rule representations were driven by the input units of task context (e.g., the 4 one-hot codes of sensory rules can already be represented in a generalizable structure with high PS in a 4-dimensional space). If the PS score increases due to learning the task structure, the PS should not only be driven by the task-context input units but should also increase during training. How does the PS score change across training steps? Does the score correlate with the performance accuracy?\n\n2. Thank you for showing the simulation results while changing the training orders of simple and primitive tasks. That is very informative. My question was about how the ANN could learn different rules during primitives and simple task learning without interference from each other. It seems that each of the primitive rules has been learned in a block. If so, I assume that the weights in hidden layers formed during the first block (e.g. learning both) would be changed while learning other rules in a later training block (e.g.
red, left index, or right middle). If they are overwritten, then even if the network reaches 90% accuracy while learning 'both' in block 1, it would not be able to reach the same level after learning other rules in block 4. That is, the ANN would perform better for the last learned rule compared to earlier learned rules, which is the reason that I have asked if there is any bias based on the order of learning rules.\nThis issue is discussed elsewhere, for example in Flesch et al. 2018, https://www.pnas.org/doi/full/10.1073/pnas.1800755115.\nI must be missing something. I would appreciate it if the authors could point out what I am missing here. \n\n3. If I understood correctly, the PS was measured and tested between two vectors to test whether a linear classifier can generalize. To support the idea that the structure of the rule representation was optimized for generalization, I think a low PS score between orthogonal vectors should also be shown. That has been tested in Bernardi et al., for example, by showing that a linear decoder generalizes only to specific conditions, preventing over-generalization. \n\n4. Is there any reason the input units of motor rules were constructed with four units instead of combinations of two units, as the logic rule inputs were made? Can it be [1 for Left; 0 for Right], and [1 for mid; 0 for index-finger]? \n\nLimitations: The compositional generalization would work only in a situation where the ANN should focus on only one sensory feature at a time. The compositional generalization across sensory rules is limited. It seems that the conceptual knowledge about 'red' can be used to understand the concept of 'blue' (~red). However, it is not quite like that. For example, the ANN may be able to make correct choices for 'both red and vertical' but it cannot make choices for 'both blue and vertical', which would not be the case for humans.", " We are grateful to all Reviewers for their evaluation of our manuscript. We have now responded to each Reviewer, focusing on the Questions raised within each review, and where appropriate, addressing points raised as Weaknesses.\n\nIn addition, we have uploaded a revised manuscript (with the revised Appendix in one PDF) that addresses most of the Questions and Weaknesses. Most notably, we have performed several new analyses that address the importance of task ordering in the pretraining protocol (Appendix Figure 12), and the Parallelism Scores for every rule dichotomy (rather than just the average across all rules) in the ANN and fMRI data (Appendix Figures 15-17).\n", " We would like to thank the Reviewer for the detailed discussion and encouraging comments. Below, we address each question asked by the Reviewer.\n\n**Question**: *What is your response to weakness (1)? Do you think the relatively small increase in Figure 5A undermines your hypothesis that parallel abstract representations are important for compositional generalization?*\n\n**Response**: The Reviewer raises an interesting point that we are happy to clarify. PS is obtained by averaging quantities based on pairwise coding vectors. Thus, it is expected to correlate on average with generalization, but it is not suited to capturing it in fine detail. For example, coding vectors can have non-coding or non-abstract dimensions (e.g., non-coding units within an ANN layer), which may mask the overall magnitude of PS. These non-abstract dimensions would introduce noise into the PS estimate, since they are not linearly additive.
However, the existence of a few (or a subset) of abstract dimensions within an ANN layer or brain region might be sufficient for good generalization. These intuitions likely account for the discrepancies identified in Figure 5A and 5B. \n\nIn addition, we acknowledge that parallel abstract representations present one potential mechanism that supports compositional generalization. However, this is not exclusive; there could be other potential mechanisms, such as program synthesis methods and neurosymbolic computing, which we discuss in the Introduction. Nevertheless, abstract representations – as captured with the PS – possess other important properties that we think deserve further investigation. For example, PS is a generic metric that is readily applied to both artificial and biological neural systems. In contrast, it is unclear how the identification of neurosymbolic modules/functions might be identified in empirical neural data. In addition, the evidence that a simple procedure (i.e., simple pretraining procedures) can realize these representational properties suggests that adhoc architectural elements like neuro-symbolic modules are not always necessary for compositional generalization. \n\nWe would be happy to elaborate on these points in the final revision of the paper, provided the additional space in the camera-ready format.\n\n**Question**: *Please elaborate on how you arrived at your pretraining method from your finding that in humans \"increased exposure to specific rules improved performance on subsequent novel contexts\". I understand the finding, and the pretraining method, but to me the link isn't obvious. The pretraining method does not simply expose the ANN to more examples of contexts in which the rule appears (that would simply be showing the ANN more 3-rule examples containing that rule). The pretraining method exposes the ANN to simplified contexts (1-rule or 2-rule examples), where it can focus on learning representations. How do you motivate the link?*\n\n**Response**: In our ANN experiments, we manipulated two different levels of prior rule knowledge. The first was pretraining (in particular, Primitives pretraining). The motivation behind this pretraining procedure is to mimic the prior knowledge adult humans have prior to performing a new experimental task (e.g., humans have semantic knowledge about certain logical relations such as “BOTH” or “EITHER”, or semantic knowledge about what the color “red” is). We note that this prior knowledge is distinct from the “increased exposure to specific rules [which] improved performance on subsequent novel contexts”. The results pertaining to these claims (i.e., the impact of rule exposure on subsequent novel contexts) are found in the Appendix in Fig. 9. However, the text results are briefly detailed in Section 3.9: “Pretraining leads to compositional generalization in ANNs comparable to humans”. In this ANN experiment, ANNs were first trained on four C-PRO contexts (which was an identical experimental protocol performed in the human experiment), and then evaluated on the remaining 60 contexts. We found that novel contexts that had greater rule overlap with the 4 trained contexts had greater performance than novel contexts with fewer rule overlap. To make this distinction clearer, we have added text in section 3.5 to clarify this distinction: “We evaluated the learning and generalization dynamics of ANNs with and without pretraining, after training ANNs on 4 of the full C-PRO contexts. 
This matched the human experiment, since humans were exposed to 4 ``practice'' contexts prior to performing the remaining 60 novel contexts.”\n\nTo summarize, pretraining is motivated as a way of mimicking the prior semantic knowledge humans have. Exposure to specific rules and C-PRO contexts are parallel experiments performed in both humans (Fig. 1b) and ANNs (Appendix Fig. 9).", " **Question**: *Aside from pretraining familiarizing the networks with task context rules, the networks also become more familiar with and learn information about the sensory stimulus input units. Do you think that this confound ends up being relevant for the observed increased sample efficiency, both for the primitives pretraining and especially for the simple task pretraining, which I assume was probably pretrained for more iterations to get to 99% due to the increased task complexity?*\n\n**Response**: The Reviewer is correct in assuming that Simple task pretraining required more samples than Primitives pretraining (see Fig. 10b and 11b). However, we do not think that the familiarization of task context rules with sensory stimuli during pretraining is a confound. On the contrary, we think that this part of pretraining is essential, and was the primary motivation for the sensory Primitives pretraining. In general, humans perform experiments with existing knowledge about mapping rule semantics to environmental features, such as knowing what the color “red” is, or how to perceive a “vertical” versus “horizontal” line. Pretraining without learning the mappings between stimulus features and sensory (or semantic) rules would therefore be too impoverished. A similar argument can be made for motor Primitives pretraining, where the network needs to learn the relationship between how motor rules correspond to actual motor actions. The novelty the ANNs are exposed to during testing on the full C-PRO task is in combining different rule features together, and teaching the network 1) novel combinations (systematic compositionality); 2) longer combinations of rules (compositional productivity). The increased sample efficiency in the Simple task pretraining case is likely more to do with the complexity of the task, since it requires the recombination of two rule domains, producing a total combination of (e.g., 4 Sensory x 4 Motor = 16 possible two-rule contexts). In contrast, Primitives task training only has 4 possible conditions per rule domain, reducing the overall number of possible conditions.\n\n**Question**: *It would be helpful to see a better breakdown, in general, for both brain and ANN experiments, about which specific rules were more consistently encoded, e.g. how did PS scores differ across rules within each rule domain? Did certain dimensions drive mean PS scores more than others?*\n\n**Response**: For completeness, we have now included 3 new Figures: Figures 15, 16, and 17, which illustrate the comparison of pairwise Parallelism Scores for all rule dichotomies in both human fMRI data for sensory, association, and motor cortical systems, and each ANN hidden layer (using a 3-layer hidden network for comparison). However, since we had no a priori hypotheses about how rule encodings should behave across different rule dichotomies, we have left the interpretation of these maps to the reader. However, we point out that some similarities of rule encodings can be seen across fMRI and ANN activity. 
In particular, the increasing abstraction among motor rules within the same hand responses becomes stronger in human motor cortex as well as layer depth in the ANN (Figure 17). We have included reference to these new Figures in the main text (Section 3.7): “(In addition, see Fig. 15-17 for PS scores for all possible rule dichotomies in human fMRI data and ANN activations.)”\n\nAnd also Appendix (section A. 9): “In addition, Figures 15, 16, and 17 illustrate that pairwise PS scores for all pairs of rule dichotomies in both ANNs and human fMRI data.”\n", " We would like to thank the Reviewer for their thoughtful comments and useful suggestions to improve the presentation of the manuscript. In the revised version of the paper, we made sure to clearly indicate in the main text whenever important details were expanded upon in the Appendix. We have also moved some of these important sections highlighted by the Reviewer into the main text. Moreover, if the space constraints of the camera-ready version allow, we will move additional sections from the Appendix to the main text. Below, we individually answer the questions asked by the Reviewer.\n\n**Question**: *The authors note (once again only in the appendix) that pretraining was stopped once pretraining tasks achieved 99%. To confirm, >99% accuracy on each task, e.g. until the worst task achieves 99% accuracy? How many training samples / iterations did this take under each pretraining paradigm?*\n\n**Response**: Firstly, we apologize for not expanding on further details in the main text. As mentioned, in an effort to compromise between clarity and the space constraints of the main text, we have now gone through the main text to include explicit references to the specific Appendix sections that contain critical details. For example, in section 3.4, which details the referenced pretraining procedure in ANNs, we have now added this additional sentence: “Each pretraining condition was trained until ANNs achieved 99% accuracy (see A.8 for details).” Depending on the space constraints of the camera-ready version of the paper, we will transfer some of these details from the Appendix to the main text.\n\nWe have also made additional clarifications in Appendix A.8:\n\n“Primitives pretraining was always performed prior to Simple task pretraining, unless otherwise noted. Pretraining procedures were blocked together, such that all conditions within the Primitives pretraining paradigm (i.e., Logic, Sensory, and Motor primitives pretraining) were trained until all three tasks achieved 99% accuracy. Simple task pretraining was subsequently performed until both Logical-Sensory and Sensory-Motor tasks were performed at 99% accuracy.”\n\nTo directly address the Reviewers concern: Yes, >99% accuracy was required on each pretraining condition. For the Primitives pretraining, this included the Logic, Sensory, and Motor primitives pretraining. For the Simple task pretraining, this included the Sensory-Motor and the Logical-Sensory tasks. The number of training samples for each pretraining paradigm is reported in Figure 10 and 11 (unfortunately relegated to the Appendix due to space constraints). In brief, Simple task pretraining required significantly more samples than Primitives pretraining to achieve >99% accuracy (see Figures 10b and 11b at the point where the x-axis=0). 
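For clarity, here is a minimal sketch of the interleaved stopping criterion just described; the condition names and the `train_one_block`/`evaluate` helpers below are placeholders, not our actual code:

```python
import random

def pretrain_until_criterion(model, conditions, criterion=0.99):
    # Interleave mini-blocks of the pretraining conditions and exit only
    # once *every* condition is performed above the accuracy criterion.
    accs = {c: 0.0 for c in conditions}
    while min(accs.values()) < criterion:
        train_one_block(model, random.choice(conditions))   # placeholder trainer
        accs = {c: evaluate(model, c) for c in conditions}  # placeholder evaluator
    return accs

# e.g., pretrain_until_criterion(ann, ["logic", "sensory", "motor"]) for
# Primitives pretraining, then ["logical-sensory", "sensory-motor"] for Simple tasks.
```

The only load-bearing detail is the stopping rule: training exits once the worst-performing condition clears 99%.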
Figures 10 and 11 also report the generalization performance and sample efficiency as a function of incrementally learning C-PRO contexts for each pretraining condition.\n\n**Question**: *For the Vanilla-ANN comparison, how is that generated? Is it just randomly initialized? From what distribution? (Okay I found this in appendix; add more details to main text.)*\n\n**Response**: We have now included some additional details in the main text (ANN methods section 2.4), as well as an explicit reference to the Appendix which contains further details:\n\n“The primary ANN architecture had two hidden layers (128 units each) and an output layer that was comprised of four units that corresponded to each motor response (see Appendix section A.7 and Fig. 8 for additional details). Training used a cross-entropy loss function and the Adam optimizer \\citep{kingma_adam_2017}. The ANN transformed the trial input vector into a 4-element response vector with the equation $ Y = f_{ReLU}(X_{h}W_{h}+b_{h}) $. Weights and biases were initialized from a uniform distribution $U(-\\sqrt{1/k},\\sqrt{1/k})$, where $k$ is the number of input features from the previous layer.”\n", " **Question**: *line 217: Is there only one training example per context?*\n\n**Response**: No, there are multiple training examples per context. Specifically, there are 256 possible unique training examples per C-PRO context. This is because there are 256 possible stimulus combinations. Every stimulus combination can be combined with a context to produce a unique context-stimulus grouping. Thus, in total, there are 256 unique stimulus combinations * 64 C-PRO contexts, producing a total of 16384 unique trials (i.e., context-stimulus combinations). For example, a stimulus sequence could be a red vertical bar paired simultaneously with a high pitch beep, followed by a blue horizontal bar with a low pitch beep. Depending on which sensory rule is included in the task context, irrelevant sensory information would be filtered out. We have now clarified this in the manuscript. \n\nMain text, Section 2.1: “One of 256 possible unique stimulus combinations could be presented with each task context. Visual dimensions included either horizontal or vertical bars with either blue or red coloring. Auditory dimensions included continuous (constant) or beeping high or low pitched tones... See Appendix A.2 for details.”\n\nWe expanded on this in the Appendix, section A.2: “Visual stimuli included either horizontally or vertically oriented bars with either blue or red coloring. Simultaneously presented auditory stimuli included continuous (constant) or non-continuous (non-constant, i.e., ``beeping'') tones presented at high (3000Hz) or low (300Hz) frequencies. A given task context could be presented with 256 unique stimulus combinations. This is because a given task context was presented with two sequentially presented audiovisual stimuli, where each audiovisual stimulus varied in four dimensions: color (red/blue), orientation (vertical/horizontal), pitch (high/low), continuity (continuous/beeping). This led to $2^8=256$ possible stimulus combinations… ​​This meant that there were $256*64=16384$ unique trials (i.e., context-stimulus) combinations.”\n\n**Question**: *line 220: \"unlike humans (see Fig. 
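For readers who prefer code, a minimal PyTorch rendering of this quoted description follows (the input width of 28 is a placeholder for the concatenated rule and stimulus one-hot vector; see Fig. 8 for the exact layout):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(28, 128), nn.ReLU(),   # hidden layer 1 (placeholder input width)
    nn.Linear(128, 128), nn.ReLU(),  # hidden layer 2
    nn.Linear(128, 4),               # one output unit per motor response
)
# nn.Linear's default initialization already draws weights and biases from
# U(-sqrt(1/k), sqrt(1/k)), matching the description quoted above.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
```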
1b), ANNs with no prior knowledge cannot compositionally generalize.\" I understand the claim being made, but it should be noted that an adult human is not exactly without \"prior knowledge\"*\n\n**Response**: We acknowledge the potential for confusion here, and so we have removed the clause “unlike humans”: “This suggested that ANNs with no prior knowledge cannot compositionally generalize.” \n\n**Question**: *​​Figure 6: In 6B Brain systems are on the y-axis and rule domains are shown in groups. In 6C rule domains are shown on the x-axis and layers are shown in groups. I suggest putting layers on the y-axis in 6C to make the figures easier to compare.*\n\n**Response**: We apologize for the error, and thank the Reviewer for pointing this out. In fact, the x-axis in panel B should be “Rule Domains”, making it consistent with panel C. In both panels B and C the colored legend within the graph reflects the Brain Systems and Layers, respectively. Brain Systems and Layers should be treated analogously. This error has been corrected.\n\n**Question**: *line 171: missing Figure 7”; “Section 3.5: missing Figure 9*\n\n**Response**: We apologize for the confusion. Figures 7-17 are included in the Appendix. In revision we will include both the main text and Appendix into the same PDF document.\n\n**References**\n\n​​Bernardi, S., Benna, M.K., Rigotti, M., Munuera, J., Fusi, S., Salzman, C.D., 2020. The Geometry of Abstraction in the Hippocampus and Prefrontal Cortex. Cell. https://doi.org/10.1016/j.cell.2020.09.031\n\nHuntenburg, J.M., Bazin, P.-L., Margulies, D.S., 2018. Large-Scale Gradients in Human Cortical Organization. Trends in Cognitive Sciences 22, 21–31. https://doi.org/10.1016/j.tics.2017.11.002\n\n​​Hupkes, D., Dankers, V., Mul, M., Bruni, E., 2020. Compositionality Decomposed: How do Neural Networks Generalise? Journal of Artificial Intelligence Research 67, 757–795. https://doi.org/10.1613/jair.1.11674\n\nMargulies, D.S., Ghosh, S.S., Goulas, A., Falkiewicz, M., Huntenburg, J.M., Langs, G., Bezgin, G., Eickhoff, S.B., Castellanos, F.X., Petrides, M., Jefferies, E., Smallwood, J., 2016. Situating the default-mode network along a principal gradient of macroscale cortical organization. PNAS 113, 12574–12579. https://doi.org/10.1073/pnas.1608282113\n\nRigotti, M., Barak, O., Warden, M.R., Wang, X.-J., Daw, N.D., Miller, E.K., Fusi, S., 2013. The importance of mixed selectivity in complex cognitive tasks. Nature 497, 585–90. https://doi.org/10.1038/nature12160\n\nYamins, D.L.K., Hong, H., Cadieu, C.F., Solomon, E.A., Seibert, D., DiCarlo, J.J., 2014. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences 111, 8619–8624. https://doi.org/10.1073/pnas.1403112111\n", " **Question**: *Section 2.3: The PS will be high if the coding directions are parallel, but it will also be high if the coding directions are identical. How do we know that the coding directions do not all lie in the same low dimensional space, i.e., as in Figure 2B. Importantly, it would be good to know, of the pairs $v_i$ and $v_j$ that have high cosine similarity, what is the $L_2$ distance?*\n\n**Response**: The Reviewer is correct: PS will be high if the coding directions are parallel, which can also happen if the representations are \"clustered\". 
For instance, if all the activation vectors in trials for logic rule BOTH cluster around the same point, and the activation vectors for the logic rule EITHER all cluster around another point (such that all coding directions with respect to the logic rule originate from and end at the same point), then PS will be high (Fig. 2B). Indeed, that is a feature of PS, which as originally developed (Bernardi et al., 2020), aimed to generalize the notion of clustering coefficient to characterize abstraction. One way to verify that a highly parallel representation is not merely a clustered representation is to perform a complementary decoding analysis, which aims to ensure representations are not just clustered for all rule pairs. If the representations are fully clustered, then they would not be discriminable/decodable. However, this is not the case, as we have reported a multi-class decoding analysis in Fig. 7 (in the Appendix).\n\nRegarding the $L_2$ distance between vector activations: PS ignores this, since it does not tell us whether the points that are being discriminated by a given coding direction are far or close (i.e., whether they can be decoded with high or low signal-to-noise ratio). A way to partially avoid this issue is to normalize the response vectors such that distances are in units of the noise variance. Although this method has the downside that it only works \"on average\" across coding vectors, as opposed to each coding vector individually, Bernardi et al. (2020) showed that empirically it works as well as more sophisticated methods like the Mahalonobis distance (which is essentially a multi-dimensional signal-to-noise ratio) in quantifying abstraction and generalization. Importantly, we applied this normalization prior to performing our decoding analyses in Fig. 7, such that distances are in units of the noise variance.\n\n**Question**: *Section 3.3: What is the input to these ANNs? A text description of the task?*\n\n**Response**: The input into the ANNs is a concatenation of one-hot vectors (i.e., concatenation of rule and stimulus one-hots). This is detailed in the Appendix (A.7) and Figure 8. See updated Line 183: “We constructed a simple feedforward ANN with two hidden layers (Appendix A.7; Fig. 8).”\n\n**Question**: *Section 3.4: What does it mean to measure the PS of an ANN? What is the activation vector for the ANN?*\n\n**Response**: The PS of an ANN is measured using the unit activations from the hidden layers. This was calculated separately for each layer. We have revised the main text to clarify this, while referencing the appropriate Appendix section (from Section 3.4): “PS was calculated for each rule domain using the ANN's hidden layer activations (see A.9).”\n", " Below we respond to the specific questions posed by the Reviewer.\n\n**Question**: *line 105: What is meant by \"coding axis\"? In the text that follows, I gather that it is a displacement vector associated with a given modality. Is this correct?*\n\n**Response**: Yes, this is correct. We have clarified that sentence: “Intuitively, PS identifies a coding axis (i.e., the parallel displacement vector) across task contexts that aids generalization.”\n\n**Question**: *line 120: In this case, what is meant by \"neural activation space\"? The space of all fMRI scans?*\n\n**Response**: The neural activation space refers to the set of voxel activations within a brain region used to calculate PS. (In prior studies, e.g., Bernardi et al. 
(2020), this activation space was the set of neurons in a functionally defined brain region (e.g., the prefrontal cortex).) We have clarified this sentence: “PS is defined as the cosine angle of the coding directions of the same rules in different contexts in the neural activation space (e.g., voxels or neurons within a brain region).”\n\n**Question**: *line 124: What is the activation vector? A vector of fMRI voxel intensity?*\n\n**Response**: Correct – we have clarified this in-text: “For each pair, we subtracted the fMRI voxel activation vectors associated with each context to obtain the vector that represented that coding direction (see Fig. 3a).”\n\n**Question**: *Figure 3 caption: This is the first place where the activation vector is mentioned. I would move this into section 2.3.*\n\n**Response**: We hope the in-text clarifications to the above questions help to clarify this caption. \n\n**Question**: *Figure 3 caption: \"The cosine angle between two linear decoders\" --> Is this the ANN mentioned in line 135? That seems like a non-linear decoder. What does it mean to take the cosine angle between decoders? Does this refer to the classification boundary? Or the hidden state representation?*\n\n**Response**: We apologize that this section was not clearer. The cosine angle between two linear decoders refers to two linear decoders trained to classify neural activation vectors of the same rule dichotomy (pair), but when the two rules are used in different contexts. For example, if BOTH and EITHER were to be classified, one linear decoder can be trained to classify this distinction in one set of contexts, and another linear decoder can be trained to classify this distinction in another set of contexts. So yes, this refers to the classification boundary between (for example) BOTH and EITHER. PS is equivalent to measuring the cosine angle of these two linear decoders. This method (or statistic) can be applied to either fMRI activation data or ANN data. These details are further expanded upon in the Appendix, but for brevity we have revised a portion of Figure 3’s caption to emphasize that this metric can be applied to either fMRI or ANN activations.\n\nFigure 3 caption: “Intuitively, PS captures the geometry of the neural activation space (ANN or fMRI data) by measuring the cosine angle between two linear decoders trained to distinguish two rule conditions in different task contexts.”\n\n**Question**: *line 135: What is the input to the model? Is it the intensity for all voxels?*\n\n**Response**: The model was trained on the C-PRO task and was independent of the fMRI data. We acknowledge the brevity of this section, so we have now revised line 136 to provide more explicit references to the Appendix and to the figure illustrating the model inputs/schematics in the Appendix to improve clarity: “The primary ANN architecture had two hidden layers (128 units each) and an output layer that was comprised of four units that corresponded to each motor response (see Appendix section A.7 and Fig. 8 for additional details).”\n\n\n", " We thank the Reviewer for the thorough assessment of our manuscript, and for bringing our attention to the Ruis et al. paper, which we now reference in the revised manuscript.
We first broadly address the weaknesses the Reviewer raised prior to responding point-by-point to each question.\n\n*Regarding the novelty and significance of the machine learning results.* The goal of the present study was to investigate the role of abstract representations in compositional generalization and, in parallel, to characterize the topography of abstract representations in human fMRI data during the same task. We believed the best possible model (with the fewest assumptions) to address this question was a simple connectionist model. This ensured that the representations that emerged were not due to architectural choices associated with state-of-the-art models, such as the GECA-enhanced model reported in the gSCAN paper by Ruis and colleagues (now referenced in the revised manuscript). We agree that other machine learning models and studies undoubtedly achieve higher compositional generalization benchmarks than those demonstrated here. Nevertheless, we thought it would be important to 1) show that the representations learned by classic connectionist models can exhibit properties of compositional generalization (namely systematic and some productive generalization; Hupkes et al., 2020); and 2) propose vector representations of rule abstractions that are exhibited both in connectionist models and in the human brain. This presents a baseline from which future studies can explore the degrees of abstraction in more sophisticated models. For example, it would be interesting to explore whether the prevalence of abstract representations in more sophisticated models corresponds to improved levels of compositional generalization.\n\n*Regarding the difficulty in comparing human and ANN data, the lack of analogous ordered layers in the brain (relative to ANNs), and the lack of interaction between the brain and ANN data.* While there is no explicit one-to-one correspondence between a simple feedforward ANN and the complexity of the human brain, we sought to use a hierarchical framework that would make comparison of ANNs with our human brain data possible. Conceptually, this is similar to how Yamins and colleagues compared representations in convolutional feedforward networks with the hierarchical organization of the primate ventral visual stream using neural spiking data (Yamins et al., 2014). Here we extend this analytic framework to the entire human cortical hierarchy. In the human neuroimaging literature, a number of studies have worked to characterize the hierarchical organization (often called gradient organization) of the brain (for example, see Margulies et al., 2016; for a review, see Huntenburg et al., 2018). In particular, this gradient organization has revealed a hierarchy of functional specialization – from sensory cortex, to association cortex, to motor cortex – that is largely consistent with the flow of information during experimental tasks (e.g., subjects take in sensory information to produce a motor response and action). We used this sensory-association-motor gradient map directly from a prior study (Margulies et al., 2016), and used it to identify analogous layers in ANNs (e.g., analogizing layer 1 to sensory cortex, layer 2 to association cortex, and layer 3 to motor cortex; see Figures 6, 13, and 14).
While we recognize that this comparison is an oversimplification of hierarchical processing in the brain, it provides a tractable way to directly compare hierarchical representations found in ANNs with those identified in fMRI data.\n\n*Regarding the limitation that the Parallelism Score can only identify linear abstractions.* Indeed, the Reviewer is correct that this is a characteristic of the Parallelism Score. However, that is also by design; Bernardi and colleagues (who originally defined the metric) proposed that vector representations that require non-linear readouts to be discriminated are not conducive to generalization (Bernardi et al., 2020). This argument relies on the classical idea of a bias-complexity trade-off, according to which a downstream task that needs a complex non-linear machine learning model to be learned will generalize poorly. It will nevertheless be interesting to explore in future studies the utility and feasibility of nonlinear abstractions and their impact on compositional generalization.", " **Question:** *Does the order of pretraining conditions matter for accurate generalization?*\n\n**Response:** The Reviewer raises an interesting question. Prior evidence suggests that the order of pretraining is important for sample efficiency and generalization (in particular, ANNs are more sample efficient when transitioning from easier to more difficult tasks) (Saglietti et al., 2021). We sought to directly evaluate this in a simple follow-up experiment. We evaluated the zero-shot performance and sample efficiency of “Combined” pretraining (i.e., an ANN model pretrained on the Primitives paradigm, then the Simple task paradigm), and then the reverse (i.e., “Reverse-combined”; Simple task pretraining, followed by Primitives pretraining). Interestingly, and consistent with prior studies (Saglietti et al., 2021), we found that the ordering significantly mattered for both generalization performance and sample efficiency. In other words, ANNs pretrained in the Reverse-combined condition could not generalize to the unseen C-PRO tasks, while those in the Combined condition could. Moreover, the Reverse-combined condition required more pretraining samples to achieve >99% accuracy on both the Simple task and Primitives pretraining paradigms, despite not affording any generalization performance on the C-PRO task. These new results are now reported and visualized in a new Figure 12, and mentioned in Section A.8:\n\n“Primitives pretraining was always performed prior to Simple task pretraining, except for Fig. 12, which investigated the effect of reversing the order of the pretraining tasks… This ordering is consistent with prior work, suggesting that ANNs are more sample efficient when transitioning from easier to more difficult tasks [41]. We also performed a simple control experiment demonstrating that when pretraining was reversed in the Combined condition (i.e., Simple task pretraining followed by Primitives pretraining), generalization performance was reduced to chance (Fig. 12). This suggests that the ordering of pretraining paradigms is crucial for generalization performance and sample efficiency, which future work should explore.”\n\nWe hope that the revised manuscript satisfies the major questions and concerns from the Reviewer. Specifically, we hope that some of the new in-text revisions, which clarify that the estimation of PS in the present study cannot be confounded by stimulus-related or motor-related activity (as it might be in the Bernardi et al.
2020 study), satisfy the first main concern of the Reviewer. We also hope that the inclusion of the additional control experiment and the surrounding text (i.e., Fig. 12) clarifies potential confusion about the impact of pretraining protocols on C-PRO compositional generalization.\n\n**References**\n\nSaglietti, L., Mannelli, S.S., Saxe, A., 2021. An Analytical Theory of Curriculum Learning in Teacher-Student Networks. arXiv:2106.08068 [cond-mat, stat].\n\nBernardi, S., Benna, M.K., Rigotti, M., Munuera, J., Fusi, S., Salzman, C.D., 2020. The Geometry of Abstraction in the Hippocampus and Prefrontal Cortex. Cell. https://doi.org/10.1016/j.cell.2020.09.031\n\n", " We thank the Reviewer for the thoughtful assessment of our manuscript. \n\nFirst, we address the weakness raised by the Reviewer: *The potential contamination of abstract rule representations by stimuli and/or motor responses in both the human fMRI and ANN activations.*\n\nWe want to assure the Reviewer that abstract rule representations in both ANN and fMRI activations are not contaminated by stimulus/response activity.\n\nIn the fMRI data, we estimated activity associated with the encoding period (i.e., the presentation of the instructions). This diverges from the experimental design reported in Bernardi et al. (2020), where task context had to be directly inferred from the block of trials (since no explicit rule cues were presented). In the present experimental design (C-PRO task), instructions are presented for 3925ms, after which there is a variable delay (1570–6280ms), followed by the stimulus and response period (Fig. 1a). The fMRI activity was only estimated during the instruction period. This is reported in the Appendix, section A.6: “To extract task activations for each task block, we performed a beta series regression on every task miniblock [37]. Specifically, we fit an independent regressor to every encoding period (3925ms, 5 TRs), resulting in 128 task regressors in total. Fitting regressors on the encoding period was done primarily to isolate rule representations rather than the actual trial (stimulus-response) period.”\n\nIn calculating the Parallelism Score (PS) from ANN activations, PS was calculated separately from the training procedure. Since we were only interested in the PS of rule representations, only input units associated with task rules were activated, while stimulus inputs were set to 0. We then computed a forward pass into the hidden units (again, with only rule units in the input space activated), and then computed PS based on these hidden activations. Thus, the hidden activations could not have been contaminated by stimulus inputs. We realize that we did not clearly specify this in the appropriate Appendix section (A.9) and thank the Reviewer for pointing that out. We have now added to Appendix A.9 the following clarification:\n“PS in ANNs was calculated separately from the training procedure. Since we were only interested in the PS of rule representations, only input units associated with the task rules were activated, while stimulus inputs were set to 0. This ensured that the hidden activations were not contaminated by stimulus-related activations when calculating PS in the hidden layers.
Otherwise, PS in ANNs was calculated in a similar manner to the empirical fMRI data, where the spatial features (dimensions) were the units within a given hidden layer (like voxels within a brain parcel).”\n\nNext, we address the Questions raised by the Reviewer:\n\n**Question:** *Does the ANN learn different conditions in separate blocks in pretraining? How can the ANN learn a new condition while not modulating the weights learned from the pretraining of previous conditions (a.k.a. catastrophic forgetting)?*\n\n**Response:** Yes, the different conditions are presented in separate blocks in pretraining. However, we ensured that there was no catastrophic forgetting by repeatedly training on all condition blocks until satisfying a stopping criterion where all conditions (within each pretraining paradigm) achieved greater than 99.0% performance. This ensured that all pretraining conditions could be performed simultaneously with high accuracy. We have now revised the Appendix section describing the pretraining process (Section A.8) to include these specific details.\n\n“Pretraining procedures were blocked together, such that all conditions within the Primitives pretraining paradigm (i.e., Logic, Sensory, and Motor primitives pretraining) were trained until all three tasks achieved 99% accuracy. Simple task pretraining was subsequently performed until both Logical-Sensory and Sensory-Motor tasks were performed at 99% accuracy.”\n\n", " Humans are good at generalizing previously acquired knowledge to find solutions to novel problems. In this way, they can learn a complex task by combining knowledge from different experiences instead of learning from scratch. This study suggests that neural networks can also afford this compositional generalization by having task representations in a geometrical structure that allows a linear classifier to be used for generalization.\n\nStrengths:\nThis work provides clear evidence that the representational structures (parallelism scores) are closely related to the capacity for efficient cognitive control and generalization, while the representations and cognitive control have been examined largely in separate domains. The effects of training facilitating fast learning can be found not only in human participants but also in the neural networks, which is interesting because this may not be easily explained by reinforcement learning but supports the idea that learning entails building abstract task structure representations.\n\nPotential weaknesses:\nIt is not clear whether the neural representations were based on the data acquired at the time of the presentation of the conditioned cue, at the time of decision making (maybe at the time of the second stimulus), or the mean activity across a mini block. If it is the former, this is before the stimulus presentations and decision making, which is not true in the ANN architecture (the ANNs were built with inputs of not only conditions but also stimuli). If it includes decision making, what is represented may not be the rule but rather the activity associated with the decision making.\n\nIn the previous study (Bernardi et al., 2020), the rule cannot be separated from the stimuli, while the representational structure cannot be explained by the similarity between stimuli. That is, the structure was built based on associated rules for each of the visually independent stimuli. However, in this experiment, can some pairs of parallel representations also be explained by the (dis)similarity of input vectors? (e.g.
the same pair of stimuli followed by different action cues). How would the same condition be represented if it were paired with different stimulus pairs? Does the ANN learn different conditions in separate blocks for pretraining? How can the ANN learn a new condition while not modulating the weights learned from the pretraining of previous conditions (a.k.a. catastrophic forgetting)? Does the order of pretraining conditions matter for accurate generalization? I think this part is the key to this paper but I can't find how this part was implemented. Humans intuitively know which features of sensory stimuli can be combined and which features cannot be combined (e.g. a red vertical stimulus can be shown but a red blue stimulus cannot be presented). Therefore, they can generalize the representation of 'vertical' to 'blue' stimuli even when they were trained only with 'red', or when the colors of training stimuli are covered by their shape. However, the proposed ANN would not yet be able to overcome these kinds of biases. On the other hand, humans might have more experience answering 'both' and 'either' than their negations; this would make it harder to respond to 'neither' or 'not both' conditions, while the ANN might not have (or might not be able to capture) these biases. ", " The paper presents an analysis of fMRI task data which seeks to uncover \"abstract representations\" in the brain. This same analysis is then applied to a simple artificial neural network.\n\nThe analysis is built around a \"parallelism score\" which measures displacement vectors between brain activations. Given three modalities (sensory, logical, and motor), each modality is held fixed and the other two are varied. The parallelism score for the fixed modality is an average measure of how aligned the displacement vectors are, averaged across combinations of the other two modalities.\n\nThe authors find that the highest motor parallelism is found in the motor areas, the highest visual parallelism in the visual areas, and that the logical parallelism is distributed throughout the brain, but lies mostly in the temporal and frontal areas.\n\nThis same analysis is applied to an artificial neural network. It is found that pre-training the NN results in better parallelism scores and better zero-shot performance.\n\n## Strengths\n- Application of the parallelism score to neural networks is novel as far as I can tell\n- Identifying the cognitive mechanism behind compositionality is an important problem\n- Analysis of this particular fMRI dataset is novel and shows the promise of this sort of analysis\n\n## Weaknesses\n- My main concerns are about the significance of the machine learning results. The paper claims that it introduces a new pre-training scheme. But the model architecture being considered is a two-layer feedforward network and the pre-training is similar to schemes that have already been proposed, e.g., gSCAN [1]. The simplicity of the architecture and setting make it hard to see whether the results about pre-training would still hold in general. \n- The particular task setting also makes the comparison between human behavior and ANNs hard to interpret. The paper claims that their trained ANN shows a human-like response pattern to tasks. That is, the lowest level of the ANN is activated for sensory modalities, the middle layer for logic modalities, and the last layer for motor modalities.
But this cannot be used to support the claim that the ANN has converged on a human-like hierarchy of representations, because there is no analogous ordered layering in the brain. Rather, the human brain results show that the motor areas respond to the motor modalities, the logic areas to the logic modalities, and the sensory areas to the sensory modalities. \n- Some clarity in description would be helpful. In section 3.3, I am not sure what the model inputs and outputs are supposed to be.\n- The stories about the brain and about the ANN don't seem to interact much. This paper presents some findings about the human brain and some findings about pre-training ANNs, but beyond section 3.7, the two stories do not seem very related.\n- The paper talks about identifying abstractions, but the only abstractions that can be discovered using the parallelism score are ones in which the representations have a linear relationship. \n\n## Bibliography\n[1] Ruis, Laura et al. “A Benchmark for Systematic Generalization in Grounded Language Understanding.” ArXiv abs/2003.05161 (2020)\n - line 105: What is meant by \"coding axis\"? In the text that follows, I gather that it is a displacement vector associated with a given modality. Is this correct?\n- line 120: In this case, what is meant by \"neural activation space\"? The space of all fMRI scans?\n- line 124: What is the activation vector? A vector of fMRI voxel intensity?\n- Figure 3 caption: This is the first place where the activation vector is mentioned. I would move this into section 2.3.\n- Figure 3 caption: \"The cosine angle between two linear decoders\" --> Is this the ANN mentioned in line 135? That seems like a non-linear decoder. What does it mean to take the cosine angle between decoders? Does this refer to the classification boundary? Or the hidden state representation?\n- line 135: What is the input to the model? Is it the intensity for all voxels?\n- Section 2.3: The PS will be high if the coding directions are parallel, but it will also be high if the coding directions are identical. How do we know that the coding directions do not all lie in the same low dimensional space, i.e., as in Figure 2B? Importantly, it would be good to know, of the pairs $v_i$ and $v_j$ that have high cosine similarity, what is the $L_2$ distance?\n- Section 3.3: What is the input to these ANNs? A text description of the task?\n- Section 3.4: What does it mean to measure the PS of an ANN? What is the activation vector for the ANN?\n- line 217: Is there only one training example per context?\n- line 220: \"unlike humans (see Fig. 1b), ANNs with no prior knowledge cannot compositionally generalize.\" I understand the claim being made, but it should be noted that an adult human is not exactly without \"prior knowledge\"\n- Figure 6: In 6B Brain systems are on the y-axis and rule domains are shown in groups. In 6C rule domains are shown on the x-axis and layers are shown in groups. I suggest putting layers on the y-axis in 6C to make the figures easier to compare.\n\n## Minor things\n- line 171: missing Figure 7\n- Section 3.5: missing Figure 9\n Limitations are adequately addressed", " Building off prior work proposing the role of “parallel abstract representations”—linearly additive vector representations of task dimensions—in compositional generalization, the authors set out to evaluate this framework through an analysis of existing behavioral and fMRI data as well as through ANN experimentation.
The authors find that human task generalizability patterns are consistent with this factorized task representation framework and that parallel abstract representations are present for sensory, logic, and motor rule types in cortical systems that correspond to previous decoding work. They then propose a pretraining paradigm for ANNs on either task primitives or on simpler compositions of task primitives, and find that this improves zero-shot generalization to a more complex permuted rules task as well as sample efficiency, while resulting in representations which mirror some aspects of the neural encoding geometry and cortical hierarchy. This paper addresses the interesting problem of compositional generalization, and does well to benchmark against human data. The extension of the “parallel abstract representations” framework to a new higher-dimensional task and the resulting fMRI analysis is interesting, namely that these types of linearly additive vector representations of task abstractions appear to emerge in an expected and interpretable way.\n\nThe move to computational models and assessment of representational geometry in that regime is well motivated, and the results are very interesting, but in many respects, the details are drastically under-defined in the main text. In some cases, the reader is left struggling to understand critical aspects of what the authors actually did, until moving to the supplementary materials. For example, the precise input structure to the ANN is left to speculation until finding supplementary figure 8. This type of information needs to be expanded considerably in the main text.\n The authors note (once again only in the appendix) that pretraining was stopped once pretraining tasks achieved 99%. To confirm, >99% accuracy on each task, e.g. until the worst task achieves 99% accuracy? How many training samples / iterations did this take under each pretraining paradigm?\n\nFor the Vanilla-ANN comparison, how is that generated? Is it just randomly initialized? From what distribution? (Okay, I found this in the appendix; add more details to the main text.)\n\nAside from pretraining familiarizing the networks with task context rules, the networks also become more familiar with and learn information about the sensory stimulus input units. Do you think that this confound ends up being relevant for the observed increased sample efficiency, both for the primitives pretraining and especially for the simple task pretraining, which I assume was probably pretrained for more iterations to get to 99% due to the increased task complexity?\n\nIt would be helpful to see a better breakdown, in general, for both brain and ANN experiments, of which specific rules were more consistently encoded, e.g. how did PS scores differ across rules within each rule domain? Did certain dimensions drive mean PS scores more than others?\n\nOverall, this is an interesting study with great potential for publication, but the authors really need to be more explicit and detailed in their description of the ANN pretraining experiments, especially in the main text, so as to alleviate reader concerns about potential confounds.\n The authors have addressed some limitations of this work. I agree that future work should attempt to replicate this using other task paradigms.", " This work investigates compositional generalization in humans (using fMRI data) and ANNs.
The working hypothesis of the paper is that parallel abstract representations - consistent variable representations across different contexts - support compositional generalization. The Parallelism Score, a measure from neuroimaging research, is used to measure the consistency of representations in humans and ANNs. In their experiments, the authors use the C-PRO paradigm to construct a 64-context compositional task for humans and ANNs. The paradigm enables systematically varying conditions to test generalization abilities. In their experiments, they expose humans and ANNs to limited contexts and test them in novel contexts. First, they confirm that humans perform well on these tasks, but drop slightly in novel contexts. They observe parallel abstract representations in fMRI data, supporting their hypothesis that this enables generalization. Based on their finding that, for humans, increased exposure to a rule improves performance in novel contexts, they design pretraining tasks to improve generalization in ANNs. This includes 1-rule and 2-rule simplifications of the 3-rule tasks of C-PRO, enabling the ANN to learn prior representations for sub-rule variables. Without pretraining, ANNs struggle in novel contexts. Pretraining significantly improves performance in novel contexts, enables zero-shot performance, and supports more sample-efficient learning. Pretraining enables compositional generalization in ANNs similar to humans and similarly increases parallel abstract representations. The authors also report similarities in the hierarchical Parallelism Scores between human fMRI data and ANNs. I think this is an excellent paper. The authors apply ideas from neuroscience research to the growing field of study that is compositional generalization in ANNs. My only caveat to confidently recommending acceptance is that I am not qualified to assess the validity of the neuroscientific claims made in the paper. But it seems that the authors have followed a solid methodology throughout and do not overstate any claims, so I would feel comfortable still recommending acceptance. Furthermore, the findings of the paper related to ANNs are quite interesting and novel on their own.\n\n**Strengths**\n\n1. The ideas in this paper are novel and valuable in the study of compositional generalization in ANNs. I am not aware of previous work applying the idea of parallel abstract representations to ANNs*, and this seems useful for future research. Furthermore, the methodology followed by the authors to perform analogous experiments in neuroimaging and ANNs is unique and well thought out. A human-centric benchmark for compositional generalization at such a scale (64 contexts) is certainly valuable.\n\n2. For the most part, the claims made by the authors are supported and well motivated throughout the paper. Their pretraining method boosts generalization significantly and should be further developed in different deep learning contexts. I would have expected the PS measures on ANN hidden layers to be higher than those reported in Figure 5A, but the zero-shot performance and sample efficiency make a strong case that these pretraining methods are effective.\n\n3. The paper is excellently written and well structured. The ideas are presented clearly, without overstating any claims.\n\n\\* The closest idea to this that I have read of is the concept of “localism” from the paper by Hupkes et al. titled Compositionality Decomposed: How do Neural Networks Generalise? (2020).
It’s not the same idea, but is similar in that it expects compositional models to build consistent variable representations independent of context.\n\n**Weaknesses**\n\n1. There is one experiment where the results are not as convincing as one would wish. In Figure 5A the PS measure increases only slightly after pretraining. This shows that pretraining does in fact encourage parallel abstract representations, but it does not necessarily show that this is absolutely crucial to compositional generalization. The PS measures increase slightly, but the generalization ability increases significantly, so perhaps some other factor enabled by pretraining is enabling generalization. Since the working hypothesis of the paper is the link between parallel abstract representations and compositional generalization, it might be worth discussing this result more.\n\n2. The paper performs its ANN experiments with a simple 2-layer feedforward neural network. I understand that this isolates the effect of pretraining for the experiments, but in the process it ignores the neural network architectures that are currently state-of-the-art in most domains i.e. Transformers, CNNs, and LSTMs. Most existing studies on compositional generalization in ANNs have focused on these more advanced architectures and if this paper had done so it would have made a stronger case for the proposed methods and methodologies. As such, the findings of this paper are not directly comparable to leading edge research on the generalization capabilities of ANNs.\n\n 1. What is your response to weakness (1)? Do you think the relatively small increase in Figure 5A undermines your hypothesis that parallel abstract representations are important for compositional generalization?\n\n2. Please elaborate on how you arrived at your pretraining method from your finding that in humans \"increased exposure to specific rules improved performance on subsequent novel contexts\". I understand the finding, and the pretraining method, but to me the link isn't obvious. The pretraining method does not simply expose the ANN to more examples of contexts in which the rule appears (that would simply be showing the ANN more 3-rule examples containing that rule). The pretraining method exposes the ANN to simplified contexts (1-rule or 2-rule examples), where it can focus on learning representations. How do you motivate the link? The authors dedicate an entire section to discussing the limitations of their work. This is valuable and insightful." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "qwoRxFPdYq", "u-p530dYdwk", "eMCokHt2cs7", "u-p530dYdwk", "nips_2022_177GzUAds8U", "deVUaxnGOAx", "Tt6JZEm3CGj", "u4EJPxur7sT", "6yg5si55R-V", "08sVwqYiR32", "v6MhcORqhdL", "iQlhqbB1ikT", "H1Wye_KY9AC", "u-p530dYdwk", "nips_2022_177GzUAds8U", "nips_2022_177GzUAds8U", "nips_2022_177GzUAds8U", "nips_2022_177GzUAds8U" ]
nips_2022_ZwnPdpCw6d
Robust $\phi$-Divergence MDPs
In recent years, robust Markov decision processes (MDPs) have emerged as a prominent modeling framework for dynamic decision problems affected by uncertainty. In contrast to classical MDPs, which only account for stochasticity by modeling the dynamics through a stochastic process with a known transition kernel, robust MDPs additionally account for ambiguity by optimizing in view of the most adverse transition kernel from a prescribed ambiguity set. In this paper, we develop a novel solution framework for robust MDPs with $s$-rectangular ambiguity sets that decomposes the problem into a sequence of robust Bellman updates and simplex projections. Exploiting the rich structure present in the simplex projections corresponding to $\phi$-divergence ambiguity sets, we show that the associated $s$-rectangular robust MDPs can be solved substantially faster than with state-of-the-art commercial solvers as well as a recent first-order solution scheme, thus rendering them attractive alternatives to classical MDPs in practical applications.
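As an illustration of the terminology in this abstract (in standard robust-MDP notation, which may differ from the paper's own symbols and equation numbering), the s-rectangular robust Bellman update that the solution framework decomposes takes the form

$$v(s) \;=\; \max_{\pi_s \in \Delta_A} \; \min_{p \in \mathcal{P}_s} \; \sum_{a \in A} \pi_s(a) \left( r(s,a) + \gamma \, p_a^{\top} v \right),$$

where $\pi_s$ is a randomized decision rule in state $s$, $\mathcal{P}_s$ is the $\phi$-divergence ambiguity set attached to state $s$, and $\gamma$ is the discount factor; the inner minimization is the part that the paper reduces to simplex projections.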
Accept
This is a somewhat borderline paper. The reviewers were unanimously positive, but they all had concerns. In reading through the concerns and responses, it seems that many (though perhaps not all) of the concerns could be addressed with additional references and some expository modifications. If the paper makes the final cut, I encourage the authors to follow through with their proposed modifications and do their best to address the concerns expressed by the reviewers.
train
[ "N6gEZVF2Q8", "doZ8g0v40-", "14VhUuSj-4U", "-E4S5G4UrW_", "bDftwMdZqom", "J0qOfHU-Mmv", "lkv0P-tVqPA", "4FKrA_Uhbry", "blCUbebQ_e3", "1lItAfqBl4o", "zm8_jzWo65j", "1yJhGudJXj", "LWA8OaRvK4j", "EwaxyILs8LX" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank all reviewers for their time reading this paper and providing reviews for our submission.\n\nWe take this last opportunity to clarify the importance of model-based methods. While we fully agree that the model-free approach is an important part of reinforcement learning, we would like to emphasize that model-based methods are complementary to the model-free approach, and they have standalone merits. \n\nIn particular, many real-life settings are data-poor (few past observations relative to the size of the state/action spaces), and decisions are taken at a relatively slow time pace (e.g., inventory control where orders are placed weekly). In those problems, model-free methods will not work due to their data requirements. The situation resembles the use of linear regression (if few data points are available) vs. a neural network (when large amounts of data are available). One would not normally use a neural network in medical studies with a few hundred data points.", " The authors have addressed most of my comments. I am not quite convinced by the discussion of weakness 3, i.e., extension to large/continuous state spaces. Therefore, I would keep my rating of 6, weak accept.", " Thank you very much for taking the time to read our response! We highly appreciate this!\n\nWe would like to clarify that we compared to the first-order method from [i]. \n\nFor solving problems (3) and (4) [the main problems in our paper], we believe that gradient-based (first-order) methods are not competitive with the MOSEK solver or our proposed algorithms for the following reasons:\n\n1. Unlike the MOSEK solver or our proposed algorithms, first-order methods cannot solve problems to high accuracy efficiently. Since solving (3) and (4) only serves as a subroutine of the value iteration, we need to be able to solve these problems with high accuracy.\n\n2. Using the bisection method, the number of iterations is in O(log(length of the interval)), which is much less than the typical number of iterations used in first-order methods, even if the problem is not solved to high accuracy.\n\n3. For the projection problem (4), the per-iteration complexities of our proposed algorithms are of the same order as the number of variables. For gradient-based methods, the complexity of evaluating the gradient at each iteration is at least of the same order as the number of variables. (An illustrative sketch of this bisection-plus-projection scheme is given below, following the reviews.)\n\nTherefore, we believe that first-order methods are not competitive in this particular setting. We are happy to explain this clearly in the camera-ready version, if the paper is accepted. We are also happy to compare with gradient-based methods (e.g. projected gradient descent) if you have a strong interest in this direction.\n\n[i] Scalable First-Order Methods for Robust MDPs, 2021.", " I thank the authors for the detailed responses that resolved many of my concerns/questions. Nevertheless, I am still not convinced by the use of MOSEK solvers. Since the Bellman equation forms a convex optimization problem, many gradient-based methods can be used (e.g. projected gradient descent) and can serve as baselines.", " 1. Novelty and the significance of the work.\n\nThank you for challenging us to clarify the contributions of our paper. To the best of our knowledge, even in the simpler setting of (s,a)-rectangularity there are no prior algorithms that solve RMDPs with general $\\phi$-divergence ambiguity sets. Prior work solves special cases, such as total deviation ($L_1$), KL-divergence, or $L_2$-bounded ambiguity sets.
Our new algorithms for dealing with $\\phi$-divergence sets not only generalize this prior work into a single framework, but they also enable us to use new metrics, such as the Burg Entropy. Relying on the general framework of $\\phi$-divergences, however, also introduces non-trivial algorithmic challenges, which we overcome through new theory and algorithms.\n\nWe would also like to clarify why we focus on s-rectangular ambiguity sets. It has been shown that s-rectangular sets can provide policies that are less conservative (often by a factor of two or more) compared to (s,a)-rectangular sets [47], both in-sample and out-of-sample. This observation has motivated a recent surge in research on s-rectangular robust MDPs [5, 9, 15, 20]. We emphasize this fact in the revised paper (changes in blue). Unfortunately, solving the s-rectangular problem is more challenging because it involves searching over the continuous space of randomized policies, which implies that the robust Bellman operator no longer decomposes along actions.\n\n2. How about policy evaluation and policy improvement?\n\nThank you! Our proposed framework speeds up the computation of the robust Bellman operator, which underlies most robust dynamic programming algorithms, including robust policy evaluation and robust policy improvement. Our accelerated Bellman operator can even be used in conjunction with value function approximation algorithms, such as robust projected value iteration and robust approximate policy iteration [i]. This is important as a naive computation of the robust Bellman operator as a convex program would have a practical complexity of $\\mathcal{O} (S^3A^3)$, which would not scale to most problems of practical importance. We now clarify this point in the introduction.\n\n3. The experiments are quite limited since the authors only compare their algorithms with MOSEK solvers.\n\nWe apologize for the lack of clarity in the first draft of the paper. For the $\\phi$-divergence ambiguity set, the associated optimization problem of the robust Bellman update contains nonlinear constraints. For the KL-divergence case, for example, the associated Bellman update is an exponential cone program. Therefore, linear optimization solvers are not suitable for our settings, and MOSEK appears to be the best available (commercial) solver for the problems of interest. We are also not aware of any tailored solution schemes for the generic robust Bellman update (3) that exploit the inherent structure. We now clarify this point in the introduction of the revised paper (changes in blue).\n\n4. Discussions on why $\\phi$-divergence (s)-rectangular ambiguity sets are useful and how to construct them seem missing. \n\nWe apologize for this omission! The $\\phi$-divergence ambiguity sets generalize a wide range of ambiguity sets that are considered in the related literature on distributionally robust and data-driven optimization, and they benefit from rigorous statistical performance guarantees. It has been shown recently, for example, that $\\phi$-divergence constrained optimization problems are optimal among all (known and unknown) data-driven optimization paradigms if certain types of worst-case out-of-sample performance guarantees are being sought [ii]. The radii of $\\phi$-divergence ambiguity sets can be selected either via cross-validation or via rigorous statistical bounds.
We now discuss the advantages of $\\phi$-divergence ambiguity sets in the introduction as well as the selection of their radii in the corresponding sections.\n\nAs we discussed in our response to your first comment, we focus on s-rectangular ambiguity sets since it has been shown that they can provide policies that are less conservative (often by a factor of two or more) compared to (s,a)-rectangular sets [47], both in-sample and out-of-sample. This observation has motivated a recent surge in research on s-rectangular robust MDPs [5, 9, 15, 20]. We emphasize this fact in the revised paper.\n\n*The numeric references above refer to the references in the initial submission of the paper.\n\n[i] Tamar, Aviv, Shie Mannor, and Huan Xu. “Scaling up Robust MDPs Using Function Approximation.” In International Conference on Machine Learning (ICML), 2014.\n\n[ii] From Data to Decisions: Distributionally Robust Optimization Is Optimal, Management Science 2021.", " 1. Why are $\\phi$-divergence (s)-rectangular ambiguity sets useful? \n\nPlease refer to our responses above.\n\n2. Can we handle ambiguity sets defined by several $\\phi$-divergences?\n\nThis is an interesting suggestion! Yes, the proposed approach can handle the case of different $\\phi$ functions in the summation of (2). If different $\\phi$ functions are used in the ambiguity set (2), this would not change the overall outer bisection method, but different $\\phi$ functions would lead to different projection problems, which we know how to solve. We now elaborate on this point in the paper; thank you!\n\n3. Can the algorithms developed apply to policy improvement?\n\nYes. By combining with techniques in [20], the developed algorithms can be generalized to solve policy improvement problems.\n\n*The numeric references above refer to the references in the initial submission of the paper.", " 1. Many model-free methods aren't discussed.\n\nThank you for bringing the references [1, 2, 3] to our attention! Although we focus on the model-based setting, we agree that it is important to review at least some of the related model-free methods. The revised version of the paper discusses the suggested references in the introduction. \n\n2. Why is the method in this paper better than value iteration or dynamic programming?\n\nWe would like to clarify that our proposed framework speeds up the computation of the robust Bellman operator, which underlies most robust dynamic programming algorithms, including robust value iteration and robust policy iteration. Our accelerated Bellman operator can even be used in conjunction with value function approximation algorithms, such as robust projected value iteration and robust approximate policy iteration [i]. This is important as a naive computation of the robust Bellman operator as a convex program would have a practical complexity of $\\mathcal{O} (S^3A^3)$, which would not scale to most problems of practical importance. We now clarify this point in the introduction.\n\n[i] Tamar, Aviv, Shie Mannor, and Huan Xu. “Scaling up Robust MDPs Using Function Approximation.” In International Conference on Machine Learning (ICML), 2014.", " 1. This paper mainly focuses on the model-based setting, which limits the contributions.\n\nThank you.
While this paper focuses on the model-based setting, we emphasize that efficient methods for computing robust Bellman updates extend to both approximate dynamic programming settings [44] and model-free settings [i, ii]. We focus on the model-based setting for two reasons. First, the model-based setting is often an important building block for constructing model-free algorithms in reinforcement learning. It appears reasonable to study the new setting of $\\phi$-divergence-based s-rectangular uncertainty sets first in the model-based context before translating them into the model-free context. While we expect that our algorithms can be extended to model-free settings, we leave this development to future work. Second, we would like to emphasize that the model-based setting itself also has many important real-life applications [iii-v], and that it is under active study in the machine learning community [vi-ix]. We now make this point explicit in the conclusions of the paper.\n\n2. Some external methods like binary search or other optimization methods are introduced and hence will complexify the method.\n\nWe note that all our algorithms are open-source and will be made available to the public (the URL is currently withheld to maintain the anonymity of the review process), as opposed to the commercial closed-source black-box optimizers (CPLEX, Gurobi, MOSEK) that are used otherwise. In some sense, we believe that this is comparable to, e.g., the tailored solvers for support vector machines that are implemented in scikit-learn, which, despite having a non-trivial implementation, are not reliant on any third-party software and are accessible as a mature technology to both researchers and practitioners.\n\n*The numeric references above refer to the references in the initial submission of the paper.\n\n[i] Reinforcement Learning under Model Mismatch, NIPS 2017.\n\n[ii] Robust Reinforcement Learning using Least Squares Policy Iteration with Provable Performance Guarantees, ICML 2021.\n\n[iii] Data uncertainty in Markov chains: Application to cost-effectiveness analyses of medical innovations. Operations Research, 2018.\n\n[iv] Distributionally robust inventory control when demand is a martingale. Mathematics of Operations Research, 2022.\n\n[v] Robust scheduling of EV charging load with uncertain wind power integration. IEEE Transactions on Smart Grid, 2018.\n\n[vi] Play to Grade: Testing Coding Games as Classifying Markov Decision Process, NeurIPS 2021.\n\n[vii] Fast Approximate Dynamic Programming for Infinite-Horizon Markov Decision Processes, NeurIPS 2021.\n\n[viii] Navigating to the Best Policy in Markov Decision Processes, NeurIPS 2021.\n\n[ix] MICo: Improved representations via sampling-based state similarity for Markov decision processes, NeurIPS 2021.", " 1. Presentation, and the relationship between the projection problem in (4) and the robust Bellman operator.\n\nWe apologize for the lack of clarity in the initial draft of the manuscript. The equivalence between the robust Bellman update in (3) and the projection problem (4) is stated in Theorem 1, but in the previous version one needed to examine its proof to understand how the solution of (3) reduces to the repeated solution of (4). Based on your comment, we have revised our manuscript and uploaded a revision of our paper (changes in blue) that includes a dedicated description of this algorithm in the appendix, as well as a short discussion of this algorithm and signposting to the appendix in the main paper. Thank you for this suggestion!\n\n2.
How about the (s,a)-rectangular uncertainty set, and what is the motivation for considering the s-rectangular uncertainty set?\n\nThank you; this is a great question! Solving (s,a)-rectangular RMDPs with $\\phi$-divergence sets is also an unexplored and challenging problem, and our algorithms directly apply to this more specialized setting, too. We now point out in the revised paper that prior work on (s,a)-rectangular RMDPs with $\\phi$-divergences has only addressed the subclass of KL-divergences [22, 29, 31]. We make this connection clearer in the revision. Our paper focuses on s-rectangular ambiguity sets for two reasons. First, it has been shown that s-rectangular sets can provide policies that are less conservative (often by a factor of two or more) compared to (s,a)-rectangular sets [47], both in-sample and out-of-sample. This observation has motivated a recent surge in research on s-rectangular robust MDPs [5, 9, 15, 20]. We emphasize this fact in the revised paper. Second, solving the s-rectangular problem is more challenging because it involves searching over the continuous space of randomized policies, which implies that the robust Bellman operator no longer decomposes along actions.\n\n*The numeric references above refer to the references in the initial submission of the paper.", " 3. The approach might be difficult to generalize to the practical reinforcement learning setting with large or even continuous state and action spaces.\n\nThank you. While this paper focuses on the model-based setting, we emphasize that efficient methods for computing robust Bellman updates extend to both approximate dynamic programming settings [44] and model-free settings [i, ii]. We focus on the model-based setting for two reasons. First, the model-based setting is often an important building block for constructing model-free algorithms in reinforcement learning. It appears reasonable to study the new setting of $\\phi$-divergence-based s-rectangular uncertainty sets first in the model-based context before translating them into the model-free context. While we expect that our algorithms can be extended to model-free settings, we leave this development to future work. Second, we would like to emphasize that the model-based setting itself also has many important real-life applications [iii-v], and that it is under active study in the machine learning community [vi-ix]. We now make this point explicit in the conclusions of the paper.\n\n*The numeric references above refer to the references in the initial submission of the paper.\n\n[i] Reinforcement Learning under Model Mismatch, NIPS 2017.\n\n[ii] Robust Reinforcement Learning using Least Squares Policy Iteration with Provable Performance Guarantees, ICML 2021.\n\n[iii] Data uncertainty in Markov chains: Application to cost-effectiveness analyses of medical innovations. Operations Research, 2018.\n\n[iv] Distributionally robust inventory control when demand is a martingale. Mathematics of Operations Research, 2022.\n\n[v] Robust scheduling of EV charging load with uncertain wind power integration. IEEE Transactions on Smart Grid, 2018.\n\n[vi] Play to Grade: Testing Coding Games as Classifying Markov Decision Process, NeurIPS 2021.\n\n[vii] Fast Approximate Dynamic Programming for Infinite-Horizon Markov Decision Processes, NeurIPS 2021.\n\n[viii] Navigating to the Best Policy in Markov Decision Processes, NeurIPS 2021.\n\n[ix] MICo: Improved representations via sampling-based state similarity for Markov decision processes, NeurIPS 2021.", " 4.
There are scalable algorithms for model-free robust reinforcement learning.\n\nThank you for pointing out this line of research to us; although we focus on the model-based setting, we have updated the manuscript to include a brief survey of this literature stream. Please note that all of the aforementioned papers focus on (s,a)-rectangular uncertainty sets, whereas we study s-rectangular uncertainty sets. Also, to the best of our understanding, the complexity of the algorithms proposed by all four papers does depend on S and/or A, even though that complexity is somewhat less obvious:\n\n(a) In NIPS 2017, Section 4.1, the robust projected Bellman update requires the computation of the support function $\\sigma$, which has a time complexity depending on S. In Section 4.2, to solve the mean squared projected Bellman equation, one needs to compute the gradient $\\mu$, which has a time complexity (without applying any approximation) that depends on S, as stated in the paper.\n\n(b) In NeurIPS 2021, Algorithm 2, to update the parameters $\\delta$ and $\\theta$, the time complexity depends on S. For example, the summation needed in this step is over all states.\n\n(c) In ICML 2021, Algorithm 1 requires the computation of equations (18)-(23). Equation (21) requires the computation of the support function $\\sigma$, whose time complexity depends on S.\n\n(d) In ICML 2022, Algorithm 4, the computation of the B's and D's involves summations that depend on A. The computation of x requires the computation of an argmax over S components. ", " This paper investigates the problem of computing the robust Bellman operator efficiently when the uncertainty set is defined using a $\\phi$-divergence. The basic idea is to calculate the robust Bellman update via simplex projections, which can be solved efficiently. Multiple examples of $\\phi$-divergence-defined uncertainty sets are given, including the Kullback-Leibler divergence, Burg Entropy, variation distance, and $\\chi^2$-divergence. Numerical experiments are provided to demonstrate the efficiency of the proposed method.\n\nStrengths:\nThis paper investigates a more general notion of uncertainty set, s-rectangular, which is less conservative than the (s,a)-rectangular uncertainty set model. A large class of distance metrics, i.e., $\\phi$-divergence-defined uncertainty sets, is studied. The proposed method is claimed to be more computationally efficient. Theoretical results and numerical experiments validate these claims. Overall, the contribution is useful in practice.\n\nWeaknesses:\nThe major weakness of this paper lies in the presentation. In section 4, the authors claim that the robust Bellman operator can be solved efficiently if the projection problem in (4) can be solved efficiently. However, it is not discussed at all how the projection problem in (4) is related to the robust Bellman operator. It is thus not clear to this reviewer how, once the projection in (4) is solved, it is going to be used in the evaluation of the robust Bellman operator. As in section 5, the discussion is focused on solving the projection in (4) for various types of $\\phi$-divergences. The overall presentation of the paper needs to be carefully revised.\n\nSecond, the setting focused on in this paper is the s-rectangular uncertainty set. One question is, if the problem is reduced to the (s,a)-rectangular uncertainty set, does the challenge in evaluating the robust Bellman operator still exist?
As for the (s,a)-rectangular uncertainty set, the problem is reduced to a distributionally robust optimization problem for each (s,a) pair. Therefore, the motivation for considering the s-rectangular uncertainty set should be carefully discussed in order to motivate the research in this paper. \n\nThird, this paper mainly focuses on the model-based planning setting, where the uncertainty set is assumed to be known. Therefore, the approach in this paper might be difficult to generalize to the practical reinforcement learning setting with large or even continuous state and action spaces. \n\nFourth, there is another line of research focusing on model-free robust reinforcement learning, where the uncertainty set is not known, and samples from the center of the uncertainty set can be obtained. Different uncertainty set models are considered, e.g., ellipsoid-type uncertainty set models and the $\\epsilon$-contamination uncertainty set model. Those algorithms are computationally efficient (the complexity at each iteration is independent of S and A), and can be easily extended to the case with function approximation for large state-action spaces. \nReinforcement learning under model mismatch. NIPS 2017.\nOnline Robust Reinforcement Learning with Model Uncertainty. NeurIPS 2021.\nRobust reinforcement learning using least squares policy iteration with provable performance guarantees. ICML 2021.\nPolicy Gradient Method For Robust Reinforcement Learning, ICML 2022.\n\n\n Please see Strengths And Weaknesses. NA.", " This paper proposed an equivalence between the robust Bellman operator and a projected optimization problem. Also, several solutions to the projected optimization problems are developed for different distance measures. Strengths: (1) The $\\phi$-divergence is general, hence several distance measures are included; (2) The proposed method works for many different uncertainty sets; (3) The complexity matches the one for non-robust problems. \n\nWeaknesses: (1) Although this is a model-based method, many model-free methods aren't discussed, e.g., [1,2,3]. More related works should be discussed. \n(2) Theoretically, the authors didn't explain clearly why the method in this paper is better than the value iteration or dynamic programming method for robust RL problems.\n\n[1] Wang, Yue, and Shaofeng Zou. \"Online robust reinforcement learning with model uncertainty.\" Advances in Neural Information Processing Systems 34 (2021): 7193-7206.\n[2] Panaganti, Kishan, and Dileep Kalathil. \"Sample Complexity of Robust Reinforcement Learning with a Generative Model.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2022.\n[3] Badrinath, Kishan Panaganti, and Dileep Kalathil. \"Robust reinforcement learning using least-squares policy iteration with provable performance guarantees.\" International Conference on Machine Learning. PMLR, 2021. See the above part. This paper mainly focuses on the model-based setting, which limits the contributions. Also, some external methods like binary search or other optimization methods are introduced and hence will complexify the method. ", " The paper studies robust MDPs with s-rectangular ambiguity sets defined by $\\phi$-divergence. The authors then propose a number of fast algorithms with complexity analyses for different popular $\\phi$-divergences. Numerical results were then provided to support the methodologies. The paper is well-written. The problem studied is highly relevant and of importance.
### Strengths:\n\nRobust MDP is an important topic in ML/AI, and the question of how to quickly solve a robust MDP problem is highly relevant. It is known in the literature that to maintain tractability, one needs to stick with (s)- or (sa)-rectangular ambiguity sets. The paper focuses on several types of (s)-rectangular sets and provides rigorous investigations, which is a welcome contribution to the research topic. \n\n### Weaknesses:\n\nMy main concern goes to the novelty and the significance of the work. Robust MDP is already widely studied and well understood. The paper does not provide new robust MDP models, but only focuses on how to quickly solve the robust MDP problem under some well-known uncertainty settings (the use of $\\phi$-divergence ambiguity sets in robust MDP is not new). Note that $\\phi$-divergence (sa)-rectangular ambiguity sets were already studied in the literature, thus the work here is just an extension to (s)-rectangularity settings.\n\n**Other comments**\n\nThe paper only discusses value iteration; how about policy evaluation and policy improvement?\n\nThe experiments are quite limited since the authors only compare their algorithms with MOSEK solvers. I can see that the Bellman equation has a standard max-min form with a linear objective function, for which several efficient algorithms exist. \n\nDiscussions on why $\\phi$-divergence (s)-rectangular ambiguity sets are useful and how to construct them seem missing. \n\n - Why are $\\phi$-divergence (s)-rectangular ambiguity sets useful? \n- Can we handle ambiguity sets defined by several $\\phi$-divergences?\n- Can the algorithms developed apply to policy improvement? I don't see any limitations or potential negative societal impacts of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 2, 4 ]
[ "nips_2022_ZwnPdpCw6d", "1yJhGudJXj", "-E4S5G4UrW_", "J0qOfHU-Mmv", "EwaxyILs8LX", "EwaxyILs8LX", "LWA8OaRvK4j", "LWA8OaRvK4j", "1yJhGudJXj", "1yJhGudJXj", "1yJhGudJXj", "nips_2022_ZwnPdpCw6d", "nips_2022_ZwnPdpCw6d", "nips_2022_ZwnPdpCw6d" ]
nips_2022_PCZfDUH8fIn
The price of unfairness in linear bandits with biased feedback
In this paper, we study the problem of fair sequential decision making with biased linear bandit feedback. At each round, a player selects an action described by a covariate and by a sensitive attribute. The perceived reward is a linear combination of the covariates of the chosen action, but the player only observes a biased evaluation of this reward, depending on the sensitive attribute. To characterize the difficulty of this problem, we design a phased elimination algorithm that corrects the unfair evaluations, and establish upper bounds on its regret. We show that the worst-case regret is smaller than $\mathcal{O}(\kappa_* ^{1/3}\log(T)^{1/3}T^{2/3})$, where $\kappa_*$ is an explicit geometrical constant characterizing the difficulty of bias estimation. We prove lower bounds on the worst-case regret for some sets of actions showing that this rate is tight up to a possible sub-logarithmic factor. We also derive gap-dependent upper bounds on the regret, and matching lower bounds for some problem instance. Interestingly, these results reveal a transition between a regime where the problem is as difficult as its unbiased counterpart, and a regime where it can be much harder.
Accept
The authors study a linear bandit problem with biased feedback, develop an algorithm and bound the corresponding regret. The bandit problem they study is meaningful and highly relevant. I therefore recommend to accept the paper.
train
[ "YMeUH3wzQaO", "HC6FTwnHiMK", "9TApBRqt_ba", "wN0zb5xyn2cd", "5BfiU4olIj7", "E8nn48mlCm", "k8KXOpXriH", "s92aD8ERmT", "GdpJZt-CQkf", "GvSeIePO61P", "FGRTSHbGmL0" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for including the results on the case of more than two groups.", " Thanks for the response. Please add these details to the paper, so that the paper is clearly placed in the fairness literature.", " We thank the reviewer for giving us an opportunity to clarify a fairness-related aspect of our model, which we will further discuss in the manuscript.\n\nIn this paper, we study a decision-making problem in which the evaluations are unfairly biased against a group of actions (representing for instance individuals described by covariates). This bias can be harmful both for the agent, who can receive a very sub-optimal cumulative reward, due to negatively biased evaluations of optimal actions belonging to the discriminated group, and for the individuals (represented here by the actions), who are systematically discriminated according to their group, instead of being assessed according to their true value $x^{\\top}\\gamma^*$. \n\nWe underline that in this paper (by contrast, e.g., to the literature on demographic parity), we do not seek to correct for the possibly unequal distribution of features $x$ and values $x^{\\top}\\gamma^*$ across the different groups. Our approach is related to causal fairness [1]: in the causal fairness framework, the dependencies between prediction, sensitive attributes and non-sensitive attributes are captured by a causal model. The goal is then to ensure that the sensitive attribute does not *directly* influence the prediction (in other words, that conditionally on selected *resolving variables*, the prediction is independent of the sensitive attribute). Here the resolving variables may depend on the sensitive attribute in a manner that is considered as non-discriminatory. For example, one group may have, on average, more physical strength than the other one, and this skill can be considered as fair when it comes to recruiting a piano mover.\n\nThe biased linear model studied in this paper is a simple example of a causal model with linear structural model equations $x=f(z,\\xi')$ and $y=x^{\\top}\\gamma^*+\\omega^* z+ \\xi$, where $\\xi$ and $\\xi'$ are noise terms: the covariates $x$ may depend on the sensitive attribute $z$, and the biased evaluation $y$ depends on both. In our work, we treat $x^{\\top}\\gamma^*$ as a fair evaluation of the value of action $x$, since it is independent of $z$ conditionally on the resolving variable $x$.\n\n[1] Avoiding Discrimination through Causal Reasoning (2017). Kilbertus, N. et al. NeurIPS 2017.", " I thank the authors for their response. The response covers most of the questions raised by me. I agree that the *awareness framework* can be used to motivate the system proposed. However, please make sure the pros and cons of both these frameworks are discussed properly (e.g., where we may want to avoid the awareness framework). \n\nOne concern that still remains mostly unanswered is the following (I clubbed two questions together, and the authors missed one):\n\n> How the grouping of actions relates to the fairness for end users in a decision making problem is unclear.", " G-optimal design can be chosen of size $(d+1)(d+2)/2$: this classical result follows from writing the symmetric covariance matrix $V(\\pi^*)$ as a barycenter of matrices $a_x a_x^{\\top}$ in $\\mathbb{R}^{(d+1) \\times (d+1)}$, and by using Carathéodory's theorem. 
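To spell out the dimension count behind this (a sketch based on the standard statement of Carathéodory's theorem; the exact bookkeeping is ours, not quoted from the paper): the matrices $a_x a_x^{\\top}$ live in the space of symmetric $(d+1)\\times(d+1)$ matrices, whose dimension is\n$$\\dim \\mathrm{Sym}(d+1) = (d+1) + \\binom{d+1}{2} = \\frac{(d+1)(d+2)}{2},$$\nand Carathéodory's theorem represents any point of their convex hull using at most $\\frac{(d+1)(d+2)}{2}+1$ of them (one fewer for a boundary point, which is how a support of size $(d+1)(d+2)/2$ can be reached). 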
By contrast, Elfving's theorem implies that $\\kappa_*^{-1/2}e_{d+1}$ is a barycenter of points $a_x \\in \\mathbb{R}^{(d+1)}$, and thus can be written as a combination of $d+2$ points.", " **Practical aspects** Linear bandit algorithms are widely used in recommendation systems and online advertisement. Fairness in such online decision-making problems is an important issue. In this paper, we introduce a simple biased linear model to use as a first step in the understanding of these problems. The objective and the main contributions of our work are primarily theoretical: in a situation of biased observations, we want to 1) construct an algorithm that handles biased data and corrects unfair evaluations, and 2) understand the price to pay to correct the bias, and what changes conceptually in the trade-off between exploration and exploitation. On this problem, we are able to characterize precisely the dependence of the regret on the geometry of the action set, which is an open question in most partial monitoring problems.\n\nTo do so, we propose a first phased elimination algorithm adapted to biased observations. Using this algorithm allows us to carefully analyze the regret, and to obtain sharp matching bounds. While it is theoretically optimal, its empirical performance can perhaps be improved (for example by choosing $\\epsilon_l$ in a less arbitrary manner, since in phased elimination algorithms it governs the length of the phases and has a strong impact on performance, or by using algorithms more complex to analyze, such as IDS).\n \n**Implementation** The algorithm can be implemented easily by following its description: the only steps not explicitly written are the computation of the solutions of (2) and (3). To fill this gap, the reader can use optimal design libraries (such as the R package OptimalDesign) together with Lemma 9. Following the reviewer's suggestion, we discuss implementation in Appendix A.7 of the rebuttal revision. We underline that Algorithm 4 is only more complex than Algorithm 3 because we need to introduce new quantities required in the mathematical analysis, but not needed in the implementation.\n\n**Relation to [1]** We thank the reviewer for pointing out the interesting reference [1], which will be included in the manuscript. The spirit of optimization problem (3) is indeed related to that of problem (2) in [1]. Nonetheless, these problems differ significantly:\n- The optimization problem (2) in [1] solves a linear bandit problem, and as such could be used in our algorithm to replace the G-Exploration step. By contrast, the optimization problem (3) in our paper solves a bandit problem with *biased observations*. We emphasize that G-optimal design cannot be used to solve this problem. To overcome this issue, we use new results from broader experimental design problems, specifically designed for estimating the scalar product of the parameter with arbitrary vectors.\n- The bounds used to control the error are different, and so are the optimization problems defining the sampling strategy. In Lemma 9, we show that problem (3) can be reduced to a c-optimal design problem on the set of normalized actions. Then, one can use the machinery developed for optimal design to efficiently compute the optimal distribution.\n \n**Technical challenges** Following the reviewer's suggestion, we included sketches of proofs in Appendix C.1 of the rebuttal revision. 
We list below some of the most significant technical difficulties overcome in this paper:\n- We propose an algorithm for, and characterize the geometrical difficulty of, the fair linear bandit problem. We explain how it relates in some sense to a specific instance of partial monitoring, and quantify the price to pay for correcting unfair biases.\n- Our upper bounds require sharp control of the estimated gaps and of the term $\\kappa(\\hat \\Delta^l)$ (which is introduced in this paper), as well as involved arguments to analyze the stopping criterion.\n- We rely on our reduction of $\\Delta$-optimal design to c-optimal design on normalized actions, and on c-optimal design theory, to construct the hard problem instances needed to derive sharp lower bounds. By contrast, existing lower bounds in linear partial monitoring problems are much looser, and they rely on non-explicit geometric constants.\n- We find that the relevant quantity $\\kappa_*$ for characterizing the difficulty of the problem is related to c-optimal design. Building on Elfving's characterization of c-optimal design, we derive in Lemma 5 an equivalent characterization, which allows us to provide an intuitive, geometrical explanation of $\\kappa_*$, and to show its equivalence to the seemingly unrelated "worst-case alignment constant" introduced in [2]. We emphasize that, contrary to [2], our bounds are dimension-free.\n\n[1] Wagenmaker, A. et al. \"Experimental design for regret minimization in linear bandits.\" AISTATS 2021.\n\n[2] Kirschner, J. et al. \"Information directed sampling for linear partial monitoring.\" COLT 2021.", " **Major concern** We thank the reviewer for raising the interesting question of the awareness of the learner, which we will discuss in the manuscript. As pointed out by the reviewer, the use of the sensitive attribute can be discouraged (it can also be allowed, for example when used for affirmative action). For this reason, some works on statistical fairness avoid using this attribute at the time of prediction (in the unawareness framework); by contrast, our work falls in the awareness framework. We emphasize that in both cases the algorithms are trained on data with known sensitive attributes. In many practical cases, the sensitive attribute (e.g., gender) is indeed known.\n\nRecent works have highlighted critical issues related to unawareness. For example, [1] provide empirical evidence showing that algorithms based on disparate learning processes use non-sensitive features correlated with the sensitive attribute as a proxy for the latter, thus inducing within-class discrimination, and leading to a sub-optimal trade-off between accuracy and fairness. More recently, [2] study a problem of fair online learning and show that some problems feasible in the awareness framework become infeasible in the unawareness one (such as no-regret learning under demographic parity constraints). 
These examples, and many others, advocate for the use of the sensitive attribute, as it allows for better fairness guarantees while preventing unfair discrimination based on (possibly irrelevant) non-sensitive features correlated with the sensitive attribute.\n\n**Clarifications** We thank the reviewer for the valuable suggestions, and for pointing out some aspects of our work that we will clarify in a new version of the paper:\n- In contrast to classical complexity measures (such as the condition number) that give equal weight to all observations, optimal design gives the flexibility to choose the $d+1$ best actions to estimate the bias, and therefore allows for sharper bounds. In the rebuttal revision, this topic is discussed in Appendix A.3, and an illustration of the behavior of $\\kappa_*$ is provided in Appendix A.2.\n- The estimator obtained during the G-optimal exploration phase comes with uniform bounds on the error for estimating $\\gamma^{\\top}x + \\omega^{\\top}z_x$, but without bounds on the error for estimating $\\omega^{\\top} z_x$ itself. If the labels and the actions are very correlated, the error for estimating the bias can be much larger than that for estimating the biased evaluation. An extreme case occurs when there are fewer than $d+1$ good actions left: then, no estimator obtained by sampling these actions can be used to estimate $\\omega$. We will underline this point in the manuscript.\n- We will elaborate on the discussion after Lemma 1, stating clearly that IDS can be used in our setting, however with sub-optimal worst-case regret bounds, and without guarantees on the gap-dependent regret.\n\n**Extension to more than two groups** We thank the reviewer for suggesting this natural extension to our work. Although it requires slightly heavier notations, it does not present major difficulties compared to the two-groups setting. We discuss in Appendix D of the rebuttal revision how to extend our results to this setting. Due to space constraints, we refer the reviewer to our answer to reviewer R63S, where we summarize the main results.\n\n**Extension to the unawareness framework** Extending the biased linear bandit problem to the unawareness framework is both interesting and challenging. As a first step, we can consider that we know the labels of some actions: in this case, we can start by alternating phases of estimation of the evaluation and of the bias, with decreasing noise level $\\epsilon$. The error of the naive LS estimator for $x^{\\top}\\gamma$ (not taking the bias into account) is the sum of a noise term and a term of order $\\omega$. As long as $\\omega$ is smaller than $\\epsilon$, we can inflate the confidence bounds to account for the bias term, and use them to eliminate actions. When $\\epsilon$ is of order $\\omega$, we can estimate the sign of the bias, and use it to recover the unknown labels. Then, we can run the algorithm for the awareness framework.\n\nIn the awareness setting, the complexity of the problem depends crucially on the distribution of the labels between the actions in $\\mathcal{X}$, and on the separation between the groups. In the unawareness framework, not knowing which action to choose so as to estimate its group would change the difficulty of the problem. From a fairness perspective, estimating the sensitive attributes rather than using them directly could arguably be considered no better. For these reasons, extending our results to this problem is complex, and beyond the scope of the paper.\n\n[1] Lipton, Z. et al. 
(2018) Does mitigating ML's impact disparity require treatment disparity? NeurIPS 2018\n\n[2] Chzhen, E. et al. (2021). A Unified Approach to Fair Online Learning via Blackwell Approachability. NeurIPS 2021", " We thank the reviewer for the valuable suggestions and comments. In particular, considering a setting with more than two groups is indeed a rather natural extension. Adapting our algorithm to this setting does not present major difficulties compared to the two-groups setting, although it requires introducing slightly more cumbersome notations. We discuss this extension in detail in Appendix D of the rebuttal revision. The main changes in the algorithm, as well as the bounds on the regret, are summarized below.\n\n**Model**\nWe extend the biased linear bandits to $Z$ groups denoted $\\mathcal{Z} = \\{1, ..., Z\\}$. The evaluations are given by \n$$ y_t = x_{t}^{\\top}\\gamma + Z_x^{\\top}\\omega,$$\nwhere $Z_x$ is the $z_x$-th vector of the canonical basis of $\\mathbb{R}^{Z}$ when $x$ is in group $z_x$, and $\\omega = \\{\\omega_1, ..., \\omega_Z\\}\\in \\mathbb{R}^{Z}$ is the vector of biases. Without loss of generality, we assume that the model does not contain an intercept, so that it remains identifiable. \n\n**Algorithm** The G-optimal Exploration and Elimination Algorithm 1 can be used to estimate and eliminate actions within a given group. By contrast, we must adapt the bias estimation phase. Rather than estimating the bias $\\omega_z$, we choose to estimate the differences $\\omega_1-\\omega_z$ between the bias of group $z$ and that of the first group, chosen as the baseline. Thus, we only need to run $Z-1$ rounds of bias estimation to estimate all differences $\\omega_z - \\omega_z'$. To estimate $\\omega_1 - \\omega_z$, we sample actions according to \n$$\\mu_z = \\arg\\min_{\\mu} \\sum_x\\mu(x)\\Delta_x \\quad \\text{such that } \\quad \\left(e_{d+1}-e_{d+z}\\right)^{\\top}V(\\mu)^+\\left(e_{d+1}-e_{d+z}\\right) \\leq 1.$$\nWe denote the regret for estimating $\\omega_1 - \\omega_z$ using actions sampled according to $\\mu_z$ as\n$\\kappa_{z}'(\\Delta) = \\sum_x\\mu_{z}(x)\\Delta_x$.\nWe modify the stopping criterion to adjust to the new cost of bias estimation.\n\n**Bound on the worst-case regret**\nDefine $\\kappa_*' = \\sum_{z>1} \\min_{\\mu} \\left(e_{d+1}-e_{d+z}\\right)^{\\top}\\left(V(\\pi)\\right)^{+}\\left(e_{d+1}-e_{d+z}\\right).$ Then, $R_T = O(Z\\kappa_*'^{1/3}\\log(T)^{1/3}T^{2/3})$.\n\n**Bound on the gap-dependent regret**\nDefine $\\Delta_{\\neq, z} = \\min_{x : z_x = z}\\Delta_{x}$, $\\Delta_{\\neq} = \\min_{z\\neq z^*}\\Delta_{\\neq, z}$, and $\\epsilon_T = \\left(\\kappa_*'\\log(T)/T\\right)^{1/3}$. Then,\n$$R_T = O\\left(\\left(\\frac{d}{\\Delta_{\\min}} + \\sum_{z>1, z\\neq z^*}\\frac{\\kappa_z'(\\Delta \\vee \\Delta_{\\neq, z}\\vee \\epsilon_T)}{\\Delta_{\\neq, z}^2} + \\frac{\\kappa_{z^*}'(\\Delta \\vee \\Delta_{\\neq}\\vee \\epsilon_T)}{\\Delta_{\\neq}^2}\\right)\\log(T)\\right).$$", " The paper considers sequential decision making in a bandit scenario with biased feedback. In the model, the player chooses an action. Each action is associated with covariates and a sensitive attribute, both visible to the player. The observed reward is a sum of three terms: the intrinsic reward of the action (which is a linear function of the covariates), a bias term, and an independent noise term. The bias term depends on the binary sensitive attribute. The goal of the player is to choose actions sequentially, minimizing the regret compared to the optimal action. 
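In symbols (a paraphrase of the model just described, following the structural equation in the authors' fairness response rather than quoting the paper): choosing action $x_t$ with sensitive attribute $z_{x_t}$ (e.g., $\pm 1$ for the two groups) yields the observation\n$$y_t = x_t^{\top}\gamma^* + \omega^* z_{x_t} + \xi_t,$$\nwhere $x_t^{\top}\gamma^*$ is the intrinsic reward, $\omega^* z_{x_t}$ is the group-dependent bias, and $\xi_t$ is independent noise; the regret is measured against the true rewards $x^{\top}\gamma^*$. 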
The algorithm attempts to de-bias the observations, by deliberately sampling suboptimal actions. The purpose of de-biasing is to determine the group with the best action. Then, G-exploration and elimination is used within that group in order to identify the best action. If the algorithm cannot confidently determine the better group, then an iterative stopping criterion is used. Once the estimated "gaps" between the groups are small enough, the algorithm commits to one group. The theoretical results include upper and lower bounds, as well as gap-dependent bounds. •\tInteresting model; the results seem significant and they are nearly tight.\n\n•\tThere is no experiment, which is a shortcoming in a paper proposing an algorithm.\n\n•\tThere are some writing issues, but the presentation is generally good. \n\n•\tThe motivation is nice, but could be more explicitly connected to the formal model.\n\n•\tThe key challenge is knowing how much effort to put into estimating the bias: too little, and you explore the wrong group; too much, and you waste time with the wrong group. This concept is a theme throughout the paper, and is thoroughly addressed from both a theoretical and an experimental standpoint.\n\n•\tThe regret bounds are nearly tight. The gap-dependent bounds are interesting, given how crucial the gaps are in the algorithm.\n It is unclear how the approach generalizes to more than two groups. Could you please comment on this?\n Yes", " The authors study the effect of biased linear bandit feedback on fairness in decision making. They consider a linear bandit problem where each action (a d-dimensional vector) belongs to one of two groups. The grouping is known. The reward is standard, i.e., for each action the reward is the inner product of the reward parameter vector and the action vector. However, the observation is biased depending on the group the action belongs to, alongside zero-mean stochastic noise. They show this problem is a globally observable but not locally observable partial monitoring problem, which ensures an $O(T^{2/3})$ regret guarantee. More importantly, they derive the exact constant for the regret upper bound scaling as a function of a geometric property of the system. Finally, they prove that the dependence on this quantity is tight by considering some worst case instances. Strengths\n- The authors introduce (to the best of my knowledge) the biased linear bandits problem where the bias depends on the grouping of the action, with known grouping and unknown bias. \n- The devised algorithm seems intuitive. A G-optimal design and phased elimination help narrow down the optimal arm in each group, whereas a novel $\\Delta$-optimal design and group elimination are employed to estimate the bias parameter and debias the rewards. \n- The authors establish tight minimax lower and upper regret bounds for this problem, while augmenting that with gap-dependent regret upper bounds. \n- Lemma 1 relating the term $\\kappa^*$ to the geometry of the problem, and Lemma 2 connecting it to the minimax regret for estimating the bias, are insightful. \n\nWeakness\n- (Major) How the grouping of actions relates to the fairness for end users in a decision making problem is unclear. This mapping is very important for properly motivating the problem. The use of knowledge of the grouping is generally discouraged in fair decision making. The authors tend to deviate from this.\n- Examples of how $\\kappa^*$ varies in different situations (e.g. 
action set is the unit sphere, or given a condition number of the action set matrix) should be discussed. \n- An example highlighting the need for $\\Delta$-optimal design will be insightful. In particular, why can't we simply use the estimates of $\\omega$ obtained from the G-optimal design to debias? \n- A discussion on the scenario where both bias and grouping are unknown will be interesting to have. What if we have access to a small subset of actions where grouping is known? \n- A discussion on generalization to multiple groups will be useful. Seems like most of the techniques and results will readily extend to that case. How the regret scales with the number of groups, and the $\\Delta_{\\neq}$ for each group, will be interesting to see. \n- Please try and address the limitation mentioned below. \n- Some concrete examples of $\\kappa^*$ should be discussed. \n- Given that this problem belongs to the standard partial monitoring setup, what happens if we apply IDS methods directly? Maybe elaborate on the discussions after Lemma 1. Some more discussion on the relation of this work to the unfairness literature is important. Using the knowledge of the group in algorithm design is typically discouraged in the fairness literature, so the authors should address this point with use cases. ", " The authors study the linear bandit problem with biased feedback. Specifically, they assume that there are two distinct groups and that one group receives positively biased feedback and the other negatively biased feedback. The goal is to minimize regret as it is normally defined. They develop a phase-based elimination algorithm that uses ideas from experimental design to correct the bias. They prove matching upper and lower regret bounds and also give gap-dependent upper bounds on the regret. Strengths:\n\nThe problem setting is well-motivated because it is important to ensure that interactive algorithms are deployed in a fair way.\n\nThe worst case regret bounds are nice since they match and are based on a seemingly novel quantity $\\kappa^*$.\n\nWeaknesses:\n\nThe algorithm is a neat combination of ideas from the literature, but the novelty seems limited. G-optimal design is a standard tool often applied in the linear bandits literature, and optimization problem (3) at the heart of $\\Delta$-exp-elim has also been considered in \nWagenmaker, Andrew et al. \"Experimental design for regret minimization in linear bandits.\" AISTATS 2021.\n\nWhile the problem is of considerable practical interest, the work seems primarily theoretical. It is not clear how to go from the proposed algorithm to an algorithm that would be applied in a practical setting. If the authors could explain ways to apply the algorithm in practice and settings where this particular model is suitable, that would improve the paper. \n\nThere are no experiments exploring the empirical performance of the algorithm. \n\nThe paper could benefit from some proof sketches highlighting the technical difficulties. \n\nAlgorithm 4 from the appendix seems complicated and fairly difficult to implement. Guidance for practitioners would be useful.\n\nPost author response: I read the author's response and the other reviews and most of my concerns were addressed. \n\n How would one adapt the ideas from this paper to a real-world problem?\n\nIt would be useful to clarify the technical challenges in this paper.\n\nMinor: \n\nWhy, in Routine 1, does $\\pi$ have a support of $\\Theta(d^2)$, while $\\hat{\\mu}$ has a support of $\\Theta(d)$? 
Doesn't $\\Theta(d)$ suffice in both cases? Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 4 ]
[ "s92aD8ERmT", "9TApBRqt_ba", "wN0zb5xyn2cd", "k8KXOpXriH", "E8nn48mlCm", "FGRTSHbGmL0", "GvSeIePO61P", "GdpJZt-CQkf", "nips_2022_PCZfDUH8fIn", "nips_2022_PCZfDUH8fIn", "nips_2022_PCZfDUH8fIn" ]
nips_2022_Yo0s4qp_UMR
Intrinsic Sliced Wasserstein Distances for Comparing Collections of Probability Distributions on Manifolds and Graphs
Collections of probability distributions arise in a variety of statistical applications ranging from user activity pattern analysis to brain connectomics. In practice these distributions are represented by histograms over diverse domain types including finite intervals, circles, cylinders, spheres, other manifolds, and graphs. This paper introduces an approach for detecting differences between two collections of histograms over such general domains. We propose the intrinsic slicing construction that yields a novel class of Wasserstein distances on manifolds and graphs. These distances are Hilbert embeddable, allowing us to reduce the histogram collection comparison problem to a more familiar mean testing problem in a Hilbert space. We provide two testing procedures, one based on resampling and another on combining $p$-values from coordinate-wise tests. Our experiments in a variety of data settings show that the resulting tests are powerful and the $p$-values are well-calibrated. Example applications to user activity patterns and spatial data are provided.
Reject
In this paper, the authors propose the intrinsic sliced Wasserstein distances and a Hypothesis testing framework for the proposed measure. The idea of using eigenfunctions and eigenvalues for sliced Wasserstein is interesting. The authors addressed a part of the concerns raised by the reviewers. However, the advantage over the existing methods (sliced Wasserstein distances and MMDs) is unclear. Thus, it needs a major revision and cannot be accepted with the current version. I encourage the authors to revise the paper based on the reviewer's comments and resubmit it to a future venue.
train
[ "sYX-4NGT6Vd", "Fga-pHMgGfW", "lplTigI0YI", "K18RareNnVw", "KpFNvnSTdS9", "bFZvPWpHQC0", "ocdxyWymf9T", "BWI1FpUmjG", "zLkPrSbYwlB", "WgLlzOUjFf", "k0LU8-7EpHb" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarification. As you suggest, one can define this as a Kernel mean embedding (KME). However, it is not useful because the main property of KME used in the MMD theory is that $KME(P) = KME(Q) \\Rightarrow P = Q$. This is central to the MMD theory, and the respective proofs rely on this property (e.g. results in [Gretton et al, 2012](https://www.jmlr.org/papers/volume13/gretton12a/gretton12a.pdf)). In our case this does not hold. Not only that, it is not even true that $KME(P) = KME(Q) \\Rightarrow \\eta \\# P=\\eta\\# Q$ holds, because the underlying kernel is basically a linear kernel. Since $KME(P) \\equiv C_{\\eta \\# P}$, KME equality implies equality of the Hilbert centroids, not the underlying distributions. Thus, while technically correct, the MMD approach does not lead to any benefits, e.g., we cannot use MMD theory to derive our theorems; rather, it complicates things. Essentially, for the MMD theory and results from Gretton et al to be applicable, we would need to use an additional kernel on top of the current embedding (see our earlier response) and then derive and verify additional conditions under which $\\eta \\#P = \\eta \\# Q \\Rightarrow P = Q$. We grappled with this issue in the early stages of our work, but later were able to gain some intuition that the above implication is not always true; however, we do not have a counterexample at this point.\n\nTo take it to the extreme, one can claim that every mean is a KME and every distance is an MMD, e.g., consider testing for mean equality of two distributions in $\\mathbb R^n$. We can say that $x \\mapsto x$ is a feature map, and so the mean vector is a KME, and the distance between means is an MMD. There is no benefit in this, as the MMD theory does not apply, e.g., KME equality does not imply that the distributions are the same, it only means their means are the same.", " We constructed the example of a path graph embedded on the plane in the response to Reviewer LPx3 above from exactly this motivation. In this example, our ISW embeddings demonstrate superior performance compared to existing techniques---please see our comments above.", " I might be missing something here, but I still do not get why $T(P,Q)$ is not a MMD. \n- Define a kernel $k$ on $X'=\\mathcal{P}(X)$ by $k(\\mu,\\nu)=\\langle \\eta(\\mu),\\eta(\\nu)\\rangle$. Note that this is a kernel on $X'$ and not on $X$.\n- The kernel mean embedding associated with a kernel $k$ (and associated feature map $\\eta$) is by definition a function that takes as an input a probability measure $P$ on $X'$ (so here this is what is called a meta-distribution), and outputs $\\int \\eta(x')d P(x')$, which is exactly $C_{\\eta \\sharp P}$, and what is called a Hilbert centroid here. Therefore, a Hilbert centroid is a kernel mean embedding, but for a kernel defined on $X'=\\mathcal{P}(X)$.\n- Likewise, $T(P,Q)$ is the distance between the kernel mean embeddings, and therefore is a MMD (once again, the kernel is defined on $X'=\\mathcal{P}(X)$, so that the MMD is a distance on $\\mathcal{P}(X')=\\mathcal{P}(\\mathcal{P}(X))$).\n\nCould you further elaborate on why the Hilbert centroid is not a kernel mean embedding and why $T(P,Q)$ is not a MMD? (once again, I agree that this is not a MMD on $\\mathcal{P}(X)$ but on $\\mathcal{P}(\\mathcal{P}(X))$.)", " Thank you for taking the time to answer my comments. 
\n\n> While this does not directly apply to the Chicago crime example, one can easily construct examples where two nodes of a planar graph are located close to each other in Euclidean distance\n\nI completely agree with this idea, but then it indeed makes the Chicago Crime dataset not very suited to showcase the strength of your approach. Basically, I was just wondering \"what would make the Euclidean distance between cells bad here?\". \nIf you can work out another such example, that would be an instructive addition to the work. \n\n", " We ran experiments using the setup in our earlier comment to compare $ISW_2$ with existing slicing-based techniques. We consider a planar regular 200-gon as a graph, except that one of the edges is removed. Denote the two degree-1 vertices as A and B. So basically, this is a path graph from A to B. Vertices A and B are located near each other on the plane but they do not have an edge connecting them. We generate distributions on this graph by putting Unif(0,1) weights at each of the vertices, and adding an additional weight of 10 to the bin at point A to obtain the first set of distributions with a mode at A; for the second set, we put the weight of 10 at B instead to obtain a set of distributions with mode at B. Following the experimental settings in the main paper, we set $n_1=60,n_2=40$, and the number of replications to 1000, to compute the power and size of the hypothesis tests.\n\nThe following table shows a comparison of our method with SW in 2D, GSW with 3 choices of defining functions (circular, and homogeneous polynomials of degrees 3 and 5), and Sobolev transport. ISW, SW, and GSW embeddings are done using 10 slices, and 8 inverse CDF mappings, i.e. $L=10, D'=8$ in the notation of the paper.\n\n| Method | Power | Size | Total time for 1000 replications (sec) |\n| --- | --- | --- | --- | \n| SW | 0.128 | 0.029 | 579.3 |\n| GSW-circ | 0.128 | 0.029 | 564.82 |\n| GSW-poly3 | 0.025 | 0.025 | 779.69 |\n| GSW-poly5 | 0.044 | 0.035 | 1009.85 |\n| ISW | 1.00 | 0.029 | 630.67 |\n| Sobolev | 1.00 | 0.048 | 21474.88 |\n\nISW achieves the best performance in terms of power, maintains nominal size (should be $\\leq 0.05$), and has computational time comparable to SW/GSW, which use the same number of slices. Sobolev transport achieves high power as well, but takes much longer than our method. This is because we use the $p$-value combination tests on the embeddings generated by the other methods, which is much faster, whereas for Sobolev transport we have to rely on permutation tests since it doesn't generate any embeddings.\n\nWe were not able to adopt/implement Tree-sliced Wasserstein due to the relative complexity of the [MATLAB code](https://github.com/lttam/TreeWasserstein) and not having a MATLAB license ourselves. However, given the extensive array of experimental settings and diversity of competing methods we have considered in the experiments in the main paper and this rebuttal, we believe the effectiveness of our proposed intrinsic slicing technique is evident.", " Thank you for your detailed comments. Please find our responses below.\n\n**Motivation of using eigenfunctions**\n\nThe metric on the underlying space $\\mathcal{X}$ has an important impact on the derived quantities: metrics that capture the geometry of the underlying space as relevant to the application would presumably lead to more successful derived quantities (see the 2nd point in response to Reviewer ib9Y). 
The use of eigenfunctions can be traced to successful metrics on graphs and manifolds, such as the Diffusion Distance, useful in high-dimensional data settings, and the Biharmonic Distance, useful for graphics applications (i.e. 2D manifolds).\n\nAdditionally, eigenfunctions have a frequency structure like the usual sine/cosine basis in harmonic analysis on the periodic interval. Thus, e.g., in our testing framework the eigenfunctions can capture differences that are present at multiple resolution scales. There is quite a bit of literature on manifold learning methods that successfully employ Laplacian eigenfunctions, e.g. Laplacian Eigenmaps and follow-up works from Mikhail Belkin and others. \n\nOn the theoretical side, we are able to use classical results such as Weyl's eigenvalue laws and Hörmander's bound on eigenfunction norms to prove the finiteness of the proposed distances. Eigenfunctions/eigenvectors provide a common ground for building theory both on graphs and manifolds. None of the previous methods are able to capture both of these domains within a single elegant theory. \n\n**Comparison with SW and its variants**\n\nWe first stress that on a theoretical level, none of the existing approaches provide a unified treatment for both graphs and manifolds. Figure 2b provides one such comparison for the finite interval case. Here, \"unsliced\" corresponds to the original 2-Wasserstein distance (SW with a single possible projection in the 1D case). As we can see, ISW does better for $L=2,3$.\n\nWe devised the following example to showcase the importance of the intrinsic definition of distance, and include experimental results in the comment below. Consider a finite 1D interval AB that is bent into a circle on the plane, where A and B are very close to each other but they do not touch. The intrinsic distance between A and B is large as it requires travel from one end of the interval to the other; in contrast, the extrinsic Euclidean distance is small. No matter what Euclidean projection we use, the SW cost of moving probability mass from A to B would be small. This will lead to testing power loss when we compare two distribution sets that concentrate one around A and the other around B.\n\n**Relation between $T$ in Eq 2.1 and in Proposition 1**\n\nThese quantities are the same. Eq. 2.1 gives the definition, and Proposition 1 establishes an equivalent formula. We will make this more explicit by using \"equals by definition\" notation in Eq 2.1.\n\n**Does Proposition 6 hold for any kernel used in MMD?** \n\nThe kernel used for MMD should be the same as the one used for ISW; we shall state this more clearly.\n\n**In Proposition 8, why can one assume that C is finite?**\n\nOn graphs, $C$ is trivially finite. On manifolds, finiteness depends on the choice of the eigenfunction weighting function $\\alpha(\\cdot)$, and $C$ would be finite if $\\alpha(\\cdot)$ decays fast enough. For example, this holds when $\\alpha(\\lambda)=e^{-t\\lambda}$ for any $t>0$. Other choices can be derived by using Weyl's law for eigenvalues (see Remark 1 in the Appendix for a relevant discussion).\n\n**For line 266-269, how to choose $L, D'$ to obtain infinitesimally small error estimation?**\n\nPlease see Proposition 10 in the Appendix for the relevant statement.\n\n**For experiments choice of $L, D'$ vs. results in Section 3.2 and Section 4?**\n\nThe choice of L and D' in Proposition 10 relates to asymptotic results that highlight consistency and power characteristics of the proposed approach. 
However, in practice, instead of asymptotics we use a resampling null distribution for the test based on $\\mathbb{T}(\\cdot,\\cdot)$.\n\n**Limitations**\n\nThere is scope for exploring how to choose the parameters $L,D'$ in a principled manner. Empirical computation of the eigenfunctions for general manifolds will introduce approximation errors that need to be tackled by expanding the theoretical results. We plan to investigate these directions in future work.\n\n**Potential negative societal impacts**\n\nIn privacy-sensitive situations, e.g. analyzing manifold-valued personal data, there could be privacy risks associated with releasing the $ISW_2$ embedding vectors. Additional studies are warranted to quantify such risks and generate differentially private $ISW_2$ embeddings. Any difference between data from different demographic groups found using our procedure should be evaluated in light of potential biases stemming from the data collection procedure.", " We appreciate your enthusiasm about the paper. Please see our responses below.\n\n**Isn't this method essentially an adaptation of standard techniques in two-sample testing literature?**\n\nDue to the attractive and general definition of ISW, this may seem to be the case at first glance. However, we would like to stress that our meta-distribution testing framework on general domains is an important contribution, as evidenced by the fact that previous works that capture limited settings of this problem (such as interval/circle) were published at top statistical/ML venues (e.g. ref [5] and other papers on functional and object data analysis). Our approach has unique features that make it very different from standard two-sample testing; for more in-depth discussion of these differences---especially with MMD---please see our response to Reviewer tW4F. \n\n**Is the manifold/graph structure \"crucial\"?**\n\nWhile this does not directly apply to the Chicago crime example, one can easily construct examples where two nodes of a planar graph are located close to each other in Euclidean distance, but the road network does not link them directly (e.g. if there is a river separating towns A and B, and so while they are close, the actual drive takes a long time as it has to pass through a bridge that's far away). In this case, approaches based on Euclidean projection will not be able to successfully capture the topology of the underlying domain to widely separate A and B, whereas the intrinsic approach will be able to do so. This is important for testing, as any shifts of probability mass from A to B will be seen as having small transportation cost in Euclidean distance.\n\n**Computational burden ISW vs SW**\n\nThe computation of eigenvectors/eigenvalues of symmetric sparse matrices is a well-studied problem with stable and efficient algorithms available, e.g. ARPACK providing an implementation of the Arnoldi method that can be interfaced from MATLAB, Python, or R. These methods incur linear time complexity in the size of the graph (more precisely, $O(k(|V|+|E|))$ for $k$ eigenvectors (=intrinsic slices) of the graph Laplacian). This computational overhead is negligible (<<1 second) for the graph experiments in this paper. By contrast, Sobolev transport, which is one of the methods Reviewer LPx3 mentions for comparison, requires computing the Sliced Sobolev Transport Distance. 
This depends on the computation of all shortest paths from $k$ randomly selected nodes in a graph, which has complexity $O(k(|V|\\log|V|+|E|))$ (see page 4 of their [paper](https://arxiv.org/abs/2202.10723)).\n\nPlease refer to the response to Reviewer LPx3 above for an empirical comparison of computation times of our method with existing slicing techniques in the literature.\n\n**Eigenvector computation**\n\nThis computation is done only once for the underlying domain and can be used to test any number of distributions on this domain. The rest of the computation has the same complexity order for SW and ISW. Note that SW can have its own inefficiencies, e.g. when the manifold has low intrinsic dimensionality but is embedded in a high dimensional space, most of the SW projections are correlated or uninformative, which incurs not only compute cost but also leads to lower testing power.\n\n**Typos, proof of Proposition 3, Appendix could be enhanced**\n\nThank you for pointing these out! We will fix the typos and the proof. We will add more details to the proofs in the Appendix.\n\n**Mention results of Section 4.1 in Section 3.2**\n\nWe have re-organized this paper many times to help with readability! While we are open to further suggestions, our thinking is that Section 3.2 is broadly accessible and provides a straightforward description of the resulting approach via the formula for $\\eta_{D}$ that can be directly used for implementation.", " Thanks for the deep dives in your comments. Please see our clarifications below.\n\n**Is the Hilbert centroid (def. 1) a kernel mean embedding?**\n\nNot quite. While for each measure $\\mu\\in\\mathcal{P}(\\mathcal{X})$, we can think of $\\eta(\\mu)$ as a kernel mean embedding, our focus in this paper is on meta-distributions $P\\in\\mathcal{P}(\\mathcal{P}(\\mathcal{X}))$. The Hilbert centroid is defined for meta-distributions, and is basically the mean (more precisely, the centroid) of $\\eta(\\mu)$ where $\\mu\\sim P$. In other words, the Hilbert centroid is the mean of kernel-mean-embeddings of distributions drawn from the meta-distribution $P$.\n\n**Is $T(P,Q)$ (eq 2.1) actually the MMD associated with the kernel on $P(X)$ given by $ISW_2$?**\n\nNot quite. Eq 2.1 is basically the formula for the squared distance between centroids of two distributions. For this to become an MMD, one needs another kernel, say the Gaussian kernel $e^{-x^{2}/2}$, applied on top of $D(\\cdot,\\cdot)$, and one also needs to flip the overall sign of the formula. Now testing using this quantity would be equivalent to MMD, and the induced kernel mean embedding will apply to meta-distributions. \n\nSo, why not use MMD instead? First, there is a theoretical issue---the null hypothesis of this test is that $\\eta\\#P=\\eta\\#Q$, but there is no mathematical theory to imply that this is equivalent to $P=Q$. This raises a host of issues for the MMD theory which we do not know how to answer. Second, when tests like MMD reject the null, they do not provide any clues about what aspect(s) of two distributions are different; our proposed approach of concentrating on equality of centroids is more interpretable because one can drill down into the eigenfunction dimensions that display differences.\n\n**Is it possible to simplify/remove most proofs corresponding to Section 4.1 by checking if the assumptions given in Gretton et al are satisfied for the kernel given by $ISW_2$?**\n\nNot quite, as follows from the points above. 
Additionally, our null hypothesis is $H_{0}:C_{\\eta\\#P}=C_{\\eta\\#Q}$, which is different from the MMD null hypothesis of $H_{0}:P=Q$. The latter is a much stronger assumption that simplifies the computation of the null distribution in the MMD case. In our case, we have to resort to a mixture distribution $R$ (see Proposition 9 in the Appendix) instead of just using either $P$ or $Q$ (as they are equal under the MMD null), which requires a different technique for the second half of the proof. Even in terms of practical computation, note that MMD testing relies on a permutation null, which is valid due to the stronger null hypothesis of $H_{0}:P=Q$. In contrast, with our null hypothesis we cannot use the permutation null and have to resort to a bootstrap procedure.\n\n**The choice of a weight function being a drawback for $ISW_2$**\n\nWe think of this freedom as an advantage of our approach -- providing a diversity of distance measures that can be tailored to the application needs. Different weight profiles are connected to some known distances such as the diffusion and biharmonic distances and can be directly used by a practitioner.\n\nWe note that when using the p-value combination test, these weight functions cancel out; this happens because the individual t-tests divide by the variance of the coordinate, which cancels out the constant scaling of each coordinate by the weight function. \n\n**Confusing notation in Prop.9 in the Appendix**\n\nThanks for pointing this out! We will clarify in these proofs that x refers to elements of the Hilbert space.\n\n**Computation for atomic measures**\n\nIf one is interested only in computing the $ISW_2$ distance, one can use the 1D matching method, as you mention. Our computation is based on the formula for the 2-Wasserstein distance in terms of the inverse CDF (Eq. 2.2), which leads to the discretized embedding formula of Eq 3.1 and eventually the formula for $\\eta_{D}$. This embedding formula is crucial to the focus of this paper, which is on testing. The embeddings can also be seen as alternate representations of manifold-valued data, other applications of which we plan to explore in the future. For precise computation of ISW one can also use Eq 2.2/Eq 3.1 by discretizing at the atom locations, which will give exactly the same result as 1D matching; however, due to different discretization locations, $\\eta_{D}$ would not be consistently defined and would not be suitable for testing and other downstream uses.\n\n**Comparison against two-sample test based on the Wasserstein distance**\n\nWasserstein distances can be used to test the equality of two usual distributions from $\\mathcal{P}(\\mathcal{X})$, but we are unaware of any such test for comparing meta-distributions from $\\mathcal{P}(\\mathcal{P}(\\mathcal{X}))$. Having said that, we do present comparisons of different versions of intrinsic slicing with (unsliced) $W_2$ in the real interval example (Fig. 2b). We shall clarify this in the paper.", " The authors propose the intrinsic sliced Wasserstein (ISW) distances for collections of probability measures. The ISW is a variant of sliced Wasserstein (SW) which exploits the closed-form solution of optimal transport on 1-dimensional space. The authors propose to use the eigenfunctions to project supports into 1-dimensional space and use the corresponding eigenvalues as weights.\n\nISW is Hilbert embeddable, which allows one to reformulate testing problems over collections of probability measures as traditional ones in Hilbert space. 
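In symbols (my paraphrase of the property, not a quote from the paper): Hilbert embeddability means there is a feature map $\eta$ into a Hilbert space $\mathcal{H}$ with\n$$ISW_2(\mu,\nu) = \|\eta(\mu) - \eta(\nu)\|_{\mathcal{H}},$$\nso a collection of measures maps to a collection of points in $\mathcal{H}$, and comparing two collections reduces to comparing the means (centroids) of the corresponding point clouds. 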
The authors demonstrate it for hypothesis testing via a resampling-based test and testing by p-value combination (from coordinate-wise tests).\n Strengths\n+ The authors propose a novel intrinsic sliced Wasserstein (ISW) for collections of probability measures by leveraging eigenfunctions for 1-dimensional projection and eigenvalues as weights.\n+ The paper is easy to follow.\n+ The authors derive hypothesis testing in the Hilbert space for collections of probability measures under ISW.\n\nWeaknesses\n+ The idea of using eigenfunctions and eigenvalues for sliced Wasserstein is interesting. However, the authors have not demonstrated the advantages of the proposed approach over other approaches (e.g., sliced-Wasserstein, general sliced-Wasserstein, tree-sliced-Wasserstein or the recent Sobolev transport – a variant of optimal transport on graphs. Note that all these approaches are Hilbert embeddable like the proposed method) in theory and/or experiments.\n+ The authors should discuss the complexity of the proposed intrinsic sliced Wasserstein (and report time consumption in experiments).\n + The proposed ISW is a variant of sliced-Wasserstein (SW) leveraging the eigenfunctions and eigenvalues (closely related to SW, GSW, TSW as in line 68-80; and the recent Sobolev transport – a variant of optimal transport for measures supported on graphs). The authors should compare them in experiments.\n+ The motivation of using eigenfunctions (for slicing) is too vague (the authors only give a short discussion in line 179-182). I think the authors should describe its advantages rigorously in theory and/or experiments.\n\nSome other concerns are as follows:\n+ What is the relation between T in Equ. (2.1) and in Proposition 1?\n+ For Proposition 6, does it hold for any kernel used in MMD? Please specify it.\n+ In Proposition 8, why can one assume that C is finite? (it is not clear whether the bound is useful in Proposition 8 without this assumption)\n+ For line 266-269, how to choose L, D' to obtain infinitesimally small error estimation (e.g., to obtain error less than epsilon)?\n+ For experiments, the authors should discuss the results, e.g., does the choice of L, D' (in Figure 2b) follow the results in Section 3.2 and Section 4? \n\nMinor:\n+ In line 193-194, \\eta^0 maps to H^0, but D is a norm on H'?\n + The authors have not discussed the limitations of their work clearly in the text (as mentioned in the checklist).\n+ The authors have not discussed the potential negative societal impacts of their work as in the checklist.\n", " It is well-known that the Wasserstein distance between two 1D measures can be computed easily. Hence, a common approach in a higher dimensional Euclidean space $\\mathbb{R}^d$ consists in _slicing_ the space (formally, projecting on 1D subspaces), computing the distances on the slices, and integrating to retrieve a proper distance between the input measures. \n\nThis work proposes to adapt this idea in the context of measures supported on a (compact) manifold $\\mathcal{M}$. The key idea is to rely on the Laplace-Beltrami operator on the manifold, which provides a collection of maps $\\phi_\\ell : \\mathcal{M} \\to \\mathbb{R},\\ \\ell \\in \\mathbb{N}$ (eigenvectors of the operator), which can be used to push-forward the measures we want to compare to $\\mathbb{R}$. 
Namely, they introduce \n$$ISW_2(\mu,\nu) = \sum_\ell \alpha(\lambda_\ell) W_2(\phi_\ell \mu, \phi_\ell \nu), $$\nwhere $\lambda_\ell$ is the $\ell$-th eigenvalue of the Laplacian operator and $\alpha$ is some weighting function. \n\nThe authors then prove that $ISW_2$ defines a metric (under mild assumptions), the key aspect being that it induces a universal kernel which yields the separation property $ISW_2(\mu,\nu) = 0 \Rightarrow \mu = \nu$ (other properties are mostly straightforward). \n\nOnce this distance is introduced, the authors provide some approximation results for practical computations (e.g., we do not have access to all the $(\phi_\ell)_\ell$, and similarly only to a sampling of the (projected) measures). Afterward, as this metric is _Hilbertian_, meaning that the metric space $(P(X), ISW_2)$ isometrically embeds in a Hilbert space, they design a two-sample test between distributions supported on $P(X)$ whenever $X$ is a compact manifold. \nThey showcase the relevance of their approach in different numerical experiments. \n\n(note: $\phi \mu$ denotes the pushforward of $\mu$ by $\phi$ here; the hashtag is not used due to rendering issues.) # Originality\n\nUsing the Laplace-Beltrami operator to build a \"canonical\" way to slice measures supported on a manifold feels pretty satisfying and is a novel idea to the best of my knowledge. \nOn the other hand, one could argue that once the idea of using the Laplace-Beltrami operator to slice the measures has been introduced, most of the work (aside from the experimental section) is essentially an adaptation of standard techniques in the two-sample-test literature.\n\n# Clarity\n\nThe paper was overall well-written. It motivates the need to consider that our measures are supported on a manifold in a convincing manner (e.g. on a circle for daily event records) and aside from a few paragraphs (see below), it is fairly easy to read. \n\n# Significance\n\nI think this paper provides an interesting new approach that may motivate further practical developments of Optimal Transport for measures supported on non-flat domains. \n\n# Quality\n\nI think this is a competent paper which introduces a new idea that may be of interest for the Optimal Transport community. # Questions\n\n1. In the Chicago data experiment, why is the manifold/graph structure \"crucial\" as stated in the Caption of Figure 4 in the appendix? I guess that we can reasonably assume that this graph is planar, in which case we can consider that the measures are supported on $\mathbb{R}^2$. As far as I can tell, there is no comparison between $ISW_2$ and $W_2^{\mathbb{R}^2}$ (or a standard sliced-$W_2$, so that one has a kernel embedding and can run a two-sample test here).\n2. Related to the comparison between the $ISW_2$ and the usual Sliced-Wasserstein (assuming our manifold is embedded in $\mathbb{R}^n$), what is the computational burden in terms of running time related to the computation of $ISW_2$ vs $SW_2$ (in situations where we do not have access to the $\phi_\ell$ and $\lambda_\ell$ in closed form)? I think a proper comparison of the benefits of using $ISW_2$ instead of a naive $SW_2$ would be of interest. \n\n# Minor remarks and suggestions\n\n- I think there are a few typos in the Proof of Proposition 3 (some $\mathcal{H}$ should be $\mathcal{H}^0$, or $\eta$ should be $\eta^0$).\n- The proof of Theorem 1 in the appendix could be enhanced (with more precise quotations of the results used in the aforementioned references). E.g. 
what precise statement in [17] and [12] yield the implication $\\alpha(\\lambda_\\ell) > 0 \\Rightarrow$ the kernel is universal hence characteristic? Right now, it is cumbersome to properly check the proof. A similar comment holds for other proofs as well (e.g. the proof of Theorem 2 is quite expeditious).\n- In think the quantitative results mentioned in section 4.1 and detailed in the appendix should actually be mentioned in section 3.2 directly. Indeed, when reading section 3.2 at first, one may be worried by the lack of quantitative assertion there. \n- line 325 : inite-->finite (typo). Few limitations of the work should be discussed further. \n\nI did not identify ethical concern specific to this work. ", " Consider the set of probability measures on some manifold. The authors propose a new distance on this set, that behaves in some way like the sliced Wasserstein distance. This distance is defined as the weighted average of the Wasserstein distance between the pushforwards of the two measures by the eigenfunctions of the Laplace-Beltrami operator on the manifold. Some properties of this distance are given, and the distance is then used in the two-sample testing problem, where each sample is a sample of iid probability measures on a given manifold. Some numerical illustrations of the two-sample tests are then given. Sliced Wasserstein (SW) distances have been originally proposed as computationally cheap versions of the Wasserstein distance on the Euclidean space $\\mathbb{R}^D$. The idea of the original SW is the following: instead of computing the Wasserstein distance between two measures in high dimension, let us look at a certain number of \"projections\" of the measures on lines in $\\mathbb{R}^D$; then one can compute efficiently a distance between the two measures by averaging the distance between the projections. This paper proposes a very reasonable generalization of this idea, called the Intrinsic SW (ISW) when the support of the measure is a manifold or a graph: as lines are no longer defined, the distance is defined as an average of \"projections\" along the eigenvalues of the Laplace-Beltrami operator, that should describe the geometry of the manifold (and therefore replace the lines in $\\mathbb{R}^D$).\n\nThe paper focuses on the notion of ''Hilbert embeddability'' to justify the use of this distance. This means that their distance $ISW(\\mu,\\nu)$ is actually equal to $\\|\\eta(\\mu)-\\eta(\\nu)\\|_H $ for some feature map $\\eta$ and some Hilbert space $H$. However, the authors seemed to have miss a key link with kernel methods. Indeed, the feature map $\\eta$ defines a kernel on the space $P(X)$ of probability measures on $X$ (through $k(\\mu,\\nu) = <\\eta(\\mu),\\eta(\\nu)>_H$), and the method they propose for the two-sample tests is the kernel two-sample test method from Gretton & al. The authors seemed to have understand partially the connection, as their proofs are adaptations of the ones from Gretton & al, but the authors have missed that they could actually directly used the results from Gretton & al, without any adaptation. \n\nThere are other concepts defined in the paper that can be rephrased in the more standard language of kernels:\n- a Hilbert centroid (def. 1) is a kernel mean embedding. 
\n- T(P,Q) (eq 2.1) is actually the MMD associated with the kernel on P(X) given by the intrinsic sliced Wasserstein distance.\n- Most proofs corresponding to Section 4.1 can be removed: it suffices to check that the assumptions given in Gretton & al are satisfied for the kernel given by the intrinsic SW distance.\n\nOther comments:\n- The intrinsic SW distance depends on the choice of a weight function (describing the importance given to each eigenvector). Having to tune such an important hyperparameter is clearly a drawback of this distance compared to the Sliced Wasserstein distance.\n- Notation in Prop.9 (in the Appendix) are extra confusing: when defining the eigenvectors of some operator (that are defined on some Hilbert space H), the variable \"x\" is used, whereas x was used before to note an element of the underlying manifold. As we also consider eigenvectors on the manifold, this created a lot of confusion at first.\n\nOn my grade: I put 4 so far, as I believe the link with kernel methods is not made very clear in the paper, whereas it is central to the approach proposed. I could improve my grade if this link is made more explicit (e.g. say explicitly that the test IS a kernel two-sample test and that $T(P,Q)$ IS a MMD), and if proofs are significantly shortened by using Gretton & al.'s theorems directly. Please tell me if I am missing something and if this test is not a kernel two-sample test.\n\n On the implementation:\nIn a lot of situations, the measures at stake will be atomic, of the form $\\mu_i= \\sum a_i \\delta_{x_i}$. Then, it looks like the most efficient way to compute the ISW distance is to compute $\\phi_k(x_i)$ for every eigenfunction $\\phi_k$, and then compute the 1D matching between two 1D atomic distributions (that is we do the same as for the classical sliced Wasserstein distance). Why is this approach not at all proposed in the paper? \n\nOn the numerical illustrations:\nThe authors test their procedures against other reasonable testing procedures, but not against the one that appears to be the most standard in their situation, that would be a two-sample test based on the Wasserstein distance. (that is we reject if $W_2(\\mu_n,\\nu_n)>\\alpha$, and we accept otherwise). CLTs for empirical Wasserstein distances could probably be used to obtain asymptotic confidence levels on such a test, or one could use bootstrap or other approaches. Is there a reason for which this was not implemented (at least for synthetic experiments where the number of observations is reasonably small, so that W2 should be easy to compute)? Yes." ]
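The 1D-matching implementation suggested in the review above is straightforward to sketch. Below is a minimal NumPy illustration under our own assumptions (not from the paper): equal-weight atoms, an equal number of atoms per measure, precomputed eigenfunction values, and a truncation to the first L eigenfunctions; the function and variable names are hypothetical.

```python
# Minimal sketch of the reviewer's suggested 1D-matching computation of ISW_2.
# Assumptions (ours, not the paper's): both measures have n equal-weight atoms,
# eigenfunction values phi[k, i] = phi_k(x_i) are precomputed, and the sum over
# eigenfunctions is truncated to L terms with weights alpha[k] = alpha(lambda_k).
import numpy as np

def isw2_atomic(phi_mu, phi_nu, alpha):
    """phi_mu, phi_nu: (L, n) arrays of eigenfunction values at the atoms;
    alpha: (L,) array of nonnegative weights alpha(lambda_k)."""
    total = 0.0
    for k in range(len(alpha)):
        # W_2 between two 1D empirical measures with equal weights reduces to
        # matching sorted samples (the monotone coupling).
        u = np.sort(phi_mu[k])
        v = np.sort(phi_nu[k])
        w2 = np.sqrt(np.mean((u - v) ** 2))
        total += alpha[k] * w2
    return total
```

Each slice costs O(n log n) for the sorts, so the whole estimate is O(L n log n) once the eigenfunction values are available, mirroring the cost profile of the classical sliced Wasserstein distance.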
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "lplTigI0YI", "K18RareNnVw", "BWI1FpUmjG", "ocdxyWymf9T", "bFZvPWpHQC0", "zLkPrSbYwlB", "WgLlzOUjFf", "k0LU8-7EpHb", "nips_2022_Yo0s4qp_UMR", "nips_2022_Yo0s4qp_UMR", "nips_2022_Yo0s4qp_UMR" ]
nips_2022_oQIJsMlyaW_
SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance
The leap in performance in state-of-the-art computer vision methods is attributed to the development of deep neural networks. However, it often comes at a computational price which may hinder their deployment. To alleviate this limitation, structured pruning is a well-known technique which consists in removing channels, neurons or filters, and is commonly applied in order to produce more compact models. In most cases, the computations to remove are selected based on a relative importance criterion. At the same time, the need for explainable predictive models has risen tremendously and motivated the development of robust attribution methods that highlight the relative importance of pixels of an input image or feature map. In this work, we discuss the limitations of existing pruning heuristics, among which magnitude and gradient-based methods. We draw inspiration from attribution methods to design a novel integrated gradient pruning criterion, in which the relevance of each neuron is defined as the integral of the gradient variation on a path towards this neuron's removal. Furthermore, we propose an entwined DNN pruning and fine-tuning flowchart to better preserve DNN accuracy while removing parameters. We show through extensive validation on several datasets and architectures, as well as several pruning scenarios, that the proposed method, dubbed SInGE, significantly outperforms existing state-of-the-art DNN pruning methods.
Accept
Novel pruning method based on integrated gradients. Reviewers agreed that the method is well-motivated and that the comparisons showcase the potential of this method. There are some concerns regarding the fairness of the comparisons in terms of FLOPs and parameter count. I believe some of the rebuttal answers from the authors address those concerns. I think this work is novel and interesting enough to be accepted at NeurIPS.
train
[ "ih807BRirEf", "I30Tz_wMY_d", "5GKvqy43GB", "B30fVkMu537", "oiy0czUOwAB", "2AU2x-IwnRi", "4z14txofMK", "PEg6TsDOMCr", "X6DVmgoEUFv", "0YaSRGOZNIm", "EyPzNX6zn0", "4xlLMSqj1Lm", "vYLO5Hgf7KG", "CYiZLyZ1bk" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Given the short amount of time before the end of the discussion period, we will update the manuscript with the suggested changes, that will allow to enhance the quality of the paper. As for using non-zero baselines, since the goal of the proposed work is the removal of neurons, in its current form we have to specifically consider zero values. However, it could be an interesting future work to use an arbitrary value (the same for all neurons of a layer), and remove them using redundancy-based pruning methods, e.g. RED [13]. Thank you for the suggestion and comments, we hope that this answer your concerns and further convince you on the benefits of the proposed work for the community.", " Thank you very much for addressing the questions. I think that perhaps it would be good to simplify the notations $C_{{SG}^P}$ and $C_{{IG}^P}$ . Also in the results you might also want to mentioned how reliable integral approximation is based on the completeness axiom.\nCould this work be potentially extended to non-zero baselines as well and could we have a more generic formulation for that ?", " Thanks for your comments and time spent reviewing the paper and rebuttal. We agree that the proposed entwined importance criterion estimation and fine-tuning provides a boost to SInGE accuracy ; however, please bear in mind that both steps (i.e. importance estimation via integrated gradients method and entwining pruning and fine-tuning steps) are contributions of our work, both being ingredients to SInGE performance. Also, to specifically answer your concern, we did our best our to disentangle (see Table 2 in the revised paper and Table 5 in our latest response to the rebuttal) the performance gains brought by both these steps.", " Thank you for your detailed response. Some of my concerns were addressed. Adding a more detailed description of the calculation of the FLOP count to the paper would increase clarity. Furthermore, additional comparisons to Group Fisher Pruning were added. Therefore, I will raise my score even though I am not fully convinced by the arguments provided by the authors for comparing SInGE to single-shot methods. I know that a completely fair comparison of different pruning methods can be challenging but I feel that a direct comparison of iterative and single-shot methods (despite similar training time) could favour the iterative procedure.", " I think my concerns has been properly addressed. And based on this I would like to reconsider my rating, especially due to the novelty of the integration method.", " Furthermore, to specifically anwser your concern, we propose a novel benchmark on Cifar10 where we work at a constant processing (pruning+fine-tuning) time: specifically, as we empirically measure that 1 integrated gradient step costs approximately half the time of a fine-tuning step, we modulate the numbers of fine-tuning steps to reach a similar processing time, and compare the accuracy of the pruned networks. 
Results can be found below:\n\n| method | pruning rate / FLOPs removed | accuracy |\n|:----------------------------------------------------:|:----------------------------:|:-------------------------:|\n| magnitude $C_{L^1}$ | 86.46 / 85.00 | 69.42 $\\pm$ 2.11 |\n| magnitude $C_{L^2}$ | 86.46 / 85.00 | 75.38 $\\pm$ 2.76 |\n| gradients $C_{\\nabla^2}$ | 86.46 / 85.00 | 79.55 $\\pm$ 1.97 |\n| magnitude $\\times$ grad $C_{L^2\\times\\nabla^2}$ | 86.46 / 85.00 | 91.05 $\\pm$ 0.03 |\n| integrated gradients $C_{\\text{SG}^2}$ | 86.46 / 85.00 | 92.01 $\\pm$ 0.13 |\n| integrated magnitude $\\times$ grad $C_{\\text{IG}^2}$ | 86.46 / 85.00 | **93.07** $\\pm$ 0.06 |\n\n[Table 5: Comparison of the performance of the different pruning criterion on a ResNet 56 for Cifar10 under the same time constraint.]\n\n**Lastly a direct comparison to at least one competing method within the same pruning pipeline would provide the possibility to better disentangle the performance gains from the fine-tuning pipeline and criterion.**\n\nIn Table 1 of the paper we compare several importance pruning criteria, showing the interest of the proposed integrated magnitude $\\times$ gradients-based criterion, without any fine-tuning. Furthemore, given the best criterion, Table 2 highlight the interest of entwined importance estimation and fine-tuning compared with a traditional approach. We believe that these comparisons allow to disentangle the gains coming from either the criterion and the proposed entwined fine-tuning and importance measurement scheme.\n\n**Why do you not compare to the same methods on all datasets? It seems that some of the methods compared to on ImageNet also provide results for CIFAR10. Further, some recent structured pruning works such as Group Fisher Pruning \\[A\\] were not considered for comparison. \\[A\\] Liu, Liyang, et al. \\\"Group fisher pruning for practical network compression.\\\" International Conference on Machine Learning. PMLR, 2021.**\n\nWe added Group Fisher Pruning (GFP) to the paper in Table 4-5. Note that despite the fact that their baseline ResNet-50 has a higher accuracy than ours, our proposed approach is still competitive, further highlighting the quality of the proposed results.\n\n\n**Why did you choose to use the product of the norms of the gradient of the output of the NN and the weights? One could imagine other gradients i.e., the gradient of the loss.**\n\nAkin to attribution methods \\[21,22,23,24,36\\] we use the gradient of the network output w.r.t. each neuron to measure the sensitivity of the function w.r.t. this neuron. Our intuition is that while derivating the loss allows to measure wether a neuron contributes to correct classification, derivating the whole outputs also allows to considerate the classification uncertainty.\n", " **The main weakness that I see is that the work does not explain how the FLOPS and parameters of the pruned network are calculated. This is amajor issue because the work does not directly compare to other state-of-the-art methods but merely quotes the numbers. Since the calculation of the FLOPS and parameters of a pruned network is non-trivial (see Appendix D of SOSP \\[43\\]) This can have a huge impact on the performance and could reduce the performance significantly.**\n\nIn their appendix, the authors of SOSP explain how some pruning methods measure the pruning ratio and MACs count in two different manners. They have an extended discussion on a problem that is well known in the pruning community \\\"how do we prune skip connections\\\". 
To put it simply: if two layers were to be added but pruned independently (which is almost systematically the case) then if the channel pruned do not match, one can't actually remove the channels as they would still be required for the rest of the network. More specifically, for example, let's consider two layers $f_1$ and $f_2$ that are added in a skip connection and we want to prune neuron $1$ from $f_1$ and neuron $2$ from $f_2$ then their output would still be: \n\n$f_1(x) + f_2(x) = \\begin{pmatrix}0 &+& {(f_2(x))}_1 \\\\\\\\ {(f_1(x))}_2 &+& 0 \\\\\\\\ {(f_1(x))}_3 &+& {(f_2(x))}_3 \\\\\\\\ \\vdots & & \\vdots \\end{pmatrix}$\n\nIn such a case, no channel is effectively pruned. For this reason, in SOSP \\[43\\], the authors argue that one should not count the pruning in such instances as it can't be applied in practice. For information, OTO \\[42\\] actually addresses this problem by forcing the simultaneous pruning of channels that should be matched. In our own study we apply the same pruning ratio measurement at SOSP as we think it is the only valid one plus it is more difficult than the approximate one.\n\nWe added this point for the sake of clarity in the revised version of our work.\n\n**The second main weakness is that the work applies an iterative fine-tuning scheme but compares to single-shot pruning methods such as HAP, SOSP, OTO and SRR-GR. Since the paper shows in Table 2 that an iterative pruning scheme seems to improve the overall performance significantly, this would give SInGE an advantage and makes a direct comparison of the pruning criterions harder.**\n\nIndeed, in our evaluations we apply the proposed entwined importance criterion estimation and fine-tuning scheme, which improves the accuracy, as shown in Table 2. However, please bear in mind that our method requires so much fewer fine-tuning steps as compared to other methods, e.g. HAP, SOSP, OTO or SRR-GR. These methods involve more than 1 million optimization step with batches of 256 elements or several million steps for smaller batch-sizes. On the other hand (as stated in our implementation details section), we benchmarked SInGE with fewer (50k) optimization steps. Considering the huge discrepancy in terms of number of optimisation steps, we believe that the comparison is not only fair to state-of-the-art methods but shows the efficiency of the proposed approach.\n\n**Further, the authors do not mention the computational effort to evaluate the criterion. Due to the Riemannian Integral this should be significantly higher compared to other gradient, magnitude or first-order based methods.**\n\nIndeed the proposed integrated gradient criterion comes at the expanse of an additional (linearly increasing with the number of integration steps) computational cost. Please note that other state-of-the-art methods do not report their importance measurement time as they often require second order derivations or even very large batches for gradient estimation, also involving additional computational overhead.", " **I think that in general the readability (fluency) of the paper can be further improved. [...] For example, neither Figure 1 nor the context talking about Figure 1 describes what $\\mu$ and $\\mu^2$ are.**\n\nTo answer your concern and make the paper more standalone and understandable, we updated the caption of Figure 1 in the revised version and provided detailed explanations in the main body.\n\n**Figure 1 and its descriptions are a bit confusing. 
Originally, I was thinking that a, b and c are weight matrices but later the paper refers to a and b as two different weights. It would be good to improve the description of Figure 1.**\n\nAs previously stated, we agree with the reviewer and improved the writing of the paper to this regard.\n\n**Given that entwined fine-tuning is one of the important contributions of the paper it would be good to describe it in detail. [...] It seems that the authors make an assumption that the readers are familiar with the specific fine-tuning approach that they are describing. At this point from the given description it is hard to tell the amount of novelty in the entwined fine-tuning.**\n\nIn our work, we advocate for an entwined pruning (*via* importance measurement as prescripted by the proposed integrated gradient criterion) and fine-tuning scheme, that allows to more smoothly remove the least relevant weights and adapt the network by retaining its accuracy. The proposed entwined pruning and fine-tuning is simple: specifically, fine-tuning over $O$ steps simply corresponds to running $O$ steps of stochastic gradient descent (or any other optimizer) like in standard DNN training. Similarly, using a batch from $\\mathcal{D}$ simply refers to a random batch from the training set or more generally from the domain $\\mathcal{D}$. Nevertheless, we show empirically (see Table 2 in the paper) that it increases the performance in all tested configurations in practice.\n\n**Table 1: It would be great if the authors explain relatively large std in integrated magnitude x $grad C_{IG}^2$ method**\n\nGenerally speaking, we observe that the standard deviation increases as the pruning ratio increases regardless of the pruning criterion (e.g. the std is also large with the gradient pruning criterion at $85.00\\%$ as well as the integrated gradient criterion at $90.00\\%$ parameters removed): indeed, in such challenging settings, the cost of removing a neuron increases and any ill-advised selection of neuron is going to have growing impact on the accuracy.\n\n**On page 9 a notation, $N_n$ but I don't see a description of notation $N$.**\n\nThis is a mistake from our side. The notation referred by the reviewer should be $W_l^n$ as it corresponds to pruning the n$^{th}$ neuron of layer $l$. We corrected this element in the revised article.\n\n**Table 6 has 2 SInGE rows but it is not clear what are the differences between those two SInGEs**\n\nThank you for pointing this out, we corrected this typo in the updated manuscript.\n\n**From Table 1 it is unclear what the authors mean by integrated gradients $C_{SG}^2$ and integrated magnitude x $grad C_{IG}^2$. It would be good if they could elaborate it further and also describe which of those two they used for the results in tables 3, 4, 5 and 6.**\n\nAs stated line 142 in the original paper, the $C_{SG^p}$ can be derived from equation (5) by removing the weight magnitude term. Explicitly this leads to the following definition\n\n$C_{\\text{SG}^p} :(W_l,F,\\mathcal{D})\\mapsto(\\sum_{s=0}^S\\|\\nabla_{\\mu^sW_l^n}F(\\mathcal{X}\\in\\mathcal{D})|\\_p)_{n\\in\\{1,\\dots,n_l\\}}$\n\nWe add this explicit definition to the main body. Furthermore, as stated line 211-214, the $C_{IG^p}$ is the most efficient criterion according to our empirical validation. We clarify that we use the most efficient criterion in the other experiments.\n\n**I wonder if Noisegrad paper which introduces stochasticity to model weights could also be relevant to this paper. 
https://arxiv.org/pdf/2106.10185.pdf**\n\nThis work could help us improve the current importance metric. In the article suggested by the reviewer, the study focuses on SmoothGrad which is a technique for explainable AI which is a broader topic than attribution from which we draw inspiration. The method consist in adding noise to the values evaluated with the explanation technique. More formally, in our study, this would correspond to the addition of a white nose to the weight values for both neuron norms and corresponding gradient norms. According to the cited paper: \\\"\\[\\...\\] hypothesize that SmoothGrad perturbs the test sample in order to get a signal from the steepest part of the decision boundary.\\\". This could be intuitively linked to the measure of importance, in other words, the steepest part of the decision boundary is given by the most important computations. Thank you for pointing out this interesting paper, we added it to our future work section in the conclusion.", " | Method | \\% params rm | \\% FLOPS rm | accuracy |\n|:------------------:|:------------------------------------:|:------------------------------------:|:------------------------------------:|\n| baseline | 0.00 | 0.00 | 76.15 |\n| Hrank (CVPR 2020) | 36.67 | 43.77 | 74.98 |\n| RED (NeurIPS 2021) | 39.6 | 42.7 | 76.1 |\n| HAP (WACV 2022) | 44.59 | 33.82 | 75.12 |\n| SRR-GR (CVPR 2021) | - | 45 | 75.76 |\n| SOSP (ICLR 2021) | 49 | 45 | 75.21 |\n| SRR-GR (CVPR 2021) | - | 55 | 75.11 |\n| SInGE | **50.80** $\\pm$ 0.02 | **57.35** $\\pm$ 0.11 | **76.05** $\\pm$ 0.07 |\n| RED (NeurIPS 2021) | 54.7 | 55.0 | 71.1 |\n| SOSP (ICLR 2021) | 54 | 51 | 74.4 |\n| GDP (ICCV 2021) | - | 55 | 73.6 |\n| HAP (WACV 2022) | **65.26** | 59.56 | 74.0 |\n| OTO (NeurIPS 2021) | 64.1 | 65.2 | 73.3 |\n| SInGE | 63.78 $\\pm$ 0.01 | **65.96** $\\pm$ 0.21 | **74.7** $\\pm$ 0.31 |\n\n[Table 4: Comparison between existing structured pruning performance on ResNet 50 on ImageNet. In both the low ($<50\\%$ parameters removed) and high ($>50\\%$) pruning regimes, SInGE achieves remarkable results.]\n\n**The top-1 accuracy of integrated magnitude method without fine-tuning and with 1000-step entwined fine-tuning seems to be the same (both reported as 85.38). I'm wondering why there is no increase in the accuracy.**\n\nThis is a mistake on our part, we updated the results which don't actually change the conclusions. We double checked every other Tables to ensure such errors wouldn't happen an other time.\n\n**May I ask what does the notation \"1\\<\" stand for in Table 1?**\n\nThis indicates standard deviations lower than $1$ for accuracies at chance level.\n\n**As the performance of SinGE is very close to its counterpart in many cases, could the authors please provide the error bars for the results reported in Table 2-6?**\n\nTo answer your concern we added the standard deviations to the tables in the revised paper. We believe these values show the stability of our results as well as the statistical significance and quality of the proposed results.\n\n**May I ask whether the results in Table 3 use fine-tuning or not? If yes, I'm wondering whether the same fine-tuning strategy (i.e., entwined) is adopted for all the candidates?**\n\nAs the proposed entwined pruning (via integrated gradient criterion) and fine-tuning is a contribution of this work, state-of-the-art methods do\nnot use this approach. 
However, please note that 1) a comparison between entwined and post-pruning fine-tuning can be found in Table 2, and 2) in our work we only perform 50k fine-tuning steps total whereas most papers (including e.g. Adapt-DCP, ManiDP-A and MDP) use $\\approx 1M$ updates. Once again, we belive that this shows the quality of our results.\n\n**In Table 6, only the accuracy of SInGE when 90% of parameters removed is provided but not the other methods. It would be appreciated if the authors could provide the performance of other methods at this setting to support the superiority of SInGE.**\n\nUnfortunately, to the best of our knowledge, no other paper in the unsupervised learning literature report those values for comparison. However, we believe that, as such, Table 6 shows that our approach, while in principle being designed for structured pruning, can be straightforwardly applied to unsupervised pruning and achieve competitive results without bells and whistles.", " | method | pruning rate / FLOPs removed | accuracy |\n|:----------------------------------------------------:|:----------------------------:|:-------------------------:|\n| magnitude $C_{L^1}$ | 86.46 / 85.00 | 69.42 $\\pm$ 2.11 |\n| magnitude $C_{L^2}$ | 86.46 / 85.00 | 75.38 $\\pm$ 2.76 |\n| gradients $C_{\\nabla^2}$ | 86.46 / 85.00 | 79.55 $\\pm$ 1.97 |\n| magnitude $\\times$ grad $C_{L^2\\times\\nabla^2}$ | 86.46 / 85.00 | 91.05 $\\pm$ 0.03 |\n| integrated gradients $C_{\\text{SG}^2}$ | 86.46 / 85.00 | 92.01 $\\pm$ 0.13 |\n| integrated magnitude $\\times$ grad $C_{\\text{IG}^2}$ | 86.46 / 85.00 | **93.07** $\\pm$ 0.06 |\n\n[Table 2: Comparison of the performance of the different pruning criterion on a ResNet 56 for Cifar10 under the same time constraint]\n\nWe believe these results highlight the interest of the proposed approach and added them in the revised version.\n\n**There are a few typos, for example, at the range of $\\mu$ at line 138, the spellings of trade-offs at line 167, and the incompleteness of parentheses at line 181.**\n\nWe would like to thank the reviewer for pointing-out typos which we corrected in the revised version of the article. However, we would like to clarify that there is no mistake in the range of parameter $\\mu$ as it should be non-zero (otherwise the weights would be immediately pruned) nor equal to one as the integral would be stationary.\n\n**As I suppose the bolded numbers in the tables stand for the best results, some of those in Table 3 and 4 are not correctly labeled, which is a bit misleading.**\n\nWe disagree with the reviewer on the labeling in Table 3. In Table 3, we compare the method to structured pruning techniques thus consider only structured pruning techniques for labeling, other methods are just here as reference. As for Table 4, The only contest would be for RED in terms of accuracy with 76.1 while we achieve 76.05. 
We corrected this in the revised version.\n\n| top1 accuracy | pruning method | structured | \\% parameters removed |\n|:--------------:|:--------------:|:----------:|:-----------------------------------:|\n| 91.5 $\\pm$ 0.1 | RED | Yes | 85.0 |\n| | LP | Yes | 84.0 |\n| | LP | No | 92.6 |\n| | LDI | Yes | 88 |\n| | DPF | No | 90.0 |\n| | HAP | Yes | 90.0 |\n| | SInGE (ours) | Yes | **91.3** $\\pm$ 0.27 |\n| 93.5 $\\pm$ 0.1 | GDP | Yes | 65.6 |\n| | HAP | Yes | 76.2 |\n| | SInGE (ours) | Yes | **84.3** $\\pm$ 0.71 |\n\n[Table 3: State-of-the-art pruning methods performance on ResNet 56 on Cifar10.]\n\n", " **Figure 1 is not effective and clear enough to facilitate the readers' understanding on the arguments made in Sec 3.1. One reason is that the notation is not introduced before its appearance. Also, the claims at line 129 -- 133 about the effects of the change in values are not\nclearly depicted in Figure 1.**\n\nThe purpose of Figure 1 is to show that importance measurement are very local and don't offer good evaluations for drastic changes such as zeroing out. As such, by showing that the norm of the neurons as well as the norm of its gradients, it does show different scenarios that are not well handled when evaluating the importance only once.\n\nAs for the example given by the reviewer, example (b) in Figure 1, depicts a case where driving the norm towards zero causes an increase of the gradient norm, thus globally increasing its pruning cost (as the rectangle areas increase) making it a bad candidate for pruning.\nConversely, in other cases such as (c) the higher initial cost keeps getting lower as the value of the neuron decreases. We updated the\ncaption to better introduce the notation and make this easier to understand.\n\n**The claim that SInGE significantly outperforms existing state-of-the-art DNN pruning methods seems to be a bit inappropriate as\nit's sometimes inconsistent with what demonstrates in the empirical studies. For example, in Table 5, the accuracy of SinGE is nearly the\nsame as those of Adapt-DCP, ManiDP-A and MDP. However, the latter remove significantly more parameters or FLOPS than SinGE does.**\n\nFirst, over the three main comparisons to the state-of-the-art (Table 3, 4 and 5), there is only one sub-category of one benchmark where\nstate-of-the-art methods come close to ours, while SinGE performs significantly better elsewhere. Second, in this specific case, the reviewer claim that the accuracy are \\\"nearly the same\\\" but this is due\nto the already high accuracy in the sense that the original model has 71.8 accuracy and pruned models lie between 71.4 and 71.67. In such cases, even marginal absolute improvements are very significant relative improvement. Furthermore, such methods fail to remove more parameters and FLOPs while preserving decent accuracy (see Table 5 of the revised version/table 1 in the authors' response). 
We believe those results highlight the interest of the proposed approach.\n\n| goal | Method | \\% params rm | \\% FLOPS rm | accuracy |\n|:----:|:---------------------------------:|:--------------:|:---------------:|:------------------------------------:|\n| - | baseline | 0.00 | 0.00 | 71.80 |\n| 30\\% | CBS (arxiv 2022) | 30.00 | - | 71.48 |\n| | Adapt-DCP (TPAMI 2021) | **35.01** | 30.67 | 71.4 |\n| | ManiDP-A (CVPR 2021) | - | **37.2** | 71.6 |\n| | SInGE | 30.96 | 31.54 | **71.67** $\\pm$ 0.06 |\n| 40\\% | CBS (arxiv 2022) | 40.00 | - | 69.37 |\n| | MDP (CVPR 2020) | **43.15** | - | 69.58 |\n| | SInGE | 40.90 | 42.30 | **70.47** $\\pm$ 0.09 |\n| 50\\% | CBS (arxiv 2022) | 50.00 | - | 62.96 |\n| | Adapt-DCP (TPAMI 2021) | - | 45.0 | 64.13 |\n| | \\revision{ManiDP-A (CVPR 2021)} | - | 48.8 | 69.62 |\n| | Accs (arxiv 2021) | 50.00 | - | 69.76 |\n| | SInGE | **50.13** | **48.90** | **70.01** $\\pm$ 0.22 |\n\n[Table 1: Comparison with existing structured pruning methods on MobileNet V2 backbone for ImageNet]\n\n**As far as I undertand, the integration evaluation will take much time, so do the authors think that adding an experiment of the time efficiency necessary?**\n\nIndeed the proposed integrated gradient criterion comes at the expanse of an additional (linearly increasing with the number of integration steps) computational cost. To answer your concern, we propose a novel benchmark on Cifar10 where we work at a constant processing (pruning+fine-tuning) time: specifically, as we empirically measure that 1 integrated gradient step costs approximately half the time of a fine-tuning step, we modulate the numbers of fine-tuning steps to reach a similar processing time, and compare the accuracy of the pruned networks. Results can be found in Table 2.", " Inspired by the integrated gradient method in attribution methods, the authors propose a new importance metric for pruning: an integral of the product between the norm of the parameter weight and its attribution along a path between this weight value and a baseline (zero) value. This metric captures more global view than previous magnitude-based and gradient-based pruning methods. Then, the authors devise the so-called SinGE method to intertwine the pruning with the network fine-tuning. Empirical studies are conducted to compare SinGE with both structured and unstructured pruning methods on Cifar 10 and ImageNet. ## Strength\n1. Motivation is very clear and the idea of using the approximated integration as the criterion of pruning is a good way to measure the influence of forcing a parameter to 0.\n2. This paper has clear writing and easy to follow.\n\n## Weaknesses\n\n1. Figure 1 is not effective and clear enough to facilitate the readers’ understanding on the arguments made in Sec 3.1. One reason is that the notation $\\mu$ is not introduced before its appearance. Also, the claims at line 129 – 133 about the effects of the change in values are not clearly depicted in Figure 1.\n\n2. The claim that SInGE significantly outperforms existing state-of-the-art DNN pruning methods seems to be a bit inappropriate as it’s sometimes inconsistent with what demonstrates in the empirical studies. For example, in Table 5, the accuracy of SinGE is nearly the same as those of Adapt-DCP, ManiDP-A and MDP. However, the latter remove significantly more parameters or FLOPS than SinGE does. 1. As far as I undertand, the integration evaluation will take much time, so do the authors think that adding an experiment of the time efficiency necessary?\n2. 
There are a few typos, for example, at the range of $\\mu$ at line 138, the spellings of trade-offs at line 167, and the incompleteness of parentheses at line 181.\n3. As I suppose the bolded numbers in the tables stand for the best results, some of those in Table 3 and 4 are not correctly labeled, which is a bit misleading.\n4. The top-1 accuracy of integrated magnitude method without fine-tuning and with 1000-step entwined fine-tuning seems to be the same (both reported as 85.38). I’m wondering why there is no increase in the accuracy.\n5. May I ask what does the notation “1<” stand for in Table 1?\n6. As the performance of SinGE is very close to its counterpart in many cases, could the authors please provide the error bars for the results reported in Table 2-6?\n7. May I ask whether the results in Table 3 use fine-tuning or not? If yes, I’m wondering whether the same fine-tuning strategy (i.e., entwined) is adopted for all the candidates?\n8. In Table 6, only the accuracy of SInGE when 90% of parameters removed is provided but not the other methods. It would be appreciated if the authors could provide the performance of other methods at this setting to support the superiority of SInGE. The authors have included adequate limitations.", " This paper proposes a new neuron pruning method based on the Integrated Gradients (IG) attribution algorithm. The paper claims that IG-based pruning is more optimal than magnitude and local gradient-based methods since it accounts for the gradient effects globally along the path from 0-valued baseline weight to given input weight in each layer l. The paper performs impressive number of experiments for a number of structured and unstructured pruning techniques from the state of the art literature and shows that IG-based pruning outperforms those methods based on two evaluation metrics (% removed parameters and % removed floating point operators).\nIn addition to that the paper also proposes entwined pruning and fine-tuning procedure and shows that it help to obtain better accuracy while performing the pruning. \nThe experiments in the paper are conducted for ImageNet and Mobilenet V2 backbone for ImageNet and CIFAR10 dataset.\n **Strengths**\n\n+ Overall the idea of the paper is clearly described, related work is properly cited, experimental setup and results are properly discussed\n+ The paper shows strong empirical evidence that IG-based pruning outperforms magnitude, gradient-based approaches, as well as a number of other state of the art approaches from the neuron pruning literature. \n+ The paper also shows that entwined fine-tuning helps to improve model accuracy regardless of pruning technique using over 80% pruning target.\n\n**Weaknesses**\n\n+ I think that in general the readability (fluency) of the paper can be further improved. Specifically, I felt that in the notation and description of the approaches, there is some level of expectations that the readers know the notations of structured pruning techniques and integrated gradients. For example, neither Figure 1 nor the context talking about Figure 1 describes what $\\mu$ and $\\mu^2$ are.\n+ Figure 1 and its descriptions are a bit confusing. Originally, I was thinking that a, b and c are weight matrices but later the paper refers to a and b as two different weights. It would be good to improve the description of Figure 1. \n+ Given that entwined fine-tuning is one of the important contributions of the paper it would be good to describe it in detail. 
It would be good to make it clear what fine-tuning over O steps mean and what we mean by fine-tuning the network F over a batch from D. It seems that the authors make an assumption that the readers are familiar with the specific fine-tuning approach that they are describing.\nAt this point from the given description it is hard to tell the amount of novelty in the entwined fine-tuning.\n+ Table 1: It would be great if the authors explain relatively large std in integrated magnitude x grad C_{IG}^2 method\n\n**Minor Comments**\n\n+ On page 9 a notation, $N_n$ but I don’t see a description of notation $N$.\n+ Table 6 has 2 SInGE rows but it is not clear what are the differences between those two SInGEs\n\n \n+ From Table 1 it is unclear what the authors mean by integrated gradients C_{SG}^2 and integrated magnitude x grad C_{IG}^2. It would be good if they could elaborate it further and also describe which of those two they used for the results in tables 3, 4, 5 and 6.\n+ I wonder if Noisegrad paper which introduces stochasticity to model weights could also be relevant to this paper.\nhttps://arxiv.org/pdf/2106.10185.pdf\n \nI do not see any potential negative societal impact in this work", " This paper proposes a novel pruning method based on an integrated gradient pruning criterion, which combines a magnitude and gradient based criterion and integrates the product over the path of the neuron removal. \n\nIn addition, the work proposes a fine-tuning flowchart to improve the performance of the fine-tuned network. This strategy is an iterative layer-wise approach which sets the pruning ratios for each block separately. \n\nLastly, a comparison of SInGE on ResNet-56 on Cifar10 and ResNet-50 and MobileNet-V2 on ImageNet shows state-of-the-art performance. \n Strengths:\n\n* I like the idea of an integration criterion and according to Table 1 it seams that this is the main driver for the superior performance of SInGE. To the best of my knowledge this is the first method to use an integration criterion for (structured) pruning. \n\n* The Integration criterion is well motivated, and the ablations support the design choices made by the authors.\n\n* SInGE achieves state-of-the-art performance on CIFAR10 and ImageNet\n\nWeaknesses:\n\n* The main weakness that I see is that the work does not explain how the FLOPS and parameters of the pruned network are calculated. This is a major issue because the work does not directly compare to other state-of-the-art methods but merely quotes the numbers. Since the calculation of the FLOPS and parameters of a pruned network is non-trivial (see Appendix D of SOSP [43]) This can have a huge impact on the performance and could reduce the performance significantly.\n\n* The second main weakness is that the work applies an iterative fine-tuning scheme but compares to single-shot pruning methods such as HAP, SOSP, OTO and SRR-GR. Since the paper shows in Table 2 that an iterative pruning scheme seems to improve the overall performance significantly, this would give SInGE an advantage and makes a direct comparison of the pruning criterions harder. \n\n* Further, the authors do not mention the computational effort to evaluate the criterion. 
Due to the Riemannian Integral this should be significantly higher compared to other gradient, magnitude or first-order based methods.\n\n* Lastly a direct comparison to at least one competing method within the same pruning pipeline would provide the possibility to better disentangle the performance gains from the fine-tuning pipeline and criterion. \n I quite like the idea of the paper and would be willing to increase my score if the authors can address my concerns. In addition to the concerns mentioned above, I have some more minor concerns:\n\nWhy do you not compare to the same methods on all datasets? It seems that some of the methods compared to on ImageNet also provide results for CIFAR10. Further, some recent structured pruning works such as Group Fisher Pruning [A] were not considered for comparison\n\nWhy did you choose to use the product of the norms of the gradient of the output of the NN and the weights? One could imagine other gradients i.e., the gradient of the loss.\n\n\n[A] Liu, Liyang, et al. \"Group fisher pruning for practical network compression.\" International Conference on Machine Learning. PMLR, 2021.\n Yes" ]
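To make the criterion discussed in the rebuttal above concrete, here is a schematic PyTorch sketch of the Riemann-sum, integrated magnitude-times-gradient importance score. The helper names, the geometric path `mu**s`, and the `forward_output_norm` wrapper are our own illustrative assumptions, not code from the paper.

```python
# Schematic sketch (ours, not the authors' code) of an integrated
# magnitude-x-gradient importance score, approximated as a Riemann sum
# over a path that shrinks each neuron's weights towards the zero baseline.
import torch

def integrated_importance(layer_weight, forward_output_norm,
                          mu=0.9, steps=10, p=2):
    """Score each output neuron of `layer_weight` (shape: [n_out, ...]).

    forward_output_norm: assumed callable that runs the network on a fixed
    batch with `w` substituted for this layer's weights and returns the
    norm of the network output, so autograd can differentiate through it.
    """
    scores = torch.zeros(layer_weight.shape[0])
    for s in range(steps + 1):
        # point on the removal path: weights scaled by mu**s (towards zero)
        w = ((mu ** s) * layer_weight.detach()).requires_grad_(True)
        out = forward_output_norm(w)               # scalar
        (grad,) = torch.autograd.grad(out, w)
        flat_w = w.flatten(1)                      # one row per neuron
        flat_g = grad.flatten(1)
        # accumulate ||mu^s w_n||_p * ||grad_n||_p along the path
        scores += (flat_w.norm(p, dim=1) * flat_g.norm(p, dim=1)).detach()
    return scores  # lower score => cheaper neuron to prune
```

Neurons whose weight and gradient norms stay small along the whole removal path accumulate a low score, which matches the intuition debated around Figure 1 in the reviews above.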
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "I30Tz_wMY_d", "PEg6TsDOMCr", "B30fVkMu537", "2AU2x-IwnRi", "X6DVmgoEUFv", "4z14txofMK", "CYiZLyZ1bk", "vYLO5Hgf7KG", "0YaSRGOZNIm", "EyPzNX6zn0", "4xlLMSqj1Lm", "nips_2022_oQIJsMlyaW_", "nips_2022_oQIJsMlyaW_", "nips_2022_oQIJsMlyaW_" ]
nips_2022_SYdg8tcFgdG
Sample-Efficient Learning of Correlated Equilibria in Extensive-Form Games
Imperfect-Information Extensive-Form Games (IIEFGs) are a prevalent model for real-world games involving imperfect information and sequential play. The Extensive-Form Correlated Equilibrium (EFCE) has been proposed as a natural solution concept for multi-player general-sum IIEFGs. However, existing algorithms for finding an EFCE require full feedback from the game, and it remains open how to efficiently learn the EFCE in the more challenging bandit feedback setting where the game can only be learned by observations from repeated playing. This paper presents the first sample-efficient algorithm for learning the EFCE from bandit feedback. We begin by proposing $K$-EFCE---a generalized definition that allows players to observe and deviate from the recommended actions for $K$ times. The $K$-EFCE includes the EFCE as a special case at $K=1$, and is an increasingly stricter notion of equilibrium as $K$ increases. We then design an uncoupled no-regret algorithm that finds an $\varepsilon$-approximate $K$-EFCE within $\widetilde{\mathcal{O}}(\max_{i}X_iA_i^{K}/\varepsilon^2)$ iterations in the full feedback setting, where $X_i$ and $A_i$ are the number of information sets and actions for the $i$-th player. Our algorithm works by minimizing a wide-range regret at each information set that takes into account all possible recommendation histories. Finally, we design a sample-based variant of our algorithm that learns an $\varepsilon$-approximate $K$-EFCE within $\widetilde{\mathcal{O}}(\max_{i}X_iA_i^{K+1}/\varepsilon^2)$ episodes of play in the bandit feedback setting. When specialized to $K=1$, this gives the first sample-efficient algorithm for learning the EFCE from bandit feedback.
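The $K$-deviation rule in the abstract is realized, per one of the author responses later in this record, by a mediator that keeps per-player deviation counters. The following sketch is our own illustration of that protocol, with hypothetical names; it is not code from the paper.

```python
# Illustrative sketch (ours) of the K-EFCE mediator protocol: sample a plan
# once, recommend by default, and stop recommending to a player once they
# have deviated K times.
class KEFCEMediator:
    def __init__(self, k, sampled_plan):
        self.k = k
        self.plan = sampled_plan   # infoset -> recommended action (sampled once)
        self.deviations = {}       # player -> number of deviations so far

    def recommend(self, player, infoset):
        if self.deviations.get(player, 0) >= self.k:
            return None            # player is cut off after K deviations
        return self.plan[infoset]

    def observe(self, player, infoset, action_taken):
        # increment the player's counter whenever they ignore a recommendation
        if action_taken != self.plan[infoset]:
            self.deviations[player] = self.deviations.get(player, 0) + 1
```

At $K=1$ this reduces to the standard EFCE behaviour of withholding recommendations after the first deviation.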
Accept
Reviews on this paper are uniformly positive, and all reviewers feel that it is an interesting set of results. One minor criticism is that some reviewers felt the presentation could be improved, and the authors should try to address this for the camera-ready.
train
[ "Eo7UU4uVAV5", "3J4boPOP2qq", "vw09YaVXgbF", "9Sw_aIzLVf4", "2h6TQD0aXaV", "EsoRCKAlrTx", "3LMAo02lsB", "ErZk6LJdEb", "05G9T4HtyEb", "V2MqL9_YCUX", "vkqYtKrba8J", "G38nDnnZD5o" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. I don't have any additional questions. It might make sense to explicitly mention the two weaknesses with the current approach (difficulties in achieving model-freeness and adversarial bandit feedback).", " Apologies for the late reply.\n\nIn the original EFCE paper by Von Stengel, deviations were not \"detected\" by the mediator per se, and EFCEs were implemented by means of sealed recommendations. That is, reduced normal form strategies were sampled from the beginning and recommendations revealed only after reaching an information state. Because a reduced normal form is sampled, if a player deviates, the reduced normal form no longer contains any recommendations for following information sets. This implementation means that the mediator doesn't play an active role while the game is being played, and more importantly, does not require the mediator to know when a player has deviated (a player's action could be private). The authors' proposed mechanism *does* require the mediator to know each player's actions exactly (to detect deviations). Nonetheless, this is a relatively minor point and does not take away from the authors' key contribution.\n\nI have no additional questions, and am leaving my score as is.", " Thank you very much for your detailed response to my concerns. As I was expecting, you made me clear the fact that your algorithms can indeed work under the general model of perfect-recall games that are not timeable. Thus, I will increase my score from 6 to 7. However, I strongly encourage the authors to provide additional explanations on this point in the final version of the paper. In particular, I suggest the authors to add some comments in the technical part of the paper, in those points in which you used the timeability assumption in order to ease notation/presentation. In this way, it would be easier to track in which points your results can be easily generalized to the non-timeable case.", " We thank all reviewers for their valuable feedback on our paper. We have revised our paper to incorporate the reviewers’ suggestions. For clarity, all changes are marked in red. \n", " Thank you for your support of our paper and the valuable feedback! We respond to your comments as follows.\n> In terms of algorithms, it directly adapts WRHedge for the online learning literature…it looks like the balanced sampling idea also comes directly from another paper.\n\nWe agree that both WRHedge and the balanced sampling ideas are not new. However, we made non-trivial adaptations of both ideas in order to facilitate our proofs.\n1. For WRHedge, we generalized the algorithm to the stochastic setting (SWRHedge) with nontrivial adaptations, so that it can admit *multiple* sample-based loss estimators in each round, one for each time selection function, with the same mean (cf. Line 310-313). \n2. Our balanced exploration policy is the same as in (Bai et al. 2022), but the way we use it is new. The sampling policies in Algorithm 3 are *interlaced concatenations* of the current policy and the balanced exploration policy along time steps $h$ (cf. Line 298-301).\n\n> While this is not specific to this paper, I think POMGs are much more general than the authors introduce…I think POMGs are more than that and there is no need to bring up this term.\n\nWe agree that in general POMGs are much broader than IIEFGs. 
We formulate IIEFGs as POMG with tree-structure and perfect-recall mostly due to its clearer notation system (in our opinion), which helped simplify the presentation in various algorithms and proofs. \nWe have added a footnote (Footnoe 2, Page 3) in our revision to highlight that we only consider a specific subclass of POMGs instead of general POMGs. \n\n> Why is K-EFCE an interesting extension to EFCE? \n\nWe believe K-EFCEs are a fairly natural generalization of the (original) EFCE. The EFCE can be viewed as a mediator \"penalizing\" the player (does not reveal recommendations) after any deviation. In this sense, the mediator of a K-EFCE is \"more relaxed\" in its penalization, only penalizing when a player has deviated for $K$ times. Therefore, it is a natural way to make deviating players stronger (hence equilibrium defined w.r.t. such deviations stricter). We also remark that, considering general definitions of correlated equilibria beyond EFCE is of interest in the community, e.g. (Morrill et al. 2021). Indeed, the two extreme cases of K-EFCE with $K=0$ and $K=\\infty$ are equivalent to (Normal-Form) Coarse Correlated Equilibrium and the \"Behavioral Correlated Equilibria'' of (Morrill et al. 2021; cf. our Proposition C.3). \n\n> Without this extension, can the reducing algorithms be further simplified?\n\nPlugging $K=1$ into our algorithms gives algorithms for learning the 1-EFCE (i.e. the standard EFCE). In that case, the algorithm may be written in mathematically equivalent but slightly simplified forms (for example, definitions of type-I and type-II rechistories, and Balanced Sampling strategies in Algorithm 3, Line 1-3, may be presented in simplified notation when $K=1$).\n", " Thank you for your support of our paper and the valuable feedback! We respond to your question as follows.\n> The fact that the game is perfect recall does not necessarily imply that the states in the same infoset all refer to the same timestep of the game. Indeed, this is not the case whenever the game is not timeable. As far as I am concerned, the results in the paper rely on the assumption that the game is timeable, which is a more restrictive assumption than perfect recall only.\n\nThank you for bringing up this important point. We agree that our formulation of IIEFGs through POMGs (with tree structure and perfect recall) implicitly made the timeability assumption.\n\nHowever, upon an initial look at our analysis, we believe our algorithms and theoretical guarantees (when formulated in general IIEFGs with perfect recall) can indeed generalize to any general IIEFG as well that is not necessarily timeable. This is because our algorithms K-EFR and Balanced K-EFR only depend on each player’s own game tree (infosets) instead of the joint game tree for all players, similar to existing CFR/OMD type algorithms for external-regret minimization (Zinkevich et al. 2007, Farina al. 2020, Bai et al. 2022). \n\nWe have added a discussion about timeability in Appendix H.3 in our revision, and will make sure to include it in the main text in the final version.\n", " Thank you for your thoughtful reviews and positive feedback on our paper! 
We respond to your comments as follows.\n> Given the authors’ claim that K-EFR outperforms [11], [20], I felt it would be good to give a more detailed sketch on the key differences between these methods…This should help to make the paper more accessible.\n\nWe have added a more detailed comparison between KEFR and the algorithm of [11,20] in Appendix H.4 in our revision, and will make sure to include it in the main text in our final version.\n> How can a mediator (practically) achieve a K-EFCE where K > 1?\n\nGiven a $K$-EFCE, the mediator can implement it as follows. Before the game starts, the mediator samples a product policy from the $K$-EFCE (which is a correlated policy), and initializes a “deviation counter” for each player at 0. Then, at each round, the mediator by default recommends the sampled actions to all players. After players take their actual action, the mediator increments each player’s “deviation counter” by 1 if their action is different from the recommendation. The mediator stops recommending to any player as soon as their counter reaches $K$. We have added this in Appendix H.1 in our revision.\n> The authors use POMG as their choice of framework, rather than extensive form games. Is there a difference between the 2, given that the POMG is restricted to be tree structured?\n\nOur POMG formulation with tree-structure and perfect-recall assumptions is almost equivalent to the usual formulation of imperfect-information extensive-form games (IIEFGs). Concretely, our formulation can express any IIEFG with perfect recall and the additional timeability condition, a mild condition which roughly speaking requires that infosets for all players combinedly could be partitioned into ordered “layers”. Further, we believe our algorithms and theoretical guarantees can hold for any general IIEFG that is not necessarily timeable, and we have added a short discussion about this in Appendix H.3 (see also our response to reviewer tyaC for details). \n> Small quips: In the definition of Type-I rechistories $\\mathcal{A}_i^{h-1}$ is not defined.\n\nWe have added the definition in our revision (Line 185-186).\n", " Thank you for your thoughtful reviews and positive feedback on our paper! We respond to your comments as follows.\n> Your bandit algorithm still requires that at least the decision space (treeplex) of each player be known so that the balanced exploration strategy can be instantiated, correct? So, this bandit algorithm is not akin to, say, that of the cited works by Farina and Sandholm (\"Model-Free Online Learning...\") or Kozuno et al. (\"Model-free learning\"), which instead address both an unknown structure and bandit feedback on the utilities?\n\nRight, in our bandit algorithm, the balanced exploration policy requires knowing the structure of each player’s decision space (treeplex), and thus the algorithm does not yet handle the unknown tree structure setting as (Farina and Sandholm, 2021) and (Kozuno et al. 2021). We have added a short discussion about this in Appendix H.2.\n\nCurrently, the balanced exploration policy is needed in order to obtain the sharp (linear in $X$) sample complexity under bandit feedback. We believe that achieving the same rate under unknown tree structures, e.g. by “learning” an exploration policy that works as well as the balanced one, would be an interesting open question. 
\n\n> Can you please confirm that your algorithm cannot be used in adversarial bandit-feedback settings (as it requires exploring the opponent for several episodes assuming the opponent's strategy does not change)?\n\nRight, in the bandit-feedback setting, our algorithm requires the opponents to fix their strategy for a constant number of episodes (Algorithm 3 & 6), and thus it cannot be used to achieve no-regret in the adversarial setting (even though it can be used to learn the equilibrium in the self-play setting). Under full feedback, our algorithm can handle adversarial opponents and have a regret bound (Corollary F.1).\n\n> Minor nits: Abstract, line 9: I would avoid \"more generalized\" in favor of either \"generalized\" or \"more general\".\n\nWe have changed this to “generalized” in our revision.\n", " The authors first introduce a new equilibrium concept: K-EFCE in extensive-form games. Then the authors propose sample-efficient regret-minimization based algorithms to learn the equilibrium in both full-information and bandit feedback. When K=1 under the bandit feedback, this is the first sample-efficient algorithm to learn EFCE. To me it seems like the most important contribution of this paper is the first sample-efficient algorithm to learn EFCE under the bandit feedback. On the other hand, I don't really get why the authors introduce K-EFCE, a generalization of EFCE. Intuitively, we can definitely consider this generalization, but I don't know if the generalization induces interesting concepts or properties and the contribution I mentioned can completely hold without this generalization if I understand correctly. \n\nIn terms of algorithms, it directly adapts WRHedge for the online learning literature, which is quite reasonable but I think this may limit the novelty of this paper. In the bandit setting, it looks like the balanced sampling idea also comes directly from another paper. Another weakness of this paper is probably the lack of experimental evaluations of the algorithms. 1. While this is not specific to this paper, I think POMGs are much more general than the authors introduce. Specifically, a POMG, like a POMDP, should have an observation model $O(\\cdot|s,a)$, which is a conditional probability given state s and action a. In the paper, the authors simplify this to a partition of the state space, which is sufficient for IIEFGs but I think POMGs are more than that and there is no need to bring up this term. \n\n2. Why is K-EFCE is an interesting extension to EFCE? Without this extension, can the reducing algorithms be further simplified? \n N/A", " This paper studies the problem of learning extensive-form correlated equilibria (EFCE) in imperfect-information extensive-form games. In particular, the paper introduces a new notion of EFCE, called K-EFCE, in which a player does not stop receiving recommendation right after the first time they deviate, but they continue to receive them up to their K-th deviation. The paper studies the problem of learning K-EFCE, and it provides sample-efficient algorithms for both the full feedback and the bandit feedback settings. 
ORIGINALITY\n\n(+) To the best of my knowledge, this is the first paper that proposes sample-efficient no-regret learning dynamics that converge to EFCEs under the bandit feedback model.\n\nQUALITY\n\n(+) As far as I am concerned, the results in the paper are correct, though I did not check all the proofs in detail.\n\n(-) I only have one major concern about the formalization of imperfect-information extensive-form games with perfect recall. In particular, the authors claim that in order to have perfect recall it should be possible to partition the information sets of one player according to the timestep they belong to. However, the fact that the game has perfect recall does not necessarily imply that the states in the same infoset all refer to the same timestep of the game. Indeed, this is not the case whenever the game is not timeable. As far as I am concerned, the results in the paper rely on the assumption that the game is timeable, which is a more restrictive assumption than perfect recall only. If this is correct, this would somehow limit the impact of the results. See the following paper for a reference on the timeability of extensive-form games:\n\n“Jakobsen, S. K., Sørensen, T. B., & Conitzer, V. (2016, January). Timeability of extensive-form games. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science (pp. 191-199).”\n\nCLARITY\n\n(+) The paper is well written and all the concepts are well presented.\n\n(-) My only concern regarding the presentation is that too many results are in the appendix, which renders the main paper (perhaps too much) high level in terms of technical details.\n\nSIGNIFICANCE\n\n(+) The problem of learning EFCEs has been receiving considerable attention over the last few years from the algorithmic game theory community and beyond. Thus, this paper could be of interest to a considerable portion of the NeurIPS community (as also testified by the Best Paper Award at NeurIPS 2020).\n 1) Can you please address my concern in the second point of the quality assessment?\n---------- AFTER AUTHORS' RESPONSE ----------\nThe authors clarified my concerns about the applicability of their algorithms to non-timeable perfect-recall games. As a result, I raised my score to 7 (from an original score of 6). Yes.", " The paper has 3 main contributions. First, the authors propose a generalization of extensive-form correlated equilibrium (EFCE), known as K-EFCE. In this generalization, players continue to receive recommendations even after they have deviated from their recommendations (received at each infoset, just like regular EFCE) for up to K deviations. K-EFCE encompasses a number of common equilibrium concepts. A 1-EFCE is just a regular EFCE, while a 0-EFCE is a normal-form CCE. The second key contribution is a self-play algorithm to compute K-EFCE, which the authors call K-EFR. The third contribution is a variant of K-EFR that can handle bandit feedback. The authors claim that this is the first algorithm for learning EFCE from bandit feedback. Generally, I felt there was too much content to follow, and the paper feels more suitable for a journal. In particular, the part on bandit feedback feels rather cramped (though it is an important contribution in its own right). Given the authors’ claim that K-EFR outperforms [11], [20], I felt it would be good to give a more detailed sketch on the key differences between these methods—this is currently done in lines 233-240, which (at least to me) was rather opaque. 
This should help to make the paper more accessible.\n\nOther than that, the paper seems good, except for the aforementioned points about it not being as accessible to readers not already familiar with EFR.\n\nSmall quips:\nIn the definition of Type-I rechistories, $\\mathcal{A}_i^{h-1}$ is not defined.\n\nNote: The paper is entirely theoretical with many pages of proof. I did not go through all the proofs, especially those in the appendix. I am not familiar with the state of the art EFCE solvers, and am unable to definitively comment on the authors’ claims with regard to comparison with other works.\n 1. I would have liked a discussion on how K-EFCE could be implemented. In the original EFCE paper by von Stengel, EFCEs were implemented by means of sealed recommendations, even though the more popular formulation today is based on [13, 16], which use an equivalent formulation based on trigger sequences. How can a mediator (practically) achieve a K-EFCE where K > 1?\n\n2. The authors use POMG as their choice of framework, rather than extensive form games. Is there a difference between the 2, given that the POMG is restricted to be tree structured?\n\n -", " The paper's main contributions are threefold:\n\n1. The notion of K-EFCE is defined. It is a generalization of EFCE, whereby the mediator (correlation device) can condition their recommendation based on the number of deviations of each player, which can be any number between 0 and K. After K deviations have been reached, the mediator will stop issuing recommendations. K-EFCE is a natural generalization of EFCE: in fact, EFCE is simply a 1-EFCE.\n\n2. The paper shows that, within the *full-information* setting, an $\\epsilon$-approximated K-EFCE can be computed by means of uncoupled no-regret dynamics after running $\\tilde O(A^K / \\epsilon^2)$ iterations where $A$ is the maximum number of player actions (equivalently, each player is guaranteed to cumulate $\\tilde O(A^K \\sqrt T)$ regret after $T$ iterations).\n\n3. The paper shows that, withing the *trajectory-bandit setting*, an $\\epsilon$-approximated K-EFCE can be computed by means of uncoupled no-regret dynamics after running $\\tilde O(A^{K+1} / \\epsilon^2)$ iterations where $A$ is the maximum number of player actions (equivalently, each player is guaranteed to cumulate $\\tilde O(A^{K+1} \\sqrt T)$ regret after $T$ iterations).\n I want to start by saying that I think the paper represents interesting, well-executed, high-quality work. Despite the very tight page limits, the body is well-organized and easy to follow. The appendix is also well-organized, and contains a trove of additional interesting results---I especially enjoyed browsing through appendix C.\n\nSignificance: I believe the paper contributes interesting results that fit well within the current research trend around extensive-form games. For example, learning EFCE with bandit feedback is a natural and interesting question.\n\nOriginality: The extension from EFCE to K-EFCE is interesting conceptually, and like I said I believe the paper is well executed and addresses relevant questions. In terms of significance, my impression is that the paper does not introduce any groundbreaking tool in the space. Rather, it fundamentally combines several expected existing techniques, such as balanced exploration policies and standard decompositions of Phi-regret for EFCE, obtaining results that were perhaps easy to guess for an expert in the field. 
I want to be clear that I am not saying that the paper is trivial---the authors put together and master several specialized tools, creating a paper that I expect will be very well received in the community. \n\nClarity and quality: the paper is very well executed and polished, both in the body and in the appendix. There is only one point that I found was slightly confusing: on lines 40-41, the bandit feedback setting is introduced as the opposite of the case in which \"the full game is known\". Just to be clear: your bandit algorithm still requires that at least the *decision space* (treeplex) of each player be known so that the balanced exploration strategy can be instantiated, correct? So, this bandit algorithm is not akin to, say, that of the cited works by Farina and Sandholm (\"Model-Free Online Learning...\") or Kozuno et al. (\"Model-free learning\"), which instead address both an unknown structure *and* bandit feedback on the utilities?\n\nMinor nits:\n- Abstract, line 9: I would avoid \"more generalized\" in favor of either \"generalized\" or \"more general\". 1. Can you please respond to my questions in the \"Clarity and quality\" assessment above?\n\n2. Currently, the main factor driving my (positive!) score is the originality of the work. While I also gave a good look in the appendix, it is possible---especially given the volume of a 60-page paper---that I might have missed some particularly innovative analysis tool that was used in this paper. If that's the case, please push back on my assessment above.\n\n3. Can you please confirm that your algorithm cannot be used in adversarial bandit-feedback settings (as it requires exploring the opponent for several episodes assuming the opponent's strategy does not change)? The authors adequately addressed the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 2, 4 ]
[ "ErZk6LJdEb", "3LMAo02lsB", "EsoRCKAlrTx", "nips_2022_SYdg8tcFgdG", "05G9T4HtyEb", "V2MqL9_YCUX", "vkqYtKrba8J", "G38nDnnZD5o", "nips_2022_SYdg8tcFgdG", "nips_2022_SYdg8tcFgdG", "nips_2022_SYdg8tcFgdG", "nips_2022_SYdg8tcFgdG" ]
nips_2022_a3ymtHbL5p5
In Differential Privacy, There is Truth: on Vote-Histogram Leakage in Ensemble Private Learning
When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed) collection of teacher models via a voting mechanism. The mechanism adds noise to attain a differential privacy guarantee with respect to the teachers' training data. In this work, we observe that this use of noise, which makes PATE predictions stochastic, enables new forms of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract high-fidelity histograms of the votes submitted by the underlying teachers. From these histograms, the adversary can learn sensitive attributes of the input such as race, gender, or age. Although this attack does not directly violate the differential privacy guarantee, it clearly violates privacy norms and expectations, and would not be possible $\textit{at all}$ without the noise inserted to obtain differential privacy. In fact, counter-intuitively, the attack $\textbf{becomes easier as we add more noise}$ to provide stronger differential privacy. We hope this encourages future work to consider privacy holistically rather than treat differential privacy as a panacea.
Accept
Although it is widely known that DP does not protect population-level statistics, the paper weaves this together with PATE (which relies on DP statistics) to demonstrate the danger of misinterpreting the protection guarantees provided by DP. This is a point worth discussing in the privacy and security community.
val
[ "kMG3HEjosJ0", "ea15mPrUl7M", "mpctw0JHge", "1rF9NzXzzgc", "GjPIzxHk502", "BdBuVIQg48d", "ZjzBEZg0MyHx", "zZEB3uKQYIL", "2QLqrHX_Q4V", "7Yi8p6td3UI", "SPCdFwrs3j", "BUflsDNrtK", "JdZeisM-_WO", "sW5TcGhUv5N", "apfcJLQKrs" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I read all the reviewers and the authors' replies. I think the authors' replies were very good, and I continue to have my super positive opinion about this paper. I hope it gets accept with a spotlight talk. ", " Thank you for your response. To clarify our response to point 7: the reviewer is right that if the added noise is infinite, the attacker does not learn anything useful. In Appendix F (which was added for the rebuttal) we experiment with very large and very small noise scales. The results imply that adding too much or too little noise degrades the utility of the adversary. There is a “sweet spot” for which enough noise is added to create the leakage, but not so much that the output of the noisy argmax becomes completely random. ", " Thanks very much for the authors' responses. Some of my concerns are addressed. However, I still have concerns about comment 7. If the added noise is infinite, the output distribution will be dominated by the noise. How can the attacker infer useful information from fully random data? It's better to add more descriptions about this result and sanity checks in the experiments. ", " Dear Reviewer pnDr, it is our pleasure to discuss with you the questions you have about the paper. We are available all time.", " Dear Reviewer r8cf, it is our pleasure to discuss with you the questions you have about the paper. We are available all time.", " Thank you so much for your insightful feedback!", " Thanks for authors' clarification and additional experiments, which solve my concern in W2 and W3 and alleviate my concern for W1. I will raise my score to 7.\n\nI have one more comment for the W1. The threat proposed in this paper is to infer whether a given data is belong to minority group in the training distribution. Differential privacy is designed to reveal the distribution-level property, while to protect any single data in the training set. Therefore, it is intuitive that differential privacy doesn't provide the protection to the proposed attack.", " Thank you! We revised the manuscript to address the editorial comments and will also add a concise exposition of the privacy-cost computation. We are happy you found the manuscript both important and enjoyable, and hope to present it in the upcoming NeurIPS.\n", " **1. Does the model update frequently? Why cannot the model send the same result to the same query?**\n\nWe discuss the proposal to send “same result to the same query” under Mitigations (Section 6). In short, attackers can adversarially adapt to this mitigation, and it is incompatible with decentralized deployments that maintain query confidentiality.\n\n**2. Why do you use the Euclidean distance to measure the distance between $Q(\\hat{H})$ and $\\bar{q}$? Why do not use KL or Wasserstein metric to measure the distance?**\n\n(we are assuming the reviewer is referring to the optimization objective, where we used Euclidean distance)\nIt is possible that a different distance computation would work even better. Better optimization could only improve our results. We acknowledge and justify (see Limitations) that exploring the entire space of optimization hyperparameters is left for future work.\n\n**3. In Line 247, what's the $\\sigma$ and $\\delta$ for the DP guarantee? In Line 250, what's the corresponding budget for $\\sigma=40$ etc?**\n\nLine 247: $\\delta$ is $10^{−5}$ for MNIST and $10^{−6}$ for SVHN. 
$\epsilon$ is 1.97 for MNIST and 4.96 for SVHN, the same as the privacy budget stated in line 247.\n\nLine 250: this is about an attacker who is not limited by a budget, but instead by a hard query-number limit of $10^4$. We do report the privacy cost of each attack (see below).\n\n**4. In line 248, what's the DP budget for each query? And what's the total budget for $10^4$ queries?**\n\nThe query-number-limited attacker’s privacy costs are given in Figures 6 and 7.\n\n**5. In line 282, $(8, 10^{-5})$ is the DP budget for one query?**\n\nDP-SGD does not have a query “budget” because the guarantee it provides comes from clipping and noising gradients: a model trained with DP-SGD can answer an unbounded number of queries (the privacy cost is fixed upon completing training). PATE instead obtains privacy guarantees at inference, by noising the histogram of votes predicted by the teachers that make up the ensemble, and limiting the queries using a budget. In line 282, our intention was that the DP guarantees of the DP-SGD instantiation in [14] are equivalent to those of a PATE with a budget of 8 and a delta of $10^{-5}$.\n\n**6. In Figure 4, what's the corresponding DP budget for $\sigma=40$?**\n\nIn Figure 4, each subfigure caption specifies the budget used and the value of $\sigma$ used. These two values are meant to be specified separately; our use of parentheses was typographically confusing (we fixed that).\n\n**7. The results shown in Figures 6 and 7 are counter-intuitive. What's the corresponding DP budget for each query? To verify the correctness, it's better to add large enough noise (i.e. the standard deviation of the Gaussian noise is close to inf.). Another baseline is that we add no noise to the results. Can we reach the same conclusion?**\n\nWe added an initial experiment with the baselines requested in Appendix F (it currently includes the baselines for a single histogram; we will add the rest). We observe that when the noise is close to 0, the attacker’s error is very high. It decreases when we increase the noise, but only up to a certain point. When the noise crosses a threshold, the attacker’s error starts going up again (very moderately). This is consistent with what we expect (0 noise reveals nothing about the histogram except the argmax class, whereas “infinite” noise means that PATE’s output distribution is uniform regardless of the underlying histogram; thus, we can only hope to extract histograms when the noise level is not at either extreme). A small simulation sketch illustrating this is included at the end of this response.\n\n**8. Is it possible to perform some experiments in which the attacker steals the private attributes from the recovered histogram under some DP guarantee?**\n\nYes, we added an end-to-end attack, now in Appendix E. The results in the end-to-end scenario, where the histogram is extracted from PATE and then used for minority-group membership inference, mirror the results in Section 2 where the attack is not end-to-end and the histogram is directly given to the attacker.\n\n\n
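To make the mechanism concrete, here is a minimal simulation sketch of the repeated noisy-argmax queries (the histogram values and noise scales are illustrative, and this is not the code used in our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([72.0, 18.0, 10.0])  # hypothetical teacher vote histogram (illustrative)

def pate_query(H, sigma):
    # PATE's noisy argmax: add Gaussian noise to the votes and return the top class
    return int(np.argmax(H + rng.normal(0.0, sigma, size=H.shape)))

for sigma in [1e-3, 40.0, 1e5]:  # near-zero, moderate, and near-"infinite" noise
    answers = [pate_query(H, sigma) for _ in range(10**4)]
    q_bar = np.bincount(answers, minlength=len(H)) / len(answers)  # MC estimate
    print(f"sigma={sigma:g}: estimated output distribution {np.round(q_bar, 3)}")
```

With near-zero noise the estimate collapses to a one-hot vector at the argmax class, with moderate noise it reflects the full histogram, and with very large noise it approaches uniform, matching the “sweet spot” discussed in our answer to question 7.

", " **The main conclusion of this paper is that differential privacy doesn't protect the minority-group membership of test data. This conclusion seems not very surprising to me as differential privacy is designed to protect the privacy of training data.**\n\nFirst, while it is not surprising that DP for training data does not protect test data, it is less obvious that it actually causes leakage of extremely sensitive attributes such as minority-group membership. 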
While this leakage is not in DP’s threat model, it undermines the societal norms that DP is meant to uphold. Ours is the first work to illustrate how adding DP noise can cause leakage (of any kind).\n\nSecond, it would be incorrect to say that our attack can only infer test-set information. Histograms contain ample information about the training data, as well, because different training sets will present different histograms and consensus levels across test instances (e.g., our attacker can detect biased training sets where consensus varies across populations).\n\n**The experiment result also shows that the leakage of the true histogram is more severe when the privacy cost is smaller. This result is counter-intuitive and the explanation is not very convincing to me. The estimation for $q_i$ from a sequence of i.i.d. samples (0/1) from a Bernoulli random variable likely wouldn't be more accurate when $q_i$ is larger, but depends on its variance $q_i(1-q_i)$. I would suggest the authors further check the difference between $Q(H)$ and $\bar{q}$ and see if this difference is smaller when the injected noise is larger.**\n\nThe reviewer touches on an interesting subtlety. It is absolutely true that $\bar{q}_i$ (the MC estimator) will be less accurate as more noise is added. This conclusion is as theoretically sound as it is intuitive. However, the degree to which $\{\bar{q}_i\}_n$ (the collection of estimators) reveals information about $\{H_i\}_n$ (the vote histogram itself, which is our real target) does not always drop as more noise is added. To illustrate this, consider the case of $\sigma=0$, i.e. no noise is added at all. PATE will always return the histogram’s argmax class. Regardless of how many queries are used, our adversary can never learn anything about the histogram except for the identity of the argmax class (which is already known after a single query). In terms of $\bar{q}_i$, the estimator of the argmax class will be 1, and the others 0, which is actually an entirely precise estimation ($\bar{q} = Q(H)$), yet it reveals very little about the vote histogram. Conversely, when sufficient noise is added and given enough queries, the entire vote histogram can be estimated with very high accuracy.\n\nPlease see also the experiment we added in Appendix F. It shows that when the noise is close to 0, the attacker’s error in estimating $H$ is very high. It decreases when we increase the noise, but only up to a certain point. When the noise crosses a threshold, the attacker’s error starts going up again (very moderately). This is consistent with what we expect: 0 noise reveals nothing about the histogram except the argmax class, whereas “infinite” noise means that PATE’s output distribution is uniform regardless of the underlying histogram; thus, we can only hope to extract histograms when the noise level is not at either extreme.\n\n**The experiment doesn't include the end-to-end evaluation from privacy cost to minority-group membership leakage.**\n\nWe added an end-to-end attack, now in Appendix E. The results in the end-to-end scenario, where the histogram is extracted from PATE with a privacy budget equal to 1.9 and then used for minority-group membership inference, mirror the results in Section 2 where the histogram is directly given to the attacker.\n", " Thank you for the feedback. 
First, while we made a deliberate effort to explain our attack and results in the simplest of terms, they include a full stack of technical contributions and conceptual insights, including\n1. Introduction of a novel attacker model, not considered before for PATE, and an accurate formulation of the attacker’s problem.\n2. The insight that histograms are sensitive information, and the observation that DP mechanisms can actually cause leakage of sensitive information.\n3. A carefully-devised optimization objective that required finding a closed-form expression and an approximation formula.\n4. An experimental design that adequately controls all PATE and DP parameters while offering clear and useful reporting metrics.\n\nSecond, we would like to note that the intuitive high-level structure of the attack itself (Monte Carlo estimation followed by an optimization procedure) is an advantage for attackers. We believe this only reinforces the impact of our conclusions.\n\nWe are happy to further discuss any questions or concerns the reviewer has regarding specific elements of our work such as the optimization procedure or experimental design. \n", " The paper first shows that the prediction histogram from several teachers will leak the minority-group membership of the test data. Then, an attack is proposed to show the possible leakage of the histogram under a modern DP approach, PATE. Finally, it concludes that DP doesn't prevent the leakage of minority-group membership. ### Strengths\n1. This paper makes an effort to show what differential privacy can/cannot protect. This big topic is meaningful to the differential privacy community and those who care about privacy in applications.\n2. The paper is written well.\n\n### Weaknesses\n1. The main conclusion of this paper is that differential privacy doesn't protect the minority-group membership of *test data*. This conclusion seems not very surprising to me as differential privacy is designed to protect the privacy of *training data*.\n2. The experiment result also shows that the leakage of the true histogram is more severe when the privacy cost is smaller. This result is counter-intuitive and the explanation is not very convincing to me. The estimation for $q_i$ from a sequence of i.i.d. samples (0/1) from a Bernoulli random variable likely wouldn't be more accurate when $q_i$ is larger, but depends on its variance $q_i(1-q_i)$. 
I would suggest the authors further check the difference between $Q(H)$ and $\\bar{q}$ and see if this difference is smaller when the injected noise is larger.\n3. The experiment doesn't include the end-to-end evaluation from privacy cost to minority-group membership leakage.\n\n--------After Rebuttal--------\n\nI thank the authors for their reply, which solves W2 and W3 and alleviates my concern in W1. 1. I am not clear why the introduced minority-group membership leakage problem is a privacy problem for training data. The motivation is a bit confusing.\n2. It is better to check the difference between $Q(H)$ and $\\bar{q}$ to support the explanation for better attack performance with higher noise levels. The authors list some limitations of their proposed attack algorithm. However, I am concerned whether the conclusion is meaningful to the community. PATE is a DP algorithm designed for training data and it likely will not protect the test data's information, e.g. the minority-group membership discussed in this paper.", " This paper studies the confidentiality of PATE from an adversarial perspective. This paper shows that attackers can recover the histograms of votes by querying the voting results multiple times. This paper shows an interesting result that adding larger Gaussian noise actually increases the risk to the confidentiality of the vote histogram.\n\n\n Pros: This paper proposed an attack method that can extract the vote histograms by querying the voting results multiple times. This paper uses an example to show that sensitive attributes can be leaked from the vote histograms. \n\nCons: The experimental settings are not clear. More experiments are needed to support the corresponding claims. \n\n\n 1. Does the model update frequently? Why cannot the model send the same result to the same query? \n\n2. Why do you use the Euclidean distance to measure the distance between $Q(\hat{H})$ and $\bar{q}$? Why not use the KL or Wasserstein metric to measure the distance?\n\n3. In Line 247, what's the $\epsilon$ and $\delta$ for the DP guarantee? In Line 250, what's the corresponding budget for $\sigma = 40$ etc?\n\n4. In line 248, what's the DP budget for each query? And what's the total budget for $10^4$ queries?\n\n5. In line 282, $(8, 10^{-5})$ is the DP budget for one query? \n\n6. In Figure 4, what's the corresponding DP budget for $\sigma=40$?\n\n7. The results shown in Figures 6 and 7 are counter-intuitive. What's the corresponding DP budget for each query? To verify the correctness, it's better to add large enough noise (i.e. the standard deviation of the Gaussian noise is close to inf.). Another baseline is that we add no noise to the results. Can we reach the same conclusion?\n\n8. Is it possible to perform some experiments in which the attacker steals the private attributes from the recovered histogram under some DP guarantee?\n See the Weaknesses and Questions part. ", " This work provides an in-depth investigation of a particularly relevant failure mode of Differential Privacy (DP) when training ML models using the PATE framework. In a nutshell, PATE takes the predictions of multiple independently trained models and outputs a prediction that is DP by first adding noise to the histogram and then taking the prediction with the most votes. \nThis paper describes a privacy attack based on querying the database multiple times with the same question. 
By building up a distribution of these (noisy) responses, they show it is possible to back out the underlying histogram that the noise was meant to protect. Strengths:\n- The paper is quite enjoyable to read -- it is well-written, easy to follow, logical, and concise. \n- It will likely spur much-needed discussions about the practical usefulness of DP in real applications. \n\nWeaknesses: I don't see any significant weakness. \n\nMinor suggestions: \n- Instead of using $M$ for number of samples and $m$ for number of classes, use different letters.\n- (Line 263-176) I would prefer if in the appendix there was an explanation of precisely how the privacy cost is computed so that the reader doesn't need to refer to Papernot et al. as much.\n\nTiny typo: (line 34) *a* attacker\n\n Honestly, I read everything carefully (including the appendix) and I found it quite clear, thank you! The authors address the limitations of their work. \n\nNot only do I have no qualms with this work, but I believe that it is an important contribution that could have a major positive societal impact (if the work is taken seriously by the privacy community). \nI think this paper should definitely be given a spotlight talk. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 3, 10 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "zZEB3uKQYIL", "mpctw0JHge", "2QLqrHX_Q4V", "sW5TcGhUv5N", "BUflsDNrtK", "ZjzBEZg0MyHx", "7Yi8p6td3UI", "apfcJLQKrs", "sW5TcGhUv5N", "JdZeisM-_WO", "BUflsDNrtK", "nips_2022_a3ymtHbL5p5", "nips_2022_a3ymtHbL5p5", "nips_2022_a3ymtHbL5p5", "nips_2022_a3ymtHbL5p5" ]
nips_2022_3vmKQUctNy
Washing The Unwashable : On The (Im)possibility of Fairwashing Detection
The use of black-box models (e.g., deep neural networks) in high-stakes decision-making systems, whose internal logic is complex, raises the need for providing explanations about their decisions. Model explanation techniques mitigate this problem by generating an interpretable and high-fidelity surrogate model (e.g., a logistic regressor or decision tree) to explain the logic of black-box models. In this work, we investigate the issue of fairwashing, in which model explanation techniques are manipulated to rationalize decisions taken by an unfair black-box model using deceptive surrogate models. More precisely, we theoretically characterize and analyze fairwashing, proving that this phenomenon is difficult to avoid due to an irreducible factor---the unfairness of the black-box model. Based on the theory developed, we propose a novel technique, called FRAUD-Detect (FaiRness AUDit Detection), to detect fairwashed models by measuring a divergence over subpopulation-wise fidelity measures of the interpretable model. We empirically demonstrate that this divergence is significantly larger in purposefully fairwashed interpretable models than in honest ones. Furthermore, we show that our detector is robust to an informed adversary trying to bypass our detector. The code implementing FRAUD-Detect is available at https://github.com/cleverhans-lab/FRAUD-Detect.
Accept
The reviewers were split about this paper: on one hand, they appreciated the motivation and the comprehensive experiments in the paper; on the other, they were concerned about the clarity of the paper and even worried about a potential flaw. I have decided to vote to accept given the clear and convincing author response. I urge the authors to take all of the reviewers' suggested changes into account (if they have not already done so). Once done, this paper will be a nice addition to the conference!
val
[ "XBoo6J6ju2v", "VGcKQ_53iM", "SAN-XrEa3ZD", "WXZA0P2N-xH", "IXwM19pJ77D", "nBSewmI3tEP", "Yootz_DbIsW", "IyDrOyPPKNzm", "RvaaiY86OmZB", "LEl_WYTIZg7", "P3_6qS6JpI4", "8Q9zgDstwLV", "CnNvY-PmgLS", "USsylUR6O0pZ", "XrA1G8PKrup", "A2txta97HE", "4Nud5BnYKNWn", "9tqLesugn61", "Zh34qCft8o2", "eeK51LFm3H", "EvJ4GagR51c" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer vBkK,\n\nThank you very much for the very positive feedback and insightful suggestions! We would be happy to answer any further questions you may have before the response period ends today.\n\nWarm regards,\nPaper3526 Authors", " Thank you very much for your time reading our responses, upgrading your score and very insightful suggestions. Indeed, for the theorem to hold, the surrogate model does not need to be an interpretable model even if in practice it is almost always the case in the context of post-hoc explanation as the objective is precisely to explain a black-box. We will incorporate your suggestion in the final version of our manuscript by changing the first sentence of our theorem to\n\n‘’Assume $\\hat{Y}_{B}$ are the black-box model $B(\\cdot)$ predictions, $\\hat{Y}_I$ are the predictions of a surrogate (interpretable) model $I(\\cdot)$ trained on $B(\\cdot)$ outputs and $A$ is a sensitive attribute:’’", " Thank you very much for your time reading our responses. Sure we will add these additional results in the appendix of the final version of our manuscript.", " Thank you for your time reading our responses and posting new comments. We respond to them inline.\n\n> **Accessibility to B.\nI'm still confident that the accessibility to B is problematic. First, the theoretical analyses shown in Section 3 are dependent on the accessibility to the model B. As mentioned in the initial comments, since we can calculate the unfairness directly because of the accessibility to B, the detectability is obvious. Second, if we can obtain the suing dataset labeled by B, we can calculate the unfairness from the labeled dataset, meaning that we can easily get the unfairness of the original model. Hence, the detectability is trivial even in this case.** \n\nRegarding the accessibility to B, we would like to highlight that we did not invented a new problem setting, rather we have followed the problem setting that was previously introduced in the seminal papers of the fairwashing literature. See for instance : \n\n- [ICML-2019] Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp: Fairwashing: the risk of rationalization. ICML 2019: 161-170.\n\n- [NeurIPS-2021] Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara: Characterizing the risk of fairwashing. NeurIPS 2021: 14822-14834.\n\nWe agree with the reviewer that we can compute the unfairness of the black-box model and interpretable model on the suing set. However, these unfairness values (or their differences) cannot be used to detect whether the interpretable model is adversarially manipulated to rationalize decisions taken by the black-box (i.e. fairwashing). This is due to the fact that the unfairness value alone provides little insight on which design choice (e.g., a fairness constraint was added to the optimization objective) introduced the bias (with respect to the sensitive attribute). In addition to this, we show both empirically and theoretically that the unfairness of the black-box model and the unfairness of its surrogate model are always different. We mentioned this in lines 58-61 of our introduction as:\n\n*“measuring the differences between the fairness of the black-box model and the explainable model cannot help to detect fairwashing as there are several design choices (including fairwashing) that could lead to different fairness, thus non-intentional fairwashing.”*\n \nHereafter, we illustrate why this is the case with an example. \n\n1. 
A company uses a black-box model to provide personalized services or make decisions on users’ queries, for example, deciding based on a user’s application whether they can get a loan. Here, the black-box model’s prediction would reflect the company’s decision in response to the loan application. \n\n2. Upon receiving the company’s decision, some users may deem that the decision they received is biased. For example, they may believe that it is unfair with respect to the “gender” sensitive attribute. As a consequence, users report this issue to an auditor by sending their data along with the decision they received. As multiple user complaints accumulate, a suing set is built (e.g., for a class action).\n\n3. We agree with the reviewer that the auditor can compute the black-box model’s unfairness on the suing set. However, the unfairness value alone provides little insight on which design choice introduced the bias (with respect to the sensitive attribute). For instance, in the scenario considered, evaluating the unfairness of predictions made on the suing set alone does not characterize the impact of “gender” on the predictions of the black-box model. To achieve this, the Explainable AI community has developed post-hoc explanation techniques as a way to assess the fairness of models by explaining the “reasoning” behind the predictions of the model as well as its reliance on sensitive attributes. Therefore, in the problem setup proposed by previous papers on fairwashing, the auditor asks for a model explanation from the company to determine whether the “gender” attribute has a direct impact on the model predictions. Our detector is non-cooperative, so it does not need further query access to the black-box model beyond what was already collected by users on the suing set (put another way, our detector does not need to perform any additional adaptive queries to the black-box beyond the initial suing set). \n\n4. In order to evade penalties from the auditor, the company trains a fairwashed interpretable model in which the impact of “gender” on the model prediction is fairwashed (i.e., decreased) and sends this model to the auditor. \n\n5. If the company successfully fairwashed their model, upon inspecting the interpretable model, the auditor gets a false impression that “gender” is not impacting the prediction. Instead, the auditor finds that other attributes such as “occupation” strongly affect the output. The company has successfully evaded accusations of unfairness with respect to the “gender” attribute. \n\nThis is precisely what we detect with our work.", " > **Sufficiency\nI understand the definition of \"completely eliminating fairwashing.\" However, the definition of sufficiency is still unclear. In the authors' analysis, γ involves terms other than the false- and true-positive rates. The authors claim in Line 209 that such other terms are constant. However, without a mathematically rigorous definition of sufficiency, I cannot confirm that these terms are actually independent of sufficiency.**\n\nRegarding the definition of sufficiency, what we mean by sufficiency is simply that $\tilde{\delta}$ and $\delta’$ are sufficient for identification of fairwashing (i.e., an auditor does not require a greater amount of information than $\tilde{\delta}$ and $\delta’$ to determine whether fairwashing has occurred). 
We note that as the other components of the fairwashing gap $\gamma$ are irreducible ($\Gamma_B (1 + F_1^{+} - T_1^{+})$), these cannot be helpful in the determination of fairwashing and thus are excluded from our sufficiency statement. $\tilde{\delta}$ and $\delta’$ rely on both the black-box and the interpretable model, while $\Pr\left[\hat{Y}_{B}=y \mid A=0\right]$, $y\in\{0,1\}$, is exclusively related to the black-box model. We will clarify in the paper by stating that $\tilde{\delta}$ and $\delta’$ are sufficient information from the interpretable model to determine fairwashing, as the other terms are constant from the perspective of the interpretable model. Since we are attempting to determine fairwashing of the interpretable model, we discuss sufficiency w.r.t. that model. We intend sufficiency in the sense of logical implication: certain values of $\tilde{\delta}$ and $\delta’$ alone determine fairwashing.\n\nGiven the above, we propose the following definition of sufficiency:\n\n**Definition.** *We define sufficiency in the context of the determination of fairwashing as the dependence of fairwashing on particular variables – i.e. the values taken by particular variables form a sufficient condition for the determination of fairwashing. In our case, if the values of $\tilde{\delta}$ and $\delta’$ exceed a threshold, this is a sufficient condition for fairwashing.*", " I thank the authors for your response.\n\n> Accessibility to $B$\n\nI'm still confident that the accessibility to $B$ is problematic. First, the theoretical analyses shown in Section 3 are dependent on the accessibility to the model $B$. As mentioned in the initial comments, since we can calculate the unfairness directly because of the accessibility to $B$, the detectability is obvious. Second, if we can obtain the suing dataset labeled by $B$, we can calculate the unfairness from the labeled dataset, meaning that we can easily get the unfairness of the original model. Hence, the detectability is trivial even in this case. \n\n> Sufficiency\n\nI understand the definition of \"completely eliminating fairwashing.\" However, the definition of sufficiency is still unclear. In the authors' analysis, $\gamma$ involves terms other than the false- and true-positive rates. The authors claim in Line 209 that such other terms are constant. However, without a mathematically rigorous definition of sufficiency, I cannot confirm that these terms are actually independent of sufficiency.\n\nDue to these concerns, I'd like to keep my score unchanged.", " Dear Reviewers! Thank you so much for your time on this paper so far.\n\nThe authors have written a detailed response to your concerns. How does this change your review?\n\nPlease engage with the authors in the way that you would like reviewers to engage your submitted papers: critically and open to changing your mind. Thank you Reviewers 53g2 and WPPT for your initial engagement!\n\nLooking forward to the discussion!\n", " I want to thank the authors for the response, which addresses my question. I hope the authors can include these additional results on equalized odds in the updated appendix (probably as an extra section).", " Thanks for getting back, and this helps clarify the paper considerably for me.\n\nI'm still a bit confused about the statement of Theorem 1. If it doesn't matter that I is an \"interpretable model\", why not just say a surrogate model trained to predict B's predictions? 
This is much more explicit than having terms like \"interpretable\" and \"explain\" in the theorem statement, which don't necessarily have concrete definitions, and you can note later on that the theorem applies when I is a very simple interpretable model. I think using an argument like this would help clarity here.\n\nI think the paper is in better shape overall, however, so I upgraded my score to weak accept.", " Dear reviewers,\n\nThank you very much for your time serving as reviewers for our paper! We would be happy to answer any further questions you may have before the response period ends on Aug 9.\n\n\n\nWarm regards,\n\nPaper3526 Authors\n \n\n", " We thank the reviewer for the comments, and for highlighting that our paper is well-written, well-motivated, and comprehensive in its experiments, as well as for recognising the importance of the evaluation of our method against an informed adversary. Below, we will answer your comment.\n\n> **The method currently only applies to one fairness definition. However, in practice, other definitions like equality of opportunity are also commonly used. For other fairness criteria such as equality of opportunity, how easy is it to adapt the current method?**\n\nWe thank the reviewer for the suggestion. As discussed in Section 7, our main reason for focusing on demographic parity in this work is that it does not assume knowledge or even the existence of true labels (see Section 2.1). Nevertheless, following the reviewer’s suggestion, we have evaluated our detector on other fairness metrics. In particular, we have considered the equalized odds fairness metric, which, in addition to constraining the true positive rates (as equality of opportunity would), also constrains the false positive rates across groups. We choose the Adult dataset (black-box model: DNNs and interpretable model: decision trees) as it displays an inherent equalized odds unfairness. The results below show that, similarly to our demographic parity results, we observe a relationship between the equalized odds gap of interpretable models and the associated $C_{KL}$ values: as fairwashing becomes more severe ($1-\epsilon \downarrow$), the equalized odds gap decreases while $C_{KL}$ increases.\n\n|$1-\epsilon$:|0.6|0.8|0.9|0.95|0.99|\n|-------|--------|--------|--------|--------|--------|\n|Eodds:|.077345|.07616|.06650|.04473|.02191|\n|$C_{KL}$:|.12300|.13329|.13883|.14118|.14131|\n", " > **Overall, I think this paper needs to much more rigorously provide the problem setting and explicit definitions of what is going on. 
Could you help clarify the problem setup a bit more, per the weaknesses section?**\n\nWe clarified our assumptions, problem and solution as follows:\n\nAssumptions:\nWe follow the standard assumptions made in the fairwashing literature:\n- The company uses a black-box model $B$; \n- Binary classification tasks, $B:\mathbb{R}^M \mapsto \{0,1\}$;\n- Binary-valued sensitive attributes, $A \in \{0,1\}$.\n\n\nProblem statement:\nBelow, we clarify the problem setup in four steps:\n1. The company trains $B$ on their training set $(X_{\text{Tr}}, Y_{\text{Tr}}, A)$, in which $A$ represents a sensitive attribute;\n2. The company makes decisions based on the prediction of $B$ about users' queries;\n3. After receiving decisions that they deem unfair, users from the suing set require an auditor to investigate the company regarding the effect of $A$ on the decisions;\n4. Given the black-box model $B(\cdot)$ and the suing set $X_{\text{sg}}, B(X_{\text{sg}})$, the company solves the fairwashing optimization (Definition 4), defined as learning an interpretable model $I(\cdot)$ from $B(\cdot)$ on $X_{sg}$ such that the interpretable model 1) has high fidelity with respect to the black-box model and 2) is less unfair than this black-box model.\n\n\nOur solution:\nTo distinguish between the fairwashed and honest interpretable model in a non-cooperative way, we propose to compute the divergence between subgroup confusion matrices $C_0$ and $C_1$ using the Kullback–Leibler (KL) divergence as $\mathcal{C}_{\text{KL}}=\text{KL}(C_0,C_1)$ (an illustrative sketch of this computation is given at the end of this response).\n\n> **Figure 3 is a bit hard to follow. Could you help clarify this figure? Where does the dotted line come from? Why are there multiple fidelity values for every Δ?**\n\nThe dotted line is the unfairness of the black-box model computed on the suing set data. Figure 3 displays the results of solving the constrained optimization problem in Equation 9. More precisely, the constraints in Equation 9 are related to the fidelity (defined based on loss) and $\Delta$. For each value of $\Delta$, we consider different values for fidelity because our objective is to assess the evasion power of fairwashing attacks on the Rashomon Set of interpretable models. This is designed to characterize the damage an adversary can achieve given a constraint on $C_{KL}$. Therefore, the multiple fidelity values for every $\Delta$ in Figure 3 show the performance of the detector when facing different high-fidelity interpretable models. \n
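As referenced above, here is a small illustrative sketch of this computation (the flattening and normalization of the confusion matrices into distributions, and the smoothing constant, are simplifications we make for illustration):

```python
import numpy as np

def subgroup_confusion(y_bb, y_int, a, group, eps=1e-6):
    # 2x2 confusion matrix of interpretable vs. black-box labels on one subgroup,
    # flattened and normalized into a distribution (eps avoids empty cells)
    cm = np.full((2, 2), eps)
    for b, i in zip(y_bb[a == group], y_int[a == group]):
        cm[b, i] += 1
    return cm.ravel() / cm.sum()

def fraud_detect(y_bb, y_int, a, delta):
    c0 = subgroup_confusion(y_bb, y_int, a, 0)
    c1 = subgroup_confusion(y_bb, y_int, a, 1)
    c_kl = float(np.sum(c0 * np.log(c0 / c1)))  # KL divergence between C_0 and C_1
    return c_kl > delta  # flag the interpretable model if the divergence exceeds delta
```

", " > **While the empirical results about the detection algorithm are pretty complete, it seems tricky to choose this threshold Δ that determines whether fairwashing is going on. It would be useful to provide more guidance here.**\n\nWe thank the reviewer for the insightful comment. The question of choosing a threshold is a pertinent one. 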
\n\n### Theory Solution\nIn Appendix I, we provide the following bounds for the KL divergence measure of FRAUD-detect:\n\n$\\kappa := \\operatorname{KL}\\left(\n \\begin{bmatrix} F^+_0 \\\\\n F^-_0 \n \\end{bmatrix}, \n \\begin{bmatrix} F^+_1 \\\\\n F^-_1 \n \\end{bmatrix}\n \\right)$ \n\nis bounded from below and above by:\n\n$\n F_0^+ \\log\\left(\\frac{F_0^+}{F_1^-} \\cdot \\frac{\\gamma_0}{\\gamma_1}\\right) \n \\leq \\kappa \\leq \n F_0^- \\log\\left(\\frac{\\gamma_0}{\\gamma_1} \\cdot \\frac{F_0^-}{F_1^+}\\right)\n$\n\nAny choice between \n$\\kappa_{min} := F_0^+ \\log\\left(\\frac{F_0^+}{F_1^-} \\cdot \\frac{\\gamma_0}{\\gamma_1}\\right)$ \nand \n$\\kappa_{max} := F_0^- \\log\\left(\\frac{\\gamma_0}{\\gamma_1} \\cdot \\frac{F_0^-}{F_1^+}\\right)$ is valid. \n$\\kappa = \\kappa_{min}$ is the most conservative choice which increases the risk of false discovery (of fairwashing), while $\\kappa = \\kappa_{max}$ is a permissive choice that minimizes the risk of false discovery at the expense of increase in the false negatives (not detecting fairwashing). A balanced choice for threshold could be the intermediate point:\n\n$\\bar{\\kappa}=\\frac{F_{0}}{2}\\left(\\Delta_{l}\\left(F^{+}\\right)+\\Delta_{l}\\left(F^{-}\\right)\\right)+\\frac{\\gamma_{0}}{2 \\Gamma_{B}} \\log \\left(\\frac{\\gamma_{0}}{\\Gamma_{B}}\\right)-\\frac{1}{2} \\frac{\\gamma_{0}}{\\gamma_{1}} \\frac{1}{\\Delta\\left(F^{-}\\right)}$\n\nwhere $\\Delta(R) = R_0/R_1$ and $\\Delta_l(R) = \\log \\Delta(R)$. The above point is accurate up to $O\\left(\\frac{\\Gamma_B^2}{\\gamma_1^2}\\left(\\frac{F^-_1}{F^-_0}\\right)^2\\right)$.\n\n### Empirical Solution\nWe note that the threshold is a hyper-parameter. Assuming access to additional (possibly public) data $\\mathcal{D}$, one can calibrate the threshold using a separate “calibration” dataset: \n\n1. Train a state-of-the-art black-box model using $D_{train} \\sim \\mathcal{D}$\n2.Train an explainable model $M_{honest}$ on $D_{train} \\sim \\mathcal{D}$ without using any additional constraints on the gap between the black-box and interpretable model\n3. Train an explainable model $M_{fairwashed}$ on $D_{train} \\sim \\mathcal{D}$ using the Informed Adversary Optimization of Definition 5 in order to minimize the fairness gap\n4. Measure the KL divergence of $D_{sg} \\sim \\mathcal{D}$ on $M_{honest}$ and $M_{fairwashed}$ to form $X_{honest}$ and $X_{fairwashed}$ datasets. Assign labels $y = 1$ to $X_{fairwashed}$ and $y = 0$ to $x_{honest}$. \n5. $X= X_{honest} \\cup X_{fairwashed}$, and $Y = Y_{honest} \\cup Y_{fairwashed}$ form a univariate regression model with the following loss function $\\ell$:\n$\\ell(x, y, T)=\\sum_{i} \\frac{1}{2}\\mathbb{I}\\left(x_i \\leq T, y=1\\right)+\\frac{1}{2}\\mathbb{I}\\left(x_i>T, y=0\\right)$. And the optimal threshold $T^* = \\arg\\min_T \\ell(x, y, T)$.\nNote that this assumes equal weights for type I and type II error of the detector. Finally, we note the state-of-the-art in calibration is [1]. However, a SOTA method is likely not necessary because if there are enough samples for calibration, central limit theorem ensures both $X_{fairwashed}$ and $X_{honest}$ are normally distributed at which point the optimal threshold is simply $T^* = \\frac{1}{2}(\\mu_{fairwashed} +\\mu_{fairwashed})$.\n\n\n[1] R. Sahoo, A. Chen, S. Zhao, and S. Ermon, “Reliable Decisions with Threshold Calibration,” p. 14.\n", " We thank the reviewer for the comments and for highlighting that our paper is interesting and comprehensive in its experiments. 
\n\n\n> **The initial problem setup is not defined explicitly enough. It seems like the setting is a classifier with a sensitive attribute, but the specifics here are lacking. For example, in equation (1), is this a multiclass or binary class setting? Is the protected attribute discrete or continuous? If it's continuous, is it allowed to take on multiple values? These details are a bit hazy right now, making it hard to figure out where the claims apply exactly.**\n\nFollowing the standard assumptions made in the fairwashing literature ([ICML-2019] and [NeurIPS-2021]), we consider binary classification tasks and binary-valued sensitive attributes. We clarified and formalized this in the revised version of the paper in Section 2.2, in which we introduce the black-box model and sensitive attributes. We have uploaded the revised version of the paper.\n\n- [ICML-2019] Ulrich Aïvodji, Hiromi Arai, Olivier Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp: Fairwashing: the risk of rationalization. ICML 2019: 161-170.\n\n- [NeurIPS-2021] Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara: Characterizing the risk of fairwashing. NeurIPS 2021: 14822-14834.\n\nOur analysis can be extended to multi-class and multi-valued sensitive attributes but, as written in Section 7, we leave this as future work, as it involves multi-class calibration results that are outside the scope of the current paper.\n\n> **What are the specifications of the interpretable model? Does this model mirror the predictions across multiple classes or just a single class? Could you clarify what you mean by interpretable model? The construction of interpretable model I is a bit hard to follow as well, and I is introduced as just \"a simple interpretable model I\". I think what is exactly meant by the interpretable model needs to be stated more explicitly, because right now, it is a bit hazy and hard to follow**\n> **This makes it difficult to follow the statement of Theorem 1, where the theorem specifies an interpretable model, without this really being defined.**\n\nWe thank the reviewer for their attention to detail. Indeed, Theorem 1 is quite broad because it is model-agnostic. Consider two machine learning models trained on the same dataset. One model is deemed to be more “interpretable.” This could be because of its tree-like construction that facilitates observing a decision boundary, or it could be a linear model with weights that signify the importance of each feature. The other model, deemed the black-box model, does not necessarily enjoy these characteristics—but instead may be more performant. \n\nRegardless of the particular architecture of the models, Theorem 1 says that as long as we can characterize the false positive rate and true positive rate of the interpretable model’s decisions (with the reference points defined as black-box model decisions), we can fully characterize the fairwashing gap provided we know the unfairness of the black-box model. Furthermore, the theorem shows that this gap cannot be reduced to zero and provides the lower bound $\gamma = \Gamma_B (1 + F^+_1 - T^+_1)$. \n", " **This paper lacks the comparison with the following studies:\n[AAAI-2020] K. Fukuchi, et al. \"Faking fairness via stealthily biased sampling.\" AAAI2020.\n[AIES-2020] D. Slack, et al. \"Fooling lime and shap: Adversarial attacks on post hoc explanation methods.\" AIES2020.\nBoth papers investigate the risk of deceiving fairness. 
Remarkably, the first paper discusses the detectability of malicious modification for deceiving unfairness under situations where the detector can access the labeled benchmark dataset.** \n\nThank you for these suggestions. After verification, both of the suggested papers indeed share key similarities with our own in the sense that they consider deceptions of fairness. These papers also further highlight the importance of conducting further research in fairwashing, model auditing, as well as model explanation. We added the following discussion in the Related Work (Section 2.1) and Appendix B.\n\nTo summarize, the first paper [AAAI-2020], similar to us, tries to detect fairwashing but with a different setting. The second paper [AIES-2020] shows an alternate method of fairwashing that evades model explanation techniques in a setting that is comparable to ours but does not attempt to detect the fairwashing. Next, we highlight some of the differences in settings, assumptions and solutions:\n\n\n- Setting: We rely on explainer models while they use a published data subset for fairness auditing. In particular, they reveal some subset of the training data and their predictions. We follow the setting considered in the literature introducing fairwashing ([ICML-2019] and [NeurIPS-2021]). The explanations in [AIES-2020] require input perturbations, while ours require the model owner to provide an interpretable model.\n\n\n- Assumptions: [AIES-2020] assume that fairness auditing is performed via model-agnostic local explanations (e.g., LIME and SHAP). Both [AIES-2020] and [AAAI-2020] assume an ideal detector with knowledge of the underlying training distribution of the model, and query access to the black-box model. However, we only assume access to black-box model predictions on the suing set. We stress that this dataset is available before any formal audit takes place (in the form of asking for model explanations). This does _not_ constitute query access to the black-box model; in fact it is only dataset access, which here is the predictions on the suing set. \n\n\n- Solutions: They attempt to detect fairwashing using a Kolmogorov-Smirnov (KS) test over the underlying and company-provided subset distributions to determine if the data subset was honestly provided. More precisely, they show that detection is difficult when fairwashing is conducted by minimizing the Wasserstein distance between the distributions while subject to fairness; this optimization also minimizes the upper bound of the advantage (i.e., distinguishability) of the KS test. This differs from our setting, in which an informed malicious company must optimize the fairness subject to both fidelity (loss) and the detection threshold.\n\nIn [AIES-2020], explanation methods like LIME and SHAP perturb inputs to gauge feature relevance, inadvertently querying with synthetic data that may be detected with out-of-distribution detection. Queries determined to originate from explainers are fed to a fair model, whereas in-distribution points are given to the biased model. In contrast, our detection method is non-cooperative (we require no additional information from the black-box model as the auditor already has the predictions on the suing set) and therefore does not rely on input perturbations.", " We thank the reviewer for the comments and for highlighting that we are the first to propose a detection method for fairwashing. \n\n\n> **The problem setting looks weird. The authors assume the detector can query the original model B in a black-box way. 
However, if we can query B, we can also calculate the unfairness score of B, such as the difference of the conditional probabilities in demographic parity, by which we can confirm the fairness of the original model B. Hence, under the assumption of having accessibility to B, the defender easily detects the unfairness of the original model B. In other words, there is no risk of fairwashing in this context. While this paper adequately validates the fact that their method can detect fairwashing in this context, the detectability is obvious due to accessibility to B.**\n\nPlease note that Algorithm 1 never accesses the black-box model after the first query step, which is solely performed on the suing set samples. We stress that this dataset is available before any formal audit takes place. This does _not_ constitute query access to the black-box model; in fact, it is only dataset access, which here is the predictions on the suing set. Under this problem setting, there is a risk of fairwashing.\n\nWe acknowledge a mistake in Algorithm 1’s input that mentioned query access (despite not using it on any sample other than the suing set samples). This has been fixed in the revised submission, which has been uploaded. We thank the reviewer for the insightful comment. \n\n> **The statement of Theorem 1 is not rigorously defined. What is meant by \"completely eliminating fairwashing\"? Also, what is meant by \"sufficient\"?** \n\nBy “completely eliminating fairwashing” we mean to reduce the fairwashing gap $\gamma$ (defined in Equation 3) to zero. We discuss this phenomenon in the Irreducibility paragraph (line 202 and Equation 6), where we show that $\gamma = \Gamma_B (1 + F^+_1 - T^+_1) > 0$. We have adapted the revision text to highlight $\gamma > 0$.\n\nBy sufficiency of measuring false-positive and true-positive rates of the interpretable model w.r.t. the black-box model for detecting fairwashing, we mean that only these two rates (plus the black-box model unfairness, which is not a characteristic of the detector) fully characterize the fairwashing gap. That is, measuring any other metric (for instance, to create a new detector) is unnecessary. We show this in the Sufficiency paragraph (line 209). \n\n\n", " We thank the reviewer for the comments and for highlighting the importance of our theoretical and empirical analysis. \n\n> **The logic of the paper is clear, but the expressions in the paper are kind of obscure and not easy for readers to follow. The motivation and the algorithm in the proposed detection model are simple and clear, but the authors use a lot of long sentences to describe them, which makes simple things complicated. It would be better if the authors could use more short and succinct sentences, which will improve the presentation and readability of the paper.**\n\nThanks for this suggestion. We have implemented it in the revised version of the paper, in particular by improving the presentation and splitting long sentences into short and succinct sentences. The revised version of the manuscript has been uploaded. \n\n\n> **The hyperparameter $\delta$ (the fairwashing detection threshold) is a key parameter in this proposed detection algorithm. More discussion on how to choose this hyperparameter in practical use is needed.\nHow should we choose the hyperparameter $\delta$ in different scenarios (e.g. different tasks, different data)?**\n\nWe thank the reviewer for the insightful comment. The question of choosing a threshold is a pertinent one. 
We provide two answers, one based on the theory we have presented in Section 3, and an empirical solution. \n\n### Theory Solution\nIn Appendix I, we provide the following bounds for the KL divergence measure of FRAUD-Detect:\n\n$\kappa := \operatorname{KL}\left(\n \begin{bmatrix} F^+_0 \\\\\n F^-_0 \n \end{bmatrix}, \n \begin{bmatrix} F^+_1 \\\\\n F^-_1 \n \end{bmatrix}\n \right)$ \n\nis bounded from below and above by:\n\n$\n F_0^+ \log\left(\frac{F_0^+}{F_1^-} \cdot \frac{\gamma_0}{\gamma_1}\right) \n \leq \kappa \leq \n F_0^- \log\left(\frac{\gamma_0}{\gamma_1} \cdot \frac{F_0^-}{F_1^+}\right)\n$\n\nAny choice between \n$\kappa_{min} := F_0^+ \log\left(\frac{F_0^+}{F_1^-} \cdot \frac{\gamma_0}{\gamma_1}\right)$ \nand \n$\kappa_{max} := F_0^- \log\left(\frac{\gamma_0}{\gamma_1} \cdot \frac{F_0^-}{F_1^+}\right)$ is valid. \n$\kappa = \kappa_{min}$ is the most conservative choice, which increases the risk of false discovery (of fairwashing), while $\kappa = \kappa_{max}$ is a permissive choice that minimizes the risk of false discovery at the expense of an increase in false negatives (not detecting fairwashing). A balanced choice for the threshold could be the intermediate point:\n\n$\bar{\kappa}=\frac{F_{0}}{2}\left(\Delta_{l}\left(F^{+}\right)+\Delta_{l}\left(F^{-}\right)\right)+\frac{\gamma_{0}}{2 \Gamma_{B}} \log \left(\frac{\gamma_{0}}{\Gamma_{B}}\right)-\frac{1}{2} \frac{\gamma_{0}}{\gamma_{1}} \frac{1}{\Delta\left(F^{-}\right)}$\n\nwhere $\Delta(R) = R_0/R_1$ and $\Delta_l(R) = \log \Delta(R)$. The above point is accurate up to $O\left(\frac{\Gamma_B^2}{\gamma_1^2}\left(\frac{F^-_1}{F^-_0}\right)^2\right)$.\n\n### Empirical Solution\nWe note that the threshold is a hyper-parameter. Assuming access to additional (possibly public) data $\mathcal{D}$, one can calibrate the threshold using a separate “calibration” dataset: \n\n1. Train a state-of-the-art black-box model using $D_{train} \sim \mathcal{D}$\n2. Train an explainable model $M_{honest}$ on $D_{train} \sim \mathcal{D}$ without using any additional constraints on the gap between the black-box and interpretable model\n3. Train an explainable model $M_{fairwashed}$ on $D_{train} \sim \mathcal{D}$ using the Informed Adversary Optimization of Definition 5 in order to minimize the fairness gap\n4. Measure the KL divergence of $D_{sg} \sim \mathcal{D}$ on $M_{honest}$ and $M_{fairwashed}$ to form the $X_{honest}$ and $X_{fairwashed}$ datasets. Assign labels $y = 1$ to $X_{fairwashed}$ and $y = 0$ to $X_{honest}$. \n5. Let $X = X_{honest} \cup X_{fairwashed}$ and $Y = Y_{honest} \cup Y_{fairwashed}$, and fit a univariate threshold model with the following loss function $\ell$:\n$\ell(x, y, T)=\sum_{i} \frac{1}{2}\mathbb{I}\left(x_i \leq T, y_i=1\right)+\frac{1}{2}\mathbb{I}\left(x_i>T, y_i=0\right)$. The optimal threshold is then $T^* = \arg\min_T \ell(x, y, T)$.\nNote that this assumes equal weights for the type I and type II errors of the detector. Finally, we note that the state of the art in calibration is [1]. However, a SOTA method is likely not necessary because, if there are enough samples for calibration, the central limit theorem ensures that both $X_{fairwashed}$ and $X_{honest}$ are approximately normally distributed, at which point the optimal threshold is simply $T^* = \frac{1}{2}(\mu_{fairwashed} + \mu_{honest})$.\n\n\n[1] R. Sahoo, A. Chen, S. Zhao, and S. Ermon, “Reliable Decisions with Threshold Calibration,” p. 14.
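To make the calibration loop concrete, below is a minimal sketch in Python (hypothetical function names; the per-subpopulation rates and the honest/fairwashed KL samples are assumed to be computed as in steps 1-4 above, and the search implements the loss $\ell$ from step 5 literally):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL divergence between two discrete rate vectors, e.g.,
    # p = [F0+, F0-] and q = [F1+, F1-] for the two subpopulations.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def calibrate_threshold(kl_honest, kl_fairwashed):
    # Step 5: pick the threshold T minimizing the balanced 0-1 loss
    # over the labeled KL scores (y = 0 honest, y = 1 fairwashed).
    x = np.concatenate([kl_honest, kl_fairwashed])
    y = np.concatenate([np.zeros(len(kl_honest)),
                        np.ones(len(kl_fairwashed))])

    def loss(T):
        # 1/2 * #(fairwashed at or below T) + 1/2 * #(honest above T)
        return (0.5 * np.sum((x <= T) & (y == 1))
                + 0.5 * np.sum((x > T) & (y == 0)))

    return min(np.sort(x), key=loss)
```

As a sanity check, with enough calibration samples this search returns a value close to the midpoint $\frac{1}{2}(\mu_{fairwashed} + \mu_{honest})$ mentioned above.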
\n", " This paper focuses on the problem of fairwashing detection in black-box models. To be specific, fairwashing is a technique used to fool model explanation methods to produce deceptive outputs for unfair models. In this paper, the authors argue the importance of studying the fairwashing issue, propose a novel framework FRAUD-Detect to detect fairwashed models, and present empirical results. In addition, theoretical analysis is provided to show that fairwashing is unlikely to be avoided, and the robustness of the proposed detection method against an informed adversary is also discussed. Strengths:\n1. This paper investigates a novel and important problem. Model explanation is a popular and promising approach for auditing a model with critical usage to avoid unfairness in automated decision-making. But such model explanation approaches are also vulnerable to being attacked, which has been overlooked. The problem is also challenging, and a thorough study of it is needed.\n\n2. The paper provides a simple but effective method for detecting dishonest interpretation models that take advantage of the fairwashing technique.\n\n3. A theoretical conclusion is provided to show that fairwashing is unlikely to be completely avoided. Empirical results are provided to validate the effectiveness of the proposed detection method and also its robustness against an informed attack.\n\nWeaknesses:\n1. The logic of the paper is clear, but the expressions in the paper are kind of obscure and not easy for readers to follow. The motivation and the algorithm in the proposed detection model are simple and clear, but the authors use a lot of long sentences to describe them, which makes simple things complicated. It would be better if the authors could use more short and succinct sentences, which will improve the presentation and readability of the paper.\n\n2. The hyperparameter $\delta$ (the fairwashing detection threshold) is a key parameter in this proposed detection algorithm. More discussion on how to choose this hyperparameter in practical use is needed. 1. How should we choose the hyperparameter $\delta$ in different scenarios (e.g. different tasks, different data)? No potential negative societal impact.", " The authors deal with the problem of detecting fairwashing, a maliciously generated explanation such that the model looks fair even if it is actually unfair. They investigated fairwashing in the global explanation setting and found that elimination of fairwashing is difficult, and that fairwashing can be detected using false- and true-positive rates. Their proposed detection algorithm for fairwashing checks the deviation of the false/true positive/negative rates between the original and explanation models using KL divergence. To evaluate the robustness of their algorithm, they introduce a fairwashing algorithm with a mechanism evading the proposed detection method. The empirical evaluations demonstrate the high detectability and robustness of their proposed detection algorithm. (Strengths)\n- Fairwashing is an urgent risk. Hence, the detection of fairwashing is well motivated.\n- To the best of my knowledge, this is the first work to propose a detection method for fairwashing.\n\n(Weaknesses)\n- The authors make an impractical assumption about the defender's knowledge.\n- Lacks some important references.\n\nThe authors deal with the interesting and demanding task of detecting fairwashing, and most of the results are technically sound. 
However, I found a fundamental flaw in the problem setting: the authors consider a situation in which there is no risk of fairwashing. Due to such a considerable flaw, I recommend the rejection of this paper.\n\nThe problem setting looks weird. The authors assume the detector can query the original model $B$ in a black-box way. However, if we can query $B$, we can also calculate the unfairness score of $B$, such as the difference of the conditional probabilities in demographic parity, by which we can confirm the fairness of the original model $B$. Hence, under the assumption of having accessibility to $B$, the defender easily detects the unfairness of the original model $B$. In other words, there is no risk of fairwashing in this context. While this paper adequately validates the fact that their method can detect fairwashing in this context, the detectability is obvious due to accessibility to $B$.\n\nThe statement of Theorem 1 is not rigorously defined. What is meant by \"completely eliminating fairwashing\"? Also, what is meant by \"sufficient\"? \n\nThis paper lacks the comparison with the following studies:\n- K. Fukuchi, et al. \"Faking fairness via stealthily biased sampling.\" AAAI2020.\n- D. Slack, et al. \"Fooling lime and shap: Adversarial attacks on post hoc explanation methods.\" AIES2020.\n\nBoth papers investigate the risk of deceiving fairness. Remarkably, the first paper discusses the detectability of malicious modification for deceiving unfairness under situations where the detector can access the labeled benchmark dataset. See strengths and weaknesses. The accessibility to $B$ makes the situation one with no fairwashing risk. Hence, I recommend the authors reconsider the setting so that it is meaningful.", " This paper presents a theoretical analysis of fairwashing and demonstrates that completely reducing fairwashing in certain cases is not possible. In addition, they present a simple technique that looks at the KL divergence between surrogate model confusion matrices to detect fairwashing. They evaluate their technique and find it helps detect fairwashing issues. Strengths:\n- Interesting problem \n- The empirical evaluation of FRAUD-Detect is comprehensive, considering many different models and settings.\n\nWeaknesses:\nThe main weakness is that the theory is missing explicitness in several places and there are gaps in what the exact setting is. Specifically, here are the main gaps I see:\n- The initial problem setup is not defined explicitly enough. It seems like the setting is a classifier with a sensitive attribute, but the specifics here are lacking. For example, in equation (1), is this a multiclass or binary class setting? Is the protected attribute discrete or continuous? If it's continuous, is it allowed to take on multiple values? These details are a bit hazy right now, making it hard to figure out where the claims apply exactly.\n- What are the specifications of the interpretable model? Does this model mirror the predictions across multiple classes or just a single class?\n- The construction of interpretable model $I$ is a bit hard to follow as well, and $I$ is introduced as just \"a simple interpretable model I\". I think what is exactly meant by the interpretable model needs to be stated more explicitly, because right now, it is a bit hazy and hard to follow. This makes it difficult to follow the statement of Theorem 1, where the theorem specifies an interpretable model, without this really being defined. 
\n\nWhile the empirical results about the detection algorithm are pretty complete, it seems tricky to choose this threshold $\Delta$ that determines whether fairwashing is going on. It would be useful to provide more guidance here.\n\nOverall, I think this paper needs to present the problem setting and explicit definitions of what is going on much more rigorously.\n\n - Figure 3 is a bit hard to follow. Could you help clarify this figure? Where does the dotted line come from? Why are there multiple fidelity values for every $\Delta$?\n- Could you help clarify the problem setup a bit more, per the weaknesses section?\n- Could you clarify what you mean by interpretable model?\n The authors have done a sufficient job.", " The paper characterizes the problem of fairwashing for interpretable models. It proposes a fairwashed model detection method based on measuring the difference over subpopulations in true-positive and false-positive rates of the interpretable model w.r.t. the black-box model. It shows the method’s sufficiency theoretically and effectiveness empirically, even under the assumption of an informed attacker. The evaluation is conducted with two interpretable models and four classification models on three widely used fairness datasets. ## Strengths\n1. The problem is well-motivated. Fairwashing can be a practical threat in reality.\n\n2. The theoretical analyses look correct.\n\n3. The proposed method is comprehensively evaluated and shows decent performance for practical usage. In particular, the setting of an informed adversary strengthens the generalizability of the proposed method.\n\n4. The paper is well-written, well-organized, and easy to follow.\n\n## Weaknesses\n1. The method currently only applies to one fairness definition. However, in practice, other definitions like equality of opportunity are also commonly used.\n 1. For other fairness criteria such as equality of opportunity, how easy is it to adapt the current method? They are well addressed.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "4Nud5BnYKNWn", "RvaaiY86OmZB", "IyDrOyPPKNzm", "nBSewmI3tEP", "nBSewmI3tEP", "XrA1G8PKrup", "nips_2022_3vmKQUctNy", "P3_6qS6JpI4", "8Q9zgDstwLV", "nips_2022_3vmKQUctNy", "EvJ4GagR51c", "CnNvY-PmgLS", "USsylUR6O0pZ", "eeK51LFm3H", "A2txta97HE", "Zh34qCft8o2", "9tqLesugn61", "nips_2022_3vmKQUctNy", "nips_2022_3vmKQUctNy", "nips_2022_3vmKQUctNy", "nips_2022_3vmKQUctNy" ]
nips_2022_yts7fLpWY9G
Graph Neural Networks with Adaptive Readouts
An effective aggregation of node features into a graph-level representation via readout functions is an essential step in numerous learning tasks involving graph neural networks. Typically, readouts are simple and non-adaptive functions designed such that the resulting hypothesis space is permutation invariant. Prior work on deep sets indicates that such readouts might require complex node embeddings that can be difficult to learn via standard neighborhood aggregation schemes. Motivated by this, we investigate the potential of adaptive readouts given by neural networks that do not necessarily give rise to permutation invariant hypothesis spaces. We argue that in some problems such as binding affinity prediction where molecules are typically presented in a canonical form it might be possible to relax the constraints on permutation invariance of the hypothesis space and learn a more effective model of the affinity by employing an adaptive readout function. Our empirical results demonstrate the effectiveness of neural readouts on more than 40 datasets spanning different domains and graph characteristics. Moreover, we observe a consistent improvement over standard readouts (i.e., sum, max, and mean) relative to the number of neighborhood aggregation iterations and different convolutional operators.
Accept
The paper proposes the use of an adaptive readout function in GNNs together with extensive empirical work to support it. The reviewers all found the paper interesting and are generally in favor of accepting it (with one marking strong accept with high confidence). Therefore, I recommend the paper be accepted, and encourage the authors to take into account the reviewer comments (as they have also indicated in their responses) when preparing the camera ready version. In particular, I would like to encourage them to use the extra page given there to reconsider the split of materials between main paper and appendix.
test
[ "MVrdCFdFrdG", "umsyl8Ru2R", "J6YSlIU9Md", "bNoevCP37WR", "SnqNyy0D1As", "Lgq7x6mnuF9", "bpp5gr2Xtlw", "4Eb_gEmjl7d", "UTGVdySO2pq", "N-tXl6UDFV3", "hzBRRGC9RDi", "tGx7NKW-v0G", "ZPqibJeirF2" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the clarification on \"takeaway messages\". We will be expanding the existing discussion on readout functions in the camera-ready version. While our experiments are detailed, they do not cover all the possible factors that might influence a decision on the readout type. We will, however, endeavour to make recommendations and hypotheses taking into account all of our experiments. It is very encouraging that our empirical results support the use of adaptive readouts across different domains (e.g., set transformers are a convenient choice that retains permutation invariance and should be at the top of the list of considered readouts). Other aspects relevant to problems involving molecular graphs are the specifics of the target property being modelled. As outlined in our discussion, some properties are influenced by a part of the molecule and standard readouts might not be the best choice for such problems (in fact, the MLP can be a better adaptive choice than the set transformer readout). The selected course of action should, thus, be informed by the target property and task type (regression or classification), dataset size, graph structures characteristic of the problem, etc. We will ensure that our recommendations are supported with extensive empirical results and examples while covering as many of the factors as possible.", " Forget to mention this in the previous comment, and I am not sure whether you get a notification for this: in light of the rebuttal, I am happy to raise my score. If you could briefly comment on the 'takeaway messages' part, I'd appreciate it.", " Thanks for the thorough answers! I appreciate them and can state that this helped in clarifying matters for me.\n\nConcerning your question on the 'takeaway' messages: your paper challenges the status quo by showing that readout functions do not necessarily (!) have to be `sum` or `max`. It would now be interesting if you could make actionable recommendations here, for instance \"For molecular graphs, we recommend X\". This, I think, would help in connecting the paper and demonstrate that it establishes new community standards.", " ### Question 3 (MLP vs neighbourhood aggregations)\nThe scale of the y-axis in Figure 2 fails to fully illustrate the improvement achieved with MLP readouts as a result of increasing the number of neighbourhood aggregations (the readout hyperparameters are kept fixed). As an example, the uplifts (R-squared) for the MLP readout and GAT layers are as follows: 2 layers = 0.328, 3 layers = 0.330, 4 layers = 0.335, 5 layers = 0.365, 6 layers = 0.375.\n\nWhile the improvement is modest, the experiment shows that this type of readout function can benefit from more iterations of neighbourhood aggregation. Moreover, the MLP builds a representation while accounting for all node representations and as the experiment suggests it might be possible to learn a good graph representation with fewer iterations of neighbourhood aggregations (please also see Appendix L for a related discussion on expressivity). We have also discussed some chemical properties that might be more amenable to MLP readouts and where insisting on permutation invariance via standard pooling operators might be sub-optimal (see Section 4, lines 326-350). 
\n\nPlease also find more details on the parameter budget and adaptive readouts in the response to Reviewer 1 (Question 2).", " ### Transferability & Question 4\nTransferability with performance guarantees is certainly an interesting and relevant property of GNNs. In our current evaluation, we did not consider such experiments due to conceptual and practical differences. Conceptually, one of the main motivating factors for studying transferability is a scenario where the graph (network) size changes over time. This is typically encountered in recommender systems or knowledge graphs which are not considered in our paper. Regarding the graph-to-graph transferability, there are domain-specific particularities that need to be considered. For example, learning on small graphs and transferring to larger graphs is not often required in chemical tasks, as most chemical regression benchmarks and real-world applications use only very small organic molecules (for example, <30 atoms or nodes for QM9). There is also the requirement of selectivity, where an active molecule should bind only to a selected target and possible issues can arise with transferring a notion of similarity over the space of molecules that encodes activity to a completely different target. Moreover, there have been reports where the impact of learning (with GNNs) certain transferable chemical substructures (scaffolds) was not beneficial [**]. Practically, transferability has been most often studied with node-level tasks, such as in ref. [a] indicated by the reviewer, while here we focus on graph-level predictions. Overall, we believe that studying the influence of adaptive readouts on transferability is interesting for future studies, and we will ensure that it is discussed in our updated manuscript.\n\n[**] Sakai, M., Nagayasu, K., Shibui, N. et al. Prediction of pharmacological activities from chemical structures with graph convolutional neural networks. Sci Rep 11, 525 (2021).\n\n### Interpretability & Weakness 2\nWe considered that establishing the performance uplifts of the proposed methods is one of the main priorities and thus we dedicated a large amount of space to this endeavour. At the same time, we appreciate the importance of interpretability, especially for sensitive tasks such as drug bio-affinity prediction. So far, we have taken some first steps towards analysing the interpretability. Firstly, we examined the learnt graph representations of adaptive readouts (Appendix L) and concluded that they have the potential to be more expressive, as they explore a larger hypothesis space. \nSecondly, we visualised the graph representations using dimensionality reduction (UMAP) in 3 dimensions on public bio-affinity molecular datasets (Appendix M), which suggested that the adaptive readouts have a structuring effect on the graph latent space, possibly making it more amenable to clustering and other downstream tasks. Future studies might focus on exploring different aspects of the adaptive readouts. For example, as neural networks, they can be subjected to an analysis of the learnt weights or attention scores.\n\n### Question 1 (MCC)\nPlease see [*] and the response to Reviewer 1 (Question 1) for more details on MCC and reasons for opting for this performance metric.\n\n[*] D Chicco and G Jurman (BMC Genomics, 2020). 
The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation.\n\n### Question 2 (SMILES)\nSMILES strings are a compact way of representing a molecule and an alternative input format to a graph. For more details, please see [***]. In our paper, we use them to extract the graph connectivity and atom information (all discussed models are GNNs using graph representations). We will include references and a better description of the bio-affinity related tasks considered here (e.g., internal energy, enthalpy, highest and lowest occupied molecular orbital, binding affinity relative to protein targets, etc.).\n\n[***] Weininger D (1988). \"SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules\". Journal of Chemical Information and Computer Sciences. 28 (1): 31–6.", " ### Limitations (SortPool, DiffPool, etc.)\nWe thank the reviewer for the additional references, which will be included in the next version of the paper. Tables 3 and 4 below (from this response; results are split into two tables to increase readability, following the format presented in the paper) summarise the results of our experiment with SortPool that was done during the rebuttal period, with two-layer GNNs on QM9. Our results indicate that the approach is not competitive with the adaptive readouts considered here. For some of the other approaches, it was challenging to secure bug-free code or an implementation compatible with the PyTorch Geometric variation of the graph convolutional layers (e.g., DiffPool requires a dense adjacency matrix which is incompatible with the considered GNN layers).\n\n**Table 3.** SortPool results for GCN, GAT, and GATv2.\n\\begin{array}{|c|c|c|c|c|c|c|}\n \\hline\n & \\text{GCN} & \\text{GCN} & \\text{GAT} & \\text{GAT} & \\text{GATv2} & \\text{GATv2} \\\\\\\\ \\hline\n \\text{Agg.} & \\text{MAE} & \\text{R}^2 & \\text{MAE} & \\text{R}^2 & \\text{MAE} & \\text{R}^2 \\\\\\\\ \\hline\n \\text{SortPool} & 0.75 \\pm 0.01 & 0.07 \\pm 0.02 & 0.74 \\pm 0.01 & 0.09 \\pm 0.02 & 0.73 \\pm 0.01 & 0.12 \\pm 0.03 \\\\\\\\ \\hline\n\\end{array}\n\n**Table 4.** SortPool results for GIN and PNA.\n\\begin{array}{|c|c|c|c|c|}\n \\hline\n & \\text{GIN} & \\text{GIN} & \\text{PNA} & \\text{PNA} \\\\\\\\ \\hline\n \\text{Agg.} & \\text{MAE} & \\text{R}^2 & \\text{MAE} & \\text{R}^2 \\\\\\\\ \\hline\n \\text{SortPool} & 0.72 \\pm 0.01 & 0.13 \\pm 0.02 & 0.69 \\pm 0.01 & 0.19 \\pm 0.02 \\\\\\\\ \\hline\n\\end{array}", " ### Question 1 (pooling techniques from [19-22])\nWhen designing the experiments we were faced with a limited compute budget and have opted to proceed with a representative method from each of the following readout classes: i) non-adaptive (sum, mean, max), ii) adaptive with permutation invariance (Set Transformer being the most representative; see the sketch below), iii) adaptive and approximately permutation invariant (Janossy pooling), and iv) feedforward and recurrent neural networks that do not necessarily give rise to permutation invariant hypothesis spaces. This choice allowed us to carry out an extensive evaluation in terms of different benchmark datasets, graph convolution types, number of neighbourhood aggregation iterations, as well as other experiments (for example, Appendix L and M). 
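For reference, the core of the Set Transformer readout (class ii above) can be sketched as follows: a learned seed vector attends over the node embeddings via pooling by multihead attention, so the output is invariant to node permutations. This is only a minimal sketch; the readout evaluated in the paper additionally stacks self-attention blocks (SAB), as in Equation (3) of the paper.

```python
import torch
import torch.nn as nn

class PMAReadout(nn.Module):
    # Pooling by multihead attention: a learned seed vector queries
    # the node embeddings, so permuting the nodes leaves the pooled
    # output unchanged (permutation invariant by construction).
    def __init__(self, dim, num_heads=1):
        super().__init__()
        self.seed = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, node_embeddings):
        # node_embeddings: (batch, num_nodes, dim)
        seed = self.seed.expand(node_embeddings.size(0), -1, -1)
        pooled, _ = self.attn(seed, node_embeddings, node_embeddings)
        return pooled.squeeze(1)  # (batch, dim)
```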
We think that these baselines are sufficient to demonstrate the potential of adaptive readouts across different domains and also raise an interesting question on the utility of standard feedforward and recurrent networks as readouts.\n\n### Question 2 (hyperparameter tuning and validation strategy)\nThe comment on possible improvements with further hyperparameter tuning (lines 200-201) refers only to adaptive readouts. Figure 1 shows that GNNs with such readouts are, even without hyperparameter tuning, significantly more effective than identical graph convolutional architectures with non-adaptive readouts (sum, mean, max). A detailed description of the experimental design can be found in Appendices B and F, covering train/validation/test splits along with other reproducibility details. We have, in general, followed well-adopted practices and selected hyperparameters of neural readouts to be powers of two (more details can be found in Appendix Table 4), tailored to dataset size. This was also, in part, motivated by previous knowledge from bio-affinity prediction tasks with GNNs.\n\n### Question 3 (bias in the validation strategy)\nWe have performed experiments with a fixed (e.g., see Figure 1, two layers) and a varying number of graph convolutions (e.g., see Figure 2). The reviewer's comment probably refers to the experiments in Figure 1, but these are not to be taken independently of the ones reported in Figure 2 (see also Appendix O and Appendix Figure 4), where we varied the depth of GNNs while keeping the readouts fixed. In both sets of experiments, our empirical results indicate the benefits of employing adaptive readouts. For some graph convolution types (e.g., PNA) and benchmark datasets (e.g., QM9) the performance of MLP and GRU readouts is on par with non-adaptive ones (see Figure 2), but even in those cases Set Transformers are significantly more effective than non-adaptive readouts. In addition to this, over-smoothing is a known problem for vanilla GNNs. To avoid any hidden contribution from over-smoothing-correction techniques, we preferred shallower GNNs. Finally, we note that two-layer GNNs performed well on tasks such as bio-affinity prediction on the 1+ million scale datasets (i.e., they can be expressive). In general, one can observe that the performance curves of adaptive readouts mirror the trends of the non-adaptive ones, but with considerably higher scores, indicating that the additional GNN expressivity due to more layers can be successfully exploited by the adaptive readouts. \n\n### Limitations (main part vs Appendix)\nWhen it comes to the split of the content between the main part and Appendix, we should have an extra page for the camera-ready version and will ensure that some illustrations from the Appendix are moved to the main paper and that all the results there are properly referenced. We agree with the reviewer that the scale of the empirical contribution might be fitting for a journal paper, but would also like to point out that the two other reviewers are satisfied with a summary of the results provided in the main paper (at least judging by their presentation scores).", " ### Strengths and weaknesses (takeaway messages)\nWe are happy to extend our current discussion that already offers some guidelines for practitioners (e.g., see lines 326-350), especially for chemically-oriented tasks that appear to be benefiting the most from adaptive readouts. 
We would greatly appreciate further clarification on the takeaway messages that the reviewer thinks would be beneficial for practitioners and will ensure they are part of the next revision.\n\n### Detailed and minor comments\nWe thank the reviewer for the detailed comments that will improve the quality of our manuscript. We will ensure these are incorporated into the camera-ready version of the manuscript.\n\nAs for the WL test, we will resolve this ambiguity in the next version. The computational efficiency argument does not apply to GNNs but to the isomorphism test that can decide a large number of isomorphism classes.\n\nIn the current submission, the dimension of the GNN outputs is set to 32. We will provide more details on the experimental setup in the camera-ready version and add an illustration depicting the variance.\n\nAs for the p-value question and reference to the appendix (see minor comments), the value of 0 means that the observed value was below machine precision (will clarify this). We will revise our bibliography, update the listed references, and include the suggested survey on the WL algorithm.", " ### Strengths and weaknesses & Question 2 (parameter budgets)\nTable 1 (from this response) summarizes the parameter budget for the experiments with Dense (MLP) readouts, reported in Figure 2 (see also Appendix O and Appendix Figure 4). We will include this table into the next version of the paper and extend the existing discussion of this experiment. Employing six graph convolutional layers translates to a 5 to 10-fold increase in the number of GNN parameters (this does not refer to readout parameters) compared to two layers, depending on the type of convolution. Moreover, six-layer graph neural networks with GAT, GATv2, and PNA convolutions employ (approximately) at least double the number of parameters assigned to the MLP readout. The results in Figure 2 indicate that the MLP readout coupled with only two graph convolutional layers performs on par with a graph neural network with six layers and standard non-adaptive readouts. Moreover, the latter neural network (e.g., see GAT, GATv2, and PNA) has approximately double the number of parameters compared to the former.\n\n**Table 1.** D = output dimension of GNN. The number of parameters for the MLP readout is computed according to the formula: M * D * H1 + H1 * H2, given without an additive bias for brevity, where additionally M = maximum number of nodes in the graph, H1 = output dimension of the first MLP layer, and H2 = output dimension of the second MLP layer. PNA uses a slightly altered D to satisfy the layer's internal assertions when using 5 towers. Num. = Number\n\n\\begin{array}{|c|c|c|c|c|c|}\n\\hline\n\\text{Layer type} & \\text{Num. layers} & \\text{Num. GNN params (K)} & \\text{Num. 
MLP readout params (K)} & \\text{D} \\\\\\\\ \\hline\n\\text{GCN} & 2 & 4.1 & 271 & 32 \\\\\\\\ \\hline\n\\text{GCN} & 6 & 20.9 & 271 & 32 \\\\\\\\ \\hline\n\\text{GIN} & 2 & 9.5 & 271 & 32 \\\\\\\\ \\hline\n\\text{GIN} & 6 & 43.1 & 271 & 32 \\\\\\\\ \\hline\n\\text{GAT} & 2 & 62.2 & 271 & 32 \\\\\\\\ \\hline\n\\text{GAT} & 6 & 474.2 & 271 & 32 \\\\\\\\ \\hline\n\\text{PNA} & 2 & 112.3 & 293 & 35 \\\\\\\\ \\hline\n\\text{PNA} & 6 & 516.3 & 293 & 35 \\\\\\\\ \\hline\n\\text{GATv2} & 2 & 122.5 & 271 & 32 \\\\\\\\ \\hline\n\\text{GATv2} & 6 & 946.5 & 271 & 32 \\\\\\\\ \\hline\n\\end{array}\n\nIn Appendix J (Table 17), we have also performed experiments with Set Transformers with different numbers of attention heads and self-attention blocks. This set of experiments demonstrates that already with a single attention head and a single self-attention block, graph neural networks with adaptive readouts can outperform the ones with standard pooling functions -- sum, mean, and max.\n\nIn Appendix Figure 4, as well as other examples such as the BACE dataset in Appendix Table 6 and the two datasets in Appendix Table 10 (this list is non-exhaustive), we observed that the MLP readout outperformed the Set Transformer readout. During the rebuttal period, we performed another experiment on the ENZYMES dataset, using GNNs with two GCN layers and the parameter budget described in Table 2 (from this response), presented alongside the obtained performance metric (MCC). Although the Set Transformer used a considerably larger parameter budget (approximately x4.5), this did not lead to increased performance over the MLP. We will include and extend this analysis in the next version of the paper.\n\nWe are also happy to include additional experiments that would help clarify this concern and would appreciate if the reviewer could recommend some that we could run in the background.\n\n**Table 2.**\n\n\\begin{array}{|c|c|c|}\n\\hline\n\\text{Readout} & \\text{Num. readout parameters} & \\text{Average MCC} \\\\\\\\ \\hline\n\\text{MLP} & 260\\text{K} & 0.44 \\\\\\\\ \\hline\n\\text{Set Transformer} & 1.2\\text{M} & 0.375 \\\\\\\\ \\hline\n\\end{array}\n\n### Question 1 (MCC)\nWe selected this metric based on [*], which identified the MCC as a more informative classification metric than the accuracy or the $F_1$ score and a reasonable default choice. This is currently detailed in Appendix B, but we will attempt, space allowing, to clarify this in the main text (there should be an extra page available for the camera-ready version).\n\n[*] D Chicco and G Jurman (BMC Genomics, 2020). The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation.", " We would like to thank all the reviewers for their insightful comments and suggestions, which will help us to improve the quality of the paper and its presentation. We have addressed the individual concerns raised by the reviewers in our detailed response below.", " (**Disclaimer**: I was assigned to this paper as an emergency reviewer;\nsorry for being slightly more terse than I would like to be ideally)\n\nThis paper presents an empirical analysis of the predictive performance\nof graph neural networks when switching to different readout functions.\nA set of such readout functions is investigated, some of which do *not*\nsatisfy permutation invariance by design. 
Surprisingly, the performance\nof such readout functions turns out to be beneficial on some graph data\nsets and some learning tasks!\n The main strength of the paper lies in the strong experimental section,\nwhich challenges the status quo. The use of pooling functions that satisfy \npermutation invariance has been deeply ingrained in the graph learning\nliterature. Thus, an investigation of their efficacy in practice is\nhighly appreciated and useful.\n\nThe main weakness of the paper is a missing discussion about parameter\nbudgets; to a certain extent, some of the observed differences could\npotentially also be explained by having 'more expressive' readout\nfunctions (permutation invariance notwithstanding). This should be\nclarified but can easily be accomplished within the conference reviewing\ncycles (see questions below).\n\nMoreover, a more specific discussion of 'takeaway messages' could be\nadded, making some recommendations for practitioners in light of the\nobserved phenomena.\n\nIn summary, I believe that this paper requires only minor modifications\nbefore being ready for publication. It will make an excellent\ncontribution to the conference.\n\n## Detailed Comments\n\n- When discussing graph expressivity, I would suggest rather stating\n that certain GNNs are *not more expressive than the WL test*,\n i.e. a 'shallow' method without trainable weights. Moreover, the test\n is not as computationally efficient as GNNs; this should be rectified.\n\n- The beginning of Section 2 could potentially be condensed to make\n more space for experiments or discussions, which are critical for this\n type of paper.\n\n- The fact that differences are statistically significant (Appendix R)\n should be mentioned more prominently in the main text in Figure 1.\n\n- Figure 1 could potentially also be improved by showing standard\n deviations of the 'best-to-best ratio'.\n\n- Figure 2 (and other figures as well) should use a consistent colour\n scheme and ensure that the *new* readout functions are highlighted\n somewhat prominently.\n\n- More information about the dimensionality of output vectors should be\n provided, in particular when discussing aspects like convergence or\n Euclidean distance (see also next point).\n\n- For the calculation of Euclidean distances (Appendix I), more details\n about the models and the dimensionalities should be provided. It is\n possible (though not likely) that the changes in distances can also be\n explained by some 'nuisance' dimensions in the outputs. Moreover, this\n type of analysis would lend itself well to a repetition in order to\n get an idea of the variance in the data.\n\n## Minor Comments\n\n- Eq. (3) could be improved; there are several terms to keep track of at\n the moment; $\mathrm{SAB}$ could for instance just be substituted by\n $\mathrm{MAB}$.\n\n- Space permitting, a delineation to the results from the Janossy\n pooling paper could be provided. Having read both papers, I recall\n that Janossy pooling *also* discusses the merits of a different\n readout function.\n\n- Why are some $p$-values zero in the appendix? 
Is this a problem of\n typesetting the table?\n\n- Please check the bibliography again; there are some citations that\n should refer to published work but point to a preprint at the moment\n (for example, Alon and Yahav [20] is now published at ICLR; the work\n by Kipf and Welling [1] was likewise published at ICLR).\n\n- When discussing WL, it might be interesting to cite a [recent survey\n that summarises the connections of WL to machine learning](https://arxiv.org/abs/2112.09992).\n 1. Is there a reason why MCC was used instead of accuracy differences or\n accuracy ratios?\n\n2. To what extent could the observed performance increases be explained\n by having more parameters available for a model to fit? I am\n mentioning this because it appears that one recommendation is to use\n a set transformer, i.e. a permutation-invariant readout function.\n This slightly changes the message of the paper to 'More parameters\n for readout functions are better,' whereas initially, the goal was to\n see whether permutation invariance is actually needed. Please comment\n on this. Limitations are not directly addressed. A brief statement on the broader\nimpact of the work would be welcome. At the same time, I do not foresee\nany adverse outcomes that arise specifically because of this work.", " The paper introduces novel adaptive and differentiable readout functions for Graph Convolutional Networks that rely on neural networks and which do not necessarily result in a permutation invariant hypothesis space.\nThe authors provide an extensive empirical evaluation on 40 datasets which involves some standard readout/pooling policies and different neighborhood aggregation schemes.\n Strengths:\n- Several datasets and different domains considered in the experiments\n- The authors empirically analyze several different aspects, including an ablation study that highlights the impact of various parts of the model.\n\nWeaknesses:\n- Crucial parts of the results are reported only in the appendix\n- Some aspects of the empirical evaluation have to be clarified (e.g., validation strategy)\n- Missing some references\n- Some common graph pooling methods are not considered in the experimental comparison It is not clear why some pooling techniques like the ones reported in [19], [20], [21], and [22] were not considered in the empirical evaluation.\n\nIn Section 3 (lines 200-201) the authors state “Thus, it is likely that the performance can be further improved by hyperparameter tuning.”, therefore I wonder whether (and how) the authors performed a validation phase and how the hyperparameters used in the various experiments were selected.\n\nIn the subsection “Neural vs standard readouts across different datasets and neighborhood aggregation schemes” the authors use two-layer GNNs. This choice clearly introduces a bias in the validation strategy. How much does it impact the obtained results? 
The organization of the information and its division between the main paper and appendix have to be revised.\n\nSome references were missing, e.g., the widely used graph-level aggregators like DiffPool (Ying, R., You, J., Morris, C., Ren, X., Hamilton, W.L., Leskovec, J.: Hierarchical graph representation learning with differentiable pooling (2019)) and SortPool (Zhang, M., Cui, Z., Neumann, M., Chen, Y.: An end-to-end deep learning architecture for graph classification. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)), which have to be at least discussed. Same for the recently proposed SOM-based aggregator (Pasa, L., Navarin, N., Sperduti, A.: Som-based aggregation for graph convolutional neural networks. Neural Computing and Applications pp. 1–20 (2020)).", " This paper provides a comprehensive empirical study of graph neural networks (GNN) with adaptive readouts. Graph neural networks traditionally use non-adaptive readouts based on simple functions (like sum, max, mean) to preserve permutation invariance. Experiments in this paper demonstrate that permutation invariance can be traded for performance improvements by using learnable, adaptive readout functions based on neural architectures. Strengths:\nThis paper combines traditional GNN architectures with three adaptive readouts in its empirical evaluations and therefore, there are no significant technical advancements. The main strength of the paper is the comprehensive empirical analysis for datasets that span multiple domains (e.g., bioinformatics, computer vision, quantum mechanics, etc.) from the existing literature. The experiments demonstrate considerable improvements in performance using adaptive readouts over standard readout functions. The experiments are well explained and the accompanying discussions are illuminating from an applied machine learning perspective. \n\n\nWeaknesses: \n1. I think the property of transferability of GNNs (see ref. [a] below) merits significant discussion in the paper. Adopting adaptive readouts in GNNs affects transferability in conjunction with permutation invariance.\n2. The focus of the paper is on improvement in performance using adaptive readouts. However, in some applications, the interpretability of the results can be equally or more important than performance. An example of this scenario is brain age prediction, where the error in brain age may have prognostic value and potentially predict health outcomes in disease (see ref. [b] below). \n\n\n[a] Ruiz, Luana, Luiz Chamon, and Alejandro Ribeiro. \"Graphon neural networks and the transferability of graph neural networks.\" Advances in Neural Information Processing Systems 33 (2020): 1702-1712.\n\n[b] Lee, J., Burkett, B.J., Min, HK. et al. Deep learning-based brain age prediction in normal aging and dementia. Nat Aging 2, 412–424 (2022). https://doi.org/10.1038/s43587-022-00219-7\n 1. A brief description of the Matthews correlation coefficient would improve readability.\n2. The bio-affinity prediction task is not clear to me and I believe some details are missing. For instance, what is the SMILES representation in this context? I suggest adding appropriate details or references to explain this task. \n3. Can the authors comment on the relevance of neighborhood aggregations with a multilayer perceptron based DENSE readout layer? In this case, I am concerned that neighborhood aggregations might become irrelevant with the learning performance primarily attributable to the DENSE layer for at least some of the experiments. \n4. 
I recommend adding discussions on transferability of GNNs with adaptive readouts. Yes. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "umsyl8Ru2R", "J6YSlIU9Md", "4Eb_gEmjl7d", "SnqNyy0D1As", "ZPqibJeirF2", "bpp5gr2Xtlw", "tGx7NKW-v0G", "UTGVdySO2pq", "hzBRRGC9RDi", "nips_2022_yts7fLpWY9G", "nips_2022_yts7fLpWY9G", "nips_2022_yts7fLpWY9G", "nips_2022_yts7fLpWY9G" ]
nips_2022__N4k45mtnuq
Approximate Euclidean lengths and distances beyond Johnson-Lindenstrauss
A classical result of Johnson and Lindenstrauss states that a set of $n$ high dimensional data points can be projected down to $O(\log n/\epsilon^2)$ dimensions such that the square of their pairwise distances is preserved up to a small distortion $\epsilon\in(0,1)$. It has been proved that the JL lemma is optimal for the general case, therefore, improvements can only be explored for special cases. This work aims to improve the $\epsilon^{-2}$ dependency based on techniques inspired by the Hutch++ Algorithm, which reduces $\epsilon^{-2}$ to $\epsilon^{-1}$ for the related problem of implicit matrix trace estimation. We first present an algorithm to estimate the Euclidean lengths of the rows of a matrix. We prove for it element-wise probabilistic bounds that are at least as good as standard JL approximations in the worst-case, but are asymptotically better for matrices with decaying spectrum. Moreover, for any matrix, regardless of its spectrum, the algorithm achieves $\epsilon$-accuracy for the total, Frobenius norm-wise relative error using only $O(\epsilon^{-1})$ queries. This is a quadratic improvement over the norm-wise error of standard JL approximations. We also show how these results can be extended to estimate (i) the Euclidean distances between data points and (ii) the statistical leverage scores of tall-and-skinny data matrices, which are ubiquitous for many applications, with analogous theoretical improvements. Proof-of-concept numerical experiments are presented to validate the theoretical analysis.
Accept
All reviews for this paper were positive, albeit with a varying level of enthusiasm. Reviewers found the problem, the results (both theoretical and experimental), and the techniques (very) interesting. The main concerns were whether the paper is a good fit for the conference (given that dimensionality reduction is more of a machine learning tool rather than a machine learning problem per se) and the lack of experiments on real data. But ultimately, the positives significantly outweighed the negatives.
val
[ "_eS4BUSEPOy", "2idMuQEOgjv", "jotI6i18VwR", "-IF6VcJ85bR", "Lc7p3RlJ0m9", "rKFArHCDnh", "8PsjtdRbr-6", "Gnk0ZZZLnY", "OZ_JhyPVfyF", "-gdRFMfzAQ", "JzyVl__k3Wh", "xB5tefVForE", "zMqMTpJi68N", "6zFCPWgEtiA", "GiPM4mjhY5I", "6tHpiqYnZHo", "7LjZRQULin3" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking our responses into consideration and for raising the score! Just a note, there appears to be this \"Rebuttal Acknowledgement\" button for the reviewers, we are not sure if it's mandatory or if it has any meaning in general, but we thought to mention just in case. Thanks again", " I thank the authors for their feedback. I have decided to increase my score in light of their responses.", " Thank you! I don't have any further questions or comments.", " As the deadline for the Reviewer-Author discussion phase approaches, we were wondering if the reviewer has any further questions/comments that we could respond to.", " As the deadline for the Reviewer-Author discussion phase approaches, we were wondering if our responses were sufficient or if there are any further clarifications needed. Let us know, we are happy to answer any further questions.", " As the deadline for the Reviewer-Author discussion phase approaches, we were wondering if our responses were sufficient or if there are any further clarifications needed. Let us know, we are happy to answer any further questions.", " **Q:**\nWhile the paper does present two applications. I am left unclear as to where this problem shows up in practice (even though the problem is interesting). The cost of matrix multiplication has not been discussed and this is crucial to the viability of the method. I am not sure if this the best fit for a the machine learning community. Maybe the theoretical CS community might appreciate this more (SODA, FOCS, STOC etc).\n\n**A:**\nWe have highlighted four very successful applications of JL-like random projections in machine learning, specifically, references [2, 6, 16, 11] (see the Introduction also for other applications in other domains). In machine learning, a seminal example where this can be applied is that one can can derive fast approximate solution even for hard problems like $k$-means clustering. In general, we are also not fully aware of a specific application where the ultimate goal is to compute the Euclidean lengths or the distances per-se, but they are a fundamental building block for a plethora of other applications, especially when dimensionality reduction is crucial. E.g., random projections gave rise to the so-called subspace embeddings (cf. Sarlós [R2]) which revolutionized numerical linear algebra. We do believe that dimensionality reduction is in general highly relevant to the conference and that our work is well within the context.\n\nRegarding the cost of matrix multiplication, since $A$ is given as an operator in the matrix-vector-query model, we cannot know beforehand the cost of multiplying $A$ with a vector. It depends on the application, therefore, we decided to use the generic term $T_{MM}(A,m)$ to denote the complexity of this operation.\n\n**Conclusion:**\nIn conclusion, we hope that we were able to clarify all the concerns raised by the reviewer. If there are further specific questions we might be able to discuss them in the next phase of the review process. We would also be willing to adapt possible modifications that might be suggested in a final revision if the manuscript gets accepted. In any case, we would like to thank them again for their effort and for providing actual constructive comments and criticism. \n\n**References:**\n\n[R1] Spielman, Daniel A., and Nikhil Srivastava. \"Graph sparsification by effective resistances.\" SIAM Journal on Computing 40.6 (2011): 1913-1926.\n\n[R2] Sarlos, Tamas. 
\"Improved approximation algorithms for large matrices via random projections.\" 2006 47th annual IEEE symposium on foundations of computer science (FOCS'06). IEEE, 2006.", " \n2) Regarding the cost of computing the row norms directly vs using random projections (either JL or our work). If the matrix is explicitly available, one can directly compute the row norms in $O(nd)$. But as already mentioned above, this is not the case in the matrix-vector-query model, i.e. we do not have access to the matrix. Some more detailed examples are the following:\n - Consider the case of estimating leverage scores, which is detailed in Section 4. We have the original input matrix $A$, and we have computed an $R$ such that $AR^{-1}$ has (approximately) orthonormal columns. How does one compute the row norms of $AR^{-1}$? If we compute explicitly the matrix $AR^{-1}$, this costs $O(nd^2)$ if $R$ is $d\\times d$. This is much larger than $O(nd)$ which is required for the row norms in the subsequent step. However, if we use a Gaussian JL projection $G$ with $m$ columns we can compute the row norms of $A(R^{-1}G)$. In this case $B=R^{-1}G$ takes $O(d^2m)$ to compute, $m\\ll d$. In the second step, $AB$ takes $O(ndm)$ to compute, much less than $O(nd^2)$. Even if $G$ was a sparse or Fourier projection, the total cost would still be dominated by $O(ndm)$, therefore, sparse/structured projections provide no benefit at all. Gaussian projections are more than sufficient. (Note: we did not use fast matrix multiplication in this paragraph but it can be modified accordingly, nothing changes).\n - Consider the case where one has a square matrix $n\\times n$ (e.g. $n=d$) and we want to approximate a matrix function by a polynomial, e.g. like the famous Kernel Polynomial Method, and consider we want to compute the row norms of that matrix function. Let's say we have the polynomial $5A^2 + 3A$ (arbitrary example). In order to evaluate the $A^2$ term, we need $O(n^\\omega)$. If, on the other hand, we use again a Gaussian random projection $G$ with $m$ columns, we can approximate the row norms by the row norms of $(5A^2+3A)G$. This takes $O(n^2m^{\\omega-2})$. Again, just like the previous example, we have absolutely no benefit in using a sparse/Fourier matrix instead of a Gaussian matrix, because if we want to compute the matrix $A^2G$, we compute it as $A(AG)$. $AG$ will be cheap, but thereafter $A(AG)$ will still cost $O(n^2m^{\\omega-2})$. \n\nWe hope that this clarifies that Gaussian projections are well justified, and in most cases, it is all one needs. If there are some very specific applications with special structure one can consider swapping the Gaussian matrix with another type of random projection. Nothing changes in our analysis regarding the approximation guarantees, as long as it satisfies the JL-moment-property and the other required properties. We emphasize on this once again: all of our results are stated as structural results. We do not explicitly need Gaussian matrices. We need matrices that satisfy specific properties, e.g. the JL-moment-property, the $\\epsilon$-JLT property, the Subspace Embedding property, etc.\nCountSketch, SRHT, SRFT, OSNAP, they are all excellent candidates to use. Nothing will change in our analysis if they satisfy the required properties.\n", " We would like to thank the reviewer for their effort and for the detailed comments, and we are very happy that they liked the problem and our techniques. 
\n", " We would like to thank the reviewer for their effort and for the detailed comments, and we are very happy that they liked the problem and our techniques. We provide answers to specific questions below.\n\n**Q:**\nSo for context, the reason we like the JL transform is because we can compute it quickly (i.e. faster than doing regular matrix multiplication) [1]. However, ...\n\n...\n\n[1] THE FAST JOHNSON–LINDENSTRAUSS TRANSFORM AND APPROXIMATE NEAREST NEIGHBORS, NIR AILON AND BERNARD CHAZELLE [2] Knight, Philip A., Fast rectangular matrix multiplication and QR decomposition, Linear Algebra Appl. 221, 69-81 (1995). ZBL0827.65044. [3] Improved Rectangular Matrix Multiplication using Powers of the Coppersmith-Winograd Tensor by François Le Gall and Florent Urrutia\n\n**A:**\nThese are all valid comments; we will clarify our case as best as possible below. We break it into two parts.\n\n1) As a general comment, we highlight again that we consider the matrix-vector query model. I.e., the matrix $A$ is not expected to be explicitly available; we only have access to an operator that computes the product $Ax$ for some vector $x$. E.g., this is the case when someone has a matrix $A$ and wants to do computations with a function of $A$, e.g. $f(A)$. $f(A)$ is not explicitly available. Depending on how complex the function is, it might require $O(n^\\omega)$ operations (or even more) to compute it explicitly. Moreover, if $A$ is sparse, $f(A)$ might be dense and therefore might not even fit in memory. As a seminal example, consider the case where we have an adjacency matrix $A$ of a graph and we need to compute the effective resistances of the edges. The effective resistances can be found in the diagonal of the Laplacian pseudoinverse. The pseudoinverse requires $O(n^\\omega)$ operations to compute. But, in their seminal work, Spielman and Srivastava (see ref. [R1] below) showed that one can use JL random projections in combination with a fast Laplacian solver to approximate the effective resistances of a graph in nearly linear time (!!!). This result cannot be achieved e.g. by using fast rectangular matrix multiplications. Many other matrix functions arise in applications.", " **References:**\n\n[R1] Dasgupta, Sanjoy, and Anupam Gupta. \"An elementary proof of a theorem of Johnson and Lindenstrauss.\" Random Structures \\& Algorithms 22.1 (2003): 60-65.", " We would like to thank the reviewer for their effort and for the detailed comments. Below we provide detailed clarifications for the questions and concerns raised.\n\n**Q:** What is the main algorithmic contribution above the Hutch++ algorithm?\n\n**A:**\nWe will start with this question as it appears to be the primary concern, and it should also highlight the novelty of this work. Hutch++, as well as Hutchinson's original algorithm, are estimators for the trace of a matrix. Matrix trace estimation, albeit related, is a different problem from the problems discussed in this paper. We present algorithms for estimating (i) Euclidean row norms, (ii) Euclidean distances, and (iii) leverage scores. Several techniques that we use are inspired by those of Hutch++, and this is clearly highlighted in the text, but our work addresses different problems. \nA more intuitive illustration of the connections and differences between our work and Hutch++ is the following:\n \n- Hutch++, as a trace estimator, can be used to estimate the trace of $AA^\\top$, which is equal to the squared Frobenius norm of $A$. But it can only estimate the trace, which is the sum of the diagonal elements (or, equivalently, the sum of the eigenvalues). 
It cannot estimate, however, individual diagonal elements, individual eigenvalues, etc.\n\n- Our work, on the other hand, can be used to estimate the squared Euclidean row norms of $A$, which are equal to the diagonal elements of $A A^\\top$. Summing up those individual estimates, we can also get an estimate of the trace of $AA^\\top$, which is what Hutch++ returns if it is applied on $AA^\\top$.\nWe hope that this clarifies the reviewer's concern.\n\n**Q:** Can you further elaborate on the import of Lemma 3, and what it means for the optimality of Theorem 2?\n\n**A:**\nThis is a very interesting question. First we need to clarify that we do not claim that Theorem 2 is optimal. Lemma 3 (as well as Section II \"Limitations of low-rank projections\" in the Appendix) provides some evidence towards that direction, but it is not a formal proof of optimality; it is more of a conjecture. More specifically, they demonstrate that if someone uses low-rank projections, as we do, then there exist corner cases that make it highly unlikely for any such algorithm to do any better. \nMoreover, standard JL approximations have been proven to be optimal for the worst case (element-wise), and Hutch++ is also optimal for trace estimation. All of these hint that this work should be at least nearly optimal, but at this point we do not have all the required lower bounds to make a concrete optimality argument. It is definitely a very interesting question for future work. \n\n**Q:** The authors fail to argue that this paper has sufficient (in fact, any) ties to other machine learning applications. In my opinion the paper is out of scope for this conference, and seems more suited to a CS theory conference. \n\n**A:** \nThis might not be entirely true; we do mention four very successful publications in the introduction which are direct applications of random projections in ML problems, e.g. $k$-means clustering; cf. [2, 6, 16, 11]. Three of those are in ML conferences/journals: NeurIPS, ICML, Machine Learning. There exist many more publications using similar techniques in ML conferences/journals. If we have missed some important references, we would definitely be willing to add them. In general, we do believe that dimensionality reduction is highly relevant to the conference, and that our work is well within its scope.\n\n**Q:** The statement of the JL lemma is not precise, as the lemma itself relates to distances and the statement relates to norms. It is true that norms imply distances, but only if the transform is linear, and only if one takes the input set to be all difference vectors, which is a larger set.\n\n**A:** We are not sure which exact statement this question refers to; we have reviewed all the statements in the paper and we hope that they are all clear and precise. In general, we have tried to make it as clear as possible in the article how we distinguish between estimation of row norms and estimation of Euclidean distances. We have split it into two different sections, each one with its own proofs and analysis. We can also point out that Dasgupta and Gupta (see ref. [R1]) use vector lengths as a building block in what we believe to be the most popular proof of the JL Lemma.\n\n**Conclusion:**\nWe hope that we were able to answer all of the reviewer's questions. We would like to thank them again for their constructive comments.
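As a small numerical postscript to the first answer above (illustrative code written for this response; not part of the paper), the identity $\\mathrm{tr}(AA^\\top)=\\sum_i \\lVert a_i \\rVert^2$ can be checked directly: summing sketched squared row norms recovers the global quantity that Hutch++ targets, while the per-row values are what our estimators provide.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 500, 100, 50
A = rng.standard_normal((n, d))

G = rng.standard_normal((d, m)) / np.sqrt(m)    # Gaussian sketch, E||x G||^2 = ||x||^2
row_sq = np.sum((A @ G) ** 2, axis=1)           # per-row estimates (our problem)

print(row_sq.sum(), np.sum(A ** 2))             # both approximate tr(A A^T) = ||A||_F^2
rel_err = np.abs(row_sq / np.sum(A ** 2, axis=1) - 1)
print(np.median(rel_err))                       # per-row error, roughly sqrt(2/m)
```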
", " We would like to thank the reviewer for their effort and for their constructive comments. Below we provide answers to specific questions.\n\n**Q:** However, a valid question that usually pops up among practitioners who aim to use dimensionality reduction to boost the performance of their work: are matrices with linear decay/exponential decay common in the field of data science, i.e., how often do we actually get to deal with data that admits such properties?\n\n**A:**\nThis is a good question regarding the general applicability of methods using such techniques (including Hutch++ [24], the diagonal estimator of [5], and the algorithms of [26]). An example application where the matrices inherently have large decay is related to graphs, specifically for computing node centralities/importances, as they relate to the exponential of the adjacency matrix $e^A$. Approximations of the matrix exponential in the matrix-vector query model also exist, e.g. via polynomial expansions or Lanczos methods for function approximations. For further examples we can also directly point to the related Hutch++ paper of Meyer et al. [24], where they do present experiments on real-world matrices with spectral decay. Their algorithms, however, solve a different problem, i.e. they compute matrix traces. At the time of this writing, our work on real data is still in a preliminary stage (it also involves quite some theoretical analysis for the applications that we are working on).\n\n\n**Q:**\nIn addition, is it possible to use some variant of your analysis when dealing with Lewis weights?\n\n**A:**\nThis is a very interesting point that we had not thought of. We are not entirely sure what the question refers to: is it about approximating the Lewis weights? If yes, note that for $p=2$, Lewis weights are equal to the leverage scores, which are detailed in Section 4. For general $p$, it is unclear to us at this point, but it is definitely an interesting question for future research.\n\n**Q:**\nFinally, please make sure that the main manuscript is at most 9 pages, while the current version of the paper can be referred to as the supplementary material.\n\n**A:**\nRegarding this, we initially followed the official guidelines, from which it appears that adding an Appendix to the main document is allowed. Quoting the official website:\n\n\"Can I include appendices in my main submission file? Yes. You can include appendices with the main submission file, or you can include them as a separate file in the supplementary materials.\"\n(cf. https://nips.cc/Conferences/2022/PaperInformation/NeurIPS-FAQ).\n\nHowever, if this is not the case, we are willing to make all the appropriate changes as needed.\n\n**Conclusion:**\nWe hope that we have answered all of the reviewer's questions as best as possible. We want to thank them again for their detailed comments and their constructive criticism.", " We would like to thank the reviewer for their effort to truly understand the results and the novelty of this work and for their kind words. Regarding the notation, indeed, initially we thought that using Alg1$(A)$ might have been easier to follow than, for example, $\\tilde X$, but in the end this was probably not the case. We are willing to change it accordingly in a revision if the paper gets accepted, including other specific suggestions that the reviewer might have to improve the presentation.", " The proposed problem is to estimate the Euclidean lengths of matrix rows, and the authors propose a solution using fewer queries than required by the JL transform. 
This is done in the \"matrix-vector query model\", where access to the target matrix A is indirect, and only via queries of the form Ax for any vector x. The authors suggest an algorithm based on the Hutch++ algorithm, and give some claims of optimality. Results are derived for estimating Frobenius norms, inter-row distances, and statistical leverage scores. Strengths: Rigorous approach to several interesting mathematical problems.\n\nWeaknesses: The authors fail to argue that this paper has sufficient (in fact, any) ties to other machine learning applications. In my opinion the paper is out of scope for this conference, and seems more suited to a cs theory conference. The matrix-vector sample model is also quite limited, and in general I'm also unsure of how much this is a contribution this paper makes over those of the Hutch++ algorithm. The results seem incremental.\n\nMinor concern: The statement of the JL lemms is not precise, as the lemma itself relates to distances and the statement relates to norms. It is true that norms imply distances, but only if the transform is linear, and only if one takes the input set to be all difference vectors, which is a larger set. What is the main algorithmic contribution above the Hutch++ algorithm? \n\nCan you further elaborate on the import of Lemma 3, and what it means for the optimality of Theorem 2?\n\n--------------------------------\n\nFollowing the author rebuttal, and explanation on contribution above the Hutch++ algorithm, I've raised my score to 5. \n\nConcerning the statement of the JL lemma: Your Definition 1 relates to the norm of the input vectors. JL relates to the distance between the input vectors. This isn't the same, and the latter can be maintained without the former. N/A", " The authors provide a dimensionality reduction techniques that is more optimized towards certain data than the traditional JL. Specifically speaking, while JL is known to be optimal when dealing with data with no decay, it is not the optimal method when dealing with datasets that admit exponential or even linear decay. In this paper, the authors provide a technique inspired by the Hutch++ algorithm that deals with such instances of data. * Strengths:\n 1) The paper seems sound theoretically, which hinges upon using delicate and interesting lemmata and theorems.\n 2) The experimental section (regardless of using only synthetic data), shows that JL is indeed not optimal for the case where data admits linear/exponential decay.\n\n* Weaknesses:\n 1) The writing of the paper is somewhat hard to follow, especially around the proof sketches.\n 2) The experimental section only accounts for synthetic data. Is it possible to add an experiment that includes real-world data? This will indeed highlight the applicabilities of your approach. The results stated in the paper seem sound and are contributing. However, a valid question that usually pops up among practitioners who aim to use dimensionality reduction to boost the performance of their work, are matrices with linear decay/exponential decay common in the field of data science, i.e., how often we get to actually deal with such data that admits such properties?\n\nIn addition, is it possible to use some variant of your analysis when dealing with lewis weights?\n\nFinally, please make sure that the main manuscript is at most 9 pages, while the current version of the paper can be refered to as the supplementary material. 
The authors clearly stated the limitations of their work.", " The authors give an algorithm to approximate the pairwise Euclidean distances of row vectors of any given matrix. This algorithm improves the bounds of typical Johnson-Lindenstrauss estimators in a few specific settings:\n- When achieving the same Frobenius norm-wise accuracy, fewer matrix-vector queries are required;\n- When achieving the same element-wise accuracy, fewer matrix-vector queries are required;\n- The algorithm given is at least as accurate as the standard Johnson-Lindenstrauss techniques when given matrices with flat spectra. The paper is well written and structured. The algorithms are well presented, but the authors tend to omit details. The main results, that is, Theorems 1, 2, 3 and 5, show that the techniques of the authors are improvements over the Johnson-Lindenstrauss techniques for their specific use case. The presentation of proofs is overall good, with a few minor inconveniences. The proofs, which are omitted from the paper and are part of the Appendix, are at least sketched, so the reader gets an idea of the proof. Also, enough text is given to follow the structure of the paper.\n\nBecause of the notation, sometimes the math is hard to read/follow (see the proof sketch of Corollary 1 for example; also, Alg1 as a placeholder for the output is suboptimal).\n\nThe paper is well written, besides some minor issues with the notation. The result is interesting and a real improvement for the cases presented. Suggestion: improve notation, try to include more details. No potentially negative societal impact in sight.", " For the Johnson Lindenstrauss transform we can embed $n$ points in $O(\\log(n)\\epsilon^{-2})$ dimensional space such that we have at most $\\epsilon$ distortion. This paper instead looks at the problem of having an embedding that only preserves the norms of the data points instead of the metric. The paper does this so that the dimension of the new space only has an $\\epsilon^{-1}$ dependence. The goal is that doing it via such an embedding is faster than just computing the norm of each row. \n\nThe paper then presents two applications including computing the metric. **Post Rebuttal**\n\nI thank the authors for their response. Their response does help clarify my questions about applications and the matrix multiplication. \n\nIn light of this I raise my score. \n\n**Strengths**\n\nI think the problem is very interesting and I like the proof techniques as well. \n\nThe algorithm is theoretically motivated and the results are theoretically proved. \n\nThe improvement over the JL is empirically demonstrated. \n\n**Weaknesses**\n\nWhile the paper does present two applications, I am left unclear as to where this problem shows up in practice (even though the problem is interesting). \n\nThe cost of matrix multiplication has not been discussed and this is crucial to the viability of the method. \n\nI am not sure if this is the best fit for the machine learning community. Maybe the theoretical CS community might appreciate this more (SODA, FOCS, STOC etc.). So for context, the reason we like the JL transform is because we can compute it quickly (i.e. faster than doing regular matrix multiplication) [1]. However, the key idea there is the fact that we can do the transform using a Fourier transform and sparse matrices. \n\nHowever, in this paper, we are using a Gaussian matrix. 
So the matrix multiplication for $AS$ in Algorithm 1 step 2 is multiplying an $n \\times d$ matrix by a $d \\times m$ matrix where $m \\le d \\le n$. My understanding is that the best rate for this is still $O(m^{\\omega-2}dn)$ where $\\omega \\ge 2$ is the exponent for square matrix multiplication (i.e. multiply two $n\\times n$ matrices in time $O(n^\\omega)$). However, that is old. Looking for more recent results I found [3], which shows that multiplying an $n \\times n^\\alpha$ matrix with an $n^\\alpha \\times n$ matrix can be done in $O(n^{2+\\epsilon})$ for any $\\epsilon > 0$. Hence I am not sure if that matrix multiplication can be done faster than $O(nd)$, which is the time taken to compute the true norms. \n\nEven otherwise, with this step taking $O(nm^2)$, we need $m \\le \\sqrt{d}$. \n\n\n[1] THE FAST JOHNSON–LINDENSTRAUSS TRANSFORM AND APPROXIMATE NEAREST NEIGHBORS,\nNIR AILON AND BERNARD CHAZELLE\n[2] Knight, Philip A., Fast rectangular matrix multiplication and QR decomposition, Linear Algebra Appl. 221, 69-81 (1995). ZBL0827.65044.\n[3] Improved Rectangular Matrix Multiplication using Powers of the Coppersmith-Winograd Tensor by François Le Gall and Florent Urrutia\n\n Yes. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 2 ]
[ "2idMuQEOgjv", "Lc7p3RlJ0m9", "-IF6VcJ85bR", "zMqMTpJi68N", "xB5tefVForE", "-gdRFMfzAQ", "Gnk0ZZZLnY", "OZ_JhyPVfyF", "7LjZRQULin3", "JzyVl__k3Wh", "6zFCPWgEtiA", "GiPM4mjhY5I", "6tHpiqYnZHo", "nips_2022__N4k45mtnuq", "nips_2022__N4k45mtnuq", "nips_2022__N4k45mtnuq", "nips_2022__N4k45mtnuq" ]
nips_2022_eF_Mx-3Sm92
Change-point Detection for Sparse and Dense Functional Data in General Dimensions
We study the problem of change-point detection and localisation for functional data sequentially observed on a general $d$-dimensional space, where we allow the functional curves to be either sparsely or densely sampled. Data of this form naturally arise in a wide range of applications such as biology, neuroscience, climatology and finance. To achieve such a task, we propose a kernel-based algorithm named functional seeded binary segmentation (FSBS). FSBS is computationally efficient, can handle discretely observed functional data, and is theoretically sound for heavy-tailed and temporally-dependent observations. Moreover, FSBS works for a general $d$-dimensional domain, which is the first in the literature of change-point estimation for functional data. We show the consistency of FSBS for multiple change-point estimation and further provide a sharp localisation error rate, which reveals an interesting phase transition phenomenon depending on the number of functional curves observed and the sampling frequency for each curve. Extensive numerical experiments illustrate the effectiveness of FSBS and its advantage over existing methods in the literature under various settings. A real data application is further conducted, where FSBS localises change-points of sea surface temperature patterns in the south Pacific attributed to El Ni\~{n}o.
Accept
The paper studies change-point detection and localization for functional data, which is an interesting and timely topic. I agree with some reviewers that the paper might be a better fit with the traditional statistical venue. The authors have done a great job in the rebuttal phase in addressing reviewers’ comments. I believe it is a worthwhile paper to be published in NeurIPS.
val
[ "6qXA16Ge32E", "8Wh8buLR2KP", "bgUypABo-ah", "BsV-T-bvLTK", "aPgo3KAGUzL", "HrFNYEEwd9", "sdLmW4C2gkc", "_kyTms0qfT", "XbVgOZk1hzv", "oG5TKZNbNk2" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your comments and suggestions. In the following, we reply to your comments point-by-point. We have submitted revised main text and supplementary files.\n\n**Whether competing method is state-of-the-art**\n\nTo the best of our knowledge, the three competing methods are still the state-of-the-art in the literature of change-point estimation for functional data.\n\n**More real data analysis**\n\nIn the revision, we have added one more real data example, considering with the same COBE-SSTE dataset as the real data set we have in our submission, using the same months June and July. FSBS is applied to detect potential change points on a 1 degree latitude by 1 degree longitude grid $(10 \\times 6)$, located at the Caribbean sea. In both months, FSBS identified the year $2004$ as a change-point. This might be associated with the development of a Modoki El Niño -– a rare type of El Niño in which unfavorable conditions are produced over the eastern Pacific instead of the Atlantic basin due to warmer sea surface temperatures farther west along the equatorial Pacific. Variability in the climate of northeastern Caribbean is connected with this phenomenon. We have added plots in our revised supplementary materials.\n\n**Choice of method and the temporal dependence assumption**\n\nOur proposed FSBS algorithm is shown to be able to handle temporally-dependent observations, owing to our theoretical techniques. We believe for other variants of the binary segmentation, our proposed theoretical techniques can be extended to show their respective theoretical performances when handling temporally-dependent data. This is in fact a byproduct our paper - we provided peer researchers with new theoretical tools. As for our paper, we chose the seeded binary segmentation method due to its computational efficiency.\n\nAs for the physical dependence framework we adopted, it is used to model the dependence in the functional and scalar noise sequences, detailed in Assumption 1 (c) and (d). The presence of the physical dependence inflates the signal-to-noise ratio condition, through the quantity $q$ in Assumption 3, and the localization error rate, again through the quantity $q$ in Theorem 1.", " Thank you very much for your comments and suggestions. In the following, we reply to your comments point-by-point. We have submitted revised main text and supplementary files.\n\n**Novelty**\n\nYou are indeed right that our algorithm is based on existing methods. We are in fact proud of using existing and simple methods to achieve superior theoretical performance. We would like to point out that although theoretical guarantees on binary segmentation and kernel density estimation are well-studied, it is the first time seen that such a combination (especially a newly-proposed variant of binary segmentation, namely seeded binary segmentation) is adopted for functional data analysis framework. Though our algorithm is simple, it can handle change-point estimation in discretely observed functional data (for both sparse and dense setting) with temporal dependence and heavy-tailed measurement errors. To our best knowledge, this is the first algorithm that can achieve such tasks in the literature. \n\n**Notation**\n\nBy arbitrarily slow, we mean that the sequence $C_{\\mathrm{SNR}}(T)$ diverges to infinity as $T$ grows unbounded and the divergence can be of any rate. 
In other words, Assumption 3, namely that\n\\begin{equation}\n \\kappa \\sqrt{\\Delta} > C_{\\mathrm{SNR}} \\log^{ \\max\\{1/2, 5/q\\}}(T)\\Big(1 + T^{\\frac{d}{2r+d}} n^{\\frac{-2r}{2r+d}} \\Big)^{1/2},\n\\end{equation}\ncan be understood as\n\\begin{equation}\n \\frac{\\kappa \\sqrt{\\Delta}}{\\log^{ \\max\\{1/2, 5/q\\}}(T)\\Big(1 + T^{\\frac{d}{2r+d}} n^{\\frac{-2r}{2r+d}} \\Big)^{1/2}} \\to \\infty, \\quad T \\to \\infty.\n\\end{equation}\nWe adopt the form in the first equation display instead of the second for the sake of the finite-sample results we present in Theorem 1.\n\n**More literature review**\n\nThank you very much for your suggestion. In the literature, kernel-based change-point detection is mainly used in nonparametric change-point estimation. Specifically, the goal is to locate change-points in the distribution of a sequence of data. See representative works such as Li et al. (2015), Arlot et al. (2019), Li et al. (2019), and Padilla et al. (2021).\n\nWe remark that the goal of the aforementioned papers is very different from ours. The aforementioned papers all focused on nonparametric online or offline change point detection for independent time series data, while our method is designed to estimate change points for functional time series data in the presence of temporal dependence. In the rebuttal revision, we have added more discussions on this matter following your suggestion.\n\n**Reference**\n\nSylvain Arlot, Alain Celisse, and Zaid Harchaoui. A kernel multiple change-point algorithm via model selection. Journal of Machine Learning Research, 20(162), 2019.\n\nShuang Li, Yao Xie, Hanjun Dai, and Le Song. M-statistic for kernel change-point detection. Advances in Neural Information Processing Systems, 28, 2015.\n\nShuang Li, Yao Xie, Hanjun Dai, and Le Song. Scan b-statistic for kernel change-point detection. Sequential Analysis, 38(4):503–544, 2019.\n\nOscar Hernan Madrid Padilla, Yi Yu, Daren Wang, and Alessandro Rinaldo. Optimal nonparametric multivariate change point detection and localization. IEEE Transactions on Information Theory, 2021.", " Thank you very much for your comments and suggestions. In the following, we reply to your comments point-by-point. We have submitted revised main text and supplementary files.\n\n**More literature review**\n\nThank you very much for your suggestion. In the literature, kernel-based change-point detection is mainly used in nonparametric change-point estimation. Specifically, the goal is to locate change-points in the distribution of a sequence of data. See representative works such as Li et al. (2015), Arlot et al. (2019), Li et al. (2019), and Padilla et al. (2021).\n\nWe remark that the goal of the aforementioned papers is very different from ours. The aforementioned papers all focused on nonparametric online or offline change point detection for independent time series data, while our method is designed to estimate change points for functional time series data in the presence of temporal dependence. In the rebuttal revision, we have added more discussions on this matter following your suggestion.\n\n**Choice of kernels**\n\nYou are right that the choice of kernels may affect the performance of any kernel-based method. We chose the Gaussian kernel for demonstration as it is one of the most popular kernels used in practice. Following your suggestion, we also tried other kernels in our simulation study, and we find that the numerical performance of our method is robust to the choice of kernels. 
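For concreteness, the four kernels compared in the tables below can be written out as follows (a minimal Python sketch using standard textbook forms; this snippet is illustrative and not taken from our implementation):

```python
import numpy as np

# Standard univariate kernels; all but the Gaussian are supported on [-1, 1].
def gaussian(u):
    return np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)

def uniform(u):
    return 0.5 * (np.abs(u) <= 1)

def epanechnikov(u):
    return 0.75 * (1 - u ** 2) * (np.abs(u) <= 1)

def quartic(u):
    return (15 / 16) * (1 - u ** 2) ** 2 * (np.abs(u) <= 1)
```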
The tables below summarize the performance of FSBS with different choices of kernels on Scenarios 1 and 2 in the paper. Here we consider kernels commonly used in the literature: Gaussian, Uniform, Epanechnikov and Quartic. We have added the additional simulation results in the revised supplementary materials.\n\nScenario 1\n|Kernel|$K- \\hat{K}<0$|$K- \\hat{K}=0$|$K- \\hat{K}>0$|$\\vert \\hat{K}-K\\vert$|Haus. dist.|\n|--|--|--|--|--|--|\n|Gaussian|0.05|0.86|0.09|0.17 (0.05)|16.15 (4.09)|\n|Uniform|0.01|0.99|0|0.01 (0.01)|13.32 (0.42)|\n|Epanechnikov|0.06|0.87|0.17|0.13 (0.03)|15.14 (2.40)|\n|Quartic|0.07|0.84|0.09|0.20 (0.04)|18.28 (1.63)|\n\nScenario 2\n|Kernel|$K- \\hat{K}<0$|$K- \\hat{K}=0$|$K- \\hat{K}>0$|$\\vert \\hat{K}-K\\vert$|Haus. dist.|\n|--|--|--|--|--|--|\n|Gaussian|0.05|0.95|0|0.05 (0.02)|3.32 (1.00)|\n|Uniform|0|0.99|0.01|0.01 (0.01)|2.93 (1.03)|\n|Epanechnikov|0|1|0|0 (0)|1.24 (0.28)|\n|Quartic|0.01|0.99|0|0.01 (0.01)|2.30 (0.55)|\n\n\nThe mean is reported in these tables. We obtain the results over 100 repetitions, and the numbers in parentheses denote standard errors.\n\n**Reference**\n\nSylvain Arlot, Alain Celisse, and Zaid Harchaoui. A kernel multiple change-point algorithm via model selection. Journal of Machine Learning Research, 20(162), 2019.\n\nShuang Li, Yao Xie, Hanjun Dai, and Le Song. M-statistic for kernel change-point detection. Advances in Neural Information Processing Systems, 28, 2015.\n\nShuang Li, Yao Xie, Hanjun Dai, and Le Song. Scan b-statistic for kernel change-point detection. Sequential Analysis, 38(4):503–544, 2019.\n\nOscar Hernan Madrid Padilla, Yi Yu, Daren Wang, and Alessandro Rinaldo. Optimal nonparametric multivariate change point detection and localization. IEEE Transactions on Information Theory, 2021.", " Thank you very much for your comments and suggestions. In the following, we reply to your comments point-by-point. We have submitted revised main text and supplementary files.\n\n**Theoretical comments on the dimension $d$**\n\nWe first discuss dimension $d$ in terms of the theory. Recall that the localization error rate of change-point estimation in Theorem 1 is $C_{\\mathrm{FSBS}} \\log^{ \\max\\{1, 10/q \\} } (T) \\left(1 + T^{\\frac{d}{2r+d}} n^{\\frac{-2r}{2r+d}} \\right) \\kappa_k^{-2}.$ Holding everything else the same, it can be clearly seen that $T^{\\frac{d}{2r+d}} n^{\\frac{-2r}{2r+d}}$ is an increasing function of $d$, i.e. a larger $d$ will lead to a worse localization error rate (a quick numerical illustration of this monotonicity is appended at the end of this response).\n\nIn addition, recall that in Assumption 3 we require the signal-to-noise ratio to be lower bounded by $C_{\\mathrm{SNR}} \\log^{ \\max\\{1/2, 5/q\\}}(T)\\Big(1 + T^{\\frac{d}{2r+d}} n^{\\frac{-2r}{2r+d}} \\Big)^{1/2}.$ Thus, a larger $d$ will also require a stronger SNR.\n\n**Additional numerical results on the dimension $d$**\n\nIn the revision, we have analyzed the performance of FSBS with different choices of dimension $d$ for Scenario 4 in the manuscript, with $d \\in \\{2, 3, 5, 10\\}$ and all the other details identical to Scenario 4. The results are collected in the table below, with additional details in the revised supplementary materials. These support the previous discussion on dimension $d$. \n\n| Dimension $d$|$K- \\hat{K}<0$|$K- \\hat{K}=0$|$K- \\hat{K}>0$|$\\vert\\hat{K}-K\\vert$|Haus. dist.|
\n| ---- | --- |--- |--- |--- |--- |\n| 2 | 0|0.90|0.08|0.08 (0.02)|5.02 (1.25)|\n|3|0.02|0.89|0.09|0.11 (0.03)|5.73 (1.22)|\n|5|0.18|0.82|0|0.18 (0.05)|5.92 (1.23)|\n|10|0.21|0.79|0|0.22 (0.08)|6.58 (1.24)|\n\nThe mean is reported in this table. We obtain the results over 100 repetitions, and the numbers in parentheses denote standard errors.\n\n**Computation time**\n\nOur method is computationally efficient and its computational complexity is $O(nT\\log T+T(\\log T)^2)$. Specifically, as can be seen from Algorithm 1, we need to conduct kernel smoothing of the sampling distribution and mean function at $\\log T$ measurement locations, which costs $O(nT\\log T)$ operations. Once this is done, we conduct seeded binary segmentation (SBS) at the $\\log T$ measurement locations/grids. It is known that SBS has a computational cost of $O(T\\log T)$. Thus, this step costs $O(T(\\log T)^2)$ computational complexity. In total, the computational complexity of our method is $O(nT\\log T+T(\\log T)^2)$.\n\nAs for existing methods in the literature, in terms of implementation, they all rely on a two-stage procedure. Specifically, the first stage is to register/estimate the discretely observed points into a functional curve at each time $t$. Taking B-spline smoothing with $p$ basis functions for example, this costs $O(n^2p+p^3)$ computational complexity for each time $t$ due to a least-squares estimation. Thus this step costs $O(T(n^2p+p^3))$ computational complexity. Once the functional curves are registered, in the second stage, the existing methods conduct functional PCA to extract $p'$ principal component scores from each function and then conduct mean change-point detection on the $p'$-dimensional time series of principal component scores. Ignoring the computational cost of functional PCA, the change-point detection procedure costs at least $O(T\\log T)$ computational complexity if a standard binary segmentation is used, and could be more expensive if other segmentation algorithms are used to conduct change-point estimation. Thus, in total, the computational complexity of existing methods is at least $O(T(n^2p+p^3)+T\\log T)$, which is more expensive unless $n\\preceq\\log T.$\n\nWe have added this discussion in Section A.1 in the supplementary materials in the rebuttal revision, following your suggestions. \n\n**Figure 2**\n\nThanks for your suggestion. In the revised supplementary materials, we have added a zoomed-in version of Figure 2, emphasizing the northwest and northeast coasts of Australia, respectively.
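As promised above, here is a quick numerical check of the monotonicity of the rate factor in $d$ (a throwaway script written for this response, not part of our code):

```python
import numpy as np

# Evaluate the factor T^{d/(2r+d)} * n^{-2r/(2r+d)} from Theorem 1 for fixed T, n, r.
T, n, r = 500, 50, 2.0
for d in [2, 3, 5, 10]:
    factor = T ** (d / (2 * r + d)) * n ** (-2 * r / (2 * r + d))
    print(d, round(factor, 2))
# Prints an increasing sequence (roughly 0.6, 1.5, 5.6, 27.7),
# matching the claim that a larger d worsens the localization rate.
```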
\n\nAs mentioned in our manuscript, the assumption that different functional curves have the same number of grid points $n$ is made for presentation simplicity only. Our theoretical results continue to hold under the following weaker assumption on the observation grids. \n\n> Different functional curves are assumed to have the same number of grid points $n$. We remark that this is made for presentation simplicity only. It can indeed be further relaxed and the main results below will then depend on both the minimum and maximum numbers of grid points.\n\n**Local constant smoother vs. local linear smoother**\n\nFor estimation of a function, in general, local constant smoother indeed may not perform well around the boundaries. However, note that our primary focus here is *not* the estimation of a function, but estimating potential change-points. Our theory suggests that local constant smoother is sufficient for achieving optimality in the localization error rate of change-point estimation. Local constant smoother is computationally cheaper than local linear estimator, facilitating our fast computation. Once the change-points are localised by our method, we can use local linear smoother to further estimate the mean function within each estimated stationary segment.\n\n**Phase transition**\n\nThank you for bring Zhang and Wang (2016) (ZW) to our attention, which concerns estimation of a function given a sequence of stationary functional data. The phase transition result in ZW matches the one in our paper. In the notation of our paper, ZW consider the special case of $r=2$ (smoothness of mean function) and $d=1$ (dimension of function domain).\n\nCompare their Corollary 3.2 and our Theorem 3.1. For $r=2$ and $d=1$, our bandwidth in Theorem 3.1 is $(Tn)^{-1/5}$, which exactly matches their bandwidth in case 1 (sparse) and case 2 (dense). For case 3 (ultra dense), ZW allows any bandwidth that is $o(T^{-1/4})$, which is satisfied by our bandwidth $(Tn)^{-1/5}$ under the ultra dense case ($n/T^{1/4}\\to \\infty$). This indeed further validates our theoretical results.\n\nAs for the parametric rate (in our notation is parametric root-$T$ rate), our primary focus is change-point localization, and the well-known parametric rate for change-point localization (when dealing with a constant jump size $\\kappa$) is $O_p(1)$, which is achieved by our method (up to logarithmic factors) under the case where $n\\geq T^{1/4}$ (this is discussed in our manuscript). The reason that we can achieve this parametric $O_p(1)$ rate for change-point localization is that under the case $n\\geq T^{1/4}$, we can estimate the mean function up to the parametric root-$T$ rate. \n\n**Innovation**\n\nIt is not that our method is taking advantage of temporally dependent and non-stationary nature of the functional data. Instead, our method and theory are designed to be capable of estimating multiple change-points for (discretely observed) functional data under temporal dependence and heavy-tailedness, which are more realistic assumptions than those used in the literature. \n \nOur proposed algorithm FSBS is the first methodology that can rigorously estimate the change-points even when functional data are sparsely observed. This desired feature has not been seen in the existing literature on functional change point detection. 
\n\nAs for the parametric rate (in our notation, the parametric root-$T$ rate), our primary focus is change-point localization, and the well-known parametric rate for change-point localization (when dealing with a constant jump size $\\kappa$) is $O_p(1)$, which is achieved by our method (up to logarithmic factors) in the case where $n\\geq T^{1/4}$ (this is discussed in our manuscript). The reason that we can achieve this parametric $O_p(1)$ rate for change-point localization is that, when $n\\geq T^{1/4}$, we can estimate the mean function up to the parametric root-$T$ rate. \n\n**Innovation**\n\nIt is not that our method takes advantage of the temporally dependent and non-stationary nature of the functional data. Instead, our method and theory are designed to be capable of estimating multiple change-points for (discretely observed) functional data under temporal dependence and heavy-tailedness, which are more realistic assumptions than those used in the literature. \n \nOur proposed algorithm FSBS is the first methodology that can rigorously estimate the change-points even when functional data are sparsely observed. This desired feature has not been seen in the existing literature on functional change-point detection. In the review by Wang, Chiou and Müller, sparse functional data are described as a *much more difficult but commonly encountered situation* and are said to *typically require more effort in theory and methodology as compared to densely sampled functional data*.\n\n**Sparse cases**\n\nYou are right: a bounded constant $n$ and $n=1$ will give the same order of localization rate, which can be clearly seen from Theorem 1 in our paper. To the best of our knowledge, no existing works on change-point estimation in functional data can rigorously handle the case of a bounded constant $n$ (i.e. the sparse case); thus this is a selling point of our paper. We choose to advertise $n=1$ as it is the most representative case (and the most challenging case in practice) of a bounded constant $n$.\n", " This paper studies change-point detection and localisation for sparsely or densely sampled functional data observed on a general d-dimensional space. A computationally efficient kernel-based FSBS algorithm is proposed to handle discretely observed functional data and a general d-dimensional domain. The supporting theory is developed for heavy-tailed and serially dependent curves and measurement errors. A phase transition phenomenon is established depending on the order of n relative to T. The finite sample performance is evaluated via both simulations under different settings and one real data example. Strength:\n\n1, The first work in the existing literature studying change-point detection for partially observed functional data. \n\n2, Can handle low-dimensional multivariate functional data.\n\n3, Theory is developed for serially dependent random functions and measurement errors.\n\nWeakness:\n\n1, Instead of a machine learning conference like NeurIPS, I feel this paper fits a traditional Statistics journal better. The proposed methodology seems like a combination of some well-established techniques in the change-point detection and functional data analysis areas.\n\n2, The measurement locations x_{t,i} are fixed, while in the standard functional and longitudinal data analysis literature, these locations are assumed to be random and the number of measurement locations across curves can be different (rather than a fixed n) or even random.\n\n3, A detailed comparison with the phase transition phenomena in the existing literature is needed. See my question below.\n\n4, A local constant smoother instead of a local linear smoother is used and hence may suffer from bad performance around the boundaries. \n 1, What is the advantage of the local constant smoother used in the paper instead of the commonly adopted local linear smoother?\n\n2, The phase transition result is interesting, but is different from the results in Zhang and Wang, 2016, which considers categories of sparse, dense and ultra-dense functional data. In particular, the optimal bandwidths are different under the different cases, and as the number of measurement locations becomes sufficiently large, the parametric root-n rate can be attained. A detailed discussion is helpful.\n\n3, What are the truly innovative components of the paper that take advantage of the functional, temporally dependent and non-stationary nature of the data?\n\n4, Why is the extreme sparse case n=1 argued as one selling point of the paper? As far as I know, if n is bounded (treated as sparse functional data), the same rate can be attained. Yes", " This paper proposes a new kernel-based algorithm called functional seeded binary segmentation (FSBS) for change point detection of functional data. 
Theoretical error rate and consistency results are presented for functional data with general sampling density (sparse & dense), temporal dependence, and, importantly, multi-dimensional domains. Numerical experiments are conducted on simulation data as well as climate change data. This paper is very clearly written and the presentation is great. While I haven't had a chance to read through the proofs in the appendix, assuming they are technically sound, extending change-point detection to multi-dimensional functional data is a significant contribution to the community. 1. Since the paper stresses the extension to multi-dimensional data as a major contribution, there should be more discussion (in section 3.2) and experiments (in both section 4.1 and 4.2) involving multi-dimensional data. What insight can one gain about the theoretical bounds with respect to parameter $d$? How does the algorithm behave for $d >1$ in more than just a single experiment (scenario 4)?\n\n2. How did the computational cost and runtime of the proposed method compare to the existing methods?\n\n3. It's not clear what the reader is supposed to notice from Figure 2. If there is a noticeable change in the top left corner, a zoomed-in panel might be helpful.\n N/A", " This paper considers the change-point detection problem for functional data in multi-dimensional space. A kernel-based algorithm is proposed, which allows for handling functional data in a multivariate domain with measurement error. The authors further present several important theoretical results about this algorithm, such as the consistency of testing and a sharp localization error rate. Numerical results support their contributions. **Originality/Significance**: The proposed kernel-based algorithm elegantly solves the main challenges in the existing literature: (i) observed functional data may contain measurement error, (ii) there may exist multiple change points, (iii) the domain for functional data can be multivariate. This work would be very interesting for readers in the testing and detection area.\n\n**Quality**: All important points for functional-data change point detection are discussed, including the algorithm development, solid theoretical analysis, and extensive simulation experiments. From my viewpoint, the quality of this paper is at least above 50% of all accepted NeurIPS papers.\n\n**Clarity**: Problem setup, algorithm procedures, theory statements and assumptions are clearly presented. \n\n**Weakness**: \n1. It would be helpful to do a literature review for kernel-based change-point detection (even outside the functional data setting). This comparison will help the reader understand the connections and differences of the proposed algorithm.\n2. The choice of kernel will largely influence the performance of change-point detection. The kernel choice in general is specified by users who have prior knowledge of the data. Although the author does not demonstrate which kernel choice is better (in the numerical simulations only the Gaussian kernel is used), I think it is not a big issue and it would be an interesting topic for future study.\n3. Minor wording issues: \n- In line 168, the bracket should be $O(T\\log(T))$.\n- In line 211-212, there is a large box for the equation environment. The author may consider rephrasing it as \"with probability at least $1-3\\log^{-1}(T)$, it holds that \\[xxx\\]\". See **Weakness** part. See **Weakness** part.", " This paper studies offline change-point detection and estimation for functional data. 
A kernel-based algorithm named functional seeded binary segmentation (FSBS) is proposed. This paper also shows the consistency of FSBS for multiple change-point estimation and provides a sharp localization error rate. Numerical demonstrations on synthetic and real data are provided. Strengths\n\n1. A new multiple change point detection algorithm is proposed. The FSBS procedure, based on kernel density estimation and CUSUM statistics, is computationally efficient. \n2. This paper considers a general detection scheme, where the functional data can be discretely observed (even with n=1 in the very extreme case) and can be contaminated with temporally-dependent and heavy-tailed errors.\n3. A theoretical guarantee on the estimation error is provided. \n4. Extensive numerical experiments. \n\n\nWeaknesses\nThe method is not new. Both binary segmentation and kernel density estimation are well-studied methods for change detection and nonparametric density estimation, respectively. The theoretical guarantees on BS and KDE are also well studied.\n Some notations are not explained sufficiently. For example, what is the meaning of “arbitrarily-slow” in assumption 3, and what does C_SNR mean here? 
\n\nAlso, considering kernel-based change-point detection, it might be worthwhile to compare with or comment on the differences from some existing kernel-type methods, such as \"Li, S., Xie, Y., Dai, H., & Song, L. (2015). M-statistic for kernel change-point detection. Advances in Neural Information Processing Systems, 28.\" (although this paper didn't target detection for functional data, its online detection scheme might be helpful if FSBS could be generalized for online detection).\n\nMinor issues:\nThe equation in Theorem 1 is too long; maybe split it into two lines.\n yes", " This paper presents a change-point detection method, namely functional seeded binary segmentation (FSBS), for d-dimensional time-series data. FSBS is claimed to be more suitable for heavy-tailed and temporally dependent data, while being computationally efficient. The proposed method is evaluated on both synthetic data and sea surface temperature data and leads to better results compared to three classic methods: BGHK, HK, and SN. * Strengths\n\n-- The proposed method is easy to implement with few (only three) tuning parameters, and the authors explained them well in terms of how to select them.\n\n-- The authors showed that FSBS is consistent in detecting and localising multiple change-points under standard regularity conditions. Besides, it is claimed to have the sharpest localisation error rate.\n\n* Weaknesses\n\n-- The baseline methods appear to be published quite a while ago, e.g., BGHK (2009), HK (2010) and SN (2011), so I'm not sure if they are state-of-the-art methods for change-point detection in functional data as the authors claimed in Line 267.\n\n-- The experiments seem to be quite lightweight, as they include only one real dataset.\n FSBS is claimed to handle temporally-dependent observations better theoretically; however, it is not clear why it has such a property and other methods do not. In lines 128–130, the authors state that “For temporal dependence, we adopt the physical dependence framework by [26], which covers a wide range of time series models, such as ARMA and vector AR models.” while it is hard to tell how [26] is adopted here exactly. See “weakness” above." ]
[ -1, -1, -1, -1, -1, 4, 8, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 3, 3, 1 ]
[ "oG5TKZNbNk2", "XbVgOZk1hzv", "_kyTms0qfT", "sdLmW4C2gkc", "HrFNYEEwd9", "nips_2022_eF_Mx-3Sm92", "nips_2022_eF_Mx-3Sm92", "nips_2022_eF_Mx-3Sm92", "nips_2022_eF_Mx-3Sm92", "nips_2022_eF_Mx-3Sm92" ]
nips_2022_ADfBF9PoTvw
Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients
Minimizing the inclusive Kullback-Leibler (KL) divergence with stochastic gradient descent (SGD) is challenging since its gradient is defined as an integral over the posterior. Recently, multiple methods have been proposed to run SGD with biased gradient estimates obtained from a Markov chain. This paper provides the first non-asymptotic convergence analysis of these methods by establishing their mixing rate and gradient variance. To do this, we demonstrate that these methods—which we collectively refer to as Markov chain score ascent (MCSA) methods—can be cast as special cases of the Markov chain gradient descent framework. Furthermore, by leveraging this new understanding, we develop a novel MCSA scheme, parallel MCSA (pMCSA), that achieves a tighter bound on the gradient variance. We demonstrate that this improved theoretical result translates to superior empirical performance.
Accept
All reviewers recommend accepting the paper. If the authors want to increase the impact of their work, a demonstration on a large-scale problem would help a lot.
test
[ "ywkwCYTEK9Y", "zSUNT4nRnqf", "rYHNWpJW19S", "3izwxVJjOmU", "As9SqjVaomZ", "Z_UTJYxRxM_", "20sKzh4nc2J", "6vBW3I2NFaK", "56U-l452I6-", "ZWMOeOVdJnh" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing my concerns. I increased my score to an *Accept*.", " Thank you for addressing my questions and suggestions. These answers are helpful and addressed my concerns about mixing rate. The updated Figures 1, 2 are also more informative and highlight the benefits of the proposed approach. Therefore I increased my score from 5 to 6. ", " Thank you for your important questions. We clarified them below.\n\n> Significance Given the theoretical nature of the paper, I think the empirical evaluation sufficiently demonstrates the improvement of pMCSA over JSA and MSC. The first experiment seems to confirm the theoretical results but only considers two different sampling budgets. If computationally feasible, it would be nice to see an array of different sampling budgets, e.g. N=8,16,32,62,128, to further confirm the theoretical findings empirically.\n\nThank you for the suggestion. We have changed the plot to reflect a broader range of computational budgets in the **uploaded revision**.\n\n> I was specifically wondering about the $\\log T$ term in the numerator of the rate for Mirror Descent which I was not able to match to the rate stated in the original paper.\n\nThe stated convergence rate is not explicitly spelled out in the original paper. However, given that it is a direct consequence of Corollary 3.5 (Duchi et al., 2012), it was not clear whether it was worth including the full derivation in our paper. The stated convergence rate is obtained by setting $\\epsilon = T^{-1/2}$, $\\kappa_1 = 2 / \\log \\rho^{-1}$, $\\kappa_2 = 1$. The $\\kappa_1, \\kappa_2$ terms are directly obtained from the definition of the Hellinger and total-variation mixing times (Definition 2.1; Duchi et al., 2012), while the arbitrary choice for $\\epsilon$ is suggested by Duchi et al. (2012). Then, the term responsible for the convergence rate in Corollary 3.5 evolves as\n\\begin{align}\n\\frac{2 \\alpha G^2}{\\sqrt{T}} \\left(\\kappa_1 \\log \\frac{\\kappa_2}{\\epsilon} \\right) = \\frac{2 \\alpha G^2}{\\sqrt{T}} \\left( \\frac{2}{\\log \\rho^{-1}} \\log \\sqrt{T} \\right) = \\frac{2 \\alpha G^2}{\\log \\rho^{-1}} \\frac{\\log T}{\\sqrt{T}}\n\\end{align}\nwhich, excluding the stepsize $\\alpha$, is our stated rate.\nThis is slightly worse than the rate stated in Eq. (3.2) by Duchi et al. (2012), but this is obtained through a stepsize aware of $G$, $\\kappa1$, $\\kappa2$, and the domain radius, which is less realistic.\n\n> I was also confused by the statement that the influence of mixing rate on the convergence rate decreases as $1/T^2$ even though the stated convergence rate does depend on the mixing rate at all. I might be missing something here and would be grateful for clarification.\n\nFor Doan (2020ab), the convergence rate is \"independent\" of $T$ in the sense that the effect of the mixing time bound $\\rho$ decreases at a faster rate. Therefore, the second and third row of Table 1 does not involve $\\rho$.\n\nIn more detail, the \"effect\" of the mixing rate by Doan (2020b) is contained in the positive integer $\\mathcal{K}^*$. For the convex cases, for $k > \\mathcal{K}^*$, the term with $\\mathcal{K^*}$ decreases as $\\mathcal{O}\\left(1/k^2\\right)$ where $k$ is the MCGD iteration number. This is more explicitly stated by Doan (2020b) in Eq. (27), (15). 
Although $\\mathcal{K}^*$ could be big, the stated rate is satisfied as soon as $k$ goes above it.", " Thank you for your feedback – responses to your questions are below.\n\n> It would be beneficial to also plot MSC-RB in Figures 1 and 2 to match the computational costs between different methods.\n\nThank you for the suggestion. We have added experiments with MSC-RB in Figures 1 and 2 in our **uploaded revision**.\n\n> Does pMCSA also trade bias for variance during early training iterations?\n\nThe trade-off is a “fixed” feature and kicks in regardless of the iteration, as implied by Theorem 3. For concreteness, a toy sketch of a single pMCSA step is given below.
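The following is a minimal, illustrative Python sketch written for this response (not our reference code): it uses an independent Metropolis-Hastings kernel with $q_\\lambda$ as the proposal for concreteness, and all names are placeholders.

```python
import numpy as np

def pmcsa_step(states, lam, log_pi, sample_q, log_q, grad_log_q, rng):
    """One pMCSA step: a single independent MH transition on each of the N chains,
    followed by averaging the score of q_lambda over the N updated states."""
    new_states = []
    for z in states:
        z_prop = sample_q(lam, rng)          # proposal drawn from q_lambda
        log_acc = (log_pi(z_prop) - log_pi(z)) + (log_q(lam, z) - log_q(lam, z_prop))
        if np.log(rng.uniform()) < log_acc:  # independent MH accept/reject
            z = z_prop
        new_states.append(z)
    # Average score: estimates E_pi[grad_lam log q_lam(z)] = -grad_lam KL(pi, q_lam).
    grad = np.mean([grad_log_q(lam, z) for z in new_states], axis=0)
    return new_states, grad
```

Each chain takes one transition per SGD step, so averaging the score over the $N$ states is what drives the variance reduction, at the price of slower per-chain mixing.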
We believe that combining inducing points with MCSA will be beneficial and lead to new forms of sparse GP approximations.", " We thank you for your service.", " The authors provide non-asymptotic convergence rates of Markov Chain Score Ascent (MCSA) methods by casting them as instances of Markov Chain Gradient Descent, for which such rates have been recently established. Based on these results the authors hypothesize that the convergence of MCSA depends more heavily on the gradient variance than on the mixing rate of the Markov chain and propose a parallel MCSA (pMCSA) scheme which leverages these insights. In contrast to JSA, which performs $N$ transitions on a single chain, pMCSA performs a single transition on $N$ independent Markov chains, hence lowering the variance of the gradient in exchange for a slower mixing rate.\n **Originality:**\nThe work is original in that it is, to the best of my knowledge, the first that discusses non-asymptotic convergence and provides convergence bounds for MCSA methods. The paper casts MCSA methods as MCGD to leverage recently established results on non-asymptotic convergence for MCGD. Hence the theoretical contribution lies in deriving bounds for the gradient variance and mixing rate of the individual methods such that they can be used with the existing theoretical findings. I believe this work is interesting to the inference community and might help to inform the further development of MCSA algorithms and their analysis.\n\n**Clarity:**\nGenerally, due to the technical nature of the paper it was not always straightforward to me to follow the argument. I found it especially hard to follow the discussion about the convergence rates. The authors are mainly interested in the rates in terms of the gradient bound $G$, mixing rate $\\rho$, and number of iterations $T$ and hence can express the convergence rates in a simplified form in O-notation. Even though the authors gave references to the exact theorems and corollaries which establish the convergence rates in previous work, it was not straightforward for me to verify the exact correspondences, especially for the Mirror Descent case (please see questions below). \n\n**Significance:**\nGiven the theoretical nature of the paper, I think the empirical evaluation sufficiently demonstrates the improvement of pMCSA over JSA and MSC. The first experiment seems to confirm the theoretical results but only considers two different sampling budgets. If computationally feasible, it would be nice to see an array of different sampling budgets, e.g. N=8,16,32,62,128, to further confirm the theoretical findings empirically.\n\n - Based on the analysis, JSA and MSC are both heavily influenced by model misspecification (of the variational approximation) via $w^*$ while pMCSA is independent of $w^*$. Is this effect expected to vanish with increasing capacity of the variational distribution (with reasonable initialization) after some initial training time?\n\nRegarding the convergence rates (mentioned above):\n- I was specifically wondering about the $\\log T$ term in the numerator of the rate for Mirror Descent which I was not able to match to the rate stated in the original paper.\n- I was also confused by the statement that the influence of mixing rate on the convergence rate decreases as $O(1/T^2)$ even though the stated convergence rate does not depend on the mixing rate at all. I might be missing something here and would be grateful for clarification.\n\nSome minor things I noticed:\n- Equation after line 97. 
Product sign between $\\pi(z^{(1)})$ and $\\pi(z^{(2)})$ is missing\n- The gradient bound $G$ is mentioned on Line 114 but has not yet been introduced\n- Line 165: MSA -> MSC\n The authors clearly state their assumptions and limitations.", " This paper presents a unified framework of Markov Chain Score Ascent (MCSA) methods and a theoretical study of MCSA gradient variance for minimizing the forward KL divergence in variational inference. Based on the theoretical analysis, the paper proposes a simple modification to use parallel Markov chains, named parallel MCSA (pMCSA). Experimental results show that pMCSA achieves better empirical performance for a number of Bayesian inference problems. Strengths:\n- The paper unifies previous works on forward KL minimization under the same framework.\n- The study on the improvements of gradient variance w.r.t. $N$ for prior methods is useful and insightful.\n- The proposed pMCSA algorithm overall achieves better empirical performance than JSA, MSC and MSC-RB. \n\nWeaknesses:\n- Analysis is largely based on the more general Markov chain gradient descent methodology, of which MCSA is only a special case. This limits the novelty and scope of the work. \n- The proposed modification is relatively simplistic and similar to other parallel MCMC algorithms. Switching the kernel to MH in MSC and increasing the sample size also effectively yields the same algorithm. \n - Are there scenarios where the mixing rate is important and JSA could attain better performance?\n- Does pMCSA also trade bias for variance during early training iterations? \n- It would be beneficial to also plot MSC-RB in Figure 1, 2 to match the computational costs between different methods. The authors adequately addressed the limitations of their work. The authors did not discuss any potential negative societal impacts of the work, but as the work is mostly theoretical it is unlikely to have immediate negative societal impacts. ", " This paper presents a unified framework of Markov Chain Score Ascent (MCSA) methods and a theoretical study of MCSA gradient variance for minimizing the forward KL divergence in variational inference. The authors prove that these Markov chain score ascent methods are special cases of a unified framework. Based on this framework, a new scheme is proposed. Unfortunately, I am somewhat of an outsider to the field of this paper (cannot check the correctness of the main part such as the non-asymptotic convergence of the proposed framework) and I feel I cannot accurately assess the importance of the proposed framework and the novel scheme. However, this paper is definitely well-organized and technically solid. None. Yes, the authors have addressed the limitations and potential negative societal impact of their work.", " The paper examines variational inference with the inclusive KL, KL($\\pi$, q), where required samples from $\\pi$ are obtained using MCMC.\nIn particular, the authors provide theoretical analysis of non-asymptotic convergence of two published methods, and establish mixing rates and bounds on gradient variance. 
\nThis is done by casting those methods as instances of Markov Chain Gradient Descent, and applying and adapting existing theoretical results.\nFurther, based on their findings, the authors propose a novel scheme that trades off mixing rate for reduced variance, and through that performs favorably.\nA number of experiments on simple real-world data confirm the theoretical findings.\n This is a very nice paper, well written, tackling a worthwhile problem, and executed with a very high level of sophistication.\n\nThe paper begins by casting two existing inclusive KL score climbing methods under the framework of Markov Chain Gradient Descent, a method to optimize an expectation of a function over an intractable distribution that works via using MCMC to draw samples from the distribution.\nThis is done via Proposition 1, which is a relatively simple re-writing exercise of the involved quantities.\nDespite being simple, this connection allows the authors to apply and adapt existing theoretical results to establish results on mixing and gradient variance (Thm1/2).\nThe authors discuss the implications, and in particular the downsides of both analysed methods.\n\n\nOriginality:\nThe authors connect existing works to the established field of Markov Chain Gradient Descent, a method to optimize an expectation of a function over an intractable distribution that works via using MCMC to draw samples from the distribution.\nThis is done via Proposition 1, which is a relatively simple re-writing exercise of the involved quantities.\nDespite being simple, this highly original re-casting allows the authors to apply and adapt existing theoretical results to establish results on mixing and gradient variance (Thm1/2).\nOn top of this, the authors take their findings and use them to motivate an original method that overcomes shortcomings of the analysed methods.\n\n\nQuality: \nThe whole paper is carried out at a very high standard.\nAll claims are supported by proofs and their implications are discussed in the main text.\nI appreciate the discussions of the theoretical results in plain text.\nIt is particularly nice that the analysis of the existing work leads to a new algorithm that performs favorably in practice -- it is very nice to have theoretical results motivate new methodology.\nFor the new method, the authors provide a comparison of computation costs -- though it would have been nice to see this confirmed in the experiments, by e.g. plotting performance as a function of wallclock time instead of iterations.\nIt is also great that the authors compare various optimizers and step-sizes.\n\n\nClarity:\nThe paper is very well written and organized.\nThe proof contains justifications for most steps, which makes it very nice to read the derivations.\n\n\nSignificance: \nThe results are significant from a theoretical / unification perspective, as is the proposed algorithm from a practical perspective (due to its strong performance).\nThe only real critique I have for the paper is the lack of modern large-scale experiments.\nThat is, deep generative models with complex architectures, e.g. 
on images, 3D scenes or scientific domains, and how MCSA performs here in general.\nThe experiments considered are pretty much all more or less solved problems with only marginal gains to be made.\nA strong large-scale result would significantly improve the impact of this paper.\n\n114 missing s in iterations\neq under 162, should say that L is score bound\nline 284, verb missing (\"use\"?)\n171 alg 4 is in appendix One question that is not really discussed in the paper is: \"Is it worth giving up the scalability of the standard variational inference for the improved performance of the discussed methods?\". I think since all papers in the MCSA space lack large-scale experiments, this really remains to be seen, but I'd like to hear the authors comment on this.\n\n289 \"We consider ELBO with only 𝑁 = 1 since differentiating through the likelihood makes its per-iteration cost comparable to MCSA methods with 𝑁 = 10.\" Should be elaborated as it is not clear where this number comes from.\n\n294 \"Our encouraging regression results suggest that incorporating methods such as inducing points (Snelson & Ghahramani, 2005) into MCSA may lead to an important new class of GP models.\" not clear what that means and what the basis of this is. One problem with the results is the strong assumptions necessary (the authors do state this); however, it seems very hard to derive comparable results without any assumptions, and the experiments validate some of the findings.\nIn total, the paper appears to be honest about strengths and weaknesses of the analysis method." ]
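The exchanges above all turn on the same mechanism: pMCSA runs a single MCMC transition on each of $N$ independent chains and averages the resulting score gradient, trading slower mixing for lower gradient variance. Below is a minimal sketch of one such update; the random-walk Metropolis kernel, the diagonal-Gaussian variational family, and all numeric values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rw_metropolis_step(z, log_p, step, rng):
    # One random-walk Metropolis transition targeting exp(log_p) (illustrative kernel).
    prop = z + step * rng.standard_normal(z.shape)
    return prop if np.log(rng.uniform()) < log_p(prop) - log_p(z) else z

def pmcsa_gradient(zs, log_p, mu, sigma, step=0.5, rng=None):
    """One pMCSA iteration: a single transition on each of the N parallel chains
    in `zs`, then the descent gradient of the inclusive KL(p || q) for a diagonal
    Gaussian q = N(mu, diag(sigma^2)), averaged over chains. Averaging shrinks
    the gradient variance roughly as 1/N but does not speed up per-chain mixing,
    which is exactly the trade-off discussed in the reviews."""
    rng = rng if rng is not None else np.random.default_rng()
    zs = np.stack([rw_metropolis_step(z, log_p, step, rng) for z in zs])
    grad_mu = -np.mean((zs - mu) / sigma**2, axis=0)  # -E_p[d log q / d mu]
    grad_sigma = -np.mean(((zs - mu) ** 2 - sigma**2) / sigma**3, axis=0)
    return zs, grad_mu, grad_sigma

rng = np.random.default_rng(0)
log_p = lambda z: -0.5 * np.sum((z - 1.0) ** 2)   # toy target: N(1, I)
zs = rng.standard_normal((16, 2))                 # N = 16 parallel chains
zs, g_mu, g_sigma = pmcsa_gradient(zs, log_p, mu=np.zeros(2), sigma=np.ones(2), rng=rng)
```

In contrast, a JSA-style update would apply $N$ transitions to one chain before forming the same average, improving per-update mixing at the cost of correlated samples and higher gradient variance.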
[ -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 2, 4 ]
[ "rYHNWpJW19S", "3izwxVJjOmU", "20sKzh4nc2J", "6vBW3I2NFaK", "ZWMOeOVdJnh", "56U-l452I6-", "nips_2022_ADfBF9PoTvw", "nips_2022_ADfBF9PoTvw", "nips_2022_ADfBF9PoTvw", "nips_2022_ADfBF9PoTvw" ]
nips_2022_rG7HZZtIc-
D^2NeRF: Self-Supervised Decoupling of Dynamic and Static Objects from a Monocular Video
Given a monocular video, segmenting and decoupling dynamic objects while recovering the static environment is a widely studied problem in machine intelligence. Existing solutions usually approach this problem in the image domain, limiting their performance and understanding of the environment. We introduce Decoupled Dynamic Neural Radiance Field (D^2NeRF), a self-supervised approach that takes a monocular video and learns a 3D scene representation which decouples moving objects, including their shadows, from the static background. Our method represents the moving objects and the static background by two separate neural radiance fields with only one allowing for temporal changes. A naive implementation of this approach leads to the dynamic component taking over the static one as the representation of the former is inherently more general and prone to overfitting. To this end, we propose a novel loss to promote correct separation of phenomena. We further propose a shadow field network to detect and decouple dynamically moving shadows. We introduce a new dataset containing various dynamic objects and shadows and demonstrate that our method can achieve better performance than state-of-the-art approaches in decoupling dynamic and static 3D objects, occlusion and shadow removal, and image segmentation for moving objects. Project page: https://d2nerf.github.io/
Accept
This paper attacks an interesting problem with NERF, decoupling moving objects, including their shadows, from the static background. All four reviewers recommend accepting the paper, and the weaknesses identified did not detract from substantive contributions. Therefore I am accepting this paper.
train
[ "Laa0JY-LcQq", "EcyakYJ7Mv6", "U6Z_9HarEg", "UlkiJ4MtgSV", "-ECKPEL1JyJ", "k4YwP2MzwUE", "49VPU8viCK", "4E2lwN9l0Yp", "iMrpU4ZP436", "ZHmwrHDIg9p", "-Hp1TOVqbsb" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors well addressed my concerns. Actually, I really appreciate this paper although the technique is relatively simple.", " We thank the reviewer PH91 for the detailed comments and constructive suggestions. Below are our responses to the questions.\n\n### R4-Q1 Hyperparameter Tuning\nPlease refer to All-Q2 for an analysis of the hyperparameter sensitivity and potential approaches to improve the robustness of our method. As suggested in All-Q2, we could rely on ground truth static views to initialize the static component, or we could use those as validation views to automate the tuning process for hyperparameters.\n\n### R4-Q2 Comparison with Other Sparsity Regularization\nMany losses, including the binary entropy loss in our method, beta distribution loss from [1] and Laplacian distribution loss from [2] all serve a similar purpose: reducing entropy in the variable distribution and promoting sparsity by encouraging values to approach 0 or 1. By appending an exponential function, all of them could be skewed in a similar style to suppress and regularize the dynamic component. We have experimented with the Laplacian distribution loss and have achieved similar decoupling results to the binary entropy loss we used in the paper.\n\nHowever, a crucial difference between skewed binary entropy loss and other losses is that the former has a vanishing gradient when approaching 0, while the other losses (beta and Laplacian) have gradients approaching infinity near 0. This difference has its pros and cons. The disadvantage is that we have to design and tune an additional loss (eq 10) to remove the low-density floaters in the free space. The benefit is that we potentially allow space that is initially learned as fully static (with $w=0$) to change to dynamic during the optimization. \n\n### R4-Q3 Additional Literature\nThank you for the constructive comments on the additional related works, we have added them in the revision.\n\n[1] Neural Volumes: Learning Dynamic Renderable Volumes from Images\n\n[2] LOLNeRF- Learn from One Look\n", " We thank reviewer GJS4 for the detailed comments and constructive suggestions. Below are our responses to the raised questions and concerns:\n\n### R3-Q1 Hyperparameter Sensitivity\nPlease refer to All-Q2 for an analysis of the hyperparameter sensitivity and potential approaches to improve the robustness of our method.\n\nFor the scene “Banana”, two separate sets of hyperparameters are used to maximize performance in different aspects: scene decoupling and novel view synthesis. We believe the reason why different hyperparameters are needed is the inaccuracy in estimated camera poses, which cause difficulty for the static NeRF network to correctly model some of the high-frequency details in the background. This is also the reason why our method does not consistently outperform the HyperNeRF baseline. However, by slightly reducing the regularization of the dynamic component and allowing it to contain part of the static background, the network is able to abuse temporal dependency to account for the bias caused by incorrect camera poses, achieving better quality in novel view synthesis with a tradeoff to the correctness of decoupling.\n\n### R3-Q2 Results with Universal Hyperparameters\nPlease refer to All-Q3 for the numerical results and analysis of our method with a universal set of hyperparameters.\n", " We thank reviewer xgPY for the detailed comments and constructive suggestions. 
Below are our comments on the raised questions and concerns:\n\n### R2-Q1 Additional Comparisons\nWe have evaluated NSFF as an additional baseline in background reconstruction and video segmentation tasks. Please refer to All-Q1 in the general response for the results and analysis.\n\nUnfortunately, comparisons to other suggested methods cannot easily be done in a fair setting as they require additional supervision or work in more constrained scenarios: STNeRF [18] targets at dynamic humans and requires extensive supervision including human masks and bounding boxes. DynNeRF [11] similarly requires accurate dynamic masks as training input. Comparison to them will be equivalent to comparing with the pre-trained segmentation networks they use behind the scene, whereas we have already included Motion Grouping [61] as a SOTA motion segmentation method as a baseline. STaR [63] requires multi-view video as training input and only works in scenes with a single rigid dynamic object. SIMONe [19] segments out all objects from the background without distinguishing between dynamic and static ones, and requires multiple sequences on each scene for training. We believe the field of self-supervised 3D motion decoupling is still under-explored and hope our work can motivate additional research in this direction.\n\n### R2-Q2 Generalizability\nWe agree that our method lacks the ability to generalize across scenes and requires time-consuming per-scene optimization. To the best of our knowledge, this is a limitation in all existing temporal-dependent NeRF-based methods. While extracting shared priors across datasets can certainly improve method efficiency, trading-off with the reconstruction robustness on scenes with a large variety of different motion, geometry and appearance is a non-trivial task and we leave that as a potential direction for future research.\n", " We thank reviewer Xgiz for the detailed comments and constructive suggestions. Below are our responses on the raised questions and concerns:\n\n### R1-Q1 COLMAP Mask\nWe wish to demonstrate the performance of our method when applied to scenes containing challenging dynamic occluders that cannot be easily masked out using a pre-trained semantic segmentation network, for example, liquid pouring down to a different container. Therefore, we do not apply any masks in any stage of the training, including the camera registration stage for our real-world scenes. \n\nFor our real-world captures, the background contains sufficient details and the scale of dynamic occluders is small enough, allowing the majority of features to be extracted and matched in the background. This ensures camera registration via COLMAP is done with a reasonable level of accuracy. The mean reprojection errors typically range from 0.49px to 0.73px in our dataset.\n\n### R1-Q2 Depth Map\nThank you for the constructive suggestion, we have added depth maps for the decoupled dynamic/static and both components together to the supplementary to showcase the correctness of the geometry reconstructed.\n\n### R1-Q3 HyperNeRF Result\nWe have empirically found that HyperNeRF suffers from reconstructing dynamic objects for scenes with rapid object motion near the static background (Balloon, Water, Pick, Broom, etc). The simple MLP in HyperNeRF cannot easily represent the sharp discontinuities in the deformation field, hence strongly biasing the scene towards smooth motion. 
While such temporal smoothness is desirable within the dynamic objects and does not cause issues for dynamic objects that are far away from any static geometry, it becomes a limitation for scenes where dynamic and static geometry are close to each other, for example, when a dynamic broom quickly sweeps above a static floor. Meanwhile, our method avoids both of the above issues by simply decoupling the scene and allowing HyperNeRF to focus on the geometry, appearance and deformation of the dynamic objects only.\n", " ### All-Q3 Results with Universal Hyperparameters\nTo provide more insights into the robustness of our method, we report the quantitative results of our method with a universal set of hyperparameters in the table above, where “Ours*” uses the hyperparameters described in row 6 in Table 3 for all scenes. With this universal set of hyperparameters, the average performance of our method is within 94% of our best result in background reconstruction (measured in PSNR), and within 76% of our best result in video segmentation (measured in boundary measure $\\mathcal{F}$). While the performance on “Car” and “Chairs” fluctuates in a moderate range with different hyperparameters, our method still achieves better results in general compared to the baselines. For the scene “Bag”, our method fails to achieve a correct decoupling with this universal set of hyperparameters. We suspect it is because of the similar appearance between the dynamic object (brown bag) and the background (brown floor and shelf), presenting significant challenges to the decoupling and hence requiring specialised hyperparameters to achieve satisfactory results.\n", " We thank reviewers for the detailed and considerate feedback. Below are our responses to the common questions.\n\n### All-Q1 Additional Results\n\nAs suggested, we have evaluated NSFF under the background reconstruction and video segmentation tasks and report the additional comparison in the tables below. We disabled the hard mining initialization via pre-trained 2D masks in NSFF and trained it for 500k iterations on each scene. However, please note that although NSFF similarly learns separated dynamic and static NeRFs, it is designed to only maximize the quality of view synthesis rather than scene decoupling and does not incorporate any regularization on the dynamic NeRF. 
While it consistently reaches PSNR between 24 to 31 in the novel view synthesis, its decoupling is extremely unstable and can sometimes lead to completely empty static NeRF being learned.\n\n| | | Car | | | Cars | | | Bag | | | Chairs | | | Pillow | | Mean | | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | LPIPS↓ | MS-SSIM↑ | PSNR↑ | LPIPS↓ | MS-SSIM↑ | PSNR↑ | LPIPS↓ | MS-SSIM↑ | PSNR↑ | LPIPS↓ | MS-SSIM↑ | PSNR↑ | LPIPS↓ | MS-SSIM↑ | PSNR↑ | LPIPS↓ | MS-SSIM↑ | PSNR↑ |\n| NeRF-W | .218 | .814 | 24.23 | .243 | .873 | 24.51 | .139 | .791 | 20.65 | .150 | .681 | 23.77 | .088 | .935 | 28.24 | .167 | .819 | 24.28 \nNSFF | .200 | .806 | 24.90 | .620 | .376 | 10.29 | [.108] | .892 | 25.62 | .682 | .284 | 12.82 | .782 | .343 | 4.55 | .478 | .540 | 15.64 \nNeuralDiff | [.065] | .952 | 31.89 | .098 | .921 | 25.93 | .117 | [.910] | [29.02] | .112 | **.722** | 24.42 | .565 | .652 | 20.09 | .191 | .831 | 26.27 \nOurs | **.062** | **.975** | **34.27** | **[.090]** | **[.953]** | **[26.27]** | **.076** | **.979** | **34.14** | **.095** | [.707] | **24.63** | **[.076]** | **[.979]** | **[36.58]** | **.080** | **.919** | **31.18** \nOurs* | .103 | [.961] | [33.23] | **[.090]** | **[.953]** | **[26.27]** | .254 | .821 | 25.94 | [.108] | .703 | [24.57] | **[.076]** | **[.979]** | **[36.58]** | [.126] | [.883] | [29.32] \n\n\n| | Car | | Cars | | Bag | | Chairs | | Pillow | |Mean | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | $\\mathcal{J}$↑ | $\\mathcal{F}$↑ | $\\mathcal{J}$↑ | $\\mathcal{F}$↑ | $\\mathcal{J}$↑ | $\\mathcal{F}$↑ | $\\mathcal{J}$↑ | $\\mathcal{F}$↑ | $\\mathcal{J}$↑ | $\\mathcal{F}$↑ | $\\mathcal{J}$↑ | $\\mathcal{F}$↑ |\nMG | .603 | .743 | .363 | .474 | [.629] | [.738] | [.484] | [.613] | .044 | .080 | .424 | .529 \nNeRF-W | .072 | .132 | .098 | .162 | .027 | .052 | .154 | .254 | .194 | .314 | .109 | .183 \nNSFF | .083 | .152 | .058 | .104 | .102 | .182 | .046 | .087 | .104 | .188 | .079 | .143 \nNeuralDiff | .806 | .891 | .508 | .578 | .080 | .144 | .368 | .513 | .097 | .177 | .372 | .461 \nOurs | [.848] | [.917] | **[.790]** | **[.874]** | **.703** | **.818** | **.551** | **.687** | **[.693]** | **[.818]** | **.717** | **.822** \nOurs* | **.863** | **.926** | **[.790]** | **[.874]** | .038 | .071 | .303 | .427 | **[.693]**| **[.818]** | [.537] | [.623] \n\nQuantitative evaluations on background reconstruction (above) and video segmentation (below). **Best** and [second best] results are highlighted. \"Ours*\" shows the results of our method with a universal set of hyperparameters (row 6 in Table.3).\n\n### All-Q2 Hyperparameter Sensitivity and Tuning\nDue to the self-supervised nature and restrictive monocular video setting, the decoupling currently requires some fine-tuning on challenging scenes with correlated camera and object motion, inaccurate camera parameters, and similar appearance between dynamic objects and background. \n\nThe tuning of hyperparameters is indeed crucial in practice. Our provided hyperparameter setting (row 1 in Table. 3) achieves correct decoupling in 10 out of 13 of our real-world scenes and is suitable for the majority of applications, but could still fail and require further fine-tuning under challenging object/camera motion or inaccurate camera parameters. \n\nIt would be possible to improve the robustness if additional supervision is provided. 
Given a sparse set of views on the static background, including additional captures of the background at different viewpoints, or simply the video frames where dynamic occluders are absent, the static component could be initialized with those views to improve the model robustness against different hyperparameters and scenes. Besides, if we restrict the applications to scenes containing only common dynamic occluders, we could incorporate a pre-trained semantic image segmentation network to guide the initial stage of optimization.", " This paper proposes a method to segment and decouple dynamic objects while recovering the static environment from a monocular video. It adapts from NeRF and its extension HyperNeRF but with improved handling of shadow regions as well as a loss to promote correct separation of dynamic and static regions. It demonstrates plausible motion segmentation and shadow removal results compared to recent NeRF-based methods.\n\n This paper is a solid development for using NeRF to reconstruct from monocular videos with dynamic foregrounds. The paper in general is well-written and the claims are well supported. The adaptation it made to handle shadow and foreground & background separation is elegant and seems to be effective. It definitely holds value to people working on similar problems and deserves publication.\n\nThere are a couple of things I hope the authors could comment on.\n\n(1) In Ln. 226, “To demonstrate the ability of fully self-supervised scene decoupling, we do not apply any masks when registering real-world images using COLMAP”. In my experience, without feeding masks, COLMAP tends to make wrong estimations of camera poses when the foreground is sufficiently large, which will definitely result in wrong reconstruction of the static background. It seems the proposed method does not attempt to update the camera poses during optimization, so the claim above looks confusing to me. \n\n(2) It would be nice if the authors also visualized the depth map of the reconstructed foreground / background.\n\n(3) The results of HyperNeRF look far worse compared to the proposed method even in the dynamic region. Given that the proposed method seems to differ little from HyperNeRF at least in the dynamic regions, the current result surprises me a bit. Could the authors make additional comments on the possible reasons?\n\n\n see the questions in \"strength and weakness\" The method cannot handle high-frequency view-dependent radiance change due to the monocular moving camera setting.", " This paper presents D^2NeRF, a self-supervised method that takes a monocular video and learns a 3D scene representation that decouples moving objects, including their shadows, from the static background. In addition, this paper proposes a novel loss to promote the correct separation of phenomena for the static and dynamic fields. The authors further propose a shadow field network to detect and decouple dynamically moving shadows. A new dataset was proposed, containing various dynamic objects and shadows. Extensive experiments demonstrate that the proposed method can achieve better performance than state-of-the-art approaches in decoupling dynamic and static 3D objects, occlusion and shadow removal, and image segmentation for moving objects. Strength:\n\n* New dataset for static and dynamic field decomposition.\n* Self-supervised method that takes a monocular video and learns a 3D scene representation that decouples moving objects, including their shadows, from the static background. 
\n* Novel loss to promote the correct separation of phenomena for the static and dynamic fields. \n* A shadow field network to detect and decouple dynamically moving shadows\n* SoTA results in several tasks such as decoupling dynamic and static 3D objects, occlusion and shadow removal, and image segmentation for moving objects.\n\nWeakness:\n* Requires accurate camera poses and suffers from high-frequency view-dependent radiance change, which has been discussed.\n* Does not compare with other motion decoupling methods, such as STNeRF[18], NSFF[24] and DynNeRF[11], SIMONe[19], STaR[63], although their experiment setting may be inconsistent with the proposed methods. However, I think in the synthetic dataset, all of them can be reproduced, please add some of their results for a complete comparison.\n* When I refer to the supplementary video, there are still some wrong static and dynamic field decompositions, such as the keyboard. And I think this is a tradeoff. \n* It seems to lack generalizability, just train and test in the same dataset. Please refer to the weakness. Add more comparison results with existing methods.\n* Does not compare with other motion decoupling methods, such as STNeRF[18], NSFF[24] and DynNeRF[11], SIMONe[19], STaR[63], although their experiment setting may be inconsistent with the proposed methods. However, I think in the synthetic dataset, all of them can be reproduced, please add some of their results for a complete comparison.\n* It seems to lack generalizability, just train and test in the same dataset. The authors have discussed the limitations, and all of them are a considerable challenge. I have no idea how to solve them, and I think it's a trade-off.", " This paper proposes a new self-supervised approach for segmenting moving objects from the static background. It tackles issues encountered during training with several techniques, such as skewed entropy, ray regularization, static regularization, and integrating a shadow ratio to separate shadows. ## Strengths\n\n1. Originality: the proposed techniques are insightful and are beneficial to the community, such as skewed entropy and ray regularization.\n2. Quality: The presented qualitative results are of high quality.\n3. Clarity: the manuscript is well-written.\n4. Significance: the task of decoupling dynamic objects and static background is important and the proposed approach tackles the long-standing problem with high-quality results.\n\n## Weakness\n\n1. I think the main weakness comes from the scene-level tuning for the hyperparameters $k$, $\\lambda_s$, $\\lambda_r$, $\\lambda_{\\sigma^S}$, $\\lambda_\\rho$.\n2. I understand that to push the numbers, the authors may need a grid search for each scene. However, the lack of analysis of the sensitivity of those hyperparameters makes it unclear how robust the proposed approach is.\n3. Especially, considering Sec. B of the supplementary, we have 9 sets of parameters for 19 scenes. Even for the same scene (Banana), we have two sets of hyperparameters for two tasks (decoupling and novel view).\n4. Besides the per-scene tuning results, I recommend the authors report a set of quantitative results with a single set of hyperparameters if possible. 1. I am curious about the robustness of the proposed approach to the hyperparameters. See the weakness part. I appreciate the authors' discussions about the limitations of the proposed approach in Sec. 
5, which helps the understanding of the suitable scenarios.", " This paper introduces a series of techniques to decouple dynamic objects from monocular videos, which present impressive static background recovery with shadow removal. How to remove the disturbance from dynamic objects/occluders is a long-standing problem in the 3D vision domain. I appreciate the simple but valuable design, i.e., the skewed entropy loss with the decoupled NeRF and an extra shadow field. However, many previous methods have adopted similar techniques for foreground/background content separation. Some related literature is not included in the paper. Besides, it reads like a CVPR-style paper. I'm not sure whether it's suitable for NIPS. Strengths:\n1. The skewed-entropy loss is able to flexibly separate the static and dynamic parts.\n2. Separating the time-varying shadows and the static appearance via a shadow field.\n3. The decoupling performance and NVS quality significantly outperform SOTA.\n 1. The skewness hyper-parameter $k$ is critical in practice because it determines the proportion of the `dynamic'. And it's sensitive according to the provided experiments. Is it possible to automatically tune the parameter?\n2. The motivation and mechanism of the skewed entropy loss are similar to the beta distribution proposed by [1]. As it is regarded as one of the major contributions, a thorough comparison and discussion is deserved.\n3. The topic is NVS in dynamic environments. Some related literature is absent [1][2].\n4. The shadow NeRF concept has been introduced in [3], which should be included in the related literature.\n\n[1] Neural Volumes: Learning Dynamic Renderable Volumes from Images \n[2] Neural Scene Graphs for Dynamic Scenes\n[3] Shadow Neural Radiance Fields for Multi-view Satellite Photogrammetry\n" ]
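Several of the reviews above single out the skewed entropy loss and its skewness hyper-parameter $k$. One plausible reading of such a regularizer, sketched below, is binary entropy applied to the skewed dynamic/static ratio $w^k$; the exact functional form and the default value of $k$ in the paper may differ, so treat this as an illustration rather than the authors' code.

```python
import numpy as np

def skewed_binary_entropy(w, k=2.0, eps=1e-6):
    """H_b(w^k): binary entropy of the skewed mixing ratio w in (0, 1).
    Minimizing it pushes w toward 0 (static) or 1 (dynamic); for k > 1 the
    penalty is asymmetric, so ambiguous regions are cheaper to explain with
    the static field, and the gradient vanishes as w approaches 0."""
    x = np.clip(w, eps, 1.0 - eps) ** k
    return -(x * np.log(x) + (1.0 - x) * np.log(1.0 - x))

w = np.linspace(0.01, 0.99, 5)
print(np.round(skewed_binary_entropy(w, k=1.0), 3))  # symmetric in w
print(np.round(skewed_binary_entropy(w, k=2.0), 3))  # peak shifted toward larger w
```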
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "EcyakYJ7Mv6", "-Hp1TOVqbsb", "ZHmwrHDIg9p", "iMrpU4ZP436", "4E2lwN9l0Yp", "49VPU8viCK", "nips_2022_rG7HZZtIc-", "nips_2022_rG7HZZtIc-", "nips_2022_rG7HZZtIc-", "nips_2022_rG7HZZtIc-", "nips_2022_rG7HZZtIc-" ]
nips_2022_pbILUUf_hBN
A Near-Optimal Best-of-Both-Worlds Algorithm for Online Learning with Feedback Graphs
We consider online learning with feedback graphs, a sequential decision-making framework where the learner's feedback is determined by a directed graph over the action set. We present a computationally-efficient algorithm for learning in this framework that simultaneously achieves near-optimal regret bounds in both stochastic and adversarial environments. The bound against oblivious adversaries is $\tilde{O} (\sqrt{\alpha T})$, where $T$ is the time horizon and $\alpha$ is the independence number of the feedback graph. The bound against stochastic environments is $O\big((\ln T)^2 \max_{S\in \mathcal I(G)} \sum_{i \in S} \Delta_i^{-1}\big)$ where $\mathcal I(G)$ is the family of all independent sets in a suitably defined undirected version of the graph and $\Delta_i$ are the suboptimality gaps. The algorithm combines ideas from the EXP3++ algorithm for stochastic and adversarial bandits and the EXP3.G algorithm for feedback graphs with a novel exploration scheme. The scheme, which exploits the structure of the graph to reduce exploration, is key to obtain best-of-both-worlds guarantees with feedback graphs. We also extend our algorithm and results to a setting where the feedback graphs are allowed to change over time.
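The stochastic bound in the abstract scales with the graph-dependent constant $\max_{S\in \mathcal I(G)} \sum_{i \in S} \Delta_i^{-1}$. For small graphs this constant can be computed by brute force, as in the sketch below (exponential in the number of arms and purely illustrative; the toy edges and gaps are made up):

```python
from itertools import combinations

def independent_sets(K, edges):
    # All independent sets of an undirected graph on vertices 0..K-1.
    edge_set = {frozenset(e) for e in edges}
    for r in range(1, K + 1):
        for S in combinations(range(K), r):
            if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
                yield S

def stochastic_constant(K, edges, gaps):
    # max over independent sets S of sum over i in S with gaps[i] > 0 of 1 / gaps[i].
    return max(sum(1.0 / gaps[i] for i in S if gaps[i] > 0)
               for S in independent_sets(K, edges))

# Star graph: arm 0 is the hub, arms 1..3 are leaves, arm 4 is isolated and optimal.
print(stochastic_constant(5, [(0, 1), (0, 2), (0, 3)], [0.9, 0.1, 0.1, 0.1, 0.0]))  # 30.0
```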
Accept
The reviewers came to a consensus that this paper makes good progress on online learning with feedback graphs. I agree with these opinions; please polish the manuscript so that the minor concerns raised by the reviewers are addressed in the final version.
train
[ "Tpb-L3JzRtA", "izVGQf99RxA", "KdtlqvDc_-", "s18y0s6fSY", "-kWNVtJZoZd", "TGIkxFAC_h1", "qSztex4PIKJ", "Y1GhdPD_paM", "CTzhPx9_Ap1", "JbUfn_PB0Vk", "NkK-YOP4Ds" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors response and my issues are addressed by the authors. Although the current regret bound in the stochastic world may not match the instance-optimal regret bound, the algorithm and the bounds are interesting based on my understanding.", " > Could the authors explain more about the relationship between the obtained regret bound in the stochastic environment and the instance optimal regret bound in this environment?\n\nOne instance where the difference between the problem dependent $c$ and the independence number can be the large can be described as follow:\nConsider an undirected graph with $K$ arms that has a star structure, where there is one arm in the middle of the star and has a large suboptimality gap of $\\Delta_max$. Then $K-2$ arms have an edge with the center of the star, and each of them have a smaller suboptimality gap $\\Delta_min$. Finally, the optimal arm, which has suboptimality gap $0$, is independent of the rest and does not contain any edge that connects it to the rest of the graph.\n\nWe recall that in order to identify the best arm, in a stochastic problem, each suboptimal arm $i$ has to be observed at a rate of at least $\\frac{1}{t\\Delta_i^2}$. In this instance of the problem, we assume that $\\Delta_{max} > \\Delta_{min} > 0$. Our greedy strategy would construct an exploration set by adding all arms besides for the one in the center of the star.\nThese arms form the largest independence set of the graph. Each of the $K-2$ arms on a leaf of the star needs to be explored, hence played, at rate of at least $\\frac{1}{t\\Delta_{min}^2}$, and each time we do so $\\Delta_{min}$ is added to the pseudo-regret. \nIn total, the cost of exploration for this strategy affects the regret by a factor:\n $R_T \\geq (K-2)\\frac{ \\Delta_{min}}{\\Delta_{min}^2}\\log T = \\frac{(K-2)}{\\Delta_{min}} \\log T. $\n On the opposite side, if the learner leverages his knowledge of the graph structure, it can then become more interesting to play the arm in the center of the star, as it allows to explore all the $K-2$ leaves simultaneously. Each leaf still needs to be observed at rate at least $\\frac{1}{t\\Delta_{min}^2}$, and playing the arm in the middle costs $\\Delta_{max}$ at each round. \nDoing so results in contributing to the pseudo-regret by a factor of at least:\n$ R_T \\geq \\frac{ \\Delta_{max}}{\\Delta_{min}^2}\\log T.$\nThe second approach becomes more interesting when the gaps $\\Delta_{min}$ and $\\Delta_{max}$ are close, meaning that our approach can be suboptimal by up to a factor of $\\tilde \\alpha$. \n\nIt is an interesting direction for future work to investigate how to leverage the graph knowledge to achieve bounds that take advantage of the graph structure, in particular in the case of directed graphs.", " The authors would like to thank the reviewers for their careful reading of the paper and feedback. We will address the questions raised.\n\n> One issue is about the optimality of the regret bound in the stochastic environment. I thought that the instance-optimal regret bound in the stochastic graph bandits is O(c\\ln T) with c defined by an optimization problem with respect to the graph structure. I wonder how large the gap is between the instance-optimal regret bound and the bound derived in this paper?\n\nThe instance optimal regret bound is indeed in $O(c \\ln T)$, where $c$ is the solution of an optimization problem. It is an interesting direction whether it is possible to recover this types of bounds in a best-of-both-worlds regime. 
\n\n> One minor issue concerns me is the novelty of this paper as the algorithm generalizes the EXP3++ algorithm with a new but intuitive exploration over the nodes. Based on the exploration, it can be shown that each node can be explored with probability $\\frac{1}{t \\hat \\Delta_{i}^2}$ as some node with smaller gap will be chosen in the exploration set according to the construction. For the other derivations, they seem similar to the analysis in [Alon et al., 2015] and [Seldin et al., 2017].\n\nIt can be noted that our analysis improves upon the bounds of Seldin and Lugosi (2017) in the case of bandit feedback (see our answer to reviewer TM6a). \n\n> Another minor issue I do not understand is that why the results presented by the author for the fixed graph setting has $\\tilde \\alpha$ dependence in the adversarial regret bound while this can indeed be improved to $\\alpha$ as shown in the appendix with an additional $\\ln T$ factor. I think here the improvement from $\\tilde \\alpha$ to $\\alpha$ is more important as this could be an improvement of a factor of $K$ for some specific graph.\n\nChoosing whether to present the results of Theorems 2 and 3 in terms of $\\alpha$ or $\\tilde \\alpha$ was discussed at the time of writing the paper. The results that you mentioned indeed hold in terms of $\\alpha$ at the cost of a $\\log T$ factor by simply changing $\\tilde \\alpha$ into $\\alpha$ in the learning rate and the exploration parameter. The choice was made to postpone the analysis of the case depending on $\\alpha$ to Theorem 4, as the choice of tuning the learning rate in terms of $\\theta$ led to a better dependency in $\\sqrt{\\log T}$ instead of $\\log T$. Nothing in the analysis prevents us from using the learning rate of Theorem 4 in Theorems 2 and 3, and we will rewrite the bounds such that both the results in terms of $\\tilde \\alpha$ and $\\alpha$ hold simultaneously.\n\n> Typos\n\nThank you for pointing them out, we will fix them. ", " The authors would like to thank the reviewers for their careful reading of the paper and feedback. We will address the questions raised.\n\n> Can the authors comment on the performance of EXP3.G++ in extreme cases of bandit and full information?\n\nIn the case of pure bandit feedback, it is sufficient to look at the bounds using $K$ as the independence number. In the adversarial regime, seen in Theorem 2, we achieve the exact same bound as EXP3++.\nIn the stochastic regime, the results of Theorem 3 differ slightly from the results of EXP3++. On the positive side, we refined the analysis of the initial rounds, which improved the additive term from $O\\left( \\sum_{i: \\Delta_i > 0} \\frac{K}{\\Delta_i^3}\\right)$ to $O\\left(\\frac{K}{\\Delta_{min}^2}\\right)$. \nOn the negative side, the choice of $\\gamma$ and $\\beta$ of EXP3++.G is suboptimal if we only consider bandit feedback, but it only increases the stochastic bound by a small multiplicative constant of $5/4$. \nIn the case of the full information regime, one may use the Hedge algorithm as suggested by Mourtada and Gaiffas (2018) as it is best-of-both-worlds optimal. Thus, we did not analyse the algorithm specifically in this regime. \n\n> Can the authors comment on the applications of graph feedback (especially directed graph feedback) in real-world situations?\n\nFeedback graphs can be useful in many different settings, in particular for problems like digital pricing or auctions, where there is a structure. 
For example, if the different actions of your game correspond to different prices, you can use feedback graphs to model that if a costumer buys an item at a certain price, he would also have bought it at any cheaper price, and thus give feedback not only for the arm played but to the arms corresponding to cheaper prices. ", " The authors would like to thank the reviewers for their careful reading of the paper and feedback. We will address the questions raised.\n\n> Though the $\\frac{1}{\\Delta^2}$ term in Theorem 3 seems inevitable (as it also appears in EXP3++), is it possible to drop this term as the some best of both worlds results of MAB (including BROAD of Wei and Luo (2018) and Tsallis-INF of Zimmert and Seldin (2019)) do not have this term.\n\nNote that the term $\\frac{1}{\\Delta^2}$ has improved compared to the EXP3++ analysis, that contains a $\\frac{1}{\\Delta^3}$ term. It is still an open question whether it would be possible to get completely rid of this term.\nThis being said, Erez and Koren (2021) noted that it was not trivial to achieve near best-of-both worlds guarantees using a FTRL style of algorithm. In particular, it is unclear whether it is possible to achieve algorithms that depend on the independence number of the graph when the regularisation is different from the Shannon entropy (which is used in EXP3), so even if it is not possible to improve upon the analysis of EXP3++ further, there may still be a trade-off between achieving better dependencies on the graph structure and better dependencies on sub-optimality gaps.\n\n> Is it possible that SAPO approach could achieve better regret bounds?\n\nWe did not consider the SAPO approach during our analysis. At the moment we do not see how to analyse SAPO in the feedback graph setting. It is also worth noting that the SAPO algorithm is based on an irreversible switch between two operation modes and requires knowledge of the time horizon. If we were considering the adversarial or the stochastic regime separately, then there are tools, such as the doubling trick, that can handle arbitrary time horizon at the cost of a constant. However, in a best-of-both worlds setting, the doubling trick leads to an extra $\\log T$ factor in the stochastic regime, due to the number of times the algorithm restarts (see Besson and Kaufmann, \"What doubling tricks can and can’t do for multi-armed bandits\"). As our approach is only suboptimal by a factor $\\log T$ in the stochastic regime (and the additive $\\frac{1}{\\Delta^2}$), even if a SAPO approach could achieve a best-of-both worlds optimal result for a fixed time horizon, when generalizing to an arbitrary time horizon the gain would not be substantial. It is also worth noting that for the classic MAB setting, the SAPO algorithm is suboptimal by a $\\sqrt{\\log (T)}$ factor in the adversarial regime, which further limits the potential of this approach being competitive.", " > The statement that the complexity of Algorithm 2 is $O(K^3)$ seems a little bit overkilled, should it be $O(K^2)$ instead (given an efficient sorting algorithm $O(K \\log K)$? It seems that this is related to the \"minimum weight cover set problem\" so it might be interesting to look into the literature to reduce the complexity further.\n\nThe statement of Algorithm 2 being $O(K^3)$ comes from the following analysis. 
The initial sorting can be done in time $O(K \\log K)$, and as the list has $K$ elements, and the operations from the algorithm only remove elements from that list, parsing that list and 'removing' elements from it each take at most $K$ steps. With this simple analysis, we do not take advantage of the list reducing it size over time, and we agree that $O(K^2)$ seems to be a reasonable complexity for the presented algorithm, and will look into the minimum weight cover set problem in order to see how we can improve the complexity of that part of the problem. \n\n> A small speculation: Section 6 deals with the time-varying feedback graphs in an informed setting. I wonder if this can be extended even further to the case of uninformed setting, e.g., by bounding with MAS number similarly to Alon, Noga, et al. \"Nonstochastic multi-armed bandits with graph-structured feedback.\" (2017) or at least in some special cases where the precise graphs are unknown when making decisions but we know that they belong to a certain class such as in Vu et al. Path Planning Problems with Side Observations-When Colonels Play Hide-and-Seek (2020) (note that these references are not cited in the current version).\n\nThank you for the extra reference, we will check it. We refer to the first paper from Alon et al. (2017) that you mentioned, but it appears that we cited the ArXiv version of the paper rather than the published version, which we will fix. The reason why we initially focused on the informed setting comes from the lower bound in Online Learning with Feedback Graphs without the Graphs (Cohen et al., 2016), which shows that in the adversarial regime , the learner needs to be able to observe the full graph in order to achieve a dependency on the independence number in the regret bound and that it is not sufficient to only observe the neighborhood of the action played. The setting discussed by Alon et al. (2017) that you refer to gives access to the full feedback graph at the end of each round, which does not contradict the lower bound. Adapting to such a setting, or to the one of the other paper that you mentioned is an interesting task for future work. Algorithm 2 relies on the structure of the graph in order to ensure that all arms are sufficiently explored. We would expect to require more assumptions on the graph structure if we want to continue using Algorithm 2 in such setting. If the graph was allowed to arbitrarily change from one round to another, the structure of Algorithm 2 would not allow us to ensure that all arms are sufficiently explored as Proposition 1 would not hold. ", " The authors would like to thank the reviewers for their careful reading of the paper and feedback. We will address the questions raised.\n\n> Pseudo codes Algorithms 1 and 2 are not clear when standing alone: it might be better if it is indicated clearer where Algorithm 2 (and Equations (2)-(3)) is injected to Algorithm 1.\n\nWe will try to improve the structure of the algorithm. In particular, we will consider explicitly referring to the use of Algorithm 2 in the line of Algorithm 1 where we update $\\varepsilon_{t, i}$.\n\n> It is a little bit confusing that in Line 216, the observability probability is said to be lower bounded by $\\frac{\\beta \\ln t}{t \\hat \\Delta_{t, i}^2}$ in the stochastic case but then in Equation (4), it suggests that this might not be the case in general. 
My understanding for this is that in stochastic case, we can tune the parameters such that $\\frac{\\beta \\ln t}{t \\hat \\Delta_{t, i}^2}$ is indeed the minimum of the 3 terms of (4), is this correct?\n\n It is important to note that in equation (4), the third term is the one that decreases the fastest as a function of $t$. So eventually $t$ becomes large enough such that $o_{t, i}$ is equal to $\\frac{\\beta \\ln t}{t \\hat \\Delta_{t, i}^2}$. The choice of parameter $\\beta$, as expressed in Lemma 1 is fairly large, so the first and second part of equation (4) ensure that the exploration does not get so large in the initial rounds that it would affect the guarantees in the adversarial regime. \n\n> The wording in Theorem 2 makes it seem unnecessarily limited: from proofs in Appendix B, results of Theorem 2 should work with any gamma and beta such that epsilon = 1/2 , $\\sqrt{\\frac{\\lambda \\log K}{t K^2}}$ right? The specific choice of gamma and beta is for the purpose of achieving the bounds in Theorem 3, right? Personal opinion: one theorem covering 2 cases (such as Theorem 4) would be preferable.\n\nThe choice of $\\gamma$ and $\\beta$ is actually fairly restricted. For one, similarly to the EXP3++ algorithm (Seldin and Lugosi, 2017), we require $\\gamma \\geq 3$ and $\\beta \\geq 64(\\gamma + 1) \\geq 256$ in order to derive high probability lower bounds on the gap estimates $\\hat \\Delta_{t, i}$. The choice of $\\gamma = 4$, which forces to take $\\beta \\leq 320$, is due to the equation in line 241. That step is key in Theorem 3. If we were to keep the parameterization of $\\gamma = 3$ and $\\beta = 256$ like in EXP3++, the term $ 2 \\tilde \\alpha \\ln T$ in the bound of Theorem 3 would be replaced by a term of order $\\sum_{i: \\Delta_i > 0} \\frac{\\left(\\log T\\right)^2}{\\Delta_i^*}$, which would dominate the regret and give a dependency on all the suboptimal arms of the graphs.\n\nConcerning the choice of separating Theorems 2 and 4. The two algorithms share a lot in common. It can be noted that the strategy of tuning the learning rate according to $\\theta$ in Theorem 4 can seamlessly be applied to Theorem 2 as well, which could allow us to achieve bounds that depend on both the independence number and the strong independence number, depending on which approach we use to bound the $\\theta$ term. However, in Theorem~2, we use the knowledge of the strong independence number in order to tune $\\epsilon$ rather than the independence number. In Theorem 4, as neither of those quantities are known, the exploration is reduced, which comes at the cost of an increased factor $\\sqrt K$ in the last term of the regret bound in the stochastic case, which is a constant. If we argue that this extra dependence in $K$ is not problematic, then Theorem 4 can be preferred in all cases and we could merge the results of Theorem 2 and 4, in order to derive simultaneous results in terms of the independence number and the strong independence number. \n", " This paper studies the multi-armed bandit problem with graph feedbacks where the learner would receive the loss information of all the arms $j$ that $e(i,j) \\in G$ after selecting arm $i$, where $G$ here is the feedback graph. The proposed algorithm aims at achieving the best of both worlds guarantee, that is, achieving $\\widetilde{O}(\\log T)$ regret with stochastic losses while ensuring worst-case robustness $\\widetilde{O}(\\sqrt{\\alpha T})$ where $\\alpha$ here is s the independence number of the feedback graph. 
\n\nThe idea of the proposed algorithm originates from the EXP3++ algorithm for stochastic and adversarial bandits, which achieves the near-optimal best of both worlds result for normal multi-armed bandits. To extend the EXP3++ algorithm to the graph feedback setting, the learner takes the idea of EXP3.G which was designed only for adversarial graph feedback, and build up a novel exploration scheme to carefully estimate the gaps, in order to control the magnitude of exploitation and exploration. \n\n The paper shows that the best of both worlds guarantee can be achieved by EXP3++ approach for weakly observable feedback graphs. Moreover, it seems that the algorithm can achieve $O(\\sqrt{\\alpha T})$ regret for adversarial losses while having more logarithmic terms for the stochastic setting. The specific contribution is the new exploration scheme and may be applied in other questions. \n\nThis paper does not have any specific weakness. The writing is clean and well-organized. 1. Though the $\\frac{1}{\\Delta^2}$ term in Theorem 3 seems inevitable (as it also appears in EXP3++), is it possible to drop this term as the some best of both worlds results of MAB (including BROAD of Wei and Luo (2018) and Tsallis-INF of Zimmert and Seldin (2019)) do not have this term. \n\n2. Is it possible that SAPO approach could achieve better regret bounds? None. ", " This paper studies online learning in a new setting of directed graph feedback. By combining the ideas of EXP3++ and EXP3.G, they suggest an algorithm, EXP3.G++ that can achieve near-optimal regret bound in both stochastic and oblivious adversarial settings. I enjoyed reading the paper and all the results are sound and well presented. The regret bound for both adversary and stochastic settings improves the previous results (Erez and Koren (2021)) by log(T) factors and achieves a near lower bound guarantee. Further, the authors also provide an extension for changing the feedback graphs and provide a regret bound for EXP3.G++ in this case. Can the authors commend on the performance of EXP3.G++ in extreme cases of bandit and full information? \n\nCan the authors commend the applications of graph feedback (especially directed graph feedback) in real-world situations?\n There is no potential negative societal impact.", " This paper considers the problem of achieving best-of-both-worlds guarantees in the bandits with graph feedback. Specifically, the authors propose an algorithm based on a combination of EXP3.G [Alon et al., 2015] designed for classic bandits with graph feedback problem and EXP3++ [Seldin et al., 2017] designed for getting best-of-both-worlds for MAB. In the adversarial environment, they achieves $\\tilde{O}(\\sqrt{\\alpha T})$ near optimal regret and $O(\\max_I \\sum_{i\\in I}(\\frac{\\ln^2 T}{\\Delta_i}))$ regret bound. The algorithm can also be extended to the time-varying feedback graph setting and achieve the corresponding regret bounds in both the adversarial and the stochastic setting. Strength:\n- The paper is very well written and the designed algorithm is intuitive and well explained.\n- This paper improves upon the previous work by [Erez and Koren 2021] in both the adversarial regret bound and stochastic regret bound. 
Specifically, in the adversarial environment, the obtained regret bound is optimal ignoring logarithmic factors.\n- From the algorithm perspective, the algorithm generalizes the EXP3++ algorithm, which works for MAB problem, to graph bandits with a new specific exploration set, which is a maximal independent and dominating set constructed according to the empirical gap.\n- I checked the proofs for each theorem in the main text and the appendix and they look correct to me.\n\n\nWeakness:\n- One issue is about the optimality of the regret bound in the stochastic environment. I thought that the instance-optimal regret bound in the stochastic graph bandits is O(c\\ln T) with c defined by an optimization problem with respect to the graph structure. I wonder how large the gap is between the instance-optimal regret bound and the bound derived in this paper?\n\n- One minor issue concerns me is the novelty of this paper as the algorithm generalizes the EXP3++ algorithm with a new but intuitive exploration over the nodes. Based on the exploration, it can be shown that each node can be explored with probability $1/t\\hat{\\Delta}_i^2$ as some node with smaller gap will be chosen in the exploration set according to the construction. For the other derivations, they seem similar to the analysis in [Alon et al., 2015] and [Seldin et al., 2017].\n\n- Another minor issue I do not understand is that why the results presented by the author for the fixed graph setting has $\\tilde{\\alpha}$ dependence in the adversarial regret bound while this can indeed be improved to $\\alpha$ as shown in the appendix with an additional $\\ln T$ factor. I think here the improvement from $\\tilde{\\alpha}$ to $\\alpha$ is more important as this could be an improvement of a factor of $K$ for some specific graph.\n\nTypos:\n- line 483: $\\alpha_s$ -> $a_s$\n- line 575-576: The first inequality should be $\\leq\\frac{1}{2}n\\lambda$ instead of $\\geq n\\lambda$.\n- line 632: $1+\\ell_{s,i^*}$ -> $1+\\tilde{\\ell}_{s,i^*}$ - Could the authors explain more about the relationship between the obtained regret bound in the stochastic environment and the instance optimal regret bound in this environment? Yes, the authors addressed the limitations of the work.", " This paper considers online learning with feedback graphs and present an algorithm achieving (almost) best-of-both-world regret bounds in stochastic and adversarial environments. The core idea is a combination of the EXP3++ algorithm (providing adaptivity between regime) and the EXP3.G algorithm (exploiting feedback graphs) and a novel exploration scheme which exploits the structure of the graph. The results are extended to the time-varying feedback graphs.\n Strengths: The paper is well written and easy to follow. The results are interesting and in my opinion, contribute significantly to the community. While the proposed algorithm is built upon previously known algorithms, the presented exploration scheme is, as far as I know, novel and provides interesting insight. \n\nWeakness: No significant weaknesses. Two minor comments/suggestions: 1) It seems that there is still room to improve the complexity of Algorithm 2; 2) some numerical experiments comparing the proposed algorithm with non-adaptive benchmarks (showing the trade-off of adaptivity and performance) could be interesting. 
\n\n\n\n - Pseudo codes Algorithms 1 and 2 are not clear when standing alone: it might be better if it is indicated clearer where Algorithm 2 (and Equations (2)-(3)) is injected to Algorithm 1.\n\n- It is a little bit confusing that in Line 216, the observability probability is said to be lower bounded by $(\\beta \\log t) / (t \\hat{\\Delta}^2 )$ in the stochastic case but then in Equation (4), it suggests that this might not be the case in general. My understanding for this is that in stochastic case, we can tune the parameters such that $(\\beta \\log t) / (t \\hat{\\Delta}^2 )$ is indeed the minimum of the 3 terms of (4), is this correct? \n\n- The wording in Theorem 2 makes it seem unnecessarily limited: from proofs in Appendix B, results of Theorem 2 should work with any gamma and beta such that epsilon = 1/2 $\\sqrt(\\lambda \\log K/ (t K^2))$, right? The specific choice of gamma and beta is for the purpose of achieving the bounds in Theorem 3, right? Personal opinion: one theorem covering 2 cases (such as Theorem 4) would be preferable. \n\n- The statement that the complexity of Algorithm 2 is O(K^3) seems a little bit overkilled, should it be O(K^2) instead (given an efficient sorting algorithm O(KlogK)? It seems that this is related to the \"minimum weight cover set problem\" so it might be interesting to look into the literature to reduce the complexity further. \n\n- A small speculation: Section 6 deals with the time-varying feedback graphs in an informed setting. I wonder if this can be extended even further to the case of uninformed setting, e.g., by bounding with MAS number similarly to Alon, Noga, et al. \"Nonstochastic multi-armed bandits with graph-structured feedback.\" (2017) or at least in some special cases where the precise graphs are unknown when making decisions but we know that they belong to a certain class such as in Vu et al. Path Planning Problems with Side Observations-When Colonels Play Hide-and-Seek (2020) (note that these references are not cited in the current version). \n\n the authors adequately addressed the limitations and potential negative societal impact of their work." ]
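To make the two algorithmic pieces discussed in this thread concrete — the three-way minimum behind the exploration rate in Eq. (4) and the gap-driven exploration set — here is a minimal, hedged Python sketch. The names (`exploration_rate`, `cap1`, `cap2`, `adj`) are our placeholders, not the paper's notation, and the first two terms of Eq. (4) are represented by generic caps since the equation itself is not reproduced in this thread.

```python
import math

def exploration_rate(t, gap_est, beta, cap1, cap2):
    # Three-way minimum in the spirit of Eq. (4): the hypothetical caps
    # cap1/cap2 stand in for the first two terms, which bound exploration
    # in the early rounds; the gap-dependent third term decays fastest in t
    # and eventually becomes the minimum, as explained in the reply above.
    return min(cap1, cap2, beta * math.log(t) / (t * gap_est ** 2))

def greedy_exploration_set(gaps, adj):
    # One plausible greedy construction of a maximal independent (and, for
    # undirected graphs, dominating) set favouring arms with small empirical
    # gaps.  Sorting costs O(K log K) and the scan is O(K^2) overall,
    # matching the complexity discussion in the review above.
    order = sorted(range(len(gaps)), key=lambda i: gaps[i])  # small gaps first
    chosen = []
    for i in order:
        if all(i not in adj[j] and j not in adj[i] for j in chosen):
            chosen.append(i)
    return chosen

# Toy check: for large t the gap-dependent term is the minimum, and on a
# path graph 0-1-2-3 the arm with the smallest gap (arm 1) is selected.
print(exploration_rate(t=10**9, gap_est=0.1, beta=256.0, cap1=0.5, cap2=0.05))
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(greedy_exploration_set([0.4, 0.1, 0.3, 0.2], adj))  # -> [1, 3]
```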
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "KdtlqvDc_-", "JbUfn_PB0Vk", "JbUfn_PB0Vk", "CTzhPx9_Ap1", "Y1GhdPD_paM", "NkK-YOP4Ds", "NkK-YOP4Ds", "nips_2022_pbILUUf_hBN", "nips_2022_pbILUUf_hBN", "nips_2022_pbILUUf_hBN", "nips_2022_pbILUUf_hBN" ]
nips_2022_VpHFHz57fT
Improved Imaging by Invex Regularizers with Global Optima Guarantees
Image reconstruction enhanced by regularizers, e.g., to enforce sparsity, low-rank or smoothness priors on images, has many successful applications in vision tasks such as computational photography, biomedical and spectral imaging. It has been well accepted that non-convex regularizers normally perform better than convex ones in terms of the reconstruction quality. But their convergence analysis is only established to a critical point, rather than the global optima. To mitigate the loss of guarantees for global optima, we propose to apply the concept of invexity and provide the first list of proved invex regularizers for improving image reconstruction. Moreover, we establish convergence guarantees to global optima for various advanced image reconstruction techniques after being improved by such invex regularization. To the best of our knowledge, this is the first practical work applying invex regularization to improve imaging with global optima guarantees. To demonstrate the effectiveness of invex regularization, numerical experiments are conducted for various imaging tasks using benchmark datasets.
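Since invexity is the central concept here, a compact statement of the standard definition may help; the following display is our addition for reference (standard textbook material, not quoted from the paper):

```latex
% A differentiable f is invex if there exists a kernel \eta such that
%   f(y) - f(x) >= \eta(x, y)^T \nabla f(x)   for all x, y.
% Consequently \nabla f(x) = 0 implies f(y) >= f(x) for every y,
% i.e. every stationary point of f is a global minimizer.
\[
  f(\mathbf{y}) - f(\mathbf{x}) \;\geq\; \eta(\mathbf{x},\mathbf{y})^{\top}\,\nabla f(\mathbf{x})
  \qquad \text{for all } \mathbf{x},\mathbf{y} \in \mathbb{R}^n .
\]
```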
Accept
This paper addresses image reconstruction problems exploiting invex regularizers (which are not necessarily convex). For many modern signal processing applications, invexity of the cost is proved. Many examples are considered, and an extensive comparison with state-of-the-art methods is provided in an application section. As all reviewers point out unanimously, the paper is very well presented, original and overall of very high quality. It is therefore a clear accept. But I remind the authors of the strict rule on the number of pages for the final version (10 pages).
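One way to visualise the convex-versus-invex comparison discussed in the reviews below is a 1-D slice of the denoising objective; the following numpy sketch (our illustration, with arbitrary choices of $u$, $\lambda$ and $p$) contrasts the $\ell_p$-quasinorm landscape with the $\ell_1$ one:

```python
import numpy as np

u, lam, p = 1.5, 1.0, 0.5          # illustrative values, not the paper's settings
x = np.linspace(-3.0, 3.0, 601)
h_lp = (x - u) ** 2 + lam * np.abs(x) ** p   # l_p-quasinorm landscape (invex family)
h_l1 = (x - u) ** 2 + lam * np.abs(x)        # convex l_1 landscape
print("l_p minimiser:", x[np.argmin(h_lp)])  # ~1.29 on this grid
print("l_1 minimiser:", x[np.argmin(h_l1)])  # 1.0: shrinks more towards zero
```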
train
[ "JDtEA373sDK", "bnWXxB5uKR0", "1xAuJE0fvEz", "CqEYLrW9M-e", "-0eNFI4cJoP", "Vm4TmtHQsb_i", "x3ZMZKRGBHx", "2ITW96jtnN", "srxk-bpmZ2", "LcilE_ky0vB", "p4PdNCvwGiw", "E1I3QWsO1dv" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are glad to know that the reviewer found our rebuttal helpful. Thank you for the comments, and the support reconsidering the decision. Here we address your new questions.\n# Questions\n- **Q1.** We thank you for the suggested plot. We will include this new comparative plot between invex and convex regularizers in supplemental material of the final version of the manuscript.\n- **Q2.** Thanks for rising this up. We highlight that the intuition behind the advantages of the studied invex regularizers are attributed to the following reasons. The superiority of the $\\ell_{p}$-quasinorm comes from the possibility of adjusting the value of $p$ in a data-dependent manner [49]. In the case of Equations (7), (8) , (9) and (10), their performance can be justified under the framework of reweighted $\\ell_{1}$-norm minimization (please see [20] as an example), which enhances the performance of just $\\ell_{1}$-norm minimization. Therefore, following your suggestion we will add a new subsection in Appendix A of supplemental material providing this discussion and more insights on the advantages of invex regularizers.\n- **Q3.** Following reviewer suggestion, we will add plots in the supplemental material of the final manuscript, numerically verifying the theoretical convergence results of this paper. These plots will show MSE vs iterations, and MSE vs time.\n- **Q4.** In light of the previous question, we will complement our analysis on the studied invex regularizers, by comparing their running time with convex regularizers under the analyzed image restoration methodologies.\n## Minor Issue\n- Thanks for pointing this out. We highlight that our original manuscript had 9 pages, and the revised manuscript presented in the submission, is only intended to illustrate how we will address all comments of the reviewers, if our paper is accepted. Reason we marked all changes in blue.", " I have revised my decision and review of the Paper3485.\n\nReasons for the original decision:\n\n1. Before this revision, the paper lacked comparisons with essential benchmarks. It was not an acceptable state.\n2. The paper focuses on computational imaging as an application, but only one image reconstruction example was provided.\n\nReasons for the revised decision:\n\n1. Authors responded to the technical inquiries with clarifications.\n2. Authors added comparisons with suggested benchmark methods in the case of image restoration tasks.\n3. Authors added image reconstruction examples as per suggestion.\n\nMinor issue: The revised paper is more than ten pages long. I believe it has to be limited to 9 pages until acceptance.\n", " Thank you for your helpful comments and support.", " We are glad to know that the reviewer found our rebuttal helpful. Thank you for the valuable and constructive comments, which greatly helped us to improve the manuscript.", " Thank you for answering all my questions. All of my comments have been addressed and I maintain my recommendation to accept.", " Thank you for the answers to my questions. I have also read the comments of other reviewers and the replies of the authors. I think manuscript can be accept.\n\n", " We address the comments by grouping them into two categories: questions and discussion about weaknesses. The numbering of the references are from the new version of the manuscript.\n# Questions\n* **Q1.** We respectfully point out it is incorrect to state that $h(\\boldsymbol{x})$ is always non-convex when we sum the $\\ell_{2}$-norm and a non-convex $g(\\boldsymbol{x})$. 
Following Theorem 3 (see Appendix C), we state that the function $h(\\boldsymbol{x})$ is convex when $g(\\boldsymbol{x})$ is any of the invex functions in Table 2. \n* **Q2.** Thanks for raising this point about Section 4.2. For compressed sensing, it is common to assume that the matrix $\\boldsymbol{H}$ contains both the image acquisition matrix $\\boldsymbol{\\Phi}$ and the sparse representation matrix $\\boldsymbol{\\Psi}$ (e.g., [66],[69],[70]). Given this setting, the conclusions we derive in Section 4.2 apply to all sparse signals in a given basis, and not just to canonical sparse vectors [69]. We have explained this in more detail in the new version of the paper to avoid any further ambiguity, also specifying how images can be sparse. Please see line 173, Section 4.2, and references [61,66,69, 70]. \n* **Q3.** The plug-and-play method studied in this paper is summarized in Algorithm 3 (Appendix G), which is a modified version of the APG method (Algorithm 1) and performs two proximal steps in Lines 5 and 6, respectively. Therefore, we incorporated the invex regularizers in Algorithm 3 by retaining Line 5 as in Algorithm 1, which explicitly uses an invex regularizer, and by replacing Line 6 with the Noise2Void CNN-based denoiser in Eq. (13); a rough sketch of this modified loop is given after the reviews below. We have covered this in Section 4.2.2. With these arguments in place, we believe it is clear how we incorporated invex regularizers in a plug-and-play methodology. We have also modified the paper to provide a clear description around this. More specifically, we modified the description of Algorithm 3 in Section 4.2.2 (see Lines 229-233). We also modified the text on Experiment 2 (Section 5.1) (Lines 314-317). \n* **Q4.** We believe we have now addressed this issue in the new version of the manuscript. We are now reporting additional results using TVAL3, ReconNet, and ISTA-Net, including SSIM as a metric, as suggested by the reviewer. These results are summarized in Table 3 in the manuscript (in terms of PSNR) and in Tables 5, 6 and 7 of Appendix I (in terms of SSIM) of the supplemental material. \n* **Q5.** We have addressed this issue by providing a number of example reconstructed images in Appendices I and J, covering Experiments 1, 2 and 3 (in Appendix I), and denoising (Appendix J). \n# Discussion about weaknesses\n* 1. Apologies if there has been any confusion around our evaluation. The evaluation is not limited to FISTA (2010) and LISTA. It is worth highlighting that the LISTA version we are using for our evaluation is based on 2018 and not 2010 (as mentioned by the reviewer). Our baselines include BM3D, the Accelerated Proximal Gradient method (APG), the plug-and-play version of APG with Noise2Void (2019), FISTA, LISTA, and five different invex regularizers (for denoising and deconvolution problems). We opted for BM3D, as it is one of the most successful and up-to-date denoisers that utilises the $\\ell_{1}$-norm, and we firmly believe it is a suitable case to compare against the invex regularizers. The APG is a modern extended version of FISTA, highly utilised in a number of contemporary practical applications [55, 74, 75, 76]. As such, it is wrong to conclude that our evaluation is limited just to FISTA and LISTA, and we believe the results account for the latest developments around deep learning-based methods like Noise2Void.\n* 2. 
The $\\ell_{p}$-norm has, indeed, been included in the comparison, albeit under the name of $\\ell_{p}$-quasinorm (Equation (6)), covering both theoretical and numerical analysis, with the $\\ell_{p}$-quasinorm showing the best performance. In the revised draft, we are now reporting additional results for Experiment 1 by comparing invex regularizers, the $\\ell_{1}$-norm, SCAD, and Minimax-Concave regularizers. These results are reported in Table 3 of the manuscript (see Lines 311-312). \n* 3. We thank you for the suggested algorithms for comparison. In light of the previous comment, and following your advice, we have complemented the compressive sensing numerical experiments by reporting additional results using TVAL3, ReconNet, and ISTA-Net. These results are summarized in Table 3 in the manuscript (in terms of PSNR) and in Tables 5, 6 and 7 of Appendix I (in terms of SSIM) of the supplemental material.\n* 4. This issue has been addressed. See above. \n* 5. We have now complemented the current experimental results with examples of reconstructed images (along with their reconstruction quality), reported in Appendices I (for Experiments 1, 2, and 3) and J (for the denoising experiment) of the supplemental material (owing to reasons of space).", " We thank you for the positive comments. We address the comments by grouping them into three categories: questions, discussion about weaknesses, and limitations. The numbering of the references is from the new version of the manuscript. \n\n# Questions\n* **Q1.** We apologise for the incorrect statement of $\\boldsymbol{H}$ being unitary. We have now fixed this by revising Theorem 4, where $\\boldsymbol{H}$ should be a matrix with $\\ell_{2}$-normalized columns satisfying the restricted isometry property (RIP) for $\\delta_{2k}<\\frac{1}{3}$, where $k<n$ is the sparsity. This is a standard assumption in compressive sensing [Theorem 5.3, 70].\n\n* **Q2.** We have improved the annotations of Table 3 by highlighting the best and least efficient invex regularizers for every row. We have also restructured it to better organize the results. We have also discussed the performance of Equation (6), as follows: **The intuition behind the superiority of (6) comes from the possibility of adjusting the value of $p$ in a data-dependent manner [49]. This means that when the images are strictly sparse, and the noise is relatively low, a small value of $p$ should be used. Conversely, when images are non-strictly sparse and/or the noise is relatively high, a larger value of $p$ tends to yield better performance (which seems to be the case for the selected image datasets).** (see line 372 in the revised draft). \n\n* **Q3.** Yes, for the numerical tests we merged all datasets under consideration (mentioned in Section 5), and after this we created the corresponding training and testing image sets (for learning and evaluation scenarios). We have clarified how the datasets are used in the evaluation by stating **A number of datasets have been merged to formulate one unique dataset for our training and evaluation purposes. These are the DIV2K super-resolution [88], McMaster [89], Kodak [90], Berkeley Segmentation (BSDS 500) [91], Tampere Images (TID2013) [92] and Color BSD68 [93] datasets.** (see line 272 in the revised draft). \n\n# Discussion about weaknesses\n* **1.** Thank you for pointing this out, and we agree this is incorrect. We have addressed this issue as stated above. 
The changes we made conform to the full-rank condition required by Lemma 2 when $\\boldsymbol{H}$ satisfies the RIP. This result is important to the invex community, because it supports the validity of Lemma 2 to build invex functions with affine mappings. We have modified the manuscript as follows to explain this more clearly: In Section 3 (line 123 in the revised draft), **\"This condition on $\\boldsymbol{H}$ is a mild assumption, because we show in Section 4 imaging application examples that satisfy this criterion\"**. In Section 4.2 (line 196 in the revised draft), **We clarify that if $\\boldsymbol{H}$ satisfies the mentioned RIP, then each sub-matrix with $k$ columns of $\\boldsymbol{H}$, selected according to the indices of the non-zero elements of the $k$-sparse signal, is a full row-rank matrix.** (This supports the validity of Lemma 2 to build invex functions with affine mappings.)\n\n* **2.** We have improved the presentation of Table 3 by annotating the results better, by improving its organization, and by adding additional discussions.\n\n* **3.** We have addressed this issue by including a discussion in Section 6 as follows: **We believe that the remaining invex, SCAD, and MCP regularizers have a lesser performance than Eq. (6) as they do not have the flexibility of adjusting to the sparsity of the data. In fact, Eq. (9) shows the poorest performance because, in the proof of Theorem 3, we theoretically guarantee that Eq. (9) cannot sparsify all images. Therefore, this analysis leads to the conclusion that the invex function of Eq. (6) offers the best performance for the metrics concerned and the imaging problems studied here.**\n\n# Limitations\n* **1.** We thank you and agree with the comment. Given the scarcity of space, we have included a discussion around limitations as part of Section 6 (instead of a dedicated section), and changed the section heading to reflect this (\"Discussion, Limitations and Conclusions\"). The text reads as follows: **Although we have presented theoretical results with global optima using invexity, these are not without limitations. To state a few: first, our study here was focused on the reconstruction of images under ideal conditions, such as noise-free cases (Theorem 4). However, real-world datasets will have complex noise models. Secondly, the numerical results here are focused on denoising and deconvolution tasks, where the convergence guarantee for the plug-and-play result only ensures a close estimate of the solution (Lemma 4). Although this was useful, it is desirable to explore avenues for improving convergence guarantees to global optima. Addressing these concerns will form the basis for our future work, which can radically improve downstream tasks like invex robust image reconstruction and deep learning research.**", " We thank you for the positive comments and the accurate summary of our work. We address your insightful comments as follows.\n\n# Questions\n* **Q1.** We have addressed this question by including two CNN-based baselines, namely ReconNet and ISTA-Net. We have included the relevant results in the expanded Table 3 (main paper) and in the newly added Tables 5 and 7 (Appendix I of the supplemental material) in the revised draft. Please see the mentioned tables below.\n---\n|Table 3. 
Performance comparison, in terms of PSNR (dB)|\n---\n\n|(Experiment 1) Algorithm 1, p = 0.5 for Eq(6)|\n---\n\n|SNR|Eq(6)|Eq(7)|Eq(8)|Eq(9)|Eq(10)|FISTA[13]|ReconNet[97]|TVAL3[96]|SCAD[94]|MCP[24]|\n|---|---|---|---|---|---|---|---|---|---|---|\n|$\\infty$|**33.40**|31.25|30.00|32.65|32.65|29.97|27.01|28.77|30.55|31.30|\n|20dB|**24.60**|22.83|23.39|22.00|23.98|21.80|19.99|20.49|22.60|23.01|\n|30dB|**27.61**|26.56|26.90|26.00|27.25|24.91|22.01|23.99|26.10|26.77|\n---\n\n|(Experiment 3) Algorithm 2 - unfolded LISTA. $p = 0.85$ for Eq(6)|\n---\n\n|SNR|$m/n$|Eq(6)|Eq(7)|Eq(8)|Eq(9)|Eq(10)|$\\ell_{1}$-norm[86]|ReconNet[97]|\n|---|---|---|---|---|---|---|---|---|\n|$\\infty$|0.2|**31.32**|29.20|29.87|28.56|30.58|27.95|26.59|\n||0.4|**36.10**|33.50|34.34|32.75|35.20|32.01|31.86|\n||0.6|**41.27**|37.81|38.90|36.09|40.05|35.82|34.42|\n|20dB|0.2|**26.00**|24.45|24.94|23.97|25.01|23.52|22.00|\n||0.4|**32.67**|30.64|31.32|30.02|32.29|29.43|28.24|\n||0.6|**34.38**|33.00|33.28|32.94|33.64|32.60|30.20|\n|30dB|0.2|**27.65**|26.20|26.66|25.75|27.15|25.32|23.64|\n||0.4|**34.33**|31.89|32.66|31.02|33.47|30.46|29.88|\n||0.6|**37.03**|34.84|35.54|34.17|36.27|33.53|31.71|\n---\n\n|(Experiment 3) Algorithm 2 - unfolded ISTA-Net. $p = 0.85$ for Eq(6)|\n---\n\n|SNR|$m/n$|Eq(6)|Eq(7)|Eq(8)|Eq(9)|Eq(10)|$\\ell_{1}$-norm[98]|\n|---|---|---|---|---|---|---|---|\n|$\\infty$|0.2|**32.50**|30.15|30.89|29.04|31.67|28.77|\n||0.4|**38.33**|35.72|36.55|34.92|37.41|34.17|\n||0.6|**43.61**|40.07|41.18|39.02|42.36|38.02|\n|20dB|0.2|**28.29**|26.22|26.87|25.60|27.56|25.01|\n||0.4|**33.96**|32.11|32.71|31.55|33.32|31.00|\n||0.6|**35.77**|34.68|35.03|34.33|35.39|33.99|\n|30dB|0.2|**29.34**|28.30|28.63|27.97|28.98|27.65|\n||0.4|**35.41**|33.33|33.99|32.69|34.68|32.08|\n||0.6|**38.95**|36.25|37.10|35.43|38.00|34.65|\n---\n\n---\n|Table 5. Performance comparison, in terms of SSIM. (Experiment 1) Algorithm 1, $p = 0.5$ for Eq(6)|\n---\n\n|SNR|Eq(6)|Eq(7)|Eq(8)|Eq(9)|Eq(10)|FISTA[13]|TVAL3[96]|ReconNet[97]|\n|---|---|---|---|---|---|---|---|---|\n|$\\infty$|**0.9486**|0.937|0.940|0.933|0.944|0.925|0.929|0.922|\n|20dB|**0.8675**|0.849|0.855|0.843|0.861|0.832|0.838|0.826|\n|30dB|**0.9055**|0.894|0.898|0.890|0.901|0.883|0.887|0.880|\n---\n\n|Table 7. Performance comparison, in terms of SSIM|\n---\n\n|(Experiment 3) Algorithm 2 - unfolded LISTA. $p = 0.85$ for Eq(6)|\n---\n\n|SNR|$m/n$|Eq(6)|Eq(7)|Eq(8)|Eq(9)|Eq(10)|$\\ell_{1}$-norm[86]|ReconNet[97]|\n|---|---|---|---|---|---|---|---|---|\n|$\\infty$|0.2|**0.927**|0.913|0.918|0.908|0.923|0.903|0.899|\n||0.4|**0.961**|0.942|0.948|0.936|0.954|0.930|0.924|\n||0.6|**0.989**|0.962|0.970|0.953|0.979|0.944|0.936|\n|20dB|0.2|**0.869**|0.862|0.864|0.860|0.866|0.858|0.856|\n||0.4|**0.937**|0.920|0.925|0.915|0.931|0.909|0.904|\n||0.6|**0.949**|0.941|0.944|0.938|0.946|0.935|0.932|\n|30dB|0.2|**0.887**|0.878|0.881|0.875|0.884|0.871|0.868|\n||0.4|**0.951**|0.931|0.938|0.925|0.944|0.919|0.913|\n||0.6|**0.961**|0.954|0.956|0.952|0.959|0.949|0.947|\n---\n\n|(Experiment 3) Algorithm 2 - unfolded ISTA-Net. 
p = 0.85 for Eq(6)|\n---\n\n|SNR|$m/n$|Eq(6)|Eq(7)|Eq(8)|Eq(9)|Eq(10)|$\\ell_{1}$-norm[86]|\n|---|---|---|---|---|---|---|---|\n|$\\infty$|0.2|**0.935**|0.921|0.926|0.917|0.930|0.913|\n||0.4|**0.973**|0.954|0.960|0.947|0.966|0.941|\n||0.6|**0.990**|0.969|0.976|0.963|0.983|0.956|\n|20dB|0.2|**0.882**|0.874|0.877|0.871|0.880|0.869|\n||0.4|**0.950**|0.932|0.938|0.926|0.944|0.920|\n||0.6|**0.961**|0.952|0.955|0.949|0.958|0.946|\n|30dB|0.2|**0.899**|0.950|0.932|0.968|0.915|0.987|\n||0.4|**0.964**|0.943|0.950|0.937|0.957|0.930|\n||0.6|**0.985**|0.969|0.974|0.964|0.980|0.958|\n---\n\n* **Q2.** The parameter value for $\\lambda$ was variable. The best value for $\\lambda$ was chosen using cross validation. We have stated this in Experiments and Results section (see line 285 in the revised draft).\n\n# Discussion about weaknesses\n* **1.** We have not included efficiency or computational complexity studies as part of this work, as we considered them as out of scope. Our focus, instead, has been around potential utility of invex functions in imaging applications supported by theoretical guarantees than developing a best-performing regularizer. We will endeavour to optimize the efficiency of our regularizers as part of our future work. Thank you for pointing this out.", " The paper provides convergence results attached with invex regularizers. Few non-convex regularizers with no global convergence guarantees profited from this analysis through invexity. Authors establish convergence guarantees to global optima for various advanced image reconstruction techniques with invex regularizers. Authors prove few functions as invex functions and provide the proximal operators corresponding to them. These invex regularizers were utilized for various imaging tasks.\n Strengths:\n1. Invex regularizers give a decent theoretical analysis of a few non-convex regularizers.\n2. Experiments included various image restoration tasks such as compressed image recovery and image denoising.\n\nAcknowledgments:\n1. Authors included more comparisons with TVAL3, ReconNet, and ISTA-Net as part of the revision.\n2. More Image reconstruction examples are provided in the supplementary document as part of the revision.\n\nIssues:\n1. The revised paper is more than ten pages long. I believe it has to be limited to 9 pages until acceptance. Suggestions: (I am not asking authors to work on this right away. These are merely suggestions for a camera-ready version.)\n1. The paper presentation could be improved. A comparative plot between all invex functions would've clarified things.\n2. A visual aid with optimization landscape of Eq (11) i.e., $\\| x - u \\|^2$ + Invex function, will give more insights into the convexity of the problem, and comparison with $\\| x - u \\|^2 + \\ell_1$ will help us understand why invex regularizers are performing better than convex regularizers. \n3. Convergence analysis has been provided in theory. It would be better if the comparison of the algorithms has been provided in practice (experimental validation). plots with reconstruction MSE vs iterations, MSE vs time would be suggestible.\n4. Proximal operators of non-convex penalties are known to have longer computation time. So, a comparison of the proposed methods running times will establish solid ground for the paper. The authors discussed the limitations of their work.", " The authors consider the task of image reconstruction with regularization. 
Beginning at least a few decades past, regularizers have been adapted by various communities first from L2 quadratic penalties to L1-like edge-preserving penalties. More recently, motivated by compressed sensing researchers have used L1 or even non-convex penalties. Non-convex penalties are more difficult to analyze from a theoretical perspective, but empirically have been observed to produce the best results for image reconstruction tasks.\n\nIn this manuscript the authors propose to focus on classes of regularizers that are invex functions. Invex functions are functions where \"at any point where the derivative vanishes (stationary point), it is a global minimizer of the function.\" The manuscript's contributions are three-fold: 1) the manuscript provides the first list of regularizers with proved invexity for image reconstruction, 2) the manuscript establishes convergence guarantees for three types of image reconstruction problems with invex regularizers and 3) the manuscript provides empirical data on the performance of the regularizers. **Strengths**\n- The authors present convergence theory for three problems of substantial interest to the reconstruction community: denoising, compressed sensing, and plug-and-play reconstruction. Each of these has been applied in practical settings. The paper includes global convergence guarantees for each setting.\n- The invex approach seems to be a new and interesting perspective into inverse problems. In particular, the quasi-norm of Equation (6) has long been of great interest to the image reconstruction community due to its relations to compressed sensing, so it is encouraging to see that the current manuscript's theory can be applied to this case.\n- In addition to theoretical results, the paper also includes empirical results for each of the regularizers under consideration.\n\n**Weaknesses**\n- Some properties of H in Theorem 4 seem inconsistent with how it is used in the subsequent algorithms. For example, Theorem 4 states that H is unitary, but H is elsewhere specified to have full row rank. The specific properties of H necessary for each theorem setting should be further clarified.\n- The presentation of Table 3 is somewhat confusing.\n- The variations in invex regularizer performance is not discussed. 1. How can the H matrix in Theorem 4 be unitary if m < n? If m < n, H cannot have an orthonormal basis for vectors of size n.\n2. Could the annotation of Table 3 be improved? This is one of the most important results of the paper. I would bold the best-performing PSNR value in each row as is standard practice at machine learning conferences rather than the regularizer class. Also, the performance of Eq. (6) should be discussed.\n3. How were the datasets used? Did the authors merge all datasets under consideration? These details should be further specified for the experiments. The authors discussed limitations under the relevant theorems and lemmas, but for practitioners I think it would be better to have a dedicated Limitations section. A few items that might be considered for such a section:\n\n- The plug-and-play result only shows that the optimization sequences converges to a close estimate of the solution.\n- The theory does not seem to consider noisy measurements.\n- The reconstruction tasks under consideration seem to be limited to deconvolution.", " This manuscript revisits the concept of invexity and provides the first list of proved invex regularizers for improving image reconstruction. 
Convergence conditions are analyzed and experiments are conducted on practical applications. Strength:\n1.\tThe paper is well-written with compact structure and clear interpretations.\n2.\tSome useful regularizers in imaging applications are formally studied in a new perspective of invex function. Besides, a simple and valid way of constructing an invex function is provided, which may potentially inspire later researchers.\n3.\tExperimental comparisons indicate that the proposed invex regularizers have potential to improve performance in practical applications. \nWeakness:\nThe proposed new invex function doesn’t achieve the best efficiency compared with the existing non-convex functions.\n 1.\tCan you compare the performance with some CNN-based baselines?\n2.\tFor different regularizers, were the parameters (e.g. \\lambda in eq.12) chosen as the same value?\n No potential negative societal impact is mentioned. Both theoretical analysis and practical applications of invex functions are provided. I believe the proposed finite list will inspire exploration in related fields. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "bnWXxB5uKR0", "x3ZMZKRGBHx", "-0eNFI4cJoP", "Vm4TmtHQsb_i", "2ITW96jtnN", "srxk-bpmZ2", "LcilE_ky0vB", "p4PdNCvwGiw", "E1I3QWsO1dv", "nips_2022_VpHFHz57fT", "nips_2022_VpHFHz57fT", "nips_2022_VpHFHz57fT" ]
nips_2022_K4W92FUXSF9
Random Normalization Aggregation for Adversarial Defense
The vulnerability of deep neural networks has been widely found in various models as well as tasks where slight perturbations on the inputs could lead to incorrect predictions. These perturbed inputs are known as adversarial examples and one of their intriguing properties is Adversarial Transferability, i.e. the capability of adversarial examples to fool other models. Traditionally, this transferability is always regarded as a critical threat to the defense against adversarial attacks, however, we argue that the network robustness can be significantly boosted by utilizing adversarial transferability from a new perspective. In this work, we first discuss the influence of different popular normalization layers on the adversarial transferability, and then provide both empirical evidence and theoretical analysis to shed light on the relationship between normalization types and transferability. Based on our theoretical analysis, we propose a simple yet effective module named Random Normalization Aggregation (RNA) which replaces the batch normalization layers in the networks and aggregates different selected normalization types to form a huge random space. Specifically, a random path is sampled during each inference procedure so that the network itself can be treated as an ensemble of a wide range of different models. Since the entire random space is designed with low adversarial transferability, it is difficult to perform effective attacks even when the network parameters are accessible. We conduct extensive experiments on various models and datasets, and demonstrate the strong superiority of the proposed algorithm. The PyTorch code is available at https://github.com/UniSerj/Random-Norm-Aggregation and the MindSpore code is available at https://gitee.com/mindspore/models/tree/master/research/cv/RNA.
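To make the "random path per inference" idea from the abstract concrete, here is a toy PyTorch sketch of one such layer; the class name and the choice of two candidate normalizations are our illustrative assumptions, and the paper's released code may organise this differently.

```python
import random
import torch
import torch.nn as nn

class RandNorm(nn.Module):
    # Toy sketch of one RNA-style layer: a normalization type is re-drawn
    # at random on every forward pass, so each inference follows a random
    # path through the network.
    def __init__(self, num_features):
        super().__init__()
        self.choices = nn.ModuleList([
            nn.BatchNorm2d(num_features),
            nn.InstanceNorm2d(num_features),
        ])

    def forward(self, x):
        return random.choice(self.choices)(x)

layer = RandNorm(8)
x = torch.randn(2, 8, 16, 16)
print(layer(x).shape)   # torch.Size([2, 8, 16, 16]), with a freshly sampled norm
```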
Accept
This paper introduces the relation between normalizations and adversarial transferability, and proposes a method using random normalization aggregation for enhancing adversarial robustness. Three reviewers agreed with the interesting idea, thorough experiments, theoretical analysis, and the effectiveness, so they gave acceptance scores. However, one reviewer (ioBB) raised a concern about the results of Auto-Attack (AA) and the lack of in-depth discussion of these results. Unfortunately, AC and the reviewers failed to reach a consensus on the decision during the discussion period. AC carefully read the paper, the rebuttal, and the reviewers' discussion. The main remaining issue raised by Reviewer-ioBB is that it is not clear why the results of AA are higher than PGD. The authors provided more extensive experimental results, focusing on comparing the results of AA (and a breakdown of the four AA attacks) and PGD. Also, they conjecture these results stem from handling adversarial transferability under a randomness-based method via normalization aggregation. AC also agrees with the authors and Reviewer-q4wp that an in-depth analysis of the reason for the AA-PGD results is out of scope for this paper, and these results are consistent with recent works on adversarial transferability. So, this issue might be left as future work. Because it seems that the contribution of this paper is enough for the machine learning community except for the discussion of the AA-PGD reason, AC recommends accepting this paper.
val
[ "9Mi0ZKs_KJu", "m_Pj40FCq3M", "rm_Xy5T3vIc", "uZoJD27svsF", "BMdhg5cn6u", "NEWlyXiaA2", "J5igU5TTFst", "LrBIAQhEo2", "4yBJht7-9LO", "7x3Q7KDMBKI", "iv8Gf2P3NAf", "JV85WO8uf5r", "MzFTdBRfwjP", "EFQhhXZ5No3", "nSKAALteAYA", "U5IfnGUqYIb", "Eaq81KDPOQ", "Fv9NOoBGkg9", "79myZnvaau", "gtI-3418p5W", "37C85Zf-364", "dEpSZTfAD-C", "5p_AwfsA59F" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate all the reviewers for their valuable comments and suggestions. We have revised the manuscript according to the comments. The updated contents are highlighted by blue text in the revised manuscript.", " ### Re Auto-Attack Results (Cont.)\nThanks for your reply again. We carefully checked the results and logs of APGD and found that the gap between APGD and AA in the previous table is a mistake$^{*}$. Sorry for the confusing results. We report the correct results of four attacks in Auto-Attacks separately in the following table:\n\n| Method | PGD | APGD | APGDT | FAB | Square | AA |\n| ------- | ---- | ----- | ------ | --- | ------ | --- |\n| RNA | 60.69 | 65.33 | 71.28 | 81.86 | 79.84 | 65.61 |\n\nAs shown in the table, both AA and APGD achieves the similar results. The remaining small gap $0.28$ could be a result of the randomness of our algorithm. The reason why APGD has similar performance to AA mainly lies in the ratio of adversarial examples generated by APGD is much larger than the others. Taking the results of ResNet-18 on CIFAR-10 as an example, the ratio of adversarial examples generated by APGD is $0.7965$, the ratio of adversarial examples generated by other attacks is $0.0118\\%$, and the ratio of remaining clean examples is $0.1917$. Thus, AA and APGD have the similar results.\n\n\n### Re Setting of AA\nFollowing the setting in other randomized defense work [38], the version of AA is set to ‘standard’, which is also same to the official implementation of [g, h].\n\nAs suggested, to further verify the performance of RNA in EOT format of AA, we set the version of AA to ‘rand’ for evaluation of RNA with ResNet-18 on CIFAR-10 where the gradients in $APGD_{ce}$ and $ APGD_{dlr}$ is computed by the expectation of $20$ EOT iterations. The results are reported in the following table:\n\n| Method | AA$_{standard}$ | AA$_{rand}$ |\n| ------- | --------------- | ---------- | \n| RNA | 65.61 | 63.38 |\n\nAs shown in the table, the rand version of AA achieves slightly higher attack success rate. Compared with other normalization baselines, RNA still shows superiority. We mainly attribute it to the designed random space with lower transferability, which makes it difficult to compute appropriate gradient via a simple expectation.\n\n[g]. On Improving Adversarial Transferability of Vision Transformers. ICLR 2022 Spotlight.\n\n[h]. Demystifying the Adversarial Robustness of Random Transformation Defenses. ICML 2022.\n\n\\* Most existing works use AA in their experiments, we also follow this tradition and use the standard code. This is our first time to conduct breakdown experiments of AA, we did a quick implementation during the rebuttal. Thanks to the thoughtful comments of the reviewer, we fixed a bug in this breakdown experiment. Obviously, this breakdown experiment and its code are isolated from the major experiment of the paper. Thus, there is no potential influence on the correctness of the whole paper.\n", " Dear authors,\n\nThank you for your response.\n\nRegarding the results -- if A-PGD achieves 63.31, this number should be AA's upper bound, since A-PGD is the first to run. Why AA has 65.61? It doesn't adds up.\n\nregarding the setting of AA -- did you ran the random version (version='rand')? this will be the fair comparison.", " ### Re Auto-Attack Results (Cont.)\n\nThanks for your reply. We first evaluate RNA under four attacks in Auto-Attack separately, including APGD, APGDT, FAB, and Square. All the attack settings are the same as those in Auto-Attack. 
The accuracy of RNA with ResNet-18 on CIFAR-10 under the different attacks is reported in the following table:\n\n| PGD | APGD | APGDT | FAB | Square | AA |\n| ---- | ----- | ------ | --- | ------ | --- |\n| 60.69 | 63.31 | 72.12 | 81.56 | 78.84 | 65.61 |\n\nAs shown in the table, there exists a gap between PGD and APGD/AA. We mainly attribute this gap to the efficiency-oriented design of APGD and Auto-Attack, where the attacks are terminated once a successful adversarial example is found. Although this is effective and efficient for deterministic models, it might not work for randomized models, such as RNA.\n\nSome relevant work also discusses the performance of Auto-Attack against stochastic models. For example, [h] mentioned\n> “One of the reasons is that AutoAttack is optimized for efficiency and so each of its attacks is usually terminated once a misclassification occurs. This is applicable to deterministic models, but for stochastic ones such as an RT defense, the adversary is better off finding the adversarial examples that maximize the expected loss instead of ones that are misclassified once” and “AutoAttack is ineffective against randomized models”\n\nin Section 6.5, which corresponds to our explanation.\n\nFinally, we argue that the major contribution of our work lies in the defense via normalization aggregation, which explores and utilizes the adversarial transferability among different normalizations. The reason why Auto-Attack is not effective in adversarial transferability is actually out of scope. Even the current work in ICML 2022 [h] only provides a possible explanation of the poor performance of Auto-Attack against randomized models. Thus, it remains an open problem for further research, and a separate work might be required to further analyze the reasons. We thank the reviewer for the in-depth understanding of Auto-Attack and the comments.\n\n[h]. Demystifying the Adversarial Robustness of Random Transformation Defenses. ICML 2022.\n", " Dear authors,\n\nI still don't fully agree with the explanation.\nLet's focus on one of the attacks, A-PGD, and leave aside the other 3 attacks. \n\nA-PGD should perform similarly to PGD. What is your performance on A-PGD? \n\nNow there are two cases:\n\n1) If the results are similar to PGD, this means that AA results should be at least similar to PGD.\n\n2) If A-PGD and PGD results are not similar, it requires an explanation of the reason that the adaptive learning rate gives a large gap.\n", " Thank you for your reply. We have uploaded a revised version of the manuscript. The revised parts are highlighted in blue.\n\n### Re Missing related work section\nWe have added a related work section in the currently uploaded version, which includes a discussion of the references mentioned by all the reviewers.\n\n### Re Updating the entire manuscript\nCurrently, we have updated Tables 1, 2 and 3 and Figure 5 in the experiment section, which are our major results. The remaining experiments only include Table 4 and Table 5, which contain the comparisons of our own algorithm with ResNet-18 on CIFAR-10, and will probably be completed by the end of the rebuttal time. Thanks for your patience and support.\n\n### Re Auto-Attack results\n\nWe agree that Auto-Attack is a much stronger attack than PGD **in the white-box attack setting**. However, this advantage is not effective in our setting, since Auto-Attack is not powerful in **adversarial transferability**. 
Note that our proposed RNA designs a random space with lower adversarial transferability via the aggregation of normalization layers. In other words, it naturally forms a **transfer-based black-box attack setting** where the generated adversarial examples are fed to another unknown path due to the random sampling strategy. Although Auto-Attack can easily generate successful adversarial examples for a particular path of RNA, the Auto-Attack results in Table 1 and Table 2 are determined by the performance of the generated adversarial examples on another randomly sampled path of RNA, which refers to the adversarial transferability of Auto-Attack. \n\nTo the best of our knowledge, the adversarial transferability of Auto-Attack can be worse than that of PGD. For example, some empirical evidence can be found in Table 7 of [g]. We put part of the results here for illustration. The following table denotes the attack success rate of adversarial examples generated by PGD and Auto-Attack using different source models, including Deit-T and Deit-B. The target models correspond to the columns, including Res152, WRN, DN201, T2T-24, T2T-7, TnT, and ViT-S.\n\n| Model | Attack | Res152 | WRN | DN201 | T2T-24 | T2T-7 | TnT | ViT-S |\n| ------ | ----- |------- | ---- | ------ | ------ | ----- | --- | ----- |\n| Deit-T | PGD | 15.2 | 17.9 | 20.3 | 9.6 | 30.8 | 33.6 | 63.7 |\n| Deit-T | AA | 9.0 | 8.9 | 9.3 | 4.8 | 18.8 | 10.3 | 33.7 |\n| Deit-B | PGD | 20.4 | 18.6 | 28.9 | 19.4 | 24.8 | 35.2 | 44.8 |\n| Deit-B | AA | 14.4 | 13.6 | 16.5 | 12.4 | 22.4 | 21.6 | 32.8 |\n\nAs shown in the table, PGD always achieves a better attack success rate than AA, which implies that the adversarial transferability of AA can be worse than that of PGD.\n\n[g]. On Improving Adversarial Transferability of Vision Transformers. ICLR 2022 Spotlight.\n", " Dear authors,\n\nThanks for your prompt reply, the experiments addressed my concerns, and so I raise my rating.", " ### Re Expectation over transformation (Cont.)\n\nThanks for your reply. We clarify the setting in EOTPGD. EOTPGD aims at attacking randomized models. 
Following TorchAttacks [26] and official implementation of [b], EOT is always combined with PGD/FGSM attack to evaluate the defense performance of randomized model, which corresponds to our setting. Thus, we conduct experiments to evaluate RNA under EOT+PGD (EOTPGD) attack using TorchAttacks [26].\n\nIn [a], Expectation over Transformation (EOT) is suggested to compute the gradient over the expected transformation to the input. In EOTPGD, instead of computing the gradient of one sampled path in each PGD iteration, the gradient is computed based on the expectation of $N$ sampling times due to the randomness of network, which we strictly follow the attack setting in [26, b]. As suggested in the comments, we increase $N$ to $100$ to evaluate the performance of RNA with more EOT iterations. Specifically, the attack iteration of PGD is set to $20$ and **in each PGD iteration**, the iteration of EOT is set to $100$. The results are reported in the following table:\n\n\n| Network | dataset | EOT$^{100}$ PGD$^{20}$ | PGD$^{20}$ |\n| ------- | ------ | --------- | --------- |\n| ResNet-18 | C-10 | 65.27 | 59.79 |\n| ResNet-18 | C-100 | 38.84 | 35.51 |\n\nAs shown in the table, the attack success rate becomes lower if EOT with more iterations is applied. We mainly attribute it the designing of random space of RNA. Due to the lower adversarial transferability among the paths as well as the exponential number of paths in this random space, it is difficult for EOT attacks to compute the accurate gradient with expectation.\n\n[a]. Synthesizing Robust Adversarial Examples. ICML 2018.\n\n[b]. Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness. CVPR 2020.\n", " We appreciate your efforts in reviewing our paper. We have tried our best to provide the experimental results as suggested. Would you mind checking our response, and is there any further question we need to clarify?", " ### Re Low AA results with WRN baselines. [Updated]\n\nAccording to the suggestions, we also follow the training setting in [a] to retrain all the baselines including RNA on CIFAR-100. The results are reported in the following tables.\n\nResNet-18:\n\n| Method | Natural | FGSM | PGD$^{20}$ | CW | MIFGSM | DeepFool | AutoAttack |\n| ------- | ------ | ----- | --------- | ----- | ----- | -------- | ---------- |\n| BN | 55.81 | 31.33 | 28.71 | 50.94 | 30.26 | 0.79 | 24.48 |\n| BGN32 | 53.16 | 30.11 | 27.75 | 48.74 | 29.11 | 0.54 | 23.29 |\n| IN | 52.92 | 27.56 | 23.16 | 47.70 | 25.91 | 2.86 | 19.33 |\n| GN32 | 51.00 | 28.32 | 25.10 | 45.82 | 27.10 | 0.79 | 21.00 |\n| LN | 48.82 | 27.05 | 23.83 | 44.07 | 26.93 | 0.80 | 19.97 |\n| RNA | 56.79 | 36.76 | 35.55 | 56.86 | 34.00 | 51.77 | 42.12 |\n\nWRN:\n\n| Method | Natural | FGSM | PGD$^{20}$ | CW | MIFGSM | DeepFool | AutoAttack |\n| ------- | ------ | ----- | --------- | ----- | ----- | -------- | ---------- |\n| BN | 60.11 | 35.40 | 31.69 | 57.11 | 34.14 | 0.23 | 28.36 |\n| BGN32 | 58.54 | 34.13 | 30.97 | 54.08 | 33.01 | 0.46 | 26.92 |\n| IN | 56.71 | 31.96 | 28.09 | 51.98 | 30.33 | 1.83 | 24.25 |\n| GN32 | 59.08 | 33.56 | 29.94 | 53.89 | 32.30 | 1.12 | 25.78 |\n| LN | 57.09 | 32.92 | 29.75 | 52.19 | 31.73 | 0.79 | 25.71 |\n| RNA | 60.57 | 37.87 | 36.04 | 60.21 | 35.58 | 55.16 | 42.43 |\n\n\nAgain, RNA achieves the best performance and WRN results are better than ResNet-18 on CIFAR-100.\n", " Dear authors, \n\nMy concerns/questions are partially addressed. The setting used in ``Re Expectation over transformation'' is a bit confusing. 
Specifically, EOT introduces more random transformations for generating adversarial examples, but the authors use the same PGD iteration. I suggest reporting results with more iterations, e.g., more iterations under each transformation and more transformations. \n\nIf the author can address the concern, I will raise my rating.\n\n", " ### Re Low AA results with WRN baselines.\nAccording to the suggestions, we follow the training setting in [a] to retrain all the baselines including RNA. The results are reported in the following tables.\n\nResNet-18:\n\n| Method | Natural | FGSM | PGD$^{20}$ | CW | MIFGSM | DeepFool | AutoAttack |\n| ------- | ------ | ----- | --------- | ----- | ----- | -------- | ---------- |\n| BN | 81.84 | 56.70 | 52.16 | 78.46 | 54.96 | 0.35 | 47.69 |\n| BGN32 | 77.28 | 54.60 | 50.71 | 73.57 | 53.05 | 0.39 | 46.12 |\n| IN | 77.07 | 50.07 | 42.76 | 72.51 | 47.63 | 2.27 | 38.30 |\n| GN32 | 76.69 | 52.60 | 45.66 | 72.92 | 50.26 | 0.72 | 41.83 |\n| LN | 79.81 | 53.88 | 45.44 | 75.51 | 50.72 | 1.13 | 41.48 |\n| RNA | 84.29 | 63.10 | 60.69 | 84.45 | 60.70 | 76.73 | 65.61 |\n\nWRN:\n\n| Method | Natural | FGSM | PGD$^{20}$ | CW | MIFGSM | DeepFool | AutoAttack |\n| ------- | ------ | ----- | --------- | ----- | ----- | -------- | ---------- |\n| BN | 85.27 | 60.65 | 55.06 | 82.24 | 58.47 | 0.40 | 52.24 |\n| BGN32 | 83.70 | 59.66 | 54.96 | 80.25 | 57.85 | 0.37 | 51.38 |\n| IN | 84.11 | 58.14 | 50.37 | 80.13 | 55.37 | 1.62 | 46.81 |\n| GN32 | 83.45 | 58.95 | 51.94 | 79.55 | 56.70 | 1.37 | 47.99 |\n| LN | 83.24 | 57.80 | 49.74 | 79.39 | 54.68 | 0.95 | 46.44 |\n| RNA | 86.46 | 65.73 | 63.34 | 85.68 | 62.84 | 78.18 | 67.88 |\n\nAs shown in the tables, RNA still achieves the best performance in all the scenarios after applying the training recipe in [a], which demonstrates that RNA can be easily incorporated into other defense techniques to further improve the performance.\n\n[a]. Overfitting in adversarial robust deep learning\n\n### Re Other AT methods.\nFollowing the suggestions, we replace standard AT with FAT [c] and MART [d], and we evaluate the performance of them with ResNet-18 on CIFAR-10. The results are reported as follows:\n\n| Method | Natural | FGSM | PGD$^{20}$ | CW | MIFGSM | DeepFool | AutoAttack |\n| ------- | ------ | ----- | --------- | ----- | ----- | -------- | ---------- |\n| AT | 85.33 | 63.88 | 59.79 | 84.42 | 61.05 | 76.83 | 66.35 |\n| FAT [c] | 86.93 | 61.43 | 56.48 | 86.76 | 57.13 | 73.79 | 60.70 |\n| MART [d] | 86.61 | 63.63 | 60.28 | 80.12 | 61.28 | 71.45 | 64.67 |\n\nWith other AT methods, the defense performance can be different. For example, MART achieves the best PGD$^{20}$ accuracy, FAT achieves the best CW accuracy, and AT achieves the best AutoAttack accuracy. RNA can be simply incorporated into other AT methods to further improve the defense performance.\n\n[c]. Zhang, Jingfeng, et al. \"Attacks which do not kill training make adversarial learning stronger.\" International conference on machine learning. PMLR, 2020.\n\n[d]. Wang, Yisen, et al. \"Improving adversarial robustness requires revisiting misclassified examples.\" International Conference on Learning Representations. 2019.\n\n### Re Related work.\nFollowing the suggestions, we will discuss [e] and [f] in our paper. [e] proposed to use two different BN layers for clean images and adversarial images to enhance robustness and studied the role of network capacity. 
[f] proposed to use two different BN layers for clean images and adversarial images so that the involvement of adversarial training can improve network generalization.\n\nFor the comparison, [e] only reported the results with deeper networks on ImageNet, such as ResNet-152, and [f] aims at improving the generalization of EfficientNet instead of the adversarial robustness. Thus, we cannot directly use the reported results in [e] and [f] for comparison. Instead, we implement the BN mixture module proposed in [e] and [f] and apply it to ResNet-18 for comparison. Note that we feed clean images to $BN_{clean}$ and adversarial examples to $BN_{adv}$ during the evaluation of BN mixture, which could be unfair for RNA. The results are reported as follows:\n\n| Method | Dataset | Natural | FGSM | PGD$^{20}$ | CW | MIFGSM | DeepFool | AutoAttack |\n| ------- | ------- | ------ | ----- | --------- | ----- | ----- | -------- | ---------- |\n| RNA | C-10 | 85.33 | 63.88 | 59.79 | 84.42 | 61.05 | 76.83 | 66.35 |\n| bnmix[e,f] | C-10 | 95.03 | 54.60 | 44.99 | 80.87 | 51.21 | 0.89 | 42.45 |\n| RNA | C-100 | 57.62 | 36.62 | 35.51 | 57.14 | 35.84 | 53.12 | 42.41 |\n| bnmix[e,f] | C-100 | 72.21 | 26.41 | 21.22 | 51.92 | 24.61 | 0.67 | 19.25 |\n\nAlthough bnmix achieves better natural accuracy since bnmix trains separate BN for clean and adversarial examples, RNA achieves much better adversarial accuracy than bnmix in all the scenarios.\n\n[e]. Intriguing Properties if Adversarial training at Scale\n\n[f]. Adversarial Examples Improve Image Recognition\n\n### Re Evaluation of the results.\nWe provide the evaluation code and pretrained model with an anonymous link:\n\nhttps://drive.google.com/file/d/1JZ3Q5WB7jgvGbAgVGc9BJ0Ol0nQjM2de/view\n\n\n### Re miscellaneous:\nThanks for the advice. We will revise the manuscript according to the suggestions.\n\n", " We thank the reviewer for the constructive comments and valuable suggestions. We follow the training setting in [a] which is suggested by the reviewer and rerun all the baselines on CIFAR-10. We find that all the baselines with WideResNet can achieve much better results. We summarize all the changes to training setting as follows:\n1. The training epoch is set to 200 instead of 160.\n2. The learning rate scheduler is set to piecewise decay instead of cosine.\n3. The standard adversarial training setting is set to PGD-10 instead of PGD-7.\n\nThese changes strictly follow the training setting from the official github of [a]. We believe these new experimental results will address the concerns. Currently, we are working on the experiments on CIFAR-100 and we will update the results once they are ready.\n\n[a]. Overfitting in adversarial robust deep learning\n\n### Re Random path at each iteration of PGD.\nWe generate the adversarial examples as suggested, which randomizes different paths at each iteration of PGD. We evaluate the performance of RNA under this attack with both ResNet-18 and WideResNet on CIFAR-10/100, marked as DP PGD. The attack settings including iterations, step size, and perturbation size remain the same as PGD in the paper. The results are reported as follows:\n\n| Network | dataset | DP PGD$^{20}$ |\n| ------- | ------ | --------- |\n| ResNet-18 | C-10 | 73.57 |\n| ResNet-18 | C-100 | 45.69 |\n| WideResNet | C-10 | 74.19 |\n| WideResNet | C-100 | 47.62 |\n\nAfter randomly sampling path in each iteration of PGD, the attack success rate becomes much lower than standard PGD. 
### Re AA results on ResNet-18 are better than WRN-32.\nWe mainly attribute the lower AA results of WRN to our weak training recipe. According to the suggestions, we follow the training setting in [a] to train WRN with RNA. The results are reported in the following table:\n\n| Setting | Training | Natural | FGSM | PGD$^{20}$ | CW | MIFGSM | DeepFool | AutoAttack |\n| ------- | ------- | ------ | ----- | --------- | ----- | ----- | -------- | ---------- |\n| ResNet-18+RNA | ours | 85.33 | 63.88 | 59.79 | 84.42 | 61.05 | 76.83 | 66.35 |\n| WRN+RNA | ours | 86.22 | 64.85 | 60.49 | 85.87 | 61.21 | 57.21 | 62.91 |\n| ResNet-18+RNA | [a] | 84.29 | 63.10 | 60.69 | 84.45 | 60.70 | 76.73 | 65.61 |\n| WRN+RNA | [a] | 86.46 | 65.73 | 63.34 | 85.68 | 62.84 | 78.18 | 67.88 |\n\nAs shown in the table, both our training setting and the one of [a] achieve similar results on ResNet-18 with RNA. However, the training setting of [a] significantly improves the performance of WRN + RNA, and the adversarial accuracy of WRN is better than ResNet-18 in all the scenarios, which implies that our training setting for WRN might be relatively weak. We will revise the results in the manuscript.\n\n[a]. Overfitting in adversarially robust deep learning\n\n### Re PGD-20 acc is lower than the Auto-Attack accuracy.\nWe mainly attribute the better AA accuracy to the attack design of AA and our proposed defense mechanism. Note that AA generates the adversarial examples using four attacks and stops when adversarial examples are found. The examples remain clean if none of these four techniques attacks successfully. Compared to the PGD attack, which takes 20 iterations to maximize the cross-entropy loss on all examples, these examples remaining clean after AA significantly reduce the attack success rate when all the examples are fed to another path in RNA. For example, AA generates adversarial examples using four attacks for around 80% of all examples, and the remaining 20% stay clean, with ResNet-18 on CIFAR-10.\n\nIn the white-box setting, AA can generate powerful adversarial examples through multiple attack methods. However, the attack success rate of AA can be relatively low in our black-box setting due to the randomness of our algorithm. RNA designs a random space over normalizations with lower adversarial transferability. In other words, an effective attack on RNA should have powerful adversarial transferability among paths with different normalizations, which cannot be achieved by AA.\n\nWe also find similar results in other work [b], which involves randomness in the input images, as shown in the following table.\n\n| Method | Dataset | PGD$^{20}$ | AA |\n| ------- | ------ | --------- | --- |\n| Random[b] | C-10 | 47.06 | 47.27 |\n| Hedge[b] | C-10 | 48.31 | 51.82 |\n| RNA | C-10 | 60.49 | 62.91 |\n| Random[b] | C-100 | 25.43 | 31.74 |\n| Hedge[b] | C-100 | 28.23 | 35.54 |\n| RNA | C-100 | 37.76 | 41.23 |\n\n[b]. Wu, Boxi, et al. \"Attacking adversarial attacks as a defense.\" arXiv preprint arXiv:2106.04938 (2021).\n\n", " We thank the reviewer for the constructive comments and valuable suggestions.\n\n### Re Sampling probability.\n\nWe take ResNet-18 as an example, which has 20 normalization layers. The probability of sampling the same path in both the attacking and inference stages is $9.53 \times 10^{-5}$%, while the probability of sampling paths with a path difference of 10 is $17.62$%. Furthermore, the discussion of the influence of path difference has been included in Figure 5 (c) and Section 4.3. The stability of RNA has also been evaluated in Section 3 of our supplementary material.\n\n### Re Other random selection methods.\n\nThe involvement of randomness in other random selection scenarios is also feasible for defense. However, our proposed RNA has several advantages. To begin with, compared with selecting sub-networks in the parameter space, RNA does not need to maintain an actual supernet where each path contains a similar capacity to popular architectures, such as ResNet and WideResNet. For example, in the field of Neural Architecture Search (NAS), it is always expensive and time-consuming to train a supernet. Furthermore, we involve the randomness in the normalizations based on both the theoretical analysis in Section 2.3 and the empirical evidence in Figure 3. According to Theorem 2.1, the upper bound of adversarial transferability is controlled by the smoothness and gradient similarity, which all have strong connections to the normalization layers in our analysis, and we provide experimental results in Table 4. Moreover, RNA is also orthogonal to other random selection methods. With the involvement of different kinds of random selections, the random space could have potentially lower adversarial transferability.\n\n### Re Empirical evidence that the smoothness of different normalization layers controls the defense performance.\n\nIn Figure 2, we visualize the smoothness of different normalizations in both 3D and 2D and find that IN is smoother than other normalization layers. In Table 4, we have tried several combinations of different normalization layers to form the random space. Comparing the performance of LN+BN and IN+BN, the smoother normalization layer IN achieves much better results than LN, which empirically verifies our theoretical analysis in Section 2.3.", " ### Re Other random selection methods.\n\nThe involvement of randomness in other random selection scenarios is also feasible for defense. However, our proposed RNA has several advantages. To begin with, compared with selecting sub-networks in the parameter space, RNA does not need to maintain an actual supernet where each path contains a similar capacity to popular architectures, such as ResNet and WideResNet. For example, in the field of Neural Architecture Search (NAS), it is always expensive and time-consuming to train a supernet. Furthermore, we involve the randomness in the normalizations based on both the theoretical analysis in Section 2.3 and the empirical evidence in Figure 3. According to Theorem 2.1, the upper bound of adversarial transferability is controlled by the smoothness and gradient similarity, which all have strong connections to the normalization layers in our analysis, and we provide experimental results in Table 4. Moreover, RNA is also orthogonal to other random selection methods. With the involvement of different kinds of random selections, the random space could have potentially lower adversarial transferability.\n", " We thank the reviewer for the constructive comments and valuable suggestions.\n\n### Re The connection between adversarial transferability and adversarial robustness.\n\nRNA builds a \"supernet\" which contains an exponential number of paths with different normalization layers.
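As a quick check, the size of this random space and the path-collision numbers quoted in the sampling-probability response above can be reproduced with a short calculation. The sketch assumes two normalization choices (BN/IN) per layer and uniform, independent path sampling, as described in the paper:

```python
from math import comb

num_layers, choices_per_layer = 20, 2        # ResNet-18 with {BN, IN} per layer
num_paths = choices_per_layer ** num_layers  # 2**20 = 1,048,576 possible paths

p_same = 1 / num_paths                       # attack and inference pick the same path
print(f"{100 * p_same:.2e} %")               # ~9.54e-05 %

p_diff_10 = comb(num_layers, 10) / num_paths # paths differ in exactly 10 layers
print(f"{100 * p_diff_10:.2f} %")            # ~17.62 %
```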
Since we apply a random sampling strategy during the inference stage, the adversarial examples generated on the sampled attacking path under white-box attacks are almost always fed to a different path for inference. This scenario corresponds to the transfer-based black-box attacks. Thus, the defense performance of this \"supernet\" built by RNA is actually determined by the level of adversarial transferability of this random space. In other words, the adversarial robustness can be boosted if the adversarial examples of paths in this \"supernet\" are difficult to transfer to other paths. That is why adversarial transferability and adversarial robustness are strongly connected in our setting.\n\nThe reason why RNA can contribute to adversarial robustness mainly lies in the three components in Section 3, including path increment in random space, black-box adversarial training, and normalization types selection. We explain the contribution of them as follows:\n* Path increment in random space: One of the key factors of this defense mechanism lies in the huge number of possible paths, which ensures that the attacking path and the inference path are different with overwhelming probability. To achieve this objective, RNA randomly samples normalization layers for each layer in the network, which exponentially increases the number of paths.\n* Black-box adversarial training: One of the most effective approaches to improve adversarial robustness is adversarial training. We incorporate RNA into adversarial training in a straightforward manner to form a \"black-box\" adversarial training algorithm. Specifically, we randomly sample paths for both adversarial example generation and network optimization, which directly reduces the adversarial transferability among paths, and thus improves the adversarial robustness.\n* Normalization types selection: We provide both a theoretical analysis of the connections between adversarial transferability and the type of normalization layers in Section 2.3 as well as a discussion of gradient similarity with empirical evidence in Figure 3. Specifically, the normalization layers with smaller group size tend to have lower adversarial transferability, and the normalization layers in different categories tend to have lower adversarial transferability. Thus, RNA selects BN and IN to form the random space, which reduces the overall adversarial transferability among possible paths.\n\n\n### Re The performance gap between BN and other normalizations (Lines 47-50).\n\nIn Lines 47-50, we discuss the gap between BN under white-box attacks and other normalizations under black-box attacks. The gap we discuss in the paper refers to the gap between accuracy under white-box attacks (around 50%) and the one under black-box attacks (around 70%). Transfer-based black-box attacks naturally achieve lower attack success rates than white-box attacks, which motivates us to involve the black-box setting in adversarial defense. According to the results of Figure 1 (c) and (d), we find a large gap between white-box and black-box accuracy. We attribute this gap to the adversarial transferability among normalization layers, which motivates us to explore the connections between normalization layers and adversarial transferability. We will revise this confusing claim in the revised manuscript.\n\n### Re Expectation over transformation.\n\nWe apply the EOTPGD attack from TorchAttacks [26] to evaluate the performance of RNA.
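A minimal sketch of such an evaluation, assuming TorchAttacks' `EOTPGD` interface and a trained `model` with `images`/`labels` batches already prepared (the `eot_iter` value here is illustrative, not necessarily the one used in our experiments):

```python
import torchattacks

# eps/alpha/steps follow standard CIFAR PGD settings; eot_iter sets how many
# random forward passes are averaged per attack step.
attack = torchattacks.EOTPGD(model, eps=8/255, alpha=2/255, steps=20, eot_iter=10)
adv_images = attack(images, labels)
preds = model(adv_images).argmax(dim=1)
robust_acc = (preds == labels).float().mean().item()
```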
The results are shown in the following table.\n\n| Network | Dataset | EOT PGD$^{20}$ | PGD$^{20}$ |\n| ------- | ------ | --------- | --------- |\n| ResNet-18 | C-10 | 59.71 | 59.79 |\n| ResNet-18 | C-100 | 34.85 | 35.51 |\n| WideResNet | C-10 | 62.04 | 60.49 |\n| WideResNet | C-100 | 36.65 | 35.98 |\n\nWe compare EOTPGD and PGD with ResNet-18 and WideResNet on both CIFAR-10 and CIFAR-100. RNA achieves similar performance under both EOTPGD and PGD.", " We thank the reviewer for the constructive comments and valuable suggestions.\n\n### Re Improve notation.\n\nWe will improve all the notations in the revised manuscript. Thanks for your suggestions.\n\n### Re $\beta$ in Equation 7.\n\nThe $\beta_a$ and $\beta_b$ in Eq. 7 denote the upper bounds of the gradient norms of networks $h_a$ and $h_b$, which correspond to two paths randomly sampled from RNA. For simplicity, we omit the subscript in the analysis of $\beta$ in Eq. 8.\n", " This work provides a clear explanation for the relationship between normalizations and adversarial transferability, which further inspires the authors to propose a random normalization aggregation method where the adversarial robustness can be significantly improved. To evaluate the effectiveness of their algorithm, the authors provided experimental results on CIFAR-10/100 and ImageNet.\n", " Strengths:\n\n1. Using adversarial transferability to boost adversarial robustness is interesting and inspiring, and it is technically and methodologically sound.\n\n2. Establishing the relationship between normalizations and adversarial transferability is technically sound, and, in my opinion, it will make some impact on our community and give some insights to more researchers.\n\nWeaknesses:\n\n1. The connection between the mentioned novel viewpoint of using adversarial transferability and improved adversarial robustness should be highlighted, i.e., why transferability can contribute to boosting robustness, and similarly why RNA can contribute to boosting robustness; moreover, do these two reasons share the same inspiration?\n\n2. For the introduction to motivation, the authors claim (lines 47-50) that the performance gap between BN and other normalizations results from \"the adversarial transferability among normalizations,\" which is a bit confusing and needs further justification. Specifically, the diagonal results are evaluated using white-box attacks, and the robust accuracy is about 50%, so the gap may come from the difference between white- and black-box attacks. As this is an essential part of inspiring the authors to investigate the relationship between transferability and normalizations (line 50), I suggest the authors provide a detailed justification.\n\n3. One concern is that the robustness may come from the designed random selection operation, so it is necessary to employ expectation over transformation, EOT [1], for the robustness evaluation to defend against the random operation.\n\nTypos:\n\n\\tilde{x} makes Eq. 2 confusing.\n\n[1] Synthesizing Robust Adversarial Examples\n\nI suggest the authors give an explanation for the difference between RNA (random selection in normalizations) and other random selection methods, i.e., selecting sub-networks in parameter or feature space. Please see [Strengths And Weaknesses]", " The paper discusses the relation between normalization layers used in DNNs and adversarial transferability. The author(s) show(s) that the choice of normalization layer highly influences the success rate of transferred adversarial examples. In detail, it is shown that adversarial examples transfer worse (i.e., have a lower success rate of fooling the network) if a different normalization layer is used in the attacked network. This fact is used to motivate a novel technique to robustify existing neural network architectures: the key idea is to randomly select the normalization layer used during inference. Experiments on CIFAR-10, CIFAR-100 and ImageNet show that this approach is effective for the commonly used ResNet-18 and WideResNet32. STRENGTHS:\n- The paper proposes an effective, clear and easy-to-implement method that is well-motivated by experimental and theoretical results.\n- The extensive experimental evaluation on ResNet-18 and WideResNet32 architectures using a plethora of attacks (FGSM, PGD-20, CW, MIFGSM, DeepFool, AutoAttack) is convincing. An ablation study is conducted to show the effectiveness of the different components and the importance of different combinations of normalization layers.\n- The paper presentation is well done and contains only a few errors (see minor remarks). The computational resources used are mentioned.\n\nWEAKNESSES:\n- The notation used can be improved in some places: \n 1. Equation (1): $x\sim\mathcal{X}, y\sim\mathcal{Y}$ suggests that $x$ and $y$ are drawn independently from the dataset. Here, $x, y\sim\mathcal{X},\mathcal{Y}$ would be correct. Also, in Algorithm 1, $\{\mathcal{X},\mathcal{Y}\}$ has to be a tuple instead of a set.\n 2. In Equation (3): $\hat{y}(bgn)\_i^{(k)}$ suggests that $\hat{y}$ is a function in terms of $(bgn)$, which is not the case. I would suggest using $\left(\hat{y}_{(\text{BGN})}^{(k)}\right)_i$ instead.\n 3. Different symbols are used to denote multiplication (Equation (3) vs Equation (8))\n- The stated Theorem 2.1 is hard to understand. Although I see some value in the theorem, I think the author(s) should spend some time reformulating it:\n 1. It is stated that \"the gradient norm and $\beta$ in Equation (7) can be bounded as [...]\", however, there is no $\beta$ in Equation (7) (only $\beta_a$ and $\beta_b$).\n 2. The variable $T$ is introduced multiple times in a single sentence: \"the level of adversarial transferability\" and \"attack success rate\".\n\nMINOR REMARKS:\n- Typo \"Adversarial Transfersability\" in Abstract; \"hessian\" is sometimes not capitalized\n- The subscript and superscript \"gn\" and \"bgn\" should not be typeset in math mode\n- line 96: extra period in front of references [5, 20]\n\nWhat is meant by $\beta$ in Equation (7)? Limitations and potential negative societal impact have not been addressed.", " Inspired by the limited adversarial transferability across different normalizations, the authors proposed to involve randomness in the types of normalization layers and introduce an RNA module that reduces the adversarial transferability in a pre-defined random space, which improves the defense against adversarial examples. To evaluate the effectiveness of their algorithm, the authors provided experimental results on CIFAR-10/100 and ImageNet.\n The authors studied a simple yet effective randomized mechanism with normalization layers. In general, the paper reads well, and the presentation is clear. The paper is original in that it studies the connections between normalization layers and adversarial defense. The theoretical analysis explains well the principle of the proposed algorithm. The paper includes a clear experimental setup and a meticulous comparison with other variant baselines as well as state-of-the-art algorithms on popular benchmarks. The results seem convincing.\n\nDespite its contributions, I have several concerns:\n\n1. The authors mentioned that they adopt a random sampling strategy to utilize the adversarial transferability in the random space. However, there exists a chance that similar paths are sampled during both the attack and inference stages, and there is no discussion of this scenario or of its probability.\n\n2. Besides normalization layers, there exists a wide range of components in the networks which can involve randomness, such as weight parameters and architectures. Although the transferability evaluation in Fig 1 shows that there exists a poor transferability of adversarial examples among different types of normalization layers, it seems to me that this transferability also holds true for weight parameters and architectures. There is no discussion of these variant baselines.\n\n3. The authors claimed that the smoothness of different normalization layers directly controls the adversarial transferability. However, it is hard for me to find corresponding evidence in the experiment section.\n As mentioned above, several questions are listed below:\n\n1. What is the risk of the random sampling strategy? What is the probability of sampling similar paths?\n\n2. How about the involvement of randomness in weight parameters and architectures? What is the advantage of the proposed RNA compared with these baselines?\n\n3. Is there empirical evidence that the smoothness of different normalization layers controls the defense performance?\n The limitations have been addressed and there is no potential negative societal impact of this work.", " Researchers explore how different normalization layers affect adversarial transferability. They provide a theoretical upper bound on the adversarial transferability between normalization layers. Then, they propose a module named Random Normalization Aggregation (RNA). RNA replaces the normalization layers in the network, and samples normalization layers randomly at each forward pass. This can generate an exponential number of possible paths, which makes it harder for an attacker to exploit.\n Strengths:\n\n1. The paper is clear and easy to follow\n\n2. Theoretical part is sound\n\n3. The idea to generate random normalization paths is new to the literature\n\nWeaknesses:\n\n1. The experiment section presents some suspicious results - the results with ResNet vs. WRN are counterintuitive, and results on Auto-Attack vs. PGD also do not make sense. It can be a sign of obfuscated gradients. Additionally, some of the reported results are not in line with known results. See the Questions section.\n\n2. No code/models are provided for evaluation of the results. Can the authors provide the code/models/github link for additional verification?\n\n3. No related work section. Specifically, the authors did not address some related literature that explores the same field (see Questions section). Experiments questions/concerns:\n\t\n1. What if the attacker randomizes a different path (p_a) at each iteration of PGD? Wouldn't it make the attacker more diverse, and therefore generalize better? Can the authors present such experiments?\n\t\n2. Can the authors explain why the results on ResNet-18 are better than those on WRN-32? For example, on CIFAR-10, AA results on ResNet-18 are better by almost 4% compared to WRN-32\n\t\n3. Also, for CIFAR-10, the FGSM and PGD-20 accuracies are lower than the Auto-Attack accuracy, which doesn't make sense. Can the authors explain these results?\n\t\n4. WRN-32 with BN and standard AT is essentially the standard AT method by Madry et al. [1]. It was shown that this method reaches an accuracy of ~52-53% against AA when combined with early stopping. However, in your paper, you claim 46.44% against AA. Did the authors use early stopping for all methods? If not, I think they should present results with early stopping [2] for a fair comparison. Otherwise, the results can be linked to adversarial overfitting [2].\n\n5. It would be interesting to see how the method works when combined with other AT methods.\n\n6. The authors did not refer or compare to related work in the field of normalization layers and robustness [3, 4]. Can the authors compare their method to [3, 4] and present the results?\n\n[1] Towards Deep Learning Models Resistant to Adversarial Attacks https://arxiv.org/pdf/1706.06083.pdf\n\n[2] Overfitting in adversarially robust deep learning http://proceedings.mlr.press/v119/rice20a\n\n[3] INTRIGUING PROPERTIES OF ADVERSARIAL TRAINING AT SCALE https://arxiv.org/pdf/1906.03787.pdf\n\n[4] Adversarial Examples Improve Image Recognition https://openaccess.thecvf.com/content_CVPR_2020/html/Xie_Adversarial_Examples_Improve_Image_Recognition_CVPR_2020_paper.html\n\n\nmiscellaneous:\n\n1. Figure 4 has (b) twice instead of (c) \n\n2. Line 153 - I suggest rephrasing to: \"… to *defend against* white-box attacks …\" No.\nThe authors should discuss the limitations of adversarial training in general and of their method in particular." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 5 ]
[ "nips_2022_K4W92FUXSF9", "rm_Xy5T3vIc", "uZoJD27svsF", "BMdhg5cn6u", "NEWlyXiaA2", "LrBIAQhEo2", "U5IfnGUqYIb", "JV85WO8uf5r", "7x3Q7KDMBKI", "MzFTdBRfwjP", "5p_AwfsA59F", "EFQhhXZ5No3", "Fv9NOoBGkg9", "nSKAALteAYA", "5p_AwfsA59F", "dEpSZTfAD-C", "Fv9NOoBGkg9", "gtI-3418p5W", "37C85Zf-364", "nips_2022_K4W92FUXSF9", "nips_2022_K4W92FUXSF9", "nips_2022_K4W92FUXSF9", "nips_2022_K4W92FUXSF9" ]
nips_2022_8hs7qlWcnGs
ToDD: Topological Compound Fingerprinting in Computer-Aided Drug Discovery
In computer-aided drug discovery (CADD), virtual screening (VS) is used for comparing a library of compounds against known active ligands to identify the drug candidates that are most likely to bind to a molecular target. Most VS methods to date have focused on using canonical compound representations (e.g., SMILES strings, Morgan fingerprints) or generating alternative fingerprints of the compounds by training progressively more complex variational autoencoders (VAEs) and graph neural networks (GNNs). Although VAEs and GNNs led to significant improvements in VS performance, these methods suffer from reduced performance when scaling to large virtual compound datasets. The performance of these methods has shown only incremental improvements in the past few years. To address this problem, we developed a novel method using multiparameter persistence (MP) homology that produces topological fingerprints of the compounds as multidimensional vectors. Our primary contribution is framing the VS process as a new topology-based graph ranking problem by partitioning a compound into chemical substructures informed by the periodic properties of its atoms and extracting their persistent homology features at multiple resolution levels. We show that the margin loss fine-tuning of pretrained Triplet networks attains highly competitive results in differentiating between compounds in the embedding space and ranking their likelihood of becoming effective drug candidates. We further establish theoretical guarantees for the stability properties of our proposed MP signatures, and demonstrate that our models, enhanced by the MP signatures, outperform state-of-the-art methods on benchmark datasets by a wide and highly statistically significant margin (e.g., 93\% gain for Cleves-Jain and 54\% gain for DUD-E Diverse dataset).
Accept
The reviewers mostly liked the paper. They mentioned the sound theoretical foundations, stability, and strong empirical performance. The rebuttal was able to convince most reviewers that the paper should be accepted for NeurIPS.
train
[ "yCbTnA-K5_-", "NN5G56U0acZ", "QTz_pimZ0j", "4PbHQT4PcGL", "NUfeeks0Gn", "lYFjiFPVrga", "522kpJ4WwrG", "Y_GG3nYqaJz", "Y0hQkpKpab0", "97rflTT4UdD", "aKLHdU0OBEX", "9hIlI6nGfaX", "p24u3trYKAx", "2yr2kpFgjE1", "c2VEJ-UMF-r", "QVkFG1s4uzD", "EYmN43JnRF", "ZekkkzxDsay", "MFweTsG5lWK", "cJUuBYwgALE", "Qrb9bOYNphv", "1wtup6iKbMz" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We provided detailed responses, pointing to the specific parts of the paper that articulates the points you expressed concerns. We would very much appreciate if you could consider updating the scores before the deadline today.", " Thanks so much! We are very grateful for your constructive and detailed response, your appreciaticion of our work and, of course, for raising the score!", " I decided to raise the rating because all my concerns were accurately addressed, especially the clarity of presentation. Congratulations for your solid research work!", " We thank the reviewer for the time and effort spent in reviewing our paper, and for the detailed suggestions. We greatly appreciate that you noticed the typo in Figure 4. We fixed it and submitted a revision. \nWe believe that our paper was significantly improved thanks to your comments. We hope you would find the usefulness of our methodology on topological ML to accelerate drug discovery, and consider higher rating. ", " We hope that our response is sufficiently informative for you to reconsider assessing a higher rating for our paper. \n\nWe have demonstrated that our topological compound descriptors are model-agnostic and have proven to be exceedingly competitive, yielding state-of-the-art results unequivocally over all baselines. \n\nOur extensive experimental evaluations show comparisons between ToDD and several SOTA VS methods. Moreover, we present an ablation study replacing topological fingerprints computed via multiparameter persistence with the most successful compound fingerprinting method in VS, namely Morgan fingerprints [Capecchi et al 2020]. \n\n### Citations\n[Capecchi et al 2020] Capecchi, Alice, Daniel Probst, and Jean-Louis Reymond. \"One molecular fingerprint to rule them all: drugs, biomolecules, and the metabolome.\" Journal of cheminformatics 12.1 (2020): 1-15.", " I appreciate your detailed responses and addressing my comments in the revised manuscript. I find the current text more clear and approachable. I also appreciate including raw data in the python research notebooks although the method implementation remains confidential.\n\nI have one more question regarding Figure 4, which is a great addition to the paper. Should not both filtration criterions be considered jointly (as a logical conjunction)? In the second row from the top, an oxygen atom appears in columns 3 and 4 while its atomic number is 8 (should be lower or equal 7). Similarly, hydrogen atoms attached to carbons have partial charges greater than 0.1 according to Figure 3, but they appear in the second column at the top.", " We demonstrate the robustness of our method over the existing methods with a plethora of empirical evaluations:\n\n1. We compare ToDD against 11 Virtual Screening methods across 22 drug targets on the Cleves-Jain dataset, and against 12 Virtual Screening methods across 8 drug targets on the DUD-E Diverse dataset. We outperform state-of-the-art methods on benchmark datasets by a wide and highly statistically significant margin (93% gain for Cleves-Jain and 54% gain for DUD-E Diverse dataset).\n\n2. We quantitatively analyze the explainability of our models’ success by replacing topological fingerprints computed via multiparameter persistence with the most popular fingerprinting method: Morgan fingerprints. 
Morgan fingerprint, also known as extended-connectivity fingerprint ECFP4, is the best performing fingerprint in small molecule virtual screening and target prediction benchmarks.\n\n**Comparison of EF 2%, 5%, 10% and ROC-AUC values between ToDD and other virtual screening methods on the Cleves-Jain dataset:**\n| Model | EF 2% (max. 50) | EF 5% (max. 20) | EF 10% (max. 10) | ROC-AUC |\n| ------------- | ------------- | ------------- | ------------- | ------------- | \n| USR | 10.0 | 6.2 | 4.1 | 0.76 |\n| GZD | 13.4 | 8.0 | 5.3 | 0.81 |\n| PS | 10.7 | 6.6 | 4.9 | 0.78 |\n| ROCS | 20.1 | 10.7 | 6.2 | 0.83 |\n| USR + GZD | 13.7 | 7.7 | 4.7 | 0.81 |\n| USR + PS | 13.1 | 7.9 | 5.0 | 0.80 |\n| USR + ROCS | 17.1 | 9.1 | 5.4 | 0.83 |\n| GZD + PS | 16.0 | 9.1 | 5.9 | 0.82 |\n| PH_VS | 18.6 | NA | NA | NA |\n| GZD + ROCS | 20.3 | 10.8 | 5.3 | 0.83 |\n| PS + ROCS | 20.5 | 10.7 | 6.4 | 0.83 |\n| ToDD-RF | 35.2 ± 2.3 | 15.6 ± 1.0 | 8.1 ± 0.4 | 0.94 ± 0.02 |\n| ToDD-ViT | 39.6 ± 1.4 | 18.6 ± 0.4 | 9.9 ± 0.1 | 0.90 ± 0.01 |\n| Relative gains | 92.9% | 83.7% | 54.1% | 13.3% |\n\n\n**EF 2% values on Cleves-Jain Dataset using ViT model trained with Morgan fingerprints vs. ToDD fingerprints:**\n| Drug Target | Morgan Fingerprints (ECFP4) | ToDD Fingerprints | \n| ------------- | ------------- | ------------- | \n| a | 25.0 | 50.0 |\n| b | 11.4 | 34.1 |\n| c | 3.8 | 46.2 |\n| d | 50.0 | 50.0 |\n| e | 10.0 | 30.0 |\n| f | 37.5 | 50.0 |\n| g | 30.0 | 50.0 |\n| h | 40.0 | 50.0 |\n| i | 20.0 | 50.0 |\n| j | 17.9 | 21.4 |\n| k | 14.3 | 39.3 |\n| l | 45.0 | 35.0 |\n| m | 38.9 | 50.0 |\n| n | 15.0 | 35.0 |\n| o | 13.3 | 20.0 |\n| p | 2.2 | 32.6 |\n| q | 18.2 | 18.2 |\n| r | 14.3 | 32.1 |\n| s | 10.0 | 33.3 |\n| t | 20.0 | 50.0 |\n| u | 27.8 | 50.0 |\n| v | 35.7 | 42.9 |\n| Mean | 22.7 | 39.5 |\n| ROC-AUC | 0.86 | 0.90 |\n\n**Comparison of EF 1% (max. 100) between ToDD and other virtual screening methods on 8 targets of the DUD-E Diverse subset:**\n| Model | AMPC | CXCR4 | KIF11 | CP3A4 | GCR | AKT1 | HIVRT | HIVPR | Avg. 
|\n| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | \n| Findsite | 0.0 | 0.0 | 0.9 | 21.7 | 34.2 | 39.0 | 1.2 | 34.7 | 16.5 |\n| Fragsite | 4.2 | 42.5 | 0.0 | 32.9 | 29.1 | 47.1 | 2.4 | 48.7 | 25.9 |\n| Gnina | 2.1 | 15.0 | 38.0 | 1.2 | 39.0 | 4.1 | 11.0 | 28.0 | 17.3 |\n| GOLD-EATL | 25.8 | 20.0 | 33.5 | 17.9 | 34.6 | 29.2 | 28.7 | 23.4 | 26.6 |\n| Glide-EATL | 35.5 | 20.8 | 30.5 | 15.1 | 24.0 | 31.6 | 29.0 | 22.0 | 26.1 |\n| CompM | 32.3 | 25.0 | 35.5 | 33.6 | 37.1 | 44.2 | 30.2 | 25.0 | 32.9 |\n| CompScore | 39.6 | 51.6 | 51.3 | 14.0 | 27.1 | 37.6 | 21.8 | 18.2 | 32.7 |\n| CNN | 2.1 | 5.0 | 11.2 | 28.7 | 12.8 | 84.6 | 12.2 | 9.9 | 20.8 |\n| DenseFS | 14.6 | 5.0 | 4.3 | 44.3 | 20.9 | 89.4 | 12.8 | 8.4 | 25.0 |\n| SIEVE-Score | 30.7 | 61.1 | 53.4 | 6.7 | 33.3 | 42.1 | 39.8 | 38.3 | 38.2 |\n| DeepScore | 28.1 | 56.8 | 54.3 | 37.1 | 40.9 | 59.0 | 43.8 | 62.8 | 47.9 |\n| RF-Score-VSv3 | 32.3 | 60.9 | 4.5 | 25.9 | 32.5 | 41.9 | 39.8 | 65.7 | 37.9 |\n| ToDD-RF | 42.9 ± 4.5 | 92.3 ± 3.2 | 75.0 ± 5.0 | 67.6 ± 3.4 | 78.9 ± 4.0 | 90.7 ± 1.3 | 64.1 ± 2.3 | 92.1 ± 1.5 | 73.7 |\n| ToDD-ConvNeXt | 46.2 ± 3.6 | 84.6 ± 2.8 | 72.5 ± 3.6 | 28.8 ± 2.8 | 46.0 ± 2.0 | 81.2 ± 2.5 | 37.5 ± 3.6 | 74.6 ± 1.0 | 58.9 |\n| Relative gains | 16.7% | 51.1% | 38.1% | 52.6% | 92.9% | 1.5% | 46.3% | 40.2% | 53.9% |\n\n**EF 1% values and ROC-AUC scores on DUD-E Diverse dataset using ConvNeXt model trained with Morgan fingerprints (ECFP4) vs. ToDD fingerprints:**\n| Target | EF 1% (Morgan) | ROC-AUC (Morgan) | EF 1% (ToDD) | ROC-AUC (ToDD) |\n| ------------- | ------------- | ------------- | ------------- | ------------- | \n| AMPC | 38.5 | 0.87 | 46.2 | 0.81 |\n| CXCR4 | 48.0 | 0.97 | 84.0 | 0.99 |\n| KIF11 | 57.5 | 0.95 | 72.5 | 0.97 |\n| CP3A4 | 20.5 | 0.84 | 28.8 | 0.91 |\n| GCR | 46.7 | 0.94 | 46.0 | 0.97 |\n| AKT1 | 60.0 | 0.98 | 81.2 | 0.98 |\n| HIVRT | 50.0 | 0.96 | 37.5 | 0.95 |\n| HIVPR | 61.3 | 0.98 | 74.6 | 0.99 |\n| Mean | 47.8 | 0.94 | 58.8 | 0.95 |\n",
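For context, the Morgan/ECFP4 baseline used in the ablations above can be computed with RDKit. A minimal sketch (the SMILES string is just an example compound, not one from the benchmarks):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as an example
# radius=2 corresponds to ECFP4 (diameter 4)
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
on_bits = list(fp.GetOnBits())
```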
" We are very grateful for the constructive and motivating feedback and for recognizing the importance and novelty of this project on TDA for drug discovery!", " I'm afraid that the advantage of the proposed method over the existing methods is not clear.", " All my questions are well addressed. I think the paper is of great interest and the results are very convincing!", " ### On Question 6 & Limitation 1 (Merged)\nWe discuss in detail the computational complexity of our model in Section 6.3. Our model is versatile and can be scaled to large libraries by customizing the allocated computational resources. Please note that the analysis in Section C.4 shows the execution time of our computation pipeline when the feature extraction task is distributed across 8 cores of a single Intel Core i7 CPU. It is possible to parallelize computationally costlier operations such as VR-filtration by allocating more CPU cores on the HPC platform and to optimize array operations (e.g., numpy) via the joblib library. Furthermore, all ToDD models require substantially fewer computational resources during training compared to current graph-based models that encode a compound through mining common molecular fragments, a.k.a. motifs [Jin et al 2020]. For instance, training a motif-based GNN on the GuacaMol dataset, which has approximately 1.5M drug-like molecules, takes 130 hours of GPU time [Maziarz et al 2021]. In contrast, once we generate the topological fingerprints via Vietoris-Rips filtration, the training time of ToDD-ViT and ToDD-ConvNeXt for each individual drug target is less than 1 hour on a single GPU (NVIDIA RTX 2080 Ti).\n\n### On Limitation 2\nBeing a highly interdisciplinary project at the nexus of algebraic topology, ML, and biochemistry, with a team of domain experts (including mathematics/topology, applied statistics, engineering, physics, drug discovery & development), we show that the mathematical foundations behind the model are solid. Since this is a drastically new approach to the Virtual Screening problem, we ran extensive experiments on standard benchmark datasets, and we significantly outperform SOTA on all of them with different ML models. This proves that our topological fingerprints are model-agnostic and capture highly important information about the substructures induced by the filtering functions used. In the paper, we also compare our performance on both VS domain performance metrics and performance metrics from the ML field. We would be happy to provide additional experiments or theoretical justification to further strengthen the rigor of the justification.\n\nThank you very much again for your suggestions and remarks to improve our paper. Please do not hesitate to post further comments or questions.\n\n### Citations \n[Jin et al 2020] Jin, Wengong, Regina Barzilay, and Tommi Jaakkola. \"Hierarchical generation of molecular graphs using structural motifs.\" International conference on machine learning. PMLR, 2020.\n\n[Maziarz et al 2021] Maziarz, Krzysztof, et al. \"Learning to extend molecular scaffolds with structural motifs.\" arXiv preprint arXiv:2103.03864 (2021).\n", " ### On Question 4\nThank you very much for pointing out this concern. As you noted above, this is a highly interdisciplinary project (including domain experts from mathematics/topology, applied statistics, engineering, physics, drug discovery & development), which involves deep technical concepts from computational topology, machine learning, and biochemistry. Given the stringent page limits, it was hard to choose which parts to keep in the main text and which parts to move into the appendix. We tried our best to make the paper accessible to a wider audience. \n\nAs you suggested, we added more figures to explain the process of obtaining our topological fingerprints. We added Figures 3 and 4 for sublevel filtration, and Figure 5 for VR-filtration, to visually explain the process in the appendix. VR-filtration plays an important role in our construction, but is also the most technical part. With sublevel/superlevel filtrations (vertical direction), we obtain substructures of the compounds induced by chemical functions, e.g., atom weight, ionic radius, partial charge. With VR-filtration (horizontal direction), we capture the information about distances between atoms and sizes of the rings evolving in these substructures; it therefore provides critical information about the topological characteristics of these substructures, in contrast to other methods, which do not provide such granular information [Lim et al 2020, Adams et al 2021]. As can be seen in the toy example in Section B.5, while sublevel/superlevel filtration extracts the induced subgraphs, VR-filtration gives larger graphs by adding edges for each threshold step to capture intrinsic distance information in the data.\n\nAs you recommended, we added a more thorough and clearer explanation of why we use VR-filtration (Remark in Appendix Section B.5) and of how VR-filtration works, with Figure 5. Thank you again for your valuable input and comment.\n\n### On Question 5\nThank you for this question. The dimension/size details are given in the description of each method in Appendix Section B.4. In general, the dimension/size depends on how many filtering functions (atom weight, partial charge, bond strength, chirality, etc.) are used, the number of thresholds used for these functions in the persistence diagrams, and the vectorization method (Betti, Silhouette, Landscape, Persistence Images, etc.) used. \n\nIn particular, for most vectorizations, with 1 filtering function, the output is a matrix of size $m \times r$, where $m$ is the number of thresholds for the filtering function, and $r$ is the maximum threshold for VR-filtration, usually chosen smaller than the largest diameter (the distance between the two farthest atoms) of the compounds in the dataset. For example, for the Cleves-Jain dataset we chose $r=20$, while for the DUD-E Diverse dataset it was chosen as $r=30$. For the atom weight function, $m$ is chosen as 10 (10 different thresholds to identify different atom weights in the compounds). This implies that for each compound in the Cleves-Jain dataset we get ToDD fingerprints (with the atom weight function) as matrices of size 10x20, while for the DUD-E Diverse dataset we get matrices of size 10x30. In these matrices, one direction represents atom weight and the other direction represents the graph distance (VR-filtration). Similarly, if we use 2 filtering functions, say atom weight and bond strength, we would have a 3D array (one direction for atom weight, one for bond strength, and one for graph distance (VR)) of size 4x10x20 for the Cleves-Jain dataset, and 4x10x30 for the DUD-E Diverse dataset. Here, 4 comes from the fact that there are 4 different bond types (single, double, triple and aromatic). \n\n### Citations\n[Lim et al 2020] Lim, S., Memoli, F. and Okutan, O.B., 2020. Vietoris-Rips persistent homology, injective metric spaces, and the filling radius. arXiv preprint arXiv:2001.07588.\n\n[Adams et al 2021] Adams, H. and Coskunuzer, B., 2021. Geometric Approaches on Persistent Homology. arXiv preprint arXiv:2103.06408.\n\n",
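A minimal sketch of the sublevel filtration step described in this response, assuming a `networkx` molecular graph and a node-level filtering function such as atomic number (the toy labels below are hypothetical, not taken from the paper's figures):

```python
import networkx as nx

def sublevel_subgraphs(G: nx.Graph, f: dict, thresholds):
    """For each threshold t, keep the subgraph induced by atoms v with
    f(v) <= t; this yields the nested 'vertical' filtration sequence."""
    return [G.subgraph([v for v in G if f[v] <= t]).copy() for t in thresholds]

G = nx.path_graph(4)                       # a toy 4-atom chain
atomic_num = {0: 6, 1: 7, 2: 8, 3: 6}      # C-N-O-C, hypothetical labels
seq = sublevel_subgraphs(G, atomic_num, thresholds=[6, 7, 8])
print([g.number_of_nodes() for g in seq])  # [2, 3, 4]
```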
" ### On Question 3\nWe thank the referee for this thoughtful point. Imbalanced datasets present inherently complex characteristics and pose several challenges in model training. The most prevalent approaches to address this problem are undersampling the majority class (decoy compounds) and oversampling the minority class (active compounds) by generating synthetic samples using SMOTE techniques. Dealing with class imbalance while training a Random Forest (RF) classifier was actually a more challenging problem in our experiments. We use the general principle of the bootstrap method by resampling the active compounds with replacement; this is also referred to as overbagging. In addition, we regularize the RF classifier by using shallow, pruned trees; a minimal sketch of this recipe is given after this response. On the other hand, our Experimental Results in Section 6.2 show that the Triplet Network where a Vision Transformer (ViT) serves as the backbone is more robust than the RF classifier on a small-scale dataset, i.e. Cleves-Jain, and ensures less variance and less bias across the cross-validation sets. \n\nWe speculate that there are 2 reasons that cause this phenomenon: \n1. First, in comparison to training from scratch, transfer learning by fine-tuning ViT pretrained on ILSVRC-2012 ImageNet improves learning on a small-scale, imbalanced dataset by including a larger and balanced auxiliary dataset. In the presence of data imbalance and an inadequate number of training samples, transfer learning augments the training data and induces balance into the skewed class distribution [Al-Stouhi et al 2016]. \n2. Second, utilizing triplet margin loss enforces the network to effectively learn inter-class margins when data in the compound library exhibit a highly skewed class distribution. Unlike cross-entropy loss, weighted cross-entropy loss or focal loss, triplet margin loss mitigates the data imbalance issue by imposing a tighter constraint to preserve the similarity between the compounds from the same cluster in the embedding space. \n\n### Citations\n\n[Al-Stouhi et al 2016] Al-Stouhi, Samir, and Chandan K. Reddy. \"Transfer learning for class imbalance problems with inadequate data.\" Knowledge and information systems 48.1 (2016): 201-228.\n\n",
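A minimal sketch of the overbagging-plus-shallow-trees recipe referenced in this response, using scikit-learn (hyperparameters are illustrative, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

def fit_overbagged_rf(X, y, seed=0):
    """Oversample the minority (active) class with replacement, then fit a
    regularized (shallow) random forest on the rebalanced data."""
    X_act, X_dec = X[y == 1], X[y == 0]
    X_up = resample(X_act, replace=True, n_samples=len(X_dec), random_state=seed)
    X_bal = np.vstack([X_up, X_dec])
    y_bal = np.concatenate([np.ones(len(X_up)), np.zeros(len(X_dec))])
    rf = RandomForestClassifier(n_estimators=200, max_depth=5, random_state=seed)
    return rf.fit(X_bal, y_bal)
```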
" We thank the reviewer for the time and effort spent in reviewing our paper, and for the detailed suggestions. Reflecting your valuable comments, we made a major revision adding multiple figures in Appendix Section B.5 to facilitate the understanding of graph filtration and computation of multiparameter (MP) persistence using Vietoris-Rips (VR) simplicial complexes. We believe that our paper was significantly improved, thanks to your comments. We hope that our responses would be sufficiently informative for you to reconsider assessing a higher rating for the revised paper. \n\nDetailed responses are summarized below:\n\n### On Question 1\n\nWe completely share this perspective, which is why we also have the ROC-AUC scores in Tables 1, 5-8.\n\nPlease note that the Enrichment Factor (EF) and the ROC-AUC score are not alternatives to each other, but rather complementary evaluation metrics. The Enrichment Factor shows the model performance in finding top-ranking hits, whereas the ROC-AUC score compares the sensitivity/specificity pair for the ability to identify a drug target. Besides, EF is the standard evaluation metric for benchmarking computational pipelines in virtual screening tasks specific to that domain [Truchon et al 2007]. For example, in all the references in our main tables (Tables 1 and 2) as well as the papers mentioned in related work, EF is used as the main performance metric to measure the success of the proposed VS method. This performance metric (i.e., EF) demonstrates the potential of drug candidates in the early rankings of the method. This is also known as the precision-at-k metric in statistics; a short reference implementation is sketched after this response.\n\nWhile many papers published in this domain don't include ROC-AUC scores, we did report ROC-AUC scores across all drug targets, input modalities and classification models in Tables 1, 5-8. Specifically, Table 1 shows the ROC-AUC scores of ToDD-RF, ToDD-ViT and benchmark models on the Cleves-Jain dataset. Furthermore, Tables 5-6 show the ROC-AUC scores across different domain properties used for graph filtration on the Cleves-Jain dataset using ToDD-RF and ToDD-ViT. Similarly, Tables 7-8 show the ROC-AUC scores across different domain properties used for graph filtration on the DUD-E Diverse dataset using ToDD-RF and ToDD-ConvNeXt. Our ROC-AUC scores outperform the benchmark models and prove the validity of our experiments by combining the measures of sensitivity and specificity, yet the enrichment factor is the key metric in this task. Given a dataset of thousands or millions of compounds, we are interested in seeing the top-ranking ligands that can inhibit a drug target, i.e., recognizing active ligands with the highest effectiveness has more practical utility than classifying ligands as active or decoy. \n\n### On Question 2\nThank you very much for sharing these references. We added them to our paper (Section 2.1). First, in our main results (Tables 1 and 2), we shared all the state-of-the-art results published on these datasets. Some of them [7, 36, 42, 83, 91] use the fingerprinting method. We also compared our method with the most popular fingerprint in the LBVS domain, namely Morgan fingerprints. We have a comprehensive section for this comparison in the Appendix. In Appendix C.5, we give the comparison results with our methods when used with Morgan fingerprints instead of ToDD fingerprints. The other fingerprints you mentioned are also used, but Morgan fingerprints are known to outperform them in most datasets. As you mentioned, fingerprinting is a well-studied subject in virtual screening, and in the paper [Capecchi et al 2020], the authors provide an extensive comparison of the performances of these fingerprints with each other. We hope this addresses your question. While we have been constrained primarily by the stringent page limits, we can run and add further experiments with other fingerprints if requested.\n\n### Citations\n[Capecchi et al 2020] Capecchi, A., Probst, D., & Reymond, J. L. (2020). One molecular fingerprint to rule them all: drugs, biomolecules, and the metabolome. Journal of Cheminformatics, 12(1), 1-15. \n\n[Truchon et al 2007] Truchon, J.F. and Bayly, C.I., 2007. Evaluating virtual screening methods: good and bad metrics for the “early recognition” problem. Journal of chemical information and modeling, 47(2), pp.488-508.",
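A minimal reference implementation of the enrichment factor as described in this response (a sketch, not the paper's evaluation code; `scores` are model similarity scores, higher meaning more likely active, and `labels` mark actives as 1 and decoys as 0):

```python
import numpy as np

def enrichment_factor(scores, labels, frac=0.01):
    """EF at the top `frac` of the ranking:
    (actives in top frac / top-frac size) / (actives overall / library size)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    n_top = max(1, int(round(frac * len(scores))))
    top_hits = labels[np.argsort(-scores)][:n_top].sum()
    return (top_hits / n_top) / (labels.sum() / len(labels))
```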
" ### On Computational Limitations\nThank you very much for pointing this out. Yes, you are right, increasing the number of filtering functions definitely adds to the computational cost. When used for large libraries, this can degrade the execution performance. We added this limitation (given below) to Appendix D.2. \n\nWe discuss in detail the computational complexity of our model in Section 6.3. Our model is versatile and can be scaled to large libraries by customizing the allocated computational resources. Please note that the analysis in Section C.4 shows the execution time of our computation pipeline when the feature extraction task is distributed across 8 cores of a single Intel Core i7 CPU. It is possible to parallelize computationally costlier operations such as VR-filtration by allocating more CPU cores on the HPC platform and to optimize array operations (e.g., numpy) via the joblib library. Furthermore, all ToDD models require substantially fewer computational resources during training compared to current graph-based models that encode a compound through mining common molecular fragments, a.k.a. motifs [Jin et al 2020]. For instance, training a motif-based GNN on the GuacaMol dataset, which has approximately 1.5M drug-like molecules, takes 130 hours of GPU time [Maziarz et al 2021]. In contrast, once we generate the topological fingerprints via Vietoris-Rips filtration, the training time of ToDD-ViT and ToDD-ConvNeXt for each individual drug target is less than 1 hour on a single GPU (NVIDIA RTX 2080 Ti).\n\n### On Minor Comment 1\nThank you very much for pointing out this oversight. We corrected the phrase as follows:\n\nIn computer-aided drug discovery (CADD), virtual screening (VS) is used for identifying the drug candidates that are most likely to bind to a molecular target in a large library of compounds.\n\n### On Minor Comment 2\nThank you very much for this valuable suggestion. Indeed, our follow-up and more extensive project is to study the problem of \"molecular property prediction\" by adapting our model to regression/classification tasks. \n\nThank you very much again for your suggestions and remarks to improve our paper. Please do not hesitate to post further comments or questions.\n\n### Citations\n[Jin et al 2020] Jin, Wengong, Regina Barzilay, and Tommi Jaakkola. \"Hierarchical generation of molecular graphs using structural motifs.\" International conference on machine learning. PMLR, 2020.\n\n[Maziarz et al 2021] Maziarz, Krzysztof, et al. \"Learning to extend molecular scaffolds with structural motifs.\" arXiv preprint arXiv:2103.03864 (2021).\n", " ### On Question 1\nThank you very much for this suggestion. In Figure 4, we visualize how filtering with two chemical functions (atom weight and partial charge) induces the relevant substructures of a small compound, cytosine (Figure 3). In our experiments, we mainly used atomic weight, partial charge, and bond strength as filtering functions, as they are independent of each other, and together they give a very fine filtering of the datasets. Note that in addition to these filtering functions, we always use VR-filtration (graph distance) in one of the directions in our ToDD fingerprints. Since this problem is a ligand similarity search over a general dataset, we used the most common functions. However, depending on the target protein, these functions can be customized for more targeted information extraction. In particular, for molecular prediction problems, these functions should be chosen very carefully so that the induced substructures indeed relate to the molecular property studied. There are several functions to be used for this goal, such as ionic radius, chirality, aromaticity, orbital hybridization, number of bonded hydrogen atoms, etc. In a follow-up project, we aim to expand our efforts in this particular direction.\n\n### On Question 2\nWe do not generate molecular conformations. Technically, persistence diagrams resulting from Vietoris-Rips filtrations use the distance matrix, which contains the geodesic distances taken pairwise between the atoms of a compound. CNNs are known to have inductive bias, and they heavily rely on the existence of a certain type of spatial structure in the data. In comparison to CNNs, ViT has much less spatial inductive bias, since it cuts the input sample into patches and encodes positional embeddings that allow the model to retain positional information and encode positionally invariant relationships [Dosovitskiy et al 2020]. This concept can be particularly useful in the task of property prediction, where distinct molecular conformers impact the property being estimated, e.g., prediction of atomization energies relies heavily on electrostatic interaction between the nuclei in 3D space.
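A minimal sketch of this distance-matrix construction, assuming the compound is given as a `networkx` graph whose nodes are atoms and whose edges are bonds (illustrative only, not the authors' implementation):

```python
import networkx as nx
import numpy as np

def geodesic_distance_matrix(mol_graph: nx.Graph) -> np.ndarray:
    """Pairwise shortest-path (geodesic) distances between atoms;
    this is the matrix a Vietoris-Rips filtration would consume."""
    nodes = list(mol_graph.nodes)
    lengths = dict(nx.all_pairs_shortest_path_length(mol_graph))
    D = np.full((len(nodes), len(nodes)), np.inf)
    for i, u in enumerate(nodes):
        for j, v in enumerate(nodes):
            D[i, j] = lengths[u].get(v, np.inf)  # inf if atoms are disconnected
    return D
```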
\n\n### On Question 3\nFor training, given the multiparameter Vietoris-Rips persistent homology matrices of 3 compounds (anchor, positive and negative), the Vision Transformer (ViT) or ConvNeXt models provide embedding vectors of the anchor sample (reference input), positive sample (a ligand that inhibits the same drug target as the anchor) and negative sample (a ligand that is inactive against the drug target). Triplet margin loss enforces the order of distances by minimizing the Euclidean distance from the anchor to the positive input and maximizing the Euclidean distance from the anchor to the negative input. Its formulation is similar to hinge loss, requiring a soft margin with a slack variable $\alpha$: \n$L(x, x^{+}, x^{-})=\max(0, \alpha + \lVert \mathbf{f(x)-f(x^{+})} \rVert_{2} - \lVert \mathbf{f(x)-f(x^{-})} \rVert_{2})$. \n\nThis objective has been demonstrated to improve embeddings in learning-to-rank tasks [Wang et al 2019]. Hence we use the triplet margin loss to fine-tune the parameters of the pretrained networks.\n\nIn order to predict the ligand ranking: \n\n1. We compute the multiparameter Vietoris-Rips persistent homology features of the test compounds and feed them to the trained ViT or ConvNeXt model to acquire their embeddings. \n2. The embedding of each ligand from the compound library is compared against the embedding of the anchor sample to predict the ligand ranking. Specifically, we measure the similarity score using the inverse of the Euclidean distance between the embeddings of an anchor and a drug candidate. This similarity score is applied to predict the ligand ranking and calculate EF scores. \n\nWe thoroughly elaborated on this concept in Section 6.1 in the Enrichment Factor paragraph; a minimal sketch of this loss and ranking procedure is given after this response.\n\n### Citations\n[Dosovitskiy et al 2020] Dosovitskiy, Alexey, et al. \"An image is worth 16x16 words: Transformers for image recognition at scale.\" arXiv preprint arXiv:2010.11929 (2020).\n\n[Wang et al 2019] Wang, Xinshao, et al. \"Ranked list loss for deep metric learning.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.\n",
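A minimal PyTorch sketch of this loss and ranking step, assuming embedding tensors `f_anchor`, `f_positive`, `f_negative` (training) and `anchor_emb`, `library_emb` (inference) are already computed by the backbone (illustrative, not the authors' code):

```python
import torch

# Training: the built-in loss matches the formula above (Euclidean, soft margin).
triplet_loss = torch.nn.TripletMarginLoss(margin=1.0, p=2)
loss = triplet_loss(f_anchor, f_positive, f_negative)

# Inference: rank library compounds by inverse Euclidean distance to the anchor.
dists = torch.cdist(anchor_emb.unsqueeze(0), library_emb).squeeze(0)
similarity = 1.0 / (dists + 1e-8)          # guard against division by zero
ranking = torch.argsort(similarity, descending=True)
```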
" ### On Weakness 2\n\nThank you for this thoughtful question. In [Capecchi et al 2020], the most common fingerprints in VS are compared. Yes, some of the baselines we cited are Structure-Based Virtual Screening (SBVS) methods [7, 91, 42, 36, 83], and they "also" use the target information as well as the active ligand information. However, as mentioned in the experiments section (Section 6.1), we use exactly the same settings (5-fold cross-validation) as other methods to ensure a fair comparison. All LBVS and SBVS methods cited in our main tables (Tables 1 and 2) use the same number of ligands in their training set. However, with our Ligand-Based Virtual Screening (LBVS) approach, we focus on producing an effective fingerprint of the compounds on which ML tools can work very efficiently. In other words, our approach makes ML tools work more effectively by converting the classification task into a suitable form to study.\n\nIn general, this question can also be formulated as a choice between LBVS and SBVS. In theory, some suggest SBVS is better than LBVS in general, while others believe the opposite is true [please see Ripphausen et al 2010, Yang et al 2019 for discussion]. In general, SBVS methods are computationally more costly than LBVS methods. Therefore, SBVS could be very effective on specific targets with small compound libraries. But for large libraries with more general tasks, LBVS appears to be more effective with the recent advances in machine learning. Gains in performance yielded by our approach also support this conclusion. \n\nIn our ablation study (Appendix Section C.5), by using the Vision Transformer (ViT) and ConvNeXt models, we compared the performance of Morgan fingerprints and our ToDD fingerprints on both the Cleves-Jain dataset (Table 10) and the DUD-E Diverse dataset (Table 11). The results show that our fingerprints outperform the most popular fingerprint in the field on these benchmark datasets.\n\n### On Weakness 3\n\nIn Section 6.4, we tested a number of ablations of our model to analyze the effect of its individual components and to further investigate the effectiveness of our topological fingerprints. One of them is investigating the network architectures and loss objectives to determine the most suitable one for deep metric learning. Our code has a flexible implementation with several switch cases and offers a user-friendly command-line interface to test the robustness of different network architectures and loss objectives, such that a Siamese network trained with a contrastive loss or a Triplet network trained with circle loss can be easily compared against our model choice: a Triplet network trained with triplet margin loss. We ran several experiments to choose the most suitable deep metric learning technique; however, quantifying the results of different network architecture choices for all the drug targets from both the Cleves-Jain and DUD-E Diverse datasets using 5-fold cross-validation requires training 135 different models.\n\n### On Weakness 4\n\nWe strive for reproducibility, but this work is a result of an interdisciplinary collaboration between academia (Mathematics/Topology, Applied Statistics, Engineering, Physics) and industry (Drug Discovery & Development). While we outline the details of the method in the paper, the source code could not be shared because of legal and IP-related constraints. We agree that the reproduction of the work would be easier with accompanying code. Following the referee's suggestion, we are sharing a version of the code to ensure that our results are trustworthy and reproducible.\n\n### Citations\n[Ripphausen et al 2010] Ripphausen, P., Nisius, B., Peltason, L. and Bajorath, J., 2010. Quo vadis, virtual screening? A comprehensive survey of prospective applications. Journal of medicinal chemistry, 53(24), pp.8461-8467.\n\n[Yang et al 2019] Yang, X., Wang, Y., Byrne, R., Schneider, G. and Yang, S., 2019. Concepts of artificial intelligence for computer-assisted drug discovery. Chemical reviews, 119(18), pp.10520-10594.\n\n[Capecchi et al 2020] Capecchi, A., Probst, D., & Reymond, J. L. (2020). One molecular fingerprint to rule them all: drugs, biomolecules, and the metabolome. Journal of Cheminformatics, 12(1), 1-15. ", " We thank the reviewer for the time and effort spent in reviewing our paper, and for the detailed suggestions. Reflecting your valuable comments, we made a major revision adding multiple figures in Appendix Section B.5 to facilitate the understanding of graph filtration and computation of multiparameter (MP) persistence using Vietoris-Rips (VR) simplicial complexes. Additionally, we shared the code that is relevant to reproducing the results. We believe that our paper was significantly improved, thanks to your comments.
We hope that our responses will be sufficiently informative for you to reconsider assessing a higher rating for the revised paper. \n\nDetailed responses are summarized below:\n\n### On Weakness 1\n\nThank you very much for pointing out this concern. As you noted above, this is a highly interdisciplinary project (with domain experts from mathematics/topology, applied statistics, engineering, physics, drug discovery & development), which involves deep technical concepts from computational topology, machine learning, and biochemistry. Given the page limits, it is challenging to choose which parts to keep in the main text and which parts to move into the appendix. We tried our best to make the paper accessible to a wider audience, especially relevant to this conference. Per your suggestion, we added more figures to explain the process of obtaining our topological fingerprints. In particular, in a toy example (Figure 4) we explain how filtering with two chemical functions (i.e., atom weight and partial charge) induces the relevant substructures of a small compound, cytosine (Figure 3). In another toy example (Figure 5), we explain how Vietoris-Rips complexes capture topological information of these substructures by using graph distances. We revised Tables 10 and 11 to simplify the comparison. We further made several minor edits to clarify the exposition. If you have further suggestions for reorganizing the paper, we are happy to make revisions as requested within the constraints of the 9-page limit. \n\nWe added Figures 3 and 4 for sublevel filtration, and Figure 5 for VR-filtration, to visually explain the process in the appendix. VR-filtration plays an important role in our construction, but is also the most technical part. With sublevel/superlevel filtrations (vertical direction), we obtain substructures of the compounds induced by chemical functions, e.g., atom weight, ionic radius, partial charge. With VR-filtration (horizontal direction), we capture the information about distances between atoms and sizes of the rings evolving in these substructures, and therefore it provides critical information about the topological characteristics of these substructures, in contrast to other methods, which do not provide such granular information [Lim et al 2020, Adams et al 2021]. As can be seen in the toy example in Section B.5, while sublevel/superlevel filtration extracts the induced subgraphs, VR-filtration gives larger graphs by adding edges for each threshold step to capture intrinsic distance information in the data (a toy code sketch of the sublevel thresholding idea is given below). As you recommended, we added a more thorough and clearer explanation of why we use VR-filtration (Remark in Appendix Section B.5) and of how VR-filtration works with Figure 5. Thank you again for your valuable input and comment.\n\n### Citations\n[Lim et al 2020] Lim, S., Memoli, F. and Okutan, O.B., 2020. Vietoris-Rips persistent homology, injective metric spaces, and the filling radius. arXiv preprint arXiv:2001.07588.\n\n[Adams et al 2021] Adams, H. and Coskunuzer, B., 2021. Geometric Approaches on Persistent Homology. arXiv preprint arXiv:2103.06408.", " We thank the reviewer for appreciating the novelty of this interdisciplinary approach and for the valuable feedback and references. It has been a rewarding but challenging process to bring domain experts together from vastly different disciplines including mathematics - topology, applied statistics, engineering, physics, drug discovery & development for this collaboration. 
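To make the sublevel-filtration step discussed above concrete, here is a toy Python sketch (the five-atom graph and the atom-weight values are hypothetical, and this is not the authors' pipeline — it only illustrates how thresholding a chemical function induces a nested family of subgraphs whose topology can then be summarized):

```python
import networkx as nx

# toy molecular graph: nodes are atoms, edges are bonds; each atom carries
# a scalar chemical-function value (here a made-up "atom weight")
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)])
atom_weight = {0: 1.0, 1: 12.0, 2: 12.0, 3: 14.0, 4: 16.0}

# sublevel filtration: for increasing thresholds t, keep the subgraph induced
# by atoms with value <= t, and record a simple topological summary of it
for t in [1.0, 12.0, 14.0, 16.0]:
    nodes = [v for v, w in atom_weight.items() if w <= t]
    H = G.subgraph(nodes)
    print(t, H.number_of_nodes(), H.number_of_edges(),
          nx.number_connected_components(H))  # H0-style count per threshold
```

The VR direction would then, within each such substructure, grow edges by graph-distance thresholds to track loops and component merges.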
We are very much encouraged to see that the reviewer found our work exciting and results promising. \n\nPer your suggestions, we made a major revision in the Related Work section (Section 2.3). Specifically, we include all the references suggested, and discuss in a more detailed review the relevance of TDA in molecular data analysis and protein-ligand binding affinity prediction for accelerating drug discovery. \n\nIn general, we hope that TDA concepts, as outlined in the papers listed by the Reviewer and the one presented here, will contribute to shaping a new research direction in the ML community on topological ML for drug discovery.\n\nThank you very much again for your suggestions to improve our paper. Please do not hesitate to post further comments or questions.\n", " In this paper, a new type of molecular fingerprint is introduced, and its application in the virtual screening setup is shown. The fingerprint is based on multiparameter persistence homology that produces topological descriptions of molecules in the form of multidimensional vectors. The methodology expands single persistence methods and applies multiple filtering functions, which can be based either on atom distances or chemical properties. Architectures such as Vision Transformer and ConvNeXt are used to process the multidimensional fingerprints. Finally, the triplet loss is used to train these models for the virtual screening task. The method is compared against many baselines on two datasets. Additionally, the computational complexity is calculated, and the stability of the multiparameter persistence fingerprints is formally proven in the supplementary material. Strengths:\n- The theoretical foundations of the paper are sound. The persistence homology is thoroughly explained, and the modifications are presented using a clear mathematical notation (though some definitions may be found only in the supplementary materials).\n- The stability of the proposed fingerprint is formally proven.\n- The experimental evaluation on two datasets shows the strong performance of the proposed method. The number of baseline models in the comparison is impressive.\n- A section about computational complexity was included in the paper.\n- The ablation study corroborates the usefulness of multiparameter persistence and explains architectural choices. \n\nWeaknesses:\n- Unfortunately, the method is not presented in a clear way. Some of the details were moved to the supplementary materials, so it is difficult to grasp the idea before reading the appendix and rereading the paper. For example, VR-filtration is not explained in the main text, but this term is used multiple times. Maybe it would be a good idea to add a figure explaining how the fingerprint is calculated (showing VR-filtering on one axis and property-based filtering on the other one).\n- The proposed method shows a great improvement over the other methods included in the comparison, but these methods are mostly scoring functions that were trained with a significantly different objective. Please, correct me if I am wrong, but I think that most of these methods use docking poses to score ligands and this is how they build their rankings. The MP Fingerprint, on the other hand, is trained to separate active ligands from decoys based only on the topological features, ignoring the 3D conformations. It would be useful to add a few simple baselines, e.g. 
a random forest model on Morgan fingerprints (similar to the ablation study, but I cannot find the information about which model was used in this experiment).\n- Section 6.4 mentions a comparison of more contrastive objectives, but the (quantitative) results of the preliminary experiments are not included in the paper.\n- The code is not provided, which makes the results difficult to reproduce.\n\nMinor comments:\n- In the abstract it is said that “VS is used for comparing a library of compounds against known active ligands to identify drug candidates…” This description of VS is inaccurate as it is not the only way to perform virtual screening. For example, there are structure-based approaches to VS that can be performed even if no active ligands are known.\n- It would be interesting to see the performance of the MP fingerprints in different benchmarks, e.g. ADMET property prediction.\n 1. How were the atomic properties (atomic mass, partial charges) selected for the evaluation? Could you visualize graph hierarchies w.r.t. these filtration functions for some example compounds?\n2. In section 6.1 it is said that “ViT is more robust to distinct arrangements of atoms in space, also referred to as molecular conformation.” Are conformations encoded in the fingerprint? Based on the description in section 4, the fingerprints use only graph edge distances.\n3. How is the triplet loss used to predict the ligand ranking, which is used to calculate EF scores?\n In Appendix D, only positive societal impacts are presented. The computational complexity is a limitation that was analyzed in Section 6.3, but not mentioned in the section about method limitations. The number of atomic properties used as filtering functions not only increases the cost of calculating the fingerprint, but also requires more memory and processing power from the ML models.", " \nThe authors proposed a new topological fingerprint in virtual screening to understand the structural organization of chemical compounds. The method is based on multiparameter persistence homology, which produces topological fingerprints of the compounds as multidimensional vectors. They show that the proposed topological descriptors outperform the state-of-the-art methods on benchmark datasets.\n Fingerprinting is an important research topic in drug discovery.\nThe proposed method provided high accuracy.\n\nThere are many previous works on chemical fingerprints and descriptors, but the relationship with related works is not clear.\nThe authors did not compare the performance with many other fingerprints.\n The authors should use popular performance measures such as AUC scores of the ROC curve and precision-recall curve. I'm afraid that researchers in the NeurIPS community are not familiar with the performance measures used in this paper (e.g., enrichment factor alpha %).\n\nThere are many fingerprints and descriptors in chemoinformatics. Examples include the following papers:\nFCFP (Rogers et al, J. Chem. Inf. Model., 2010)\nCDK graph (Yap et al, J. Comput. Chem., 2011)\nCDK hybridization (Steinbeck et al, Curr. Pharm. Des., 2006)\nKlekota-Roth (Klekota et al, Bioinformatics, 2008)\nMACCS (Durant et al, J. Chem. Inf. Comp. Sci., 2002)\nPubChem (Chen et al, J. Chem. Inf. Model., 2009)\nE-state (Hall et al, J. Chem. Inf. Comp. Sci., 1995)\nIs the proposed fingerprint better than these other fingerprints?\n\nThe prediction accuracy in virtual screening depends too much on the number of positive examples (active compounds) in the training set. 
In my experience, transformer-based methods do not work well when there are few positive examples in the training set.\n\nIt is hard to follow this paper.\nThe explanations of the detailed methods are not clear.\n\nThe output of the proposed method is not clear.\nWhat is the dimension/size of the proposed topological fingerprint?\n\nThe number of compounds in the chemical library is often huge. What is the scalability of the proposed method?\n The paper does not discuss the limitations of the proposed method clearly. \nThe performance is not rigorously verified. \n", " In this paper, the authors develop multiparameter persistence (MP) homology based topological fingerprints for virtual screening. In their model, a compound is decomposed into chemical substructures at different scales and their topological information is extracted by using MP homology features. These topological fingerprints are combined with learning models. Their models are extensively tested on two benchmark datasets, i.e., the Cleves-Jain and DUD-E Diverse datasets. It has been found that their models have better performance than existing models. The models are innovative and the results are great! However, TDA-based learning models for drug design (including virtual screening) have been a hot research area for the past decade. There is a great amount of work in this area that is missing in the current paper. The major issue is the omission of important literature in the related work. Even though the authors have mentioned some TDA-based drug design models, i.e., refs 13-15, they have been only very briefly mentioned in the supplementary information. Further, these works are from 2017 to 2018. In fact, recently, there has been great progress in TDA-based drug design (and virtual screening). The authors should discuss them in the related works. 
\n\nFor instance,\n\n1) Duc Duy Nguyen, Zixuan Cang, Kedi Wu, Menglun Wang, Yin Cao and Guo-Wei Wei, Mathematical deep learning for pose and binding affinity prediction and ranking in D3R Grand Challenges, Journal of Computer Aided Molecular Design, 33, 71-82 (2019).\n\n2) Duc Nguyen, Zixuan Cang, and Guo-Wei Wei, A review of mathematical representations of biomolecular data, Physical Chemistry Chemical Physics, 22, 4343-4367 (2020).\n\n3) Duc Duy Nguyen, Kaifu Gao, Menglun Wang, and Guo-Wei Wei, MathDL: Mathematical deep learning for D3R Grand Challenge 4, Journal of Computer Aided Molecular Design, 34, 131-147 (2020).\n\n4) Zhenyu Meng and Kelin Xia, \"Persistent spectral–based machine learning (PerSpect ML) for protein-ligand binding affinity prediction.\" Science Advances, 7 (19), eabc5329 (2021)\n\n5) Peiran Jiang, Ying Chi, Xiao-Shuang Li, Xiang Liu, Xian-Sheng Hua, and Kelin Xia, \"Molecular persistent spectral image (Mol-PSI) representation for machine learning models in drug design.\" Briefings in Bioinformatics, 23 (1), bbab527 (2022)\n\n6) Xiang Liu, Huitao Feng, Jie Wu, and Kelin Xia, \"Dowker complex based machine learning (DCML) models for protein-ligand binding affinity prediction.\" PLOS Computational Biology, 18(4), e1009943 (2022)\n\n7) Xiang Liu, and Kelin Xia, \"Persistent Tor-algebra based stacking ensemble learning (PTA-SEL) for protein-protein binding affinity prediction\", ICLR 2022 Workshop on Geometrical and Topological Representation Learning (2022)\n\n8) Xiang Liu, and Kelin Xia, \"Neighborhood complex based machine learning (NCML) models for drug design.\" In Interpretability of Machine Intelligence in Medical Image Computing, and Topological Data Analysis and Its Applications for Medical Data, pp. 87-97. Springer, Cham (2021).\n None." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "NUfeeks0Gn", "c2VEJ-UMF-r", "4PbHQT4PcGL", "lYFjiFPVrga", "522kpJ4WwrG", "c2VEJ-UMF-r", "Y0hQkpKpab0", "97rflTT4UdD", "aKLHdU0OBEX", "MFweTsG5lWK", "9hIlI6nGfaX", "p24u3trYKAx", "2yr2kpFgjE1", "Qrb9bOYNphv", "QVkFG1s4uzD", "EYmN43JnRF", "ZekkkzxDsay", "cJUuBYwgALE", "1wtup6iKbMz", "nips_2022_8hs7qlWcnGs", "nips_2022_8hs7qlWcnGs", "nips_2022_8hs7qlWcnGs" ]
nips_2022_9cU2iW3bz0
Score-Based Diffusion meets Annealed Importance Sampling
More than twenty years after its introduction, Annealed Importance Sampling (AIS) remains one of the most effective methods for marginal likelihood estimation. It relies on a sequence of distributions interpolating between a tractable initial distribution and the target distribution of interest which we simulate from approximately using a non-homogeneous Markov chain. To obtain an importance sampling estimate of the marginal likelihood, AIS introduces an extended target distribution to reweight the Markov chain proposal. While much effort has been devoted to improving the proposal distribution used by AIS, by changing the intermediate distributions and corresponding Markov kernels, an underappreciated issue is that AIS uses a convenient but suboptimal extended target distribution. This can hinder its performance. We here leverage recent progress in score-based generative modeling (SGM) to approximate the optimal extended target distribution for AIS proposals corresponding to the discretization of Langevin and Hamiltonian dynamics. We demonstrate these novel, differentiable, AIS procedures on a number of synthetic benchmark distributions and variational auto-encoders.
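For readers less familiar with AIS, the following is a minimal NumPy sketch of the vanilla estimator that this abstract builds on — a geometric annealing path with unadjusted Langevin transitions. The 1-d mixture target, schedule, and step size are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.special import logsumexp

def log_p0(x):  # tractable initial distribution: standard normal (Z0 = 1)
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_pT(x):  # unnormalized target: 1-d two-mode Gaussian mixture
    return np.logaddexp(-0.5 * (x - 2.0)**2, -0.5 * (x + 2.0)**2)

def ais_log_z(n_chains=5000, K=200, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, K + 1)
    x = rng.normal(size=n_chains)            # exact samples from p0
    log_w = np.zeros(n_chains)
    for k in range(1, K + 1):
        lg = lambda y, b=betas[k]: (1 - b) * log_p0(y) + b * log_pT(y)
        # AIS weight increment: new annealed density vs old, at current state
        log_w += lg(x) - ((1 - betas[k-1]) * log_p0(x) + betas[k-1] * log_pT(x))
        # one unadjusted Langevin (ULA) move targeting the k-th annealed density
        grad = (lg(x + 1e-4) - lg(x - 1e-4)) / 2e-4   # numerical gradient
        x = x + step * grad + np.sqrt(2 * step) * rng.normal(size=n_chains)
    return logsumexp(log_w) - np.log(n_chains)  # estimates log Z_T since Z0 = 1

print(ais_log_z())  # the exact answer here is log(2 * sqrt(2 * pi)) ≈ 1.612
```

The paper's contribution concerns the reweighting (extended target) applied to exactly this kind of unadjusted proposal, not the proposal itself.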
Accept
The submission presents a novel and interesting method using recent advances in score-based diffusion to improve the recently proposed differentiable AIS log marginal likelihood estimates. The experiments clearly show the benefit of using Monte Carlo diffusion. The writing is clear and of high quality. For these reasons, all reviewers were unanimous in recommending acceptance. AC notes: (1) out of curiosity, how do the adjusted Langevin and Hamiltonian versions perform on the static targets?, and (2) further to the question below on the timings, would it be useful to add a comparison where the time is kept similar between U(L/H)A and that with MCD, for example, 2K intermediate densities for U(L/H)A and K for the MCD variants.
val
[ "Pep9osFbTxX", "EQhPXhjGBCa", "fZWzYY-Vcu2", "JF9MeCfyEcRo", "g3TNWW6WzU1", "cTS8--uXS5k", "q2kc1YSi7m", "dHWGvP2Udjn", "GeLoADmowWw", "igrOTHHpRGR", "sB6CykdwO-b" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear author, I appreciate the clarification on amortized inference and the SOTA result reassurance. Your paper looks promising, and I will raise my score. ", " Dear authors,\nthanks for your message, and apologies for being a bit late with my comments.\nAll looks good to me, I will raise my score and cross fingers for this paper to make it to the conference.\n\nCheers\n", " Thank the authors to make the clarification. They answer my questions.\n\n I believe the authors will add the results to their final version. Overall, I think this is a good paper.", " * We agree that it would be useful to include the baselines in the ablation results in Figure 4. Note that this would be in the form of horizontal lines for each baseline, with values taken from Table 1 (for the left plot in Fig 4). The values are ULA and UHA are -85.62 and -8.20 respectively, compared to the range [-5,0] of MCD. We will add this in the final appendix. This makes it clear that our MCD extension outperforms these baselines drastically even for smaller score networks with a supposedly larger score error. Clearly, as the score network error becomes larger, our method performs worse. E.g. MCD-ULA with the smallest MLP reaches a log Z of ~ -5, compared to ~-8 of UHA without MCD. As argued above though, noise-conditional score estimation is known to work quite well, even on the complex datasets considered in score-based generative models, and the additional computational effort seems to be justified in the examples shown.\n\n* We only included time results for the VAE target as they are slightly easier to measure accurately as the target evaluation is more expensive and therefore overhead from function calls etc distorts the timings less. But yes, the ratio also applies to the static targets from Sec 4.1. E.g. on the 200-dim Gaussian mixture target from Table 1, ULA and ULA-MCD take ~0.1s and ~0.24s respectively. We will add the timings (with the caveat that they are less accurate for fast to evaluate targets) in the final version of the appendix. Yes, the iteration time in the VAE experiment includes a gradient step on the end-to-end differentiated ELBO (including VAE encoder/decoder, and score model). We highlight that the increased computational time from the MCD method comes from backproping through the score network. Both the baselines and MCD need to backprop through the VAE parameters.\n\n* The batch size (number of samples used to estimate elbo/score) during training was either 256 or 512. We forgot to add those numbers and will update the section in an updated version. We use substantially more samples at test time to obtain tighter estimates (done for all methods).\n\nWe hope this answers your questions.", " The authors have answered most of the questions. But I am still confused about two of them.\n* For the ablation study: I think the author should add the curve of the baselines in Figure 4. The interesting part of this ablation is to quantify how much expressiveness the score model needs so as to outperform the baselines. Intuitively, if the target distribution becomes more complex, we will need a more flexible score model to match the score. I am curious if there is a threshold of the score matching error such that only the score models better than the threshold can improve the efficiency of the baselines. And if there exists such a threshold, how does it change when the target distribution becomes more difficult (e.g. EBMs on harder datasets)? 
\n* For evaluation & training time: I am confused about the statement \"The total computational load of our MCD extensions is roughly 2x that of the baselines (see Table 3 in the revised version of paper)\". Does this ratio apply only to amortized VI, or does it also apply to the static sampling in section 4.1? Does the computational load of MCD include the training time of the score functions and of the VAE? Since Section E.3 reports that 16384 samples are used to estimate $\log Z$, it would also be helpful if the paper reported how many samples are used to train the score model.", " We would like to thank the reviewer for their high quality and comprehensive review. We are pleased to hear that the reviewer recognizes the importance of the topic.\n\n\nWorkshop paper: It is our understanding that the publication of a related workshop paper should have no bearing on the score of the current submission as pointed out in the following entry of the [reviewer FAQ](https://docs.google.com/document/d/1uKc89EcSh0hZJo8KJtY0Lh6ZTWPLZvqWHNhR8WsNDoQ/edit):\n“Q: What if I’ve seen similar work in a NeurIPS/ICML workshop?\nA: We allow work that has been submitted to non-archival workshops to be submitted to NeurIPS. To maintain anonymity, do not mention the workshop paper in your review.”\n\nRelationship to diffusion models: We thank the reviewer for this excellent suggestion. We agree that it would help the exposition of the paper to further elaborate the contrast of diffusing target->noise vs noise->target. We have revised the paper in this direction, added some comments in the main text, and added a discussion in Appendix D.\n\nComparisons to EBMs / density estimation:\nWe were not completely sure what this comment referred to. We believe the reviewer was asking why we did not apply our method towards training energy-based models. If this was indeed the case, we provide the following response: \n \nThe main focus of this work is a method for partition function estimation. One very appealing application for such methods would be in training energy-based models. There exists prior work using inference methods such as ours for this task [1, 3]. In particular some prior works [2, 4] have utilized inference methods based on MCMC-sampling with some success. There arises additional complexity when working with EBMs due to the difficulty of evaluation. EBM evaluation remains an open research problem and it is very difficult to obtain tight enough bounds on likelihood to reliably compare the performance of models trained in different ways. For this reason we decided to focus our evaluation on inference problems with easy-to-compute evaluation metrics. We are very excited to apply our method towards EBM training in the future, but we felt this to be slightly outside the context of the current work; applying our method to this task should be the focus of a subsequent publication. \n\nWe hope this answers your question. If we misunderstood, we apologize, and if possible could you please elaborate more to improve our understanding? \n\n* [1] https://arxiv.org/abs/2010.04230\n* [2] https://proceedings.neurips.cc/paper/2019/hash/767d01b4bac1a1e8824c9b9f7cc79a04-Abstract.html\n* [3] https://arxiv.org/pdf/1901.08508.pdf\n* [4] https://arxiv.org/pdf/2006.06897.pdf\n\n\nQuestions:\n1) Thanks for pointing this out. This is indeed a typo. We have corrected it.\n2) This is correct as written. 
This can be checked given $q_{k+1}(x_{k+1})=\int q_k(x_k)F_{k+1}(x_{k+1}|x_k)dx_k$.\n3) Thanks for pointing this out, we have suppressed the use of $x,x'$ and this indeed improves clarity.\n4) Absolutely, this is a great suggestion and we have added a few sentences in the main paper and a section in the Appendix (Appendix D) to discuss this.", " We thank the reviewer for their thoughtful comments. We are pleased to hear that they find the writing of high quality and easy to read.\n\nAblations for score model: We agree with the reviewer that an ablation of the expressiveness of the score model would be an excellent addition to the paper. We have added an additional table in the appendix (Figure 4) that explores the impact of the score model architecture. We notice improvements with more expressive score models. The impact of the score model architecture is larger when using a smaller number of sampling steps. This aligns with intuition since, as the step size increases, the distributions that the score model needs to approximate become less smooth.\n\nScore-learning vs partition functions: The reviewer states that learning the score is not easier than estimating partition functions. In general scenarios that is true, but here we are estimating the scores of a diffusion process we can sample from easily, and this boils down to solving a regression problem. In the context of score-based generative modeling, one similarly estimates the scores of a diffusion process (albeit a diffusion different from ours) one can sample from easily. The explosion of work in this area shows that the scores of very complex distributions can be learned efficiently.\n\nBias: You are correct that unadjusted samplers will lead to bias in the resulting samples. In the context of time-homogeneous ULA, there has been much recent work quantifying this bias, and we have included some references. However, note here that these samplers are used to build the proposal distribution Q (Eq 2) in an importance sampling scheme that targets the distribution of interest. So in particular, we do obtain an unbiased estimate $w(x_{0:K})$ of the evidence (Eq 4) from which we can directly build an ELBO by taking the expectation of the logarithm of this estimate. Such an approach is also adopted in all previous work on the topic; see e.g. Wu et al., 2020; Thin et al., 2021; Geffner, T. and Domke, J. (2021); and Zhang, G., Hsu, K., Li, J., Finn, C., and Grosse, R. (2021).\n\nFair evaluation & training time: The reviewer raises a good point that learning the score while sampling comes with an additional computational cost. The natural way to \"match\" the computational budget of ULA/UHA with that of our MCD extensions is to increase the number of integration steps of the baselines, which translates into the number of target gradient evaluations. The total computational load of our MCD extensions is roughly 2x that of the baselines (see Table 3 in the revised version of the paper). With that in mind, we refer the reviewer to Tables 1 & 2, where our method performs favorably when using a quarter (64 instead of 256 sampler steps) of the total target gradient evaluations.\n", " We appreciate the review and are pleased that the reviewer finds the paper well motivated and well written.\n\nAmortized Inference: AIS methods are traditionally used to estimate normalizing constants for fixed distributions. There is, however, a growing body of research that studies the benefits of AIS in the context of generative models, i.e. 
for posterior inference (Thin et al, 2021, Geffner & Domke 2022, Zhang et al., 2022). For deep generative models, this typically means dealing with an amortized inference process and requires end-to-end differentiability. For our work, this requires further conditioning of the score estimator on the current inputs of the model. We demonstrate how to adapt our method to this case in the paper, and that this leads to significant improvements, in line with previous work. \n\nSOTA: AIS (Neal, 2001), also known as the Jarzynski-Crooks equality in physics (Jarzynski, 1997; Crooks, 1998), is indeed the gold standard for normalizing constant estimation, with a huge body of literature devoted to the problem - those three papers combined still receive over 1,000 citations per year. Note that Metropolis-corrected versions of AIS require using high-variance REINFORCE-type estimators for end-to-end gradients, and therefore make learning sampling parameters very difficult (Thin et al., 2021). Among the unadjusted AIS variants, the recently proposed UHA (Geffner and Domke, 2021; Zhang et al., 2021), which is itself based on ULA (Wu et al., 2020; Thin et al., 2021), is indeed SOTA to our knowledge.\n\nNumerical stability: The reviewer makes an excellent point. As we describe in our Limitations Section 5, we have found in our experiments that numerical stability can be a problem. We first would like to argue that our proposed scheme drastically reduces the variance of the AIS estimator. This naturally translates into more stable gradients when differentiating through the forward sampling paths. See e.g. Table 3, where MCD-UHA drastically reduces the error bars (supposedly from unstable chains) of the UHA estimator. Second, it is a well-known problem that differentiating through long sampling paths can be unstable. It is out of the scope of our work to investigate detailed solutions to this problem, but we have e.g. observed that placing a stop_gradient around $\nabla_x \log \pi(x)$ leads to improvements (albeit at the expense of losing some of our theoretical underpinnings).\n\nBold face results: We did not highlight any particular (e.g. the best) results in Tables 1&2 as we would like to invite the reader to not only look for the best performing setup, but also e.g. notice the relative performance between using different numbers of steps, with or without MCD, etc. For example, MCD-ULA with ¼ of the number of sampler steps still outperforms ULA in Table 1.", " This paper proposes an improved Annealed Importance Sampling (AIS) for score-based diffusion models. The author argues that past papers on AIS have predominantly focused on modifying the intermediate distributions or the Markov kernel but have overlooked the suboptimal nature of the extended target distribution. Thus, the paper integrates a new AIS procedure in combination with recent works to approximate the extended target distribution. Strengths:\n- The paper is well motivated and theoretically proves the correctness of the proposed methods.\n- Overall, the paper is well written and conveys useful insight into the limitations of the Markov kernel:\n - E.g. the illustration of the suboptimality of a homogeneous MCMC chain gives good intuition about the current problem of AIS.\n\nWeaknesses:\n- While experimental results show improvement, it would be more convincing if there were more real-world dataset experiments (other than binarized MNIST).\n\nMinor Suggestions:\n- Why not bold the best log Z estimate on Table 1 & 2? 
- Were there any experiments done on ULA-MCD and UHA-MCD on more complex distributions? I understand estimating the normalizing constant is very difficult, but a single experiment would be nice to see.\n- Why do we need applications for amortized inference? Also, why not experiment on more datasets?\n- I am not very familiar with AIS; are differentiable AIS and ULA and UHA the current SOTA benchmarks for likelihood estimation?\n- How would you provide a solution for the numerical instability in optimization due to the unadjusted Langevin and Hamiltonian samplers? This seems like an imperative issue that needs to be addressed. The authors transparently address the limitations of their work and have a section dedicated to limitations. ", " Annealed importance sampling is one of the most effective methods for marginal likelihood estimation. The commonly used extended target distribution is simple but sub-optimal. The paper leverages the recent progress in score-based generative modeling to approximate the optimal extended target distribution for annealed importance sampling and gives the discretization of Langevin and Hamiltonian dynamics. On a number of synthetic distributions and variational auto-encoders, the proposed method obtains better performance compared to existing methods. * The writing is of high quality. The paper gives a concise review of annealed importance sampling and illustrates the limitations clearly via two examples. Then, the paper provides the necessary details in deriving the Langevin and Hamiltonian algorithms and makes them easy to read.\n\n* As a sampling algorithm, the effectiveness of the proposed method needs further consideration. In particular, the effectiveness of the proposed method relies on the quality of the learned score function. However, learning the score models is not easier than estimating the partition functions. Whether the proposed method can apply well to hard distributions is still doubtful. It would be helpful if the paper provided an ablation study that investigates the relationship between the quality of the learned score functions and the MCD samplers.\n\n* Although I have some reservations about the effectiveness of the current algorithm due to the reason mentioned above, its contribution to further studies on extended target distributions should be recognized. For example, when the energy function is learned via a score-based model and is associated with a score function in nature, the proposed method could be very helpful.\n\n* The evaluation in the experiment is not fair. We can run ULA and UHA directly, but we need to learn the score functions to run ULA-MCD and UHA-MCD. The paper should also report the learning time and the sampling time such that the readers can make better judgments.\n * Considering unadjusted algorithms are used, will the error in score estimation bring bias into the sampling? If so, is it possible to quantify the bias in terms of the estimation error?\n\n* What is the training time of the score model? How does it compare to the sampling time? The paper mentions some limitations of the proposed methods. I think it misses two important problems:\n1. The method requires learning a score model, which is time-consuming and hard. This factor reduces the efficiency or even correctness of the proposed method.\n2. 
The estimation error in the score model may lead to bias in sampling, and the paper lacks a quantification of the potential bias.", " The key problem this work addresses is the suboptimality of the extended target distribution that is widely used in current annealed importance sampling (AIS) methods. The authors, building on previous literature, show that the custom choice of the backward Markov kernel of the unnormalized extended target is comfortable but suboptimal in terms of variance.\nTo overcome such limitations, the authors extend some key observations that have been made in the literature of diffusion processes used for generative modeling (e.g. Sohl-Dickstein et al., 2015), and draw a connection between forward/backward diffusion and forward/backward transition kernels used in importance sampling. Armed with this realization, the authors show that AIS can be studied through the lenses of forward and backward diffusion SDEs: the forward diffusion begins in a simple-to-sample distribution, and gradually transforms it into the target distribution (or an extension thereof), while the backward diffusion corresponds to the (hypothetical) process of transforming the target to a simple distribution.\nIn addition, and as an extension to a previous paper that appeared in a recent workshop, the authors consider Hamiltonian dynamics, and derive a similar connection to AIS, but with improved performance due to a higher fidelity representation of the paths taken by the Hamiltonian SDEs, when compared to first-order dynamics.\nFor practical reasons, the authors produce discretized versions of the dynamics they study, and implement numerical integration schemes for the simulation of the SDEs, where necessary.\nNumerical results on both static distributions and on an instance of a variational autoencoder indicate dramatic performance improvements of the proposed approach when compared to vanilla and Hamiltonian vanilla versions of AIS. Strengths:\n* This is a very important topic, and the performance gains from the proposed method are substantial\n* I liked the idea of connecting the dots between AIS and (score-based) diffusion models\n* The extension to Hamiltonian dynamics is interesting, and a nice complement to [1]\n\nWeaknesses:\n* I think the exposition of this paper should be improved: the connection with diffusion models is somehow reversed, as in that literature the forward process is going from the target distribution to a simple one, and the backward process is doing the opposite, whereas in this paper it is the other way around. This could be discussed in the appendix. Also, I would study the \"steady state\" distribution of the forward process defined in eq 10, by means of Fokker-Planck equations, to show that indeed the forward process moves an \"easy\" distribution to the target one. This can be helpful for a reader familiar with diffusion models, whereby you can demonstrate that for variance-preserving methods, the process converges to an isotropic Gaussian.\nTo put things differently: a reader with familiarity in diffusion models knows that the forward process \"destroys\" data, whereas in this work it is the other way around.\n* experiments only compare the proposed method to baseline AIS and to Hamiltonian AIS (which I appreciated a lot). However, this work could be cast under the umbrella of energy based models and density estimation. 
Then, would it make sense to compare it to such a line of work?\n* there exists a workshop paper presenting preliminary ideas that are extended in this work. The main difference here is the treatment of Hamiltonian dynamics, and the experiments with the variational autoencoder. If accepted, this paper could maybe give more space to the Hamiltonian part, and most importantly, to the excellent results obtained. This is the main reason why, in its current form, the key contribution of this paper is somehow diluted, and seems a repetition: I like this work, and I would be happy to raise the score I gave, which is only tainted by this discussed weakness\n\n[1] Tim Dockhorn and Arash Vahdat and Karsten Kreis, Score-Based Generative Modeling with Critically-Damped Langevin Diffusion, ICLR 2022\n Here are some questions, but these are somehow superficial as I didn't find any major flaws in the paper.\n\n1- I think there is a typo in eq 12: the term $ 2 \nabla \log s_{\theta} ( \cdot ) $ should read $ 2 s_{\theta} ( \cdot ) $\n\n2- I may have made mistakes in my own derivations, but isn't there a typo in eq 6 for the backward transition kernel? In the numerator you should have $q_{k+1}(x_k)$ and not $q_{k}(x_k)$? I'm not 100% sure here\n\n3- May I suggest improving the notation? I find it difficult to follow when $x, x'$ are used in place of $x_{k+1}, x_k$\n\n4- Would it make sense to have an additional section in the appendix to clarify not only the similarities but also the differences w.r.t. score-based diffusion models? See my comment on the weaknesses above.\n\n== Post Rebuttal == \nThanks for your answers to the questions and comments above. I have modified my score, raising it to an accept/7. I think they did address limitations appropriately." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "dHWGvP2Udjn", "cTS8--uXS5k", "JF9MeCfyEcRo", "g3TNWW6WzU1", "q2kc1YSi7m", "sB6CykdwO-b", "igrOTHHpRGR", "GeLoADmowWw", "nips_2022_9cU2iW3bz0", "nips_2022_9cU2iW3bz0", "nips_2022_9cU2iW3bz0" ]
nips_2022_0oQv1Ftt_gK
Rethinking Counterfactual Explanations as Local and Regional Counterfactual Policies
Among the challenges not yet resolved for Counterfactual Explanations (CE), there are stability, synthesis of the various CE and the lack of plausibility/sparsity guarantees. From a more practical point of view, recent studies show that the prescribed counterfactual recourses are often not implemented exactly by the individuals and demonstrate that most state-of-the-art CE algorithms are very likely to fail in this noisy environment. To address these issues, we propose a probabilistic framework that gives a sparse local counterfactual rule for each observation: we provide rules that give a range of values that can change the decision with a given high probability instead of giving diverse CE. In addition, the recourses derived from these rules are robust by construction. These local rules are aggregated into a regional counterfactual rule to ensure the stability of the counterfactual explanations across observations. Our local and regional rules guarantee that the recourses are faithful to the data distribution because our rules use a consistent estimator of the probabilities of changing the decision based on a Random Forest. In addition, these probabilities give interpretable and sparse rules as we select the smallest set of variables having a given probability of changing the decision. Codes for computing our counterfactual rules are available, and we compare their relevancy with standard CE and recent similar attempts.
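As a rough illustration of what such a counterfactual rule asserts, here is a self-contained Python sketch. Everything in it (the dataset, the rule, and the uniform resampling of the rule's features) is an illustrative assumption; in particular, uniform sampling inside the box is only a crude Monte-Carlo check, not the consistent Random-Forest-based estimator of the flip probability that the abstract describes:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

x = X[0]                  # observation whose decision we would like to change
rule = {1: (-1.0, -0.3)}  # hypothetical rule: set feature 1 anywhere in [-1.0, -0.3]

# crude check of the rule: resample the rule's features uniformly in their
# intervals, keep the remaining features fixed, and count decision flips
rng = np.random.default_rng(0)
samples = np.tile(x, (5000, 1))
for j, (lo, hi) in rule.items():
    samples[:, j] = rng.uniform(lo, hi, size=5000)
flip_rate = np.mean(clf.predict(samples) != clf.predict(x[None, :])[0])
print(f"estimated probability of changing the decision: {flip_rate:.3f}")
```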
Reject
This paper highlights a series of limitations of existing methods in the literature on algorithmic recourse (e.g., recourses are not implemented exactly and are often noisy), and posits new definitions for Local and Regional Counterfactual Rules and proposes a novel algorithmic framework to learn them. All the reviewers opine that there is very little motivation for why the new definitions should be exactly the way they are, and what their strengths and weaknesses are. In addition, it is unclear when we can expect the proposed approach to provide a good estimate of the criterion. Furthermore, the problems highlighted in this work have been explored in several recent works (e.g., Dominguez-Olmedo et al., ICML 2022, On the adversarial robustness of causal algorithmic recourse; Pawelczyk et al., 2022, Let Users Decide: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse). We encourage the authors to address the aforementioned aspects, discuss related works, and also compare the proposed approach with these works.
train
[ "4jQAfTmwhC_", "rf0FToJYAJR", "SGVf-bkNXwM", "q0piYh-w-LT", "rzr1m5Ccc0t", "ougHsWQjknj", "iDN-SU2Pfkf", "-IXS9ZaDX4C", "t67oJZ161NH", "wOgG82jiOU", "aA5RCc-Djap", "0dOErpyxBr8" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ** \"notice that asymptotic consistency of an estimator does not mean that it is always good in practice\": we do agree with the reviewer that the consistency is not the definitive answer, and that the rate of convergence is a much better answer. Nevertheless, we think that we already provide several new concepts (Counterfactual Decision Probability, A computational efficient + consistent estimator, Local Counterfactual Rules and Local Regional Rules) and numerous comparisons with the (best) \"State-of-Art\" related approach. For space constraints and for having a focused paper, we will address the limitations due the quality of estimations and their impacts in a subsequent work, and indicate them in conclusion. \n\n** As said before, we do not require to select the counterfactual in the rectangle by following the distribution of X. The rectangle already contains the information (in a way they are a summary of the conditional distribution) and the user does not need to have additional knowledge of the distribution of the data. As it is remarked by reviewer VVsi, we do not provide guarantee for any point in the rectangle, and this is why when we need to select only one point in a rectangle, in propose to use an isolation forest (learnt previously) that can avoid having to different distribution from the original one. Giving a guarantee will be addressed in a subsequent work. ", " We thank reviewer VVsi for the discussion and the new comments. We agree that learning conditional distributions $Y\\vert X$ is a simpler situation than learning all the conditional distributions $X_S \\vert X_{\\bar{S}}$. Indeed for classification, it is the same inferential problem of supervised learning. Concerning regression, the problem becomes a quantile regresssion problem which is also relatively well-known supervised problem. \nWe have shown that the counterfactual problem can be addressed by using standard supervised learning tools. The problems raised by the reviewer are not specific to our method, but they are criticisms to the limitations of tree-based models. We agree that there might be some uncertainties when estimating the rules, but we thing they can be addressed with appropriate development or adaptation of Random Forests models. Typically, Random Forests tend to be well-understood as their uncertainties , see for instance [1], [2]\nAt the opposite, other methods that need to estimate all the conditional densities (in particular mixed data) are exposed to high theoretical or computational challenges (as sota methods like [3,4,5] for instance) does not have such kind of consistency results. \n\n[1] V-statistics and Variance Estimation. Z Zhou, L Mentch, G Hooker J. Mach. Learn. Res. 22, 287:1-287:48\n[2] Athey, Susan, Julie Tibshirani, and Stefan Wager. Generalized Random Forests. Annals of Statistics, 47(2), 2019.\n[3] Variational Autoencoder with Arbitrary Conditioning \n[4] ACFlow: Flow Models for Arbitrary Conditional Likelihoods \n[5] Arbitrary Conditional Distributions with Energy", " \"We have implicitly given the assumptions on the law of X under which\nour approach works.\"\n\nI cannot find these in the paper. The reference to [1] is very helpful,\nbut notice that asymptotic consistency of an estimator does not mean\nthat it is always good in practice. By analogy: otherwise all\nclassification tasks could be solved by K-nearest neighbour. 
There\nshould therefore be some (empirical or theoretical) evaluation of the\nproposed estimator in the paper, as well as some discussion on the\nsensitivity of the final explanations to estimation errors in the\nestimator.\n\n\n\"A user can pick any feasible point for him in the region $C_S(x)$\"\n\nThe definition of CDP does not guarantee anything about individual\npoints in a rectangle, only about points selected at random according to\nthe conditional distribution, which may be highly non-uniform. So the\nproposed approach of picking any feasible point does not give the user\nany guarantees.\n\n\"As we stated in Lines 96-99, by construction, our methods are stable\nbecause they give a range of values that guarantee a high probability of\nchanging the decision as long as the perturbations are within our\nranges.\"\n\nAs explained in the previous point, this is not what your methods\nguarantee. But your experiment would indeed be sufficient to back up the\nclaim if added to the paper.", " I thank the authors for their extensive response to my criticisms.\n\n* My summary was indeed too narrow: it is clear that the motivation for\n the paper is not just the plausibility problem, but also the other\n reasons given by the authors in their response, like robustness to\n perturbations. I have updated the review.\n\n* About estimating the law of X vs the law of Y|X: the crucial object\n that is estimated in CDPs is a conditional probability in terms of X,\n so my criticisms all apply. The authors are correct, however, that\n this conditional probability is estimated with estimators derived from\n random forests rather than \"via random forests\". I have reworded my\n review more carefully.", " We really appreciate that you acknowledge our contributions and found our paper “Good.”. We thank you for the time you took to review the paper.\n\n> Q: I think an illustration inf the form of a 2d data set + decision boundary might help even more to understand Local and Regional Counterfactual Rules. \n\nWe will add a simple 2D dataset (e.g., moon dataset) in the Appendix, where we can observe the decision boundary and the Local and Regional Counterfactuals rules.\n\n> Q: I am bit concerned about generating rectangles for high-dimensional spaces (i.e. curse of dimensionality) \n\nWe agree that generating good hyperrectangles is a very difficult problem in high dimensions. However, in our approach, we start by reducing the dimension by selecting the right directions and searching the hyperrectangles only in these directions. Moreover, we take advantage of the partition already given by the Random Forest or the Projected RF. Thus, finding the good hyperrectangles consists of finding the compatible leaves of the forest contrary to AReS or CET that search randomly for the good rectangles in all directions.\n\n> Q: If I understood it correctly, the users always used a LightGBM as a black-box model that is going to be explained. Since the authors propose a model agnostic method, I think evaluating on a single model only is enough\n\nLGBM and other tree ensembles are well approximated by Random Forest, but we have also tested on different models (typically a Neural Network, Tabnet [1]). They are well approximated, and we can obtain counterfactuals in both cases. We will give the results in the Appendix. \n\n[1] .Sercan Ö. Arik, Tomas Pfister. TabNet: Attentive Interpretable Tabular Learning. 
", " > Q: \"it is not clear when it can be expected to work and when not, or how to detect whether it can be applied for a particular type of data.\"\n\nAs said before, the theoretical assumptions of our approach are the same as the ones used in [1] to estimate the Same Decision Probability (SDP). Note that, as in prior works (AReS, CET), our approach cannot guarantee to change the output for all individuals. However, contrary to prior methods, we give additional information by computing the Counterfactual Divergent Probability: in practice, when this probability is high, the rule always changes the decision. The probability of changing the decision is controlled by the hyperparameter $\\pi_C$ as well as the approximation errors of the estimators.\n\nOn the other hand, we mentioned in Lines 400-401 that our methods, being based on tree-based models, work mainly for tabular data. \n\nFinally, we thank the reviewer for the suggestions to improve the paper's clarity. We will take this into account in the final version.\n\n[1]: Salim I Amoukou and Nicolas JB Brunel. Consistent sufficient explanations and minimal local rules for explaining regression and classification models. arXiv preprint arXiv:2111.04658, 2021", " > Q: There should be a discussion on when this is reliable. For instance, this cannot work well when explaining inputs x from low-probability regions for which the exact distribution cannot be estimated very well, or if the true distribution is very irregular, or if the input dimension is large. It should also be discussed how sensitive the definition is to estimation errors in the distribution of x.\n\nWe have implicitly given the assumptions on the law of X under which our approach works. It should be noted that all our methods are based on the Random Forest, the estimators derived from it, and in particular, the Conditional Decision Probability (CDP). We have shown in Section 4 that estimating the CDP is equivalent to estimating the (Same Decision Probability) SDP. In section 4 of [1], we can find all the assumptions on the distribution of X for approximating these quantities using the Random Forest. Although these are only asymptotic guarantees, to the best of our knowledge these are the best results that can be found for estimators based on Random Forests. However, we will put the assumptions more explicitly in the latest version.\n\n> Q: Definition 3.3 of Local Counterfactual Rules only provides a necessary requirement that the rectangles should satisfy, so it is in fact not a full definition\n\nWe apologize because this comment may have been induced by an oversight: we forgot to mention in the definition that we consider only Rules $C_S(\\boldsymbol{x};\\mathcal{Y}^\\star)$ satisfying (3.1) *with maximal volume* hyperrectangle. It is indicated in line 176, but this should be part of the definition. It is how the Local-CR are computed in the paper. We will change the definition accordingly. Nevertheless, we consider changing maximal volume (straightforwardly computed with the Lebesgue measure) to maximal probability $P_{\\boldsymbol{X}} (C_S(\\boldsymbol{x};\\mathcal{Y}^\\star))$. \nConsequently, this change should also answer the following comment about the size of that $P(C_S(x;y*))$ that can be arbitrarily small. 
In addition, we emphasize that no mathematical definition of a Counterfactual Rule has been proposed before (AReS or CET), it is the first attempt, and all methods rely on comparisons based on accuracy, coverage, and sparsity to select the best rule.\n\n> Q: \"It is not clear how a user is supposed to obtain recourse from hearing a rectangular region\"\n\nOne objective of our approach is to give a flexible way to change the decision: a user can pick any feasible point for him in the region $C_S(\\boldsymbol{x};\\mathcal{Y}^\\star)$, the rule guarantees a high probability of changing the decision (greater than $\\pi_C$). In some applications, the user can have domain-specific expertise that permits him to choose a feasible point. Therefore, it does not need to know the distribution, he just needs $C_S(\\boldsymbol{x};\\mathcal{Y}^\\star)$, that has been computed beforehand. For the sake of completeness and comparisons with the closest alternative approaches (AReS and CET), we propose a simulated annealing approach for sampling a single recourse in $C_S(\\boldsymbol{x};\\mathcal{Y}^\\star)$. Our methodology is described in the Experiments Section (but we propose to introduce and discuss our algorithm in Section 5 (as a Section 5.2) in the final version) for generating this single recourse strategy derive from the rule. With this methodology, we do not require to know the distribution of $X$, but we use only an isolation forest to avoid generating outliers. \n\n > Q: The introduction (line 98) claims that their approach \"partly answer[s] the problem of noisy responses to 99 prescribed recourses\" identified by Pawelczyk et al., 2022, but this claim is not backed up.\n\nAs we stated in Lines 96-99, by construction, our methods are stable because they give a range of values that guarantee a high probability of changing the decision as long as the perturbations are within our ranges. Thus, since in [2], they just add small Gaussian noises, the perturbed vectors most likely stay in the intervals. We also conducted the experiment of [2]: We normalize all the datasets so that $x \\in [0, 1]$, we add gaussian noise ε to the prescribed recourse, where $ε \\sim N (0, \\sigma^2)$ and $\\sigma^2 \\in \\{0, 0.01, 0.025, 0.05\\}$. We compute the *Accuracy: the fraction of unseen instances where the action and perturbed action lead to the same output*. We computed the *Accuracy* for COMPAS, Diabetes, and used the simulated annealing approach with the Local-CR of Section 6 to generate the actions. The *Accuracy* for the different perturbations $\\{0.01, 0.025, 0.05\\}$ are: Compas$=[0.989,0.988, 0.988]$, and Diabetes$=[0.966,0.97, 0.969]$ respectively.\n\n[1]: Salim I Amoukou and Nicolas JB Brunel. Consistent sufficient explanations and minimal local rules for explaining regression and classification models. arXiv preprint arXiv:2111.04658, 2021\n\n[2] .Martin Pawelczyk et al. Lakkaraju. Algorithmic recourse in the face of noisy human responses, 2022.\n", " We are very grateful for the time taken to elaborate this detailed review, and we think that part of the critics/limitations raised may have been generated by a possible misunderstanding of our work.\n\nFirst of all, in the given summary, it seems that we are only trying to solve the plausibility problem (see below).\n\n> Summary: \"The paper aims to address the problem that counterfactual explanations can be implausible, by which it means that they are too different from typical samples from the distribution of feature vectors x. 
To do this it proposes to replace counterfactuals, which are single points, by whole rectangular regions that switch the class of the input with high probability. These rectangular regions can then be communicated in the form of rules, which are called Local Counterfactual Rules. There is also an extension called Regional Counterfactual Rules, which explains regions instead of single inputs.\"\n\nHowever, we have highlighted in Section 1 and 2 that we propose rules to synthesize the diverse Counterfactual Explanations given by the classic methods, find stable regions (not close to decision boundaries) to ensure robustness to perturbations. In addition, these rules allow us to have a global picture of the model to detect certain patterns (e.g. application in fairness) while being as interpretable as possible by guaranteeing sparsity. Our methods rely on a statistical estimator (with asymptotic guarantees) and not on heuristics or constrained optimization like classical methods. This also answers the question raised about the little motivation of our methodology.\n\n> Q: It is stated that \"The definitions of the local and regional counterfactual rules rely on the distribution of the inputs which therefore has to be estimated very accurately. This is done using several variants of random forests.\" An important weakness is the fact that the \"the approach critically relies on estimating the true distribution of x via random forests.\"\n\nThis is not what we do. Our objective is to find counterfactual rules for $(\\boldsymbol{X}, Y)$ or $(\\boldsymbol{X}, f(\\boldsymbol{X}))$ by using a Random Forest proxy $\\hat{f}$ learnt from the data set $\\{(x_i,y_i)\\}$ or $\\{(x_i,f(x_i))\\}$. We need the random forest proxy $\\hat{f}$ because our algorithm uses the tree structure for computing the rules efficiently. This means that we do not estimate the law of the feature vector $\\boldsymbol{X}$, but only the conditional law $Y\\vert \\boldsymbol{X}$ (or $f(\\boldsymbol{X})\\vert \\boldsymbol{X}$) with a standard Random Forest. Among others, we need to compute the Counterfactual Decision Probability $P(f(\\boldsymbol{X}) \\in \\mathcal{Y}^\\star \\left| X_{\\bar{S}}=x_{\\bar{S}}\\right)$. While $P(f(\\boldsymbol{X}) \\in \\mathcal{Y}^\\star)$ can be easily computed for any ML model with Monte-Carlo for instance, the computational challenge is for the conditional probability. A direct way would be to compute the conditional probability $P(X \\vert X_{\\bar{S}}=x_{\\bar{S}})$ and to integrate $f(\\boldsymbol{X})$ with respect to this conditional probability ; or to be able to simulate from this conditional probability, and use a Monte Carlo approximation. We agree that such an approach have to face with a very challenging inferential problem of learning the distribution of $\\boldsymbol{X}$ and all the conditional distributions $X_S \\vert X_{\\bar{S}}$, for all the coalition of variables $S$. Another approach would be to estimate another model $\\hat{f}(X_{\\bar{S}})$, but this would be very costly. As a consequence, we do not use these approaches, and we focus on approximating Random Forests. Indeed, our approach heavily relies on the use of Random Forests and the so-called \"Projected Random Forest trick\", introduced in [1] that permits to extract an efficient estimator of $f(\\boldsymbol{X}_{\\bar{S}})$ directly from the Random Forest $\\hat{f}(\\boldsymbol{X})$.\n\n[1]. Clément Bénard, Gérard Biau, Sébastien Da Veiga, and Erwan Scornet. 
SHAFF: Fast and consistent Shapley effect estimates via random forests. arXiv preprint arXiv:2105.11724, 2021.", "We really appreciate that you acknowledge our contributions, finding our ideas “technically strong” and our work to \"solve a well-motivated problem\". We appreciate your specific suggestions, which will help us improve our paper's quality. We thank you for the time you took to review the paper.\n\n> Q: Can the approach from Anchors be used in a similar way to solve the CF regions problem?\n\nYes. Indeed, AReS [1] used the approach from Anchors to solve the CF regions problem. Both start by discretizing/binning the variables and then sample randomly among the bins until they find a rule that satisfies some constraints, e.g., coverage, accuracy, and sparsity. \n\n> Q: If I understand correctly, the output policies/regions may not be 100% correct (i.e., may not flip the label). Wouldn't this violate the expectation of a counterfactual explanation?\n\nThis is also the case for the other methods (AReS, CET). However, with our methods, we compute the Counterfactual Divergent Probability as prior information: in practice, when this probability is high, the rule always changes the decision. The probability of changing the decision is controlled by the hyperparameter $\pi$ as well as by the approximation errors of the estimators.\n\n> Q: Can you provide some examples of the policy learnt by the model, e.g., on Compas? A qualitative analysis of the regions and their interpretability will help.\n\nFor instance, if $x =$\{age=30, juv_fel_count=0, juv_misd_count=0, juv_other_count=0, priors_count=0, race:African-American = 0, race:Asian = 0, race:Caucasian = 0, race:Hispanic = 1, race:Native American = 0, race:Other = 0, c_charge_degree:F = 0, c_charge_degree:M = 1, gender=1\}, the output is 1. We can change the decision with probability CDP=0.92 by applying the rule $C_S(x)$ = \{juv_misd_count = 1 and priors_count >= 4.5\}.\n\nAs suggested by the Reviewer, in the final version we will add a qualitative analysis of the Counterfactual Rules learnt on Compas, a more detailed discussion of the similarities with Anchors and the other methods (AReS, CET), and a pseudo-code algorithm for the complete procedure. \n\n[1]: Salim I. Amoukou and Nicolas J.B. Brunel. Consistent sufficient explanations and minimal local rules for explaining regression and classification models. arXiv preprint arXiv:2111.04658, 2021.", " To explain machine learning models, the paper proposes learning counterfactual example rules (regions) rather than generating individual counterfactual examples. This has benefits in stability and avoids the variance of producing individual examples. ### Strengths\n* Useful formulation to think of counterfactual regions versus examples.\n* Technically sound method using random forests that avoids the discretization parameters of past work.\n* Evaluation is reasonable and compares to past work.\n\n### Weaknesses\n* The writing can be improved. The idea of using Random Forests and ignoring the $\bar{S}$ features is simple enough. It would be nice if the paper could provide intuitive examples as in Figures 2, 3, 4 before presenting the technical details.\n* Possible similarities to the Anchors paper that have not been discussed: https://homes.cs.washington.edu/~marcotcr/aaai18.pdf\n\nThe paper solves a well-motivated problem. Counterfactual example methods, by their nature, can be unstable and lack guarantees. Providing rules or sets of feature values instead of individual examples is an important advance. 
I wish the authors had engaged more with the explanations literature outside counterfactuals that does propose such rules (e.g., the Anchors paper linked above). \nOther than that, I think the paper is technically strong, and I like the use of Random Forests as an estimation tool given their non-parametric properties. The writing can be improved to make it more intuitive. It would help to describe Section 5 in an algorithm or a table to understand the different steps. It looks like there are multiple steps in the optimization, and it is unclear how they interact. \n * Can the approach from Anchors be used in a similar way to solve the CF regions problem?\n* If I understand correctly, the output policies/regions may not be 100% correct (i.e., may not flip the label). Wouldn't this violate the expectation of a counterfactual explanation?\n* Can you provide some examples of the policy learnt by the model, e.g., on Compas? A qualitative analysis of the regions and their interpretability will help. The provided policies/regions may not be 100% correct. That is a limitation.", " The paper aims to address multiple problems with counterfactual explanations that consist of single points, by replacing them with rectangular regions that switch the class of the input with high probability. These rectangular regions can then be communicated in the form of rules, which are called Local Counterfactual Rules. There is also an extension called Regional Counterfactual Rules, which explains regions instead of single inputs x.\n\nThe definitions of the local and regional counterfactual rules rely on a certain conditional distribution of the inputs, which therefore has to be estimated very accurately. An important observation of the paper is that this estimation can be made computationally efficient if $f$ is first approximated by a random forest $\hat{f}$.\n\nFinally, there are experiments that compare the new methods to existing methods on several data sets.\n\n The structure of the paper is to discuss a series of limitations of existing methods in the literature, and then to posit new definitions for Local and Regional Counterfactual Rules. Unfortunately, however, there is very little motivation for why the new definitions should be exactly the way they are, and what their strengths and weaknesses are. And they do strike me as having at least the following significant weaknesses (I will focus on Local Counterfactual Rules here):\n\n* Definition 3.3 of Local Counterfactual Rules only provides a necessary requirement that the rectangles should satisfy, so it is in fact not a full definition.\n* The rectangles produced by Local Counterfactual Rules may actually be *implausible*, i.e., Definition 3.3 leaves open the possibility that P(C_S(x;y*)) is arbitrarily small, because it is only a requirement about a probability conditional on C_S(x;y*).\n* The approach critically relies on estimating a conditional probability for x. There should be a discussion of when this is reliable. For instance, this cannot work well when explaining inputs x from low-probability regions for which the exact distribution cannot be estimated very well, or if the true distribution is very irregular, or if the input dimension is large. The conditional probability is also estimated for each possible rectangle C_S(x), which will inflate the estimation error.\n* It is not clear how a user is supposed to obtain recourse from hearing a rectangular region. 
Definition 3.3 implies that, if the user picks a point from the region according to the true distribution of x, then they will obtain recourse with high probability. But a) the user does not know the true distribution of x, and b) even if they did, it would seem unreasonable to expect them to modify their features in accordance with this distribution. The introduction (line 98) claims that their approach \"partly answer[s] the problem of noisy responses to prescribed recourses\" identified by Pawelczyk et al., 2022, but this claim is not backed up.\n\nPresentation:\n\nThe presentation is sufficient, but clarity could still be improved and claims could be substantiated better. For example: \n- There are multiple cases where terms are unclear or become clear only later, especially in the introduction. For example, line 51 refers to \"the best promising directions\", but it is not clear by which measure 'best' is determined.\n- In line 34 it is claimed that solutions from prior work are \"not entirely satisfactory\", but it is not specified in which way they are not satisfactory.\n- Definition 3.2 confuses the distinction between a \"Divergent Explanation\" and a \"Minimal Divergent Explanation\".\n The limitations of the paper seem very fundamental, and would require a significant revision to address.\n Limitations of the approach have not been sufficiently addressed, as discussed above. In particular, it is not clear when it can be expected to work and when not, or how to detect whether it can be applied to a particular type of data.\n\n", " The authors propose counterfactual rules instead of counterfactual samples as explanations -- i.e., the explanation is a (set of) rule(s) instead of a single sample (counterfactual). Good paper!\n\nOnly minor issues:\nI think an illustration in the form of a 2d data set + decision boundary might help even more to understand Local and Regional Counterfactual Rules.\nI am a bit concerned about generating rectangles for high-dimensional spaces (i.e., curse of dimensionality) -- maybe the authors could briefly comment on this.\nIf I understood it correctly, the authors always used LightGBM as the black-box model to be explained. Since the authors propose a model-agnostic method, I think evaluating on a single model only is not enough -- the authors should consider several highly different models for their empirical evaluation. None None" ]
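The robustness protocol described in the author response above (normalize features to $[0, 1]$, perturb each prescribed recourse with Gaussian noise of variance $\sigma^2$, and measure how often the model's output is unchanged) is straightforward to reproduce. The sketch below is a minimal, hypothetical implementation: `model_predict` and `actions` are placeholders for the black-box classifier and the recourse points, and the clipping step is an added assumption rather than part of the stated protocol.

```python
import numpy as np

def recourse_stability(model_predict, actions, sigma2_grid=(0.01, 0.025, 0.05),
                       n_draws=100, seed=0):
    """Fraction of recourses whose predicted label survives Gaussian perturbation.

    model_predict: callable mapping an (m, d) array in [0, 1]^d to (m,) labels.
    actions:       (n, d) array of prescribed recourse points.
    """
    rng = np.random.default_rng(seed)
    base = model_predict(actions)          # labels of the unperturbed recourses
    stability = {}
    for s2 in sigma2_grid:
        agree = 0.0
        for _ in range(n_draws):
            noise = rng.normal(0.0, np.sqrt(s2), size=actions.shape)
            perturbed = np.clip(actions + noise, 0.0, 1.0)  # clipping is an assumption
            agree += np.mean(model_predict(perturbed) == base)
        stability[s2] = agree / n_draws    # mean agreement over noise draws
    return stability
```

Averaging over `n_draws` noise draws per variance level gives a smoother estimate than a single draw would.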
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "SGVf-bkNXwM", "q0piYh-w-LT", "iDN-SU2Pfkf", "-IXS9ZaDX4C", "0dOErpyxBr8", "iDN-SU2Pfkf", "-IXS9ZaDX4C", "aA5RCc-Djap", "wOgG82jiOU", "nips_2022_0oQv1Ftt_gK", "nips_2022_0oQv1Ftt_gK", "nips_2022_0oQv1Ftt_gK" ]
nips_2022_HuiLIB6EaOk
VTC-LFC: Vision Transformer Compression with Low-Frequency Components
Although Vision transformers (ViTs) have recently dominated many vision tasks, deploying ViT models on resource-limited devices remains a challenging problem. To address such a challenge, several methods have been proposed to compress ViTs. Most of them borrow experience in convolutional neural networks (CNNs) and mainly focus on the spatial domain. However, the compression only in the spatial domain suffers from a dramatic performance drop without fine-tuning and is not robust to noise, as the noise in the spatial domain can easily confuse the pruning criteria, leading to some parameters/channels being pruned incorrectly. Inspired by recent findings that self-attention is a low-pass filter and low-frequency signals/components are more informative to ViTs, this paper proposes compressing ViTs with low-frequency components. Two metrics named low-frequency sensitivity (LFS) and low-frequency energy (LFE) are proposed for better channel pruning and token pruning. Additionally, a bottom-up cascade pruning scheme is applied to compress different dimensions jointly. Extensive experiments demonstrate that the proposed method could save 40% ~ 60% of the FLOPs in ViTs, thus significantly increasing the throughput on practical devices with less than 1% performance drop on ImageNet-1K.
Accept
The authors present a method to improve ViT efficiency by pruning channels and tokens using a selection mechanism that emphasizes low spatial frequency information. In particular, they propose two measures: Low Frequency Sensitivity (LFS) and Low Frequency Energy (LFE). 1) LFS comprises two parts (Eq 3), the contribution between them controlled by a weighting hyper-parameter (though ultimately a Taylor approximation is used): a. The difference in the loss function of the model on the low-pass image with and without a removed weight. b. The difference in KL divergence of the class token before and after low-pass filtering with and without the weight in question. 2) LFE measures the proportionate energy of a token among the energy of all tokens after low-pass filtering (Eq 6), combined with a measure of the attention weights (Eq 8). Pruning is carried out via a bottom-up cascade mechanism (Sec 3.4). Performance is evaluated on a variety of model sizes on ImageNet 1K. Pros: - [R] First work to compress a model emphasizing low-frequency information - [AC/R] Throughput improvement is significant with minimal performance drop. - [R] Clear and correct formulas. - [AC/R] Well written. - [AC/R] Outperforms other compression methods. Cons: - [R] More thorough evaluation needed (ImageNet Real/V2 and CIFAR 10/100). Authors followed up and provided this requested evaluation data. - [R] No comparison on the impact of the number of epochs. Authors supplied information. - [R] No ablation on hyperparameter for LFS. Authors supplied information. - [R] Unclear how the method might work with window-based ViTs such as Swin. Authors provided these additional experiments. - [R] Unclear what the cutoff is for low/high frequency components. Authors provided an ablation of varying cutoffs. Paper has unanimous accept ratings. Authors addressed reviewer concerns, and reviewers complimented authors for a good job. AC recommends accept. AC Rating: Strong Accept
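The low-pass filtering underlying both metrics can be pictured as a radial cutoff of the 2D Fourier spectrum, in line with the authors' later remark that their cutoff factor is akin to radial selection of the spectrum. The PyTorch sketch below is a stylized LFE-style score, not the paper's exact Eq. 6: the `(H, W, C)` token-grid layout and the default cutoff of 0.1 (the value the rebuttal reports as best) are assumptions.

```python
import torch

def radial_low_pass(tokens, cutoff=0.1):
    """Zero out spatial frequencies beyond `cutoff` * max radius.

    tokens: (H, W, C) tensor, the token grid of one image.
    """
    H, W, _ = tokens.shape
    spec = torch.fft.fftshift(torch.fft.fft2(tokens, dim=(0, 1)), dim=(0, 1))
    ys = (torch.arange(H) - H // 2).float().view(-1, 1)
    xs = (torch.arange(W) - W // 2).float().view(1, -1)
    radius = torch.sqrt(ys ** 2 + xs ** 2)            # distance from spectrum center
    mask = (radius <= cutoff * radius.max()).unsqueeze(-1).to(spec.dtype)
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(0, 1)), dim=(0, 1))
    return low.real

def lfe_scores(tokens, cutoff=0.1, eps=1e-8):
    """Per-token share of total energy after low-pass filtering (an LFE-style score)."""
    low = radial_low_pass(tokens, cutoff)
    energy = (low ** 2).sum(dim=-1)        # (H, W): energy of each token
    return energy / (energy.sum() + eps)   # normalized so scores sum to ~1
```

Tokens with low scores carry little low-frequency energy and would be the first candidates for pruning under such a criterion.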
train
[ "JZsi4eJRd1I", "ZNqt4Wm24U", "nPvl7hDtZLa", "Quxley1wJNc", "iw_SXWyBgt6", "lt7WGJSwwlt", "6SNAvcr1DlV", "oW0I_g5KFW", "8w19rqu2dXx", "H-AGPlkcRr" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the comments and suggestions, we will discuss the MiniViT, TinyViT, and other related ones in the revised vision.", " I am glad to see the additional experiments and details, which addressed my concerns. Thanks to the authors for the efforts in this discussion. After reading the comments of other reviewers, I think this paper should be got in.\n\nMoreover, since the compression of vision transformers emerges several new techs, there are many recent good papers studying this topic, pls consider discussing and comparing with the following works and other related ones:\n- MiniViT: Compressing Vision Transformers with Weight Multiplexing\n- TinyViT: Fast Pretraining Distillation for Small Vision Transformers", " Thank you for the comments, we will include the distribution of LFS and LFE in the appendix and the explanation of the cutoff-point in the revised version.", " Thank you authors for the great effort on the rebuttal. Authors have addressed all my concerns.\n\nIn the revised version, please include the **distributions of LFS and LFE (results shown in the rebuttal) and consider explaining the cutoff-point between low-frequency and high-frequency.**\n\nOverall, this paper is interesting, well executed, and constitutes a good scientific contribution in my opinion. **I will increase my recommendation to Accept**. Good job on the rebuttal!", " **R1W1**: Thank you for the suggestion. We have evaluated our pruned models on ImageNet-Real and CIFAR-10. Due to the time-limited, we will continue to update more results in the future. The results are shown as follow:\n\n|Dataset|Model|Top1 (%)|Top5 (%)|FLOPs (G)|Params (M)|\n|:-|:-|:-|:-|:-|:-|\n|||base/pruned|base/pruned|base/pruned|base/pruned|\n|ImageNet-Real|DeiT-Tiny|80.1/79.5|94.4/94.2|1.3/0.7|5.7/5.1|\n||DeiT-Small|85.7/85.4|96.9/96.7|4.6/2.1|22.1/17.7|\n||DeiT-Base|86.8/86.2|97.1/96.9|17.6/7.5|86.4/63.5|\n|CIFAR-10|DeiT-Tiny|98.1/97.9|99.9/99.9|1.3/0.7|5.5/4.9|\n||DeiT-Small|98.6/98.6|99.9/99.9|4.6/2.1|21.7/17.3|\n||DeiT-Base|98.9/98.8|99.9/99.9|17.6/7.5|85.8/62.7|\n\n**R1W1&R1Q1**: \n\n1) Thank you for the comments. The finetuning epoch of DeiT-Base/Small/Tiny is set to 75/150/300 for the trade-off between finetuning time and performance due to the limited GPU resources. In fact, DeiT-Base and DeiT-Small will achieve the best performance with 150 and 300 epochs, respectively. In our experience, larger models need fewer finetuning epochs to recover the performance because of their stronger model capacity. This conclusion is similar to that in the self-supervised pre-training task, where ViT-B needs to be finetuned for 100 epochs, while ViT-L needs finetuning for only 50 epochs. Thus, the pruned DeiT-Small/Dei-Base is only finetuned for 150/75 epochs while the DeiT-Tiny needs 300 epochs to be on par with SOTA performance.\n\n2) The SPViT conducts pruning from pretrained DeiT models (300 epochs) and finetunes the pruned models for 130 epochs. Because SPViT only reduces FLOPs by 20%$\\sim$30% (vs 40%$\\sim$60% FLOPS for our models), it needs fewer finetuning epochs to recover the performance. S2ViTE trains and prunes the models from scratch for 600 epochs. Compared to S2ViTE, we use less epochs in total. 
The performance and training time with different finetuning epochs for the three models are listed as follows:\n\n|Model|75 epochs|150 epochs|300 epochs|GPU Days (A10)|\n|:-|:-|:-|:-|:-|\n||Top1/Top5(%)|Top1/Top5(%)|Top1/Top5(%)|75ep/150ep/300ep|\n|DeiT-Tiny|70.1/89.9|70.8/90.2|71.6/90.7|2.3/4.6/**9.2**|\n|DeiT-Small|78.6/94.4|79.4/94.6|79.8/94.9|3.3/**6.7**/13.3|\n|DeiT-Base|81.2/95.2|81.3/95.3|81.3/95.1|**7.5**/15.0/30.0|\n\n**R1W3**: Thank you for the comments. There are two main reasons. (1) The baseline is strong. The original DeiT-Small already achieves 79.8% (upper bound), so improving the performance from 79.2 to 79.4 is a significant improvement over such a strong baseline. We observe that LFS achieves a larger improvement, of 0.5%, over a weaker baseline in Table 3. (2) As lines 270$\sim$277 demonstrate, for a fair comparison, we keep the same pruning ratios (determined by our VTC-LFC using BCP) for each block in all experiments. Thus, only the pruning order (block by block) determined by BCP influences the results in Table 5. If we do not control the pruning rate, w/o BCP (equivalent to \"LFS + LFE\") achieves 78.7% with a 55.0% FLOPs reduction in Table 3, so the improvement from BCP reaches 0.7% (79.4% vs. 78.7%). These improvements verify the effectiveness of the proposed modules.\n\nAs BCP is not a major contribution of this paper, we have not included many ablation studies for BCP. We will update the results in the revised version.\n\n**R1Q2**: The main reason for pruning from the first block to the last one is that pruning in earlier blocks changes the inputs of their subsequent blocks, which may influence the values of our proposed LFS and LFE. If the last block were pruned first, the selected channels and tokens in this block would need to be re-adjusted when earlier blocks are compressed. Instead, if the earlier blocks are pruned first, the inputs for the subsequent blocks are fixed. We compare both pipelines on DeiT-Small, and the results are as follows:\n\n|Pipeline|Top1 w/o FT|Top5 w/o FT|Top1 w/ FT|Top5 w/ FT|FLOPs|Params|\n|:-|:-|:-|:-|:-|:-|:-|\n|Bottom-up (from first to last)|68.3|89.1|79.4|94.6|2.1G|17.7M|\n|Top-down (from last to first)|68.2|89.0|78.9|94.5|2.1G|17.6M|\n\nIt can be seen that the bottom-up pipeline achieves better performance than the other pipeline, which demonstrates the significance of pruning from the first block to the last one.\n\n**R1Q3**: The hyper-parameter λ is set to 0.1 in the experiments. Here, we show the results of pruning 20% of the channels from DeiT-Small with different λ. Due to the time limit, only the results without finetuning have been obtained. It can be seen that the model achieves the best accuracy when λ is 0.1. In addition, the proposed method is not very sensitive to the value of λ (λ>0).\n\n|$\lambda$|0.0|0.1|0.2|0.3|0.4|0.5|0.6|0.7|\n|-|-|-|-|-|-|-|-|-|\n|Top1|65.90|**69.76**|69.70|69.73|69.73|69.70|69.67|69.67|\n\n\n", "**R2W1&R2Q1&R2Q2**: We have evaluated the proposed methods on LV-ViT-S and Swin-Tiny. To simplify, we do not use the token labels in the original LV-ViT. Due to the token downsampling and the window shifting, we cannot apply token pruning to Swin models yet. Thus, only the channel pruning is adopted on Swin. We will study token pruning for such window-based attention models in future work. 
Due to the time limit, the experiment was only run once. The results are listed below. Our method outperforms the baseline on both models, showing that the proposed method is effective for different transformer backbones. We will conduct more experiments and update the results in the next version.\n\n| Model | Top1(%) | Top5(%) | FLOPs(G) | Param(M) |\n|-----------------------|---------|---------|----------|----------|\n| LV-ViT-S | 83.2 | 96.3 | 6.5 | 25.8 |\n| NViT*+EViT (baseline) | 81.5 | 95.3 | 3.3 | 20.2 |\n| VTC-LFC (ours) | 81.8 | 95.6 | 3.2 | 20.2 |\n\n| Model | Top1(%) | Top5(%) | FLOPs(G) | Param(M) |\n|------------------|---------|---------|----------|----------|\n| Swin-Tiny | 81.1 | 95.5 | 4.5 | 28.3 |\n| NViT* (baseline) | 79.4 | 94.7 | 3.3 | 17.1 |\n| VTC-LFC (ours) | 79.8 | 94.8 | 3.3 | 17.1 |", "**R3W1**: The pseudocode of the overall compression procedure is shown in Appendix A.1, where we illustrate the detailed procedure of pruning tokens and channels from the first block to the last block. The distillation component in line 159 is the KL divergence (in Eq. (3)) between two kinds of CLS tokens: one is the output CLS token when the input is the natural image, and the other is the CLS token corresponding to the filtered image. \n\n**R3W2**: Thanks for your suggestions. The distributions of LFS and LFE in the baseline (original DeiT-Tiny) and VTC-LFC (our pruned DeiT-Tiny) ViTs are shown as follows. The distribution statistics are presented in the following tables (as figures are not supported here).\n| LFS | | 0~5e-5 | 5e-5~1e-4 | 1e-4~1.5e-4 | 1.5e-4~2e-4 | 2e-4~2.5e-4 | 2.5e-4~3e-4 |\n|------|-------|:-:|:-:|:-:|:-:|:-:|:-:|\n| Density (%) | Baseline (original DeiT-Tiny) | 87.57 | 4.65 | 2.30 | 1.10 | 0.82 | 0.55 |\n| Density (%) | VTC-LFC (pruned DeiT-Tiny) | 85.06 | 5.64 | 2.58 | 1.52 | 0.87 | 0.76 |\n\n| LFE | | 0.28~0.29 | 0.29~0.3 | 0.3~0.31 | 0.31~0.34 | 0.34~0.35 |\n|------|-------|:-:|:-:|:-:|:-:|:-:|\n| Density (%) | Baseline (original DeiT-Tiny) | 9.69 | 10.86 | 10.92 | 9.94 | 8.35 |\n| Density (%) | VTC-LFC (pruned DeiT-Tiny) | 9.26 | 10.59 | 10.89 | 9.93 | 8.43 |\n\n**R3W3**: The cutoff factor in our experiments is similar to the radial averaging of the 2D Fourier spectrum as in [1,2,3,4]. We will add this illustration in the revised version. Figure 4(c) shows the performance of pruned models with different cutoff factors, and the model achieves the best performance with a value of 0.1. We have also visualized the filtered images and their corresponding Fourier spectra in Appendix A.8.\n\n**R3W4**: Thank you for the suggestion. We have rewritten the caption of Figure 3 as below for better illustration. The next version of our paper will be updated accordingly.\n\nFigure 3: Pipeline of Bottom-up Cascade Pruning. The compression is executed from the first block to the last one. In each block, the number of tokens is first reduced (red boxes) by selecting tokens based on the LFE-based criterion. When the performance drop induced by token pruning reaches a pre-set threshold, the number of tokens is fixed and channel pruning (yellow boxes) starts. Channels are gradually removed according to their LFS until the accuracy drop reaches a pre-set threshold.\n\n**R3Q1**: Thank you for the suggestion. We have included the top-5 accuracies in Tables 1 and 2. 
The revised tables are as follows: \n\nTable 1: Comparison with state-of-the-art methods on ImageNet-1k. 'FLOPs $\downarrow$' denotes the reduction ratio of FLOPs. We report two versions with different parameter sizes for our method.\n|Method| |DeiT-Tiny| | |DeiT-Small| | |DeiT-Base| |\n|:-:|:-|:-|:-|:-|:-|:-|:-|:-|:-|\n| |Top1/Top5(%)|FLOPs $\downarrow$|Params|Top1/Top5(%)|FLOPs $\downarrow$|Params|Top1/Top5(%)|FLOPs $\downarrow$|Params|\n|Baseline | 72.2/91.1 | $-$ | 5.7M | 79.8/95.0 | $-$ | 22.1M | 81.8/95.6 | $-$ | 86.4M |\n|SCOP | 68.9/$-$ | 38.4\% | 5.7M | 77.5/$-$ | 43.6\% | 22.1M | 79.7/$-$ | 42.0\% | 86.4M |\n|PoWER | 69.4/$-$ | 38.4\% | 5.7M | 78.3/$-$ | 41.3\% | 22.1M | 80.1/$-$ | 39.2\% | 86.4M |\n|CP-ViT | 71.2/$-$ | 43.3\% | 5.7M | 79.1/$-$ | 42.2\% | 22.1M | 81.1/$-$ | 41.6\% | 86.4M |\n|EViT | $-$/$-$ | $-$ | $-$ | 78.5/94.2 | 50.0\% | 22.1M | 81.3/95.3 | 34.1\% | 86.4M |\n|HVT | 69.7/89.4 | 46.2\% | 5.7M | 78.0/93.8 | 47.8\% | 22.1M | $-$/$-$ | $-$ | $-$ |\n|IA-RED$^2$ | $-$/$-$ | $-$ | $-$ | 79.1/94.5 | 31.5\% | 22.1M | 80.3/95.0 | 33.0\% | 86.4M |\n|S$^2$ViTE | 70.1/$-$ | 23.7\% | 4.2M | 79.2/$-$ | 31.6\% | 14.6M | 82.2/$-$ | 33.1\% | 56.8M |\n|SPViT | 70.7/90.3 | 23.1\% | 4.9M | 78.3/94.3 | 28.3\% | 16.4M | 81.6/95.5 | 33.1\% | 62.3M |\n|VTC-LFC | 71.6/90.7 | 46.7\% | 5.1M | 79.4/94.6 | 54.4\% | 17.7M | 81.2/95.2 | 57.6\% | 63.5M |\n|VTC-LFC | 71.0/90.4 | 41.7\% | 4.2M | 79.6/94.8 | 47.1\% | 15.3M | 81.6/95.6 | 54.4\% | 56.8M |\n\nTable 2: Throughput of baselines and compressed models. 'Speed up' means the improvement in throughput. 'base' denotes the baseline model, and 'pruned' is the compressed model.\n|Model|Top1|Top5|Throughput|Speed up|\n|:-:|:-|:-|:-|:-|\n||(base/pruned)|(base/pruned)|(base/pruned)| |\n|DeiT-Tiny | 72.2\%/71.6\% | 91.1\%/90.7\% | 2648.7/4496.2 | 69.7\% |\n|DeiT-Small | 79.8\%/79.4\% | 95.0\%/94.6\% | 987.9/1946.3 | 97.0\% |\n|DeiT-Base | 81.8\%/81.2\% | 95.6\%/95.2\% | 314.7/651.9 | 107.1\% |\n", " In this paper, the authors propose a pruning method based on low-frequency components for ViTs. Two metrics, named low-frequency sensitivity for channel pruning and low-frequency energy for token pruning, are proposed, where low-frequency images and features are introduced. Besides, a bottom-up cascade pruning scheme is proposed to prune the channels and tokens gradually. The experiments show a good reduction ratio of FLOPs and improved throughput with little performance drop, and demonstrate the effectiveness of each proposed component. Strengths:\n1. The authors introduced low-frequency images and features to prune ViT models. It is the first work to compress the model in the frequency domain.\n2. The improvement in throughput is significant with little performance drop.\n3. The formulas of the proposed pruning method are clear and correct.\n4. The paper is well written.\n\nWeaknesses:\n1. The performance evaluation is not sufficient. The pruned model can be evaluated on ImageNet-Real/V2 and downstream classification datasets, like CIFAR-10/100.\n2. In Table 1, there is no comparison of the number of finetuning epochs.\n3. 
In Table 5, the accuracy of the model without LFS and BCP after finetuning is within 0.2 of VTC-LFC. The difference is too small to show the effectiveness of the proposed method. 1) In Section 4.1, Line 228, the pruned models DeiT-Tiny/DeiT-Small/DeiT-Base are finetuned for 300/150/75 epochs, respectively.\n - Why is the number of epochs different for the three models?\n - How many epochs are used to finetune the pruned models for other pruning methods, like SPViT and S2ViTE? Is it comparable to that in this paper?\n - It usually takes 300 epochs to train DeiT-Tiny on ImageNet-1k. Why does the pruned DeiT-Tiny need finetuning for 300 epochs? \n\n2) In Section 3.4, Bottom-up Cascade Pruning, the model is pruned from the first block to the last one. Is there any comparison with pruning from the last block to the first one?\n3) In line 164, how is the hyper-parameter λ set in the experiments? Is there any ablation study on it? Yes", " This paper introduces a model compression approach based on low-frequency components for vision transformers. Channel pruning based on low-frequency sensitivity and token pruning based on low-frequency energy, along with a bottom-up cascade pruning scheme, effectively reduce the model complexity while maintaining high model accuracy. Strengths: \n1. It proposes a novel approach for model compression with low-frequency components, leveraging the intrinsic property of self-attention modules as low-pass filters. \n2. It introduces the bottom-up cascade pruning framework for model compression. \n3. It outperforms many SOTA vision transformer compression methods with higher accuracy at reduced model complexities. \n\nWeaknesses: \nIt is unclear how effective this approach would be when applied to other ViT models such as LV-ViT. 1. How effective would this approach be when applied to other ViT models such as LV-ViT? \n2. Would it be useful for window-based attention and hierarchical model structures such as Swin? adequate", " The major contributions of this paper are two-fold:\n1) By leveraging the understanding of the low-frequency information preference of ViTs, this work proposes improved channel pruning and token pruning schemes to compress ViTs. This is achieved using the 3 components below:\n- **Channel pruning**: A quantitative metric called LFS (Low Frequency Sensitivity) is proposed to estimate the importance of model parameters by largely considering low-frequency components in images. This is used for channel pruning.\n- **Token pruning**: A quantitative metric called LFE (Low Frequency Energy) is proposed to estimate the percentage of low-frequency information in tokens. \n- **A bottom-up cascaded pruning framework** is proposed to jointly compress channels and tokens of ViTs.\n\n2) The proposed method obtains large improvements in ViT compression (i.e., reducing FLOPs by 40%-60% at the expense of only 1% top-1 accuracy) in ImageNet-based classification setups.\n **Strengths:**\n1) This paper is written well. It is easy to follow.\n2) The improvements in terms of FLOPs and parameter reduction using VTC-LFC are impressive. VTC-LFC has potential to be useful in edge-intelligence applications (where ViTs are superior to CNN-based architectures).\n\n**Weaknesses:**\n1) **The overall compression procedure is unclear**. I.e., there is also a knowledge distillation component (Line 159). Can the authors show the overall algorithm (pseudocode would be useful)? 
\n2) **Only the final top-1 accuracy of VTC-LFC is insufficient to understand the significance of the proposed method / More analysis is needed**: This paper could significantly benefit from more analysis/statistics regarding channel pruning and token pruning. Given that LFS and LFE are scalar values, can the authors show the distribution of these values in Baseline/VTC-LFC ViTs? (A simple histogram could be useful.) \n3) **The cutoff point between low-frequency and high-frequency for the experiments is not clear**: How is Figure 4(c) obtained? Is it obtained by radial averaging of the 2D Fourier spectrum, similar to these deepfake works ([1, 2, 3, 4])?\n4) The caption of Figure 3 is too plain. Please consider explaining the details, as Figure 3 is very important in this paper.\n\nOverall, I enjoyed reading this paper. In my opinion, the strengths of this paper outweigh the weaknesses by a small margin. I'm happy to change my opinion based on the rebuttal. \n\n=================\n\n[1] Dzanic, T., Shah, K., & Witherden, F. (2020). Fourier spectrum discrepancies in deep network generated images. Advances in Neural Information Processing Systems, 33, 3022-3032.\n\n[2] Durall, R., Keuper, M., & Keuper, J. (2020). Watch your up-convolution: CNN based generative deep neural networks are failing to reproduce spectral distributions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7890-7899).\n\n[3] Chandrasegaran et al. (2021). A closer look at Fourier spectrum discrepancies for CNN-generated images detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.\n\n[4] Schwarz et al. (2021). On the frequency bias of generative models. Advances in Neural Information Processing Systems, 34.\n\n Please see the Weaknesses section above for a list of questions. Further, please consider answering the following questions:\n\n1) Can the authors include Top-5 accuracies for Tables 1 and 2?\n Briefly mentioned in Checklist 1(C)." ]
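To connect the meta-review's remark that LFS ultimately uses a Taylor approximation with standard practice, the sketch below scores channels with the generic first-order criterion $|a \cdot \partial L/\partial a|$. It deliberately omits the frequency-specific ingredients of LFS (low-pass-filtered inputs and the KL term); the `(B, N, C)` activation shape and single-argument `loss_fn` are assumptions.

```python
import torch

def taylor_channel_importance(model, loss_fn, batch, layer):
    """Generic first-order Taylor importance |a * dL/da| per channel.

    `layer` is any module whose output has shape (B, N, C); `loss_fn`
    maps the model output to a scalar loss (both are assumptions here).
    """
    cache = {}

    def hook(_module, _inputs, output):
        output.retain_grad()          # keep the gradient of this non-leaf tensor
        cache["act"] = output

    handle = layer.register_forward_hook(hook)
    loss = loss_fn(model(batch))
    loss.backward()
    handle.remove()

    act = cache["act"]                                # (B, N, C) token activations
    return (act * act.grad).abs().sum(dim=(0, 1))     # one score per channel
```

Channels with the lowest scores are the ones whose removal is predicted (to first order) to change the loss the least.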
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ "ZNqt4Wm24U", "iw_SXWyBgt6", "Quxley1wJNc", "6SNAvcr1DlV", "oW0I_g5KFW", "8w19rqu2dXx", "H-AGPlkcRr", "nips_2022_HuiLIB6EaOk", "nips_2022_HuiLIB6EaOk", "nips_2022_HuiLIB6EaOk" ]
nips_2022_YiFQqYAk1xH
Dynamic Fair Division with Partial Information
We consider the fundamental problem of fairly and efficiently allocating $T$ indivisible items among $n$ agents with additive preferences. The items become available over a sequence of rounds, and every item must be allocated immediately and irrevocably before the next one arrives. Previous work shows that when the agents' valuations for the items are drawn from known distributions, it is possible (under mild technical assumptions) to find allocations that are envy-free with high probability and Pareto efficient ex-post. We study a \emph{partial-information} setting, where it is possible to elicit ordinal but not cardinal information. When a new item arrives, the algorithm can query each agent for the relative rank of this item with respect to a subset of the past items. When values are drawn from i.i.d.\ distributions, we give an algorithm that is envy-free and $(1-\epsilon)$-welfare-maximizing with high probability. We provide similar guarantees (envy-freeness and a constant approximation to welfare with high probability) even with minimally expressive queries that ask for a comparison to a single previous item. For independent but non-identical agents, we obtain envy-freeness and a constant approximation to Pareto efficiency with high probability. We prove that all our results are asymptotically tight.
Accept
All reviewers are positive about the paper and found the problem of sequential/online fair allocation of indivisible items interesting (and relatively new), and the theoretical results significant and sometimes surprising. Technically, the paper also has a "bandit" flavor, which makes it a good fit for NeurIPS.
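The sample-then-rank idea discussed in the reviews and rebuttals (an exploration phase that stores items, followed by giving each new item to the agent whose empirical quantile for it is highest) can be written down compactly. This is a stylized sketch rather than the paper's Algorithm 1 or 2: epochs, tie handling, and the bookkeeping that yields the envy-freeness guarantee are omitted, and the synthetic `values` matrix is used only through ordinal comparisons.

```python
import numpy as np

def quantile_greedy_allocation(values, sample_len):
    """Two-phase toy allocator: round-robin sampling, then greedy by empirical quantile.

    values[t, i] is agent i's value for item t, but the allocator only ever
    uses it through comparisons (ordinal information).
    """
    T, n = values.shape
    assert 0 < sample_len < T
    alloc = np.empty(T, dtype=int)
    for t in range(T):
        if t < sample_len:
            alloc[t] = t % n                              # exploration: balanced round-robin
        else:
            ref = values[:sample_len]                     # (sample_len, n) sampled items
            quantiles = (values[t] > ref).mean(axis=0)    # empirical quantile per agent
            alloc[t] = int(np.argmax(quantiles))          # item goes to the happiest agent
    return alloc
```

For i.i.d. values, a higher empirical quantile is a proxy for a higher realized value, which is why this greedy rule approximates welfare maximization; making it envy-free requires the additional balancing the paper introduces.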
train
[ "6o8PIrq3mRQ", "fXj7VJ0vaA76", "KpuqPphKG8hc", "6R_-t2PRJUR", "16Lb8YY5Cxj", "B6hqF0mQmPx", "alWIUJKzqgL", "K7a4YFuUjSr", "FY3kxSr4YD", "R51oSWk8W_q", "RH1mgpZTJa_" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read your reply. I appreciate the additional remark on the adversarial and random order models. Although I have not understood exactly the impossibilities in the random order model, mentioning those points in the main body would strengthen the motivation to consider the iid model.", " Thank you for addressing my questions. The result in the paper are novel and certainly interesting. I agree with other reviewers that the algorithmic approaches are for the most part straightforward; however, the novelty and challenges often rely on technical proofs. And I believe the proofs are certainly non-trivial and require reasonable technical work.\n", " Thank you for your review and constructive feedback. Below we answer your question.\n\n- “Have the authors considered other input models (such as adversarial or random order)? Is the iid problem setting the only one in which such strong (and asymptotically tight) results be achieved?”\n\nIn the adversarial model, the answer is known: Even with known values, it is impossible to simultaneously achieve envy-freeness and Pareto efficiency. In fact, “random allocation” is optimal in terms of envy, and everything with smaller envy than “dictatorship” is as inefficient as “random allocation” (see [Zeng and Psomas, 2020]). And, both “random allocation” and “dictatorship” can be executed in the partial information model, giving a clear picture of what can and what cannot be done.\n", " Thank you for your review and constructive feedback. Below we answer your questions.\n\n- “If we consider fairness at each step (or even epoch), what type of EF guarantees can we expect? Is there going to be a phase transition at some step similar to the phase transition shown by Dickerson et al.? If yes, when would the phase transition occur?”\n\nThe phase transition will depend on the distribution, so we cannot make a distribution-independent statement of the form you ask for. But, for every distribution, there is some epoch where you are “accurate enough” (in terms of eps-accuracy) to guarantee EF. Past this epoch, things should generally work well. This “many epoch” approach is needed for getting a distribution-free algorithm. That is, we can construct distributions that need arbitrarily many epochs to become EF (for example, if values are iid and very close to a point mass, it may be the case that bundle sizes need to be within some additive delta of each other to get EF).\n\n- “In the model that takes ordinal information as rankings, the preliminaries section does not properly explain how to deal with weak preferences or ties. How does the algorithm deal with ties when eliciting ranks of the fresh item? Can you elaborate on this?”\n\nSince distributions are continuous without point masses, ties are unlikely (zero probability events). So, our algorithms can deal with ties in an arbitrary way (e.g., “if this item is tied with a previous item for any agent, ignore it”) without any changes in the formal guarantees. We will include a clarification in the preliminaries.\n\n- “Updating memory: in Algorithm 2, at each epoch an agent’s memory is updated with comparison with (prior) k^6 items. What is the reason behind such updating mechanism? If an agent’s error is not within the tolerance, an arbitrary item is chosen to be remembered. This seems a bit arbitrary (no pun intended). Is the reason mainly due to assumptions on distributions? 
Wouldn’t it be more beneficial to remember a certain item (say the one with the average highest ranking)?”\n\nThis specific choice is just to make our analysis and algorithm description a bit cleaner. In the analysis, we see that the probability this occurs is so small that, when it happens, it can essentially be handled in any reasonable way. For succinctness, it happened to be more convenient to force the sampling phase length and bound the probability of error rather than continuing to sample and bounding the expected length of sampling. Also as a small note, in our model, we are only allowed to assign an item to memory on arrival, so we can’t go back and choose a reasonable item we’ve seen in the past.\n\n- “Algorithm 1 uses more memory compared to Algorithm 2 but they both achieve the same guarantee on the non-iid model. However, Algorithm 1 offers shorter epochs. Could Algorithm 1’s bound improve with longer epochs? In other words, is the bound tight both for iid and non-iid models?”\n\nThe EF guarantee for both algorithms is the same (and can’t be improved asymptotically). Given a distribution and a fixed number of items, if one closely looks at the probability bounds in our proofs, Algorithm 1 will have a smaller probability of not achieving EF (but, of course, for both algorithms this probability is vanishing).\nIn terms of efficiency, these are the best bounds we can find. Using our current analysis on longer epochs/more items would not help as we would not be able to bypass the bounds on the “ideal” algorithms which are currently the same; it is plausible that a different analysis can separate the two algorithms in terms of their efficiency guarantees (but still their efficiency is upper bounded by 18/19).\n\n- “The crux of the paper is assuming envyfreeness and devising exploration techniques to achieve reasonable welfare approximations. I wonder whether it is possible to improve on efficiency (or its approximations) by relaxing the fairness notion to EF1, proportionality, etc.? The first question is whether the impossibility results in Theorem 4 or 12 still hold.”\n\nBoth theorems are for two agents, therefore the same results hold for proportionality.\n\nTheorem 12 definitely holds if we replace EF with EF1: notice that EF is used in the proof to bound bundle sizes; EF1 (or even weaker notions) would actually give the same bounds (see supplementary material, starting at line 823).\n\nTheorem 4 does not seem to hold immediately, but we believe it should (some other trick would be needed).\n\n- “In general, simultaneously achieving fairness and strategyproofness is a challenging task; and it is often impossible in fair division. I wonder if there is a way to achieve strategyproofness under the settings of this paper (e.g i.i.d bounded or unbounded models)?”\n\nThis is a very interesting open problem. It is plausible that uncertainty about the future can help the designer in achieving strategyproofness. To the best of our knowledge, this is open even in the known value setting.", " Thank you for your review and constructive feedback. Below we answer your question.\n\n- “Is there any relationship between the problem of this paper and the secretary problem? The idea of deciding a threshold by estimation also appears in papers related to the secretary problem.”\n\nThere is no technical connection to the secretary problem or other problems in optimal stopping theory (e.g. prophet inequalities) as far as we know. 
Also, see our response to Reviewer rhEt regarding the random order version of our problem.\n", " Thank you for your review and constructive feedback. Below we answer your question.\n\n- “What if the distributions are not bounded or continuous? How will they change the current results?”\n\nFor non-continuous distributions, ties might lead to technical difficulties. For example, non-continuous distributions allow point masses, making EF allocations generally impossible (e.g., the parity of the number of items starts to matter).\nUnbounded distributions also seem difficult to handle. For example, there exist unbounded distributions where every agent *must* get their favorite item, which means that many of our asymptotic bounds/sums of independent quantities do not work out nearly as nicely.\n", " We thank all reviewers for the thorough reviews and helpful comments. Since there are no concerns/questions that are common across reviewers, we will respond to each reviewer's questions in separate comments. We will incorporate the valuable suggestions from all reviewers in the final version of this paper.", " Fair division of indivisible goods is a flourishing research area in computational social choice. There are some works that study the dynamic setting where the items arrive one by one, and the algorithm needs to make immediate and irrevocable decisions at the time a new item arrives. The existing works already show that when the values are independently drawn from an identical distribution, we can achieve envy-freeness with high probability and PO simply by allocating each item to the agent who has the highest value. This result can be extended to the cases of agent-specific distributions and correlated agents. All these results require knowledge of the distributions and the realized values, which motivates the current work to study the partial-information setting where the algorithm only has ordinal (relative) rankings. \n\nThe current work assumes that at the time a new item arrives, the relative rankings of the item in each agent's bundle are revealed to the algorithm, based on which the algorithm needs to decide whom to allocate it to. Different from the cardinal setting, the authors first showed that envy-freeness with high probability is incompatible with (exact) Pareto optimality, but compatible with approximate PO and utilitarian optimal welfare. The algorithm, consisting of an exploration/sampling phase and an exploitation/ranking phase, is natural, where the first phase is used to store samples that can be used to calculate the empirical quantiles. The authors (approximately) optimized the length of the sampling phase, where a longer phase means more accurate empirical quantiles but less efficient allocations. \n\nSecondly, the authors also studied a more demanding setting where the relative ranking is restricted to a fixed number of items, representing the limited-memory situation. Fortunately, the previous results can almost be recovered. \n\nFinally, the authors also explored the extent to which fairness and efficiency are compatible for non-iid models. \n\nQuestions:\n\nWhat if the distributions are not bounded or continuous? How will they change the current results?\n\nTypos:\nThe paper is well written, as far as I checked. I did not find many typos, but \ldots and \cdots should be consistent. Strengths: \n\nThe partial-information model is very interesting to me. The results are natural and complete. The paper is well-written and easy to follow. 
\n\nWeaknesses: \n\nI worry about the technical contribution. The algorithms are fairly standard: the first phase records samples, which provide empirical quantile estimates for items in the second phase. Of course, the results are not straightforward, but I did not find many technical novelties. What if the distributions are not bounded or continuous? How will they change the current results? N.A.", " This paper studies a fair allocation problem in a dynamic setting, where items arrive one by one. Agents have additive valuations, and the valuation of an item follows an unknown distribution (i.i.d./non-i.i.d.). In this paper, it is assumed that an algorithm cannot know the cardinal information of the valuation, but can obtain the ordinal information. For the i.i.d. case, when agents can compare a new item with any subset of previously allocated items, the authors give an algorithm whose output is EF and also a $(1-\epsilon)$-approximation to optimal utility with high probability. When agents can compare a new item with one of the previously allocated items, the authors give an algorithm whose output is EF and also a $((1-1/e)^2-\epsilon)$-approximation to optimal utility with high probability. Finally, for the non-i.i.d. case, the authors show that the aforementioned two algorithms output an allocation that is EF and 1/e-approximately Pareto optimal with high probability.\n- Originality: The problem setting where an algorithm has access only to ordinal information is novel in the line of dynamic settings. The idea of the proposed algorithms is to estimate the distribution by sampling, and this is done by combining existing ideas carefully.\n- Quality: The results seem correct as far as I have checked. It would be better to mention some applications if the authors have some in mind. The framework of the proposed algorithms looks like the $\epsilon$-greedy algorithm of the stochastic bandit problem. It may be possible to improve the performance by incorporating ideas from bandit algorithms.\n- Clarity: I feel that this paper is mostly well-written. I recommend that the authors place a table that summarizes the contributions (about possibilities and impossibilities) so that the reader can grasp the main results at a glance.\n- Significance: I do not think that fair allocation attracts much attention in the NeurIPS community. Nevertheless, the proposed algorithms are built on a kind of learning technique, and seem closely related to bandit algorithms, so I guess that this paper is related to this conference.\n- Minor comments:\n - The term $\alpha$-Pareto efficient can be changed to $\alpha$-Pareto Optimal for consistency of terms. In Theorem 13, the authors use “$(1/e-\epsilon)$-Pareto optimal.”\n - line 169: $v_i(V_j)$ → $v_i(A_j)$\n - If $c$-strongly-EF was introduced before, it is better to add a reference in the definition.\n - For the non-i.i.d. model, the criterion of efficiency moves from utilitarian welfare to Pareto optimality. I guess that this is because it is hard to approximate the utilitarian welfare in the non-i.i.d. case. It may be helpful if the authors mention this in the introduction of Section 6.\n - There are some other recent papers related to online fair allocation, say,\n - Banerjee, Gkatzelis, Gorokh, and Jin. Online Nash Social Welfare Maximization with Predictions, SODA 2022\n - Kawase and Sumita. Online Max-min Fair Allocation, SAGT 2022\n \n If the authors find some relationship, please refer to these papers. 
Is there any relationship between the problem of this paper and the secretary problem? The idea of deciding a threshold by estimation also appears in papers related to the secretary problem. N/A", " The paper studies a standard sequential item allocation problem where items need to be irrevocably allocated to agents upon arrival. The goal is to achieve some notion of fairness (e.g., envy-freeness) and efficiency (e.g., Pareto efficiency). This paper extends previous works by focusing on achieving these properties only through partial information (e.g., ordinal relative ranking of items). The authors focus on two main models:\n\n1)\tUnbounded memory model, where an algorithm learns the relative ranking of the item compared to prior items, and\n2)\tBounded memory model, where each agent only “remembers” a single prior item and the algorithm can only compare against that item.\n\nThe paper studies agents with i.i.d. (known or unknown) distributions as well as non-i.i.d. distributions. In both cases, the authors devise careful sampling algorithms to achieve envy-freeness and some approximation to welfare with high probability.\n Strengths:\n\n-\tFair division is a fundamental research direction within AI and computational social choice; the dynamic aspects of fair division, whether with respect to dynamic item allocation or to elicitation, are all intriguing directions with practical value. The current paper makes progress in three different dimensions: achieving fairness and efficiency, dynamic resource allocation, and elicitation frameworks and information loss (due to ordinal vs. cardinal elicitation formats). \n\n-\tIt is well known that when there are more items than agents (larger than a logarithmic factor), envy-free allocations exist with high probability (Dickerson et al. 2014). While achieving envy-freeness or efficiency separately is trivial under distributional assumptions, simultaneously achieving both is a challenge. Thus, the current paper focuses on sampling/exploration frameworks to improve efficiency bounds (while EF is easy) with unbounded memory (and thus, comparisons). The most surprising result is that the authors show that you can achieve these bounds for efficiency with high probability even if agents are forgetful, i.e., they can only compare a new item with a single prior item.\n\n-\tThe results in the paper, including the incompatibility results (Theorems 4 and 12), and the proofs of the two algorithms involve challenging (and non-trivial) arguments and seem to be sound--although the write-up of the proofs can be improved (see my next comments).\n\nWeaknesses:\n\n-\tThe write-up of the paper can become quite challenging at various points. There is a general assumption that a reader should be more or less familiar with the techniques, the reasons they work, etc., which can make the paper inaccessible to a general NeurIPS audience. On the other hand, even for a researcher in computational social choice and fair division, some sections and explanations lack sufficient intuition, which could make the paper hard to follow.\n\n-\tThe proofs are written mainly formally, which is a plus. However, I was not able to fully follow some of the proof arguments, perhaps due to missing steps. It would be nice to include full proofs and provide intuition on the most challenging aspects of the proofs (some steps are easy to see).\n\n-\tIt is a bit awkward to end a paper with no conclusions or mention of future directions. 
The ending of the paper seems a bit abrupt.\n Phase transitions: The paper only focuses on achieving envy-freeness at the end in contrast to satisfying some notion of fairness at each step. If we consider fairness at each step (or even epoch), what type of EF guarantees can we expect? Is there going to be a phase transition at some step similar to the phase transition shown by Dickerson et al.? If yes, when would the phase transition occur?\n\n\nIn the model that takes ordinal information as rankings, the preliminaries section does not properly explain how to deal with weak preferences or ties. How does the algorithm deal with ties when eliciting ranks of the fresh item? Can you elaborate on this?\n\nUpdating memory: in Algorithm 2, at each epoch an agent’s memory is updated with comparison with (prior) k^6 items. What is the reason behind such updating mechanism? If an agent’s error is not within the tolerance, an arbitrary item is chosen to be remembered. This seems a bit arbitrary (no pun intended). Is the reason mainly due to assumptions on distributions? \nWouldn’t it be more beneficial to remember a certain item (say the one with the average highest ranking)? \n\nAlgorithm 1 uses more memory compared to Algorithm 2 but they both achieve the same guarantee on the non-iid model. However, Algorithm 1 offers shorter epochs. Could Algorithm 1’s bound improve with longer epochs? In other words, is the bound tight both for iid and non-iid models?\n\nThe crux of the paper is assuming envyfreeness and devising exploration techniques to achieve reasonable welfare approximations. I wonder whether it is possible to improve on efficiency (or its approximations) by relaxing the fairness notion to EF1, proportionality, etc.?\nThe first question is whether the impossibility results in Theorem 4 or 12 still hold. \n\nIn general, simultaneously achieving fairness and strategyproofness is a challenging task; and it is often impossible in fair division. I wonder if there is a way to achieve strategyproofness under the settings of this paper (e.g i.i.d bounded or unbounded models)? \nLet me elaborate: in fair division models, strategyproofness is quite challenging to achieve in general (there are several incompatibility results even with relaxations to fairness). However, there may be some hope in the models with i.i.d distributions, or models that limit elicitation process (such as this paper), and/or when randomization can help alleviate the incompatibilities. \n\n\nProof of theorem 4 constructs an instance with two agents where an arbitrary agent receives an item that is not liked so much by it, but it is liked by the other agent. Although constructing such examples/events is easy, the argument is quite neat and shows the limitations of satisfying EF and a weak efficiency notion with high probability even in the i.i.d model. \n No. The authors do not explicitly discuss any limitations to their results. In fact, I encourage the authors to include a concluding remarks section and discuss not only the limitations (or boundaries) of their results but also some potential future directions.", " The paper considers notions of fairness in an online (dynamic) input setting, wherein items arrive one at a time and must be irrevocably allocated to one of the $n$ agents of the system. Specifically, the authors focus on the \"partial information\" setting where the algorithm has access to ordinal, rather than cardinal information. 
This is an interesting problem context that, in spite of its increased complexity, is shown to allow for strong fairness guarantees. The paper presents asymptotically tight results for envy-freeness and approximation social welfare maximization under different partial information assumptions (with high probability arguments). Strengths:\n1. The writing is very clear and easy to follow throughout the paper. Discussion of the results and presentation of their proofs is also nicely presented.\n\n2. The problem context is very interesting and seems to have only been considered in a handful of papers. Additionally, all presented results are asymptotically tight (proofs seem correct) which nicely opens and closes certain questions.\n\nWeaknesses:\n1. The authors appear to be missing some important citations that discuss related work. In particular, \"Dynamic Fair Division with General Valuations\" (Li el al 2018) seems to be a notable omission. Other papers of interest that are not cited: \"A simple online fair division problem\" (Bogomolnaia et al. 2019) and \"Online Nash Social Welfare Maximization with Predictions\" (Banerjee et al 2022). \n\n2. While the problem context is well-defined, it does feel that the motivation for it is somewhat lacking. A more extensive discussion of the related work and how the present work fits into that context + some discussion and concluding remarks would be beneficial in this regard for the uninitiated reader.\n\n3. Though most of the writing and proofs are clear, it might be beneficial to defer more of the highly analytic steps to the appendix in favor of a more intuitive discussion of the result (which could in turn save some space to include the above).\n\n4. The iid input assumption seems a bit weak. Do any results exist in the partial information model for the adversarial or random order input models? Have the authors considered other input models (such as adversarial or random order)? Is the iid problem setting the only one in which such strong (and asymptotically tight) results be achieved?\n\n The authors mention the correlated input model as a future direction (that is not considered in the submission). I think the above mentioned input models should also be considered limitations of the present work as the iid input is necessarily the strongest assumption." ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "16Lb8YY5Cxj", "6R_-t2PRJUR", "RH1mgpZTJa_", "R51oSWk8W_q", "FY3kxSr4YD", "K7a4YFuUjSr", "nips_2022_YiFQqYAk1xH", "nips_2022_YiFQqYAk1xH", "nips_2022_YiFQqYAk1xH", "nips_2022_YiFQqYAk1xH", "nips_2022_YiFQqYAk1xH" ]
nips_2022_78T4K99jvbE
Set-based Meta-Interpolation for Few-Task Meta-Learning
Meta-learning approaches enable machine learning systems to adapt to new tasks given few examples by leveraging knowledge from related tasks. However, a large number of meta-training tasks are still required for generalization to unseen tasks during meta-testing, which introduces a critical bottleneck for real-world problems that come with only a few tasks, for various reasons including the difficulty and cost of constructing tasks. Recently, several task augmentation methods have been proposed to tackle this issue using domain-specific knowledge to design augmentation techniques to densify the meta-training task distribution. However, such reliance on domain-specific knowledge renders these methods inapplicable to other domains. While Manifold Mixup based task augmentation methods are domain-agnostic, we empirically find them ineffective on non-image domains. To tackle these limitations, we propose a novel domain-agnostic task augmentation method, Meta-Interpolation, which utilizes expressive neural set functions to densify the meta-training task distribution using bilevel optimization. We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning various domains such as image classification, molecule property prediction, text classification and speech recognition. Experimentally, we show that Meta-Interpolation consistently outperforms all the relevant baselines. Theoretically, we prove that task interpolation with the set function regularizes the meta-learner to improve generalization. We provide our source code in the supplementary material.
Accept
This paper uses a set transformer to create new tasks at meta-training time when the amount of data for meta-training is scarce. This approach seems to be highly effective and will make a worthwhile contribution to the few-shot learning toolbox. Many of the reviewer concerns were addressed through additional ablations and experiments, e.g., using MAML instead of ProtoNets. Please include these experiments in the final draft, as they add quite a bit to the paper.
train
[ "fubRdmMYex0", "vSnDqbqYcaX", "zZnQ16hWWv", "Y43k845vRK", "8OifB207VoI", "S121PAuwnyw", "f5TNbFLNGzo", "dwPK3cK2SvG", "Nw5Svgwz2WS", "z3DYT9UBfZ6", "tH1o9E7tx0r", "7JFxsh-_Ykz", "B7DdQcPUiCW", "ENmghKRsLbx", "_pF6TwwBIT", "VYe0wbTdz0w", "Q2o2uZ9rve0o", "L3B4TqnXHRyY", "i0XqiG-bvM", "byYCaa__H16", "Xu-XiIShmQ", "4sXDdLvo3FK" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking the time and effort to review and re-evaluate our work. As suggested, we report how the test accuracy changes as we vary the number of **meta-validation tasks** in the table below and include it in the revision. Although the performance of Meta-Interpolation slightly decreases if we reduce the number of meta-validation tasks, it still outperforms the baselines by large margins, regardless of the number of meta-validation tasks.\n\n\n**[Test Accuracy on ESC50]**\n| Model | 5 Tasks | 10 Tasks | 15 Tasks (full) |\n|--------------------|:--------------------------------:|:--------------------------------:|:--------------:|\n| ProtoNet | $69.48\\pm1.03$ | $69.20\\pm1.17$ | $69.05\\pm1.49$ |\n| MLTI | $68.09\\pm2.07$ | $69.40\\pm2.02$ | $70.62\\pm1.96$ |\n| Meta-Interpolation | $\\textbf{77.68}\\pm\\textbf{1.38}$ | $\\textbf{77.13}\\pm\\textbf{1.23}$ | $\\textbf{79.22}\\pm\\textbf{0.96}$ |", " Thank you for your response and suggestion. We will include the experimental results in camera ready version since we have less than 12 hours for the interactive discussion. For the experiment on ESC dataset, we have included it on Appendix F. We thank you again for your taking time and effort for the review.\n\nThanks, Authors", " Thank you for the rebuttal, which addresses most of my concerns.\nHowever, the rebuttal does not provide experiments on the sensitivity of the proposed method to the size of meta-validation. \nAlthough the authors claim that the size of meta-validation is small for few datasets (NCI, Tox21, GLUE-SciTail), the ratio of meta-training:meta-validation is still high (e.g., 1:1 for GLUE-SciTail). And, other datasets (ESC-50, Mini-ImageNet, CIFAR-100-FS) still have a large number of meta-validation sets.\nThe fact that the proposed algorithm requires meta-validation set to tune hyperparameters, while the proposed method aims for meta-training with fewer number of tasks, still hints a gap that exists between the proposed method and the goal.\nThe experiments on how the size of meta-validation set affects the performance (esp. on datasets, such as Mini-ImageNet, that have a large number of meta-validation set) would be a nice touch that would lead to interesting discussions.\nRegardless, the proposed method is novel and interesting. I have raised my rating to 6.", " The reviewer noticed that the author did not include the MAML (implicit) in their paper, and the reviewer w9NJ found the same problem. It is probably because the author did not realize they had been authorized to revise their paper in the open review procedure. The author strongly encourages the authors to include a complete comparison with MAML (implicit) in table1,2,, since the extra empirical study is the main reason for the reviewer to raise this paper's rating. ", " Thanks for the responses and clarification. Most of my concerns have been addressed, and I have raised my rating to 6.", " Thank you for taking time and effort for carefully evaluating our responses. As suggested, we have included the RMNIST result in Table 16 in the revision. We thank you again for your helpful suggestions which significantly improved the quality of our paper.\n\nThanks, Authors", " Dear authors,\n\nThank you, I am asking the reviewers to engage with your responses before the deadline tomorrow. 
I sincerely hope that they will do so.\n\nRegards,\n\nAC", " Thanks for the responses, they have mostly addressed my concerns, and I have raised my rating to 7.\n\nThe random vectors experiment was also interesting, though I noticed the RMNIST result is only the in the text but not the table in the revision; I would recommend also adding it to the table in the final version, as it shows a nice contrast.", " We thank you for taking the time to carefully review our work and evaluate our responses. Could you reflect the updated score in the original review? We thank you again for your helpful suggestions which significantly improved the quality of our paper.\n\nThanks, Authors\n\n", " The response has addressed my concerns. I would like to change the rating score to \"7 Accept\" of this work.", " Dear Reviewers,\n\nCould you please go over our responses and the revision since we can have interactions with you only by this Tuesday (9th 8:00 PM UTC)? We have responded to your comments and faithfully reflected them in the revision, and provided additional experimental results that you have requested. We sincerely thank you for your time and efforts in reviewing our paper, and your insightful and constructive comments.\n\nThanks, Authors", " We really appreciate all the Reviewers for their constructive comments. We have responded to the individual comments from the Reviewers below, and believe that we have successfully responded to most of them. We have included the discussions and results of the suggested experiments in the revision. Here we briefly summarize the updates we have made to the revision: \n\n- We have included additional experiments with first-order MAML on the ESC50 dataset in Appendix F.1, as suggested by Reviewer vsLv and 7Fn3.\n\n- We have included the experiments on the effects of the number of meta-training tasks in Appendix F.2, as suggested by Reviewer Q5Gf.\n\n- We have performed ablation studies on the location of interpolation in Appendix F.3, as suggested by Reviewer 7Fn3 and Q5Gf.\n\n- We have included the experiments with Deepsets in Appendix F.4, as suggested by Reviewer vsLv.\n\n- We have performed experiments for different interpolation strategies including mixing a support set and gaussian noise with zero mean and unit variance in Appendix F.5, as suggested by Reviewer Q5Gf and w9NJ.\n\n- We have corrected the typos in Algorithm1, as pointed out by Reviewer w9NJ.\n\n- We have clarified the ambiguous sentence on line 156-157, as suggested by Reviewer w9NJ.", " **[Q1]**. Will neural set functions other than set transformer be able to improve the few-task few-shot performance? It would be more convincing if the author can provide more empirical analysis on how to choose the set neural networks.\n\n- We choose Set Transformer over DeepSets, since Set Transformer can model higher-order and pairwise interactions among the set elements via the attention mechanism while DeepSets uses a very simple aggregation and message-passing scheme, and thus obtains better empirical performance on various tasks. We report the accuracy of Meta-Interpolation with DeepSets in the table below to demonstrate this point. DeepSets improves the performance of ProtoNet, which shows the general effectiveness of our method regardless of the set encoding scheme, but underperforms Set Transformer. 
We will include this new result in the revision to better justify the choice of the set encoder.\n\n**[Accuracy on ESC50 with different set functions]**\n\n| Set Function | Accuracy |\n|----------------------|:--------------------------------:|\n| ProtoNet | $69.05\pm 1.69$ |\n| DeepSets | $74.26\pm 1.77$ |\n| Set Transformer | $\textbf{79.22}\pm \textbf{0.96}$ |\n\n\n---\n**[Q2]**. The method is claimed to be good in fields beyond vision. How is the performance in solving NLP low-resource problems?\n\n- Other than the vision and medical datasets, we also performed experiments for few-shot sentence classification using the GLUE-SciTail dataset. The results in Table 1 show that our method outperforms all the Mixup-based augmentation methods, even on this NLP dataset. \n\n **[Few-shot classification on GLUE-SciTail]**\n| Method | Accuracy |\n|------------------------|:----------------------------------:|\n| ProtoNet | $72.59 \pm 0.45$ |\n| MetaReg | $72.08 \pm 1.33$ |\n| MetaMix | $72.12 \pm 1.04$ |\n| MLTI | $71.65 \pm 0.70$ |\n| ProtoNet + ST | $72.37 \pm 0.56$ |\n| **Meta-Interpolation** | $\textbf{73.64} \pm \textbf{0.59}$ |\n---
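As an aside on the architectural contrast in [Q1] above, the difference between simple aggregation and attention-based set encoding is easy to see in code. The following is a minimal illustrative sketch (not the implementation used in the paper; all layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

class DeepSetsEncoder(nn.Module):
    """phi/rho architecture: encode each element independently, then pool.
    Interactions between set elements enter only through the pooled mean."""
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(d_hid, d_hid), nn.ReLU())

    def forward(self, x):                        # x: (batch, set_size, d_in)
        return self.rho(self.phi(x).mean(dim=1))  # permutation-invariant pooling

class AttentionSetBlock(nn.Module):
    """One self-attention block of the kind Set Transformer stacks: every
    element attends to every other, so pairwise (and, when stacked,
    higher-order) interactions are modeled directly."""
    def __init__(self, d_in, d_hid, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(d_in, d_hid)
        self.attn = nn.MultiheadAttention(d_hid, n_heads, batch_first=True)

    def forward(self, x):                        # x: (batch, set_size, d_in)
        h = self.proj(x)
        out, _ = self.attn(h, h, h)              # permutation-equivariant
        return out

x = torch.randn(2, 15, 64)                       # e.g., a 5-way 3-shot support set
print(DeepSetsEncoder(64, 128)(x).shape)         # torch.Size([2, 128])
print(AttentionSetBlock(64, 128)(x).shape)       # torch.Size([2, 15, 128])
```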
The full rank case corresponds to the situation where the data lives in the whole space of $\\phi^{-1}(\\mathbb{R}^d) \\subseteq \\mathbb{R}^{\\bar d}$. In practice, we expect that the data lives in some (lower dimensional) manifold (instead of the whole space) where the rank is automatically reduced. \n\n---\n**[Q2]**. Splitting out additional classes into a separate validation set to train the set transformer seems like a lot of data to take away from training the feature model parameters. Is there a way to continue using these for both features and set transformer, maybe changing the held-out classes used in hypergrad periodically each outer loop?\n\n- Thank you for your insightful suggestion. However, we need a separate meta-validation set for early stop as well as bilevel optimization. Moreover, we want to emphasize that the other baselines also require a separate validation set for early-stopping. Otherwise, they severely overfit to meta-training tasks since there are fewer tasks than conventional meta-learning scenario. Lastly, we use the same size of meta-validation set as all the baselines and our method does not require extra data. \n---\n**[Q3]**. How important is the second support set for use in the transformer? What if random vectors are input to the set transformer instead of task 2 vectors? \n\nThank you for insightful suggestion. As suggested, we interpolate support and gaussian random noise with zero mean and unit variance for task augmentation. Although mixing the support with the noise improves the performance of Prototypical Network on ESC50, it significantly degrades the test accuracy of Prototypical Network on RMNIST. In contrast, interpolating two support sets consistently boost the performance of the model, which shows the necessity of interpolating the two support sets.\n\n\n**[Ablation Study for Interpolation of two support sets ]**\n\n| Dataset | ProtoNet | Support1 + Noise | Support1 + Support |\n|----------------|----------------|:----------------:|:------------------:|\n| **RMNIST** | $75.35\\pm1.63$ | $69.60\\pm1.60$ | $\\textbf{83.24}\\pm\\textbf{1.59}$ |\n| **ESC50** | $69.05\\pm1.69$ | $78.27\\pm1.24$ | $\\textbf{79.22}\\pm\\textbf{0.96}$ |\n---\n**[Q4]**. Since there is just one set of set transformer parameters $\\lambda$, does this limit the amount of augmentation or interpolation? If the same data elements and permutations are used twice with same lambda parameters, the set transformer will output the same features both times, unless there is another source of randomness in the transformer that I missed. Does this limit the approach at all? \n\n- As we described the architecture of Set Transformer in Appendix D.3, **dropout** is the source of randomness to make output of Set Transformer stochastic. This is essential for generating diverse task augmentations, which explains how our Meta-Interpolation can generate a large number of diverse tasks given only three tasks, as shown in Figure 3b. \n\n---\n\n**[Q5]**. \"interpolating either support or query sets achieves higher loss than interpolating both\" --- reading this sentence alone (before sec 5), it's ambiguous if what is meant by \"higher loss\" is better or worse. \n\n\n- By showing a higher **training loss**, we want to empirically support our intuition that interpolating only support sets makes the meta-training tasks harder so that meta-learner cannot easily memorize them. 
\n\n- It is hard to tell if a higher **training loss** is better since a meta-learner may underfit and show poor generalization. However, if the **training loss is higher** as a consequence of proper regularization, such as ours, the model achieves **lower validation loss** and better generalization performance as shown in Figure 2. \n\n--- \n\n**[Q6]**. Some small typos and inconsistencies, lines 4,6 $\\mathcal{L}_V$ suggests this is the loss but is already the grad, would be more consistent to accumulate a sum of losses in $\\mathcal{L}_V$ and then set $\\mathbf{v}_1 \\leftarrow \\frac{\\partial\\mathcal{L}_V}{\\partial \\theta}$. also ``single\" is misspelled in the subscript\n\n- Thanks for pointing out the typos and inconsistencies. For the inconsistency, we have corrected it as you suggested. For the typo, the notation $\\mathcal{L}_\\text{single}$ is inconsistent with $\\mathcal{L}_\\text{singleton}$ in Equations (2), so we have changed it to $\\mathcal{L}_\\text{singleton}$ in the revised version.", " \n**[Q2]**. Why are only the support sets interpolated but not the query sets?\n \n\n- As stated in line 152, the intuition behind interpolating support sets while keeping query sets intact is to make the task harder so that the meta-learner cannot easily memorize the tasks.\n\n- Our theoretical analysis support our intuition in that our interpolation strategy induces data-dependent regularization and improves the generalization bound of the meta-learner.\n\n- Lastly, in Table below, we empirically show that interpolation strategy which mixes only support sets outperforms the others.\n\n \n\n\n**[Accuracy for each interpolation strategy on ESC50]**\n\n| **Interpolation Strategy** | **Accuracy** |\n|--------------------------|:--------------------------------:|\n| Query | $78.19\\pm0.84$ |\n| Query+Support | $76.87\\pm0.94$ |\n| Support | $\\textbf{79.22}\\pm\\textbf{0.96}$ |\n\n \n \n---\n\n\n**[Q3]**. If two new tasks contain the same query set but different support set, then, how to avoid the optimized support sets falling into the original one in task 1?\n\n- The interpolated of the support sets cannot collapse to the original support set due to the following reasons. First, sampling exactly the same query sets is highly unlikely since we uniformly sample query sets from the full batch. Even if this happens, the hidden representations of two support sets are different due to the stochasticity introduced by the **dropout layer ** in the Set Transformer. \n---\n**[Q4]**. Does this augmentation make task 1 and task 2 more similar?\n\n- This is a misunderstanding. We do not alter the original tasks with our proposed augmentation at all. Our method generates a new task given two tasks. Moreover, as shown in Figure 3, the prototypes of the different tasks are well separated, and does not show any sign of any prototypes being similar to each other. \n\n---\n**[Q5]**. More training details should be provided. For example, how many new tasks do you generate? Do the generated tasks make the comparison with other methods unfair?\n\n- As shown in Algorithm 1, for each training step, we generate as many new tasks as batch size $B$, which is exactly the same batch size for all the other baselines.\n\n---\n**[Q6]**. An ablation study about the number of training tasks and the performance of the proposed method, and the layer to implement augmentation may be better.\n\n\n- Thank you for the suggestion. 
First, we report how the test accuracy changes as we vary the number of meta-training tasks in the table below. The results show that our Meta-Interpolation consistently outperforms the baselines by large margins, regardless of the number of the tasks. \n\n**[Experiments on ESC50 dataset as varying the number of training tasks]**\n\n| **Model** | **5 Tasks** | **10 Tasks** | **15 Tasks** | **20 Tasks** |\n|------------------------|:--------------------------------:|:---------------------------------:|:---------------------------------:|:--------------------------------:|\n| ProtoNet | $51.41\\pm3.93$ | $60.63\\pm3.61$ | $65.49\\pm2.05$ | $69.05\\pm1.49$ |\n| MLTI | $58.98\\pm3.54$ | $61.60\\pm2.04$ | $66.29\\pm2.41$ | $70.62\\pm1.96$ |\n| **Meta-Interpolation** | $\\textbf{72.74}\\pm\\textbf{0.84}$ | $\\textbf{74.78}\\pm\\textbf{1.43}$ | $\\textbf{77.47}\\pm\\textbf{1.33}$ | $\\textbf{79.22}\\pm\\textbf{0.96}$ |\n\n\n- We further report the test accuracies of our Meta Interpolation at different layers, in the table below.\n\n**[Accuracy on ESC50 with different layers for interpolation]**\n| **Location of Interpolation** | **Accuracy** |\n|-----------------------------|:--------------------------------:|\n| Input Layer | $66.83\\pm1.31$ |\n| Layer 1 | $74.04\\pm2.05$ |\n| Layer 2 | $\\textbf{79.22}\\pm\\textbf{0.96}$ |\n| Layer 3 | $77.62\\pm1.46$ |\n\n## Reference\n[1] Yao, Huaxiu, Linjun Zhang, and Chelsea Finn. \"Meta-Learning with Fewer Tasks through Task Interpolation.\" International Conference on Learning Representations. 2021.", " **[Q1]**. How sensitive is the proposed method to meta-validation dataset size compared to other works? \n\n- Other than image datasets, as shown in Appendix D1, the size of the meta-validation set is extremely small (most of them consist of only two tasks). Thus our method is not sensitive to the size of the meta-validation set nor requires a large validation set, while being able to achieve significant performance improvements over the baselines.\n\n---\n**[Q2]**. The proposed method is applied to only prototypical networks. Why is the proposed method not applied to MAML?\n\n- As we stated in the main paper, we focus solely on metric based meta-learning approaches (line 108-110) due to its efficiency and better empirical performance over the gradient based methods for the few-task meta-learning problem. The main challenge in applying our method to gradient-based meta-learning methods is that when our method is combined with bi-level gradient-based methods, it yields a tri-level optimization problem as it requires differentiating through second order derivative. The tri-lvel optimization is still known to be challenging problem [2, 3]. \n\n- However, it is possible to apply our method to first-order MAML [1] which approximate the Hessian with a zero matrix, and we provide additional experiments with this model on the ESC50 dataset. The experimental results show that Meta-Interpolation outperforms the relevant baselines, which shows the general applicability of our method. However, as mentioned before, the original Meta-Interpolation with metric-based meta-learning largely outperforms this model. 
\n\n**<Experiments on ESC50 dataset with first-order MAML>**\n| **Method** | **Accuracy** |\n|----------------------|:--------------------------------:|\n| MAML w/ first-order | $72.14\\pm 0.73$ |\n| MAML w/ first-order + MLTI | $71.52\\pm 0.55$ |\n| MAML w/ first-order + Meta-Interpolation| $76.68\\pm1.02$ |\n| **ProtoNet + Meta-Interpolation** | $\\textbf{79.22}\\pm\\textbf{0.96}$ |\n\n\n---\n**[Q3]**. At which layer is the set transformer located? Is the location randomly sampled, similar to MLTI? If not, why so? Could you provide the ablation study on this?\n\n- As shown in Table 10 and 11 from Appendix D.4, we fix the location of interpolation. Otherwise we cannot use the same architecture of Set Transformer to interpolate output of different layers since hidden dimension of each layer is different. Moreover, we report the test accuracy for each layer to be interpolated. Interpolating hidden representation of support sets from layer 2, which is used in the main paper, achieves the best performance.\n\n**<Accuracy on ESC50 with different layers for interpolation>**\n| **Location of Interpolation** | **Accuracy** |\n|-----------------------------|:--------------------------------:|\n| Input Layer | $66.83\\pm1.31$ |\n| Layer 1 | $74.04\\pm2.05$ |\n| Layer 2 | $\\textbf{79.22}\\pm\\textbf{0.96}$ |\n| Layer 3 | $77.62\\pm1.46$ |\n\n## References\n[1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \"Model-agnostic meta-learning for fast adaptation of deep networks.\" International conference on machine learning. PMLR, 2017.\n\n[2] Blair, Charles. \"The computational complexity of multi-level linear programs.\" Annals of Operations Research 34 (1992).\n\n[3] Chen, Richard Li-Yang, Amy Cohn, and Ali Pinar. \"An implicit optimization approach for survivable network design.\" 2011 IEEE network science workshop. IEEE, 2011.", " \n**[Q3]**. The author should provide more discussion of why meta-interpolation is limited in metric-based meta-learning. Otherwise, provide experimental results for gradient based methods.\n\n\n- As stated in the main paper (line 108-110), we focus solely on metric based meta-learning due to its efficiency and better empirical performance over the gradient based methods for the few-task meta-learning problem. \n\n- A crucial practical challenge in combining our method with the original MAML [1], is that it yields a tri-level optimization problem which requires differentiating through second order derivatives and the tri-level optimization still remains a **challenging** problem [2.3]. Thus, instead of using the original MAML, we provide additional experimental results on the ESC50 dataset using first-order MAML which approximates the Hessian with a zero matrix. The experimental results show that Meta-Interpolation with first-order MAML outperforms MLTI, as shown below, which again confirms the general effectiveness of our set-based augmentation scheme. However, it largely underperforms our original Meta-Interpolation framework with metric-based meta-learning. \n\n**<Experiments on ESC50 dataset with first-order MAML>**\n| **Method** | **Accuracy** |\n|----------------------|:--------------------------------:|\n| MAML w/ first-order | $72.14\\pm 0.73$ |\n| MAML w/ first-order + MLTI | $71.52\\pm 0.55$ |\n| MAML w/ first-order + Meta-Interpolation| $76.68\\pm1.02$ |\n| **ProtoNet + Meta-Interpolation** | $\\textbf{79.22}\\pm\\textbf{0.96}$ |\n\n\n---\n**[Q4]**. 
The method is a little bit incremental.\n\n- We do not believe that our method is incremental, since the use of an expressive set function allows to consider higher-order interactions among the set elements to create diverse augmentations that are off the simplex created by the highly **non-linear transformation** of the original samples, unlike Mixup-based strategies. Please See Figure 3 for the t-SNE visualization of the generated augmentations. We believe that this scheme alone is highly novel and differs from any of the existing methods that simply interpolates between two instances. \n\n- Secondly, our interpolation strategy which generates augmentations only from the support sets, without using the query sets, is highly novel in the context of recent task augmentation strategies, as it makes tasks harder so that the meta-learner cannot easily memorize them. \n\n- Lastly, we provide non-trivial theoretical analysis showing that Meta-Interpolation induces data-dependent regularization and reduces the Rademacher complexity for better generalization. \n\n## References\n[1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. \"Model-agnostic meta-learning for fast adaptation of deep networks.\" International conference on machine learning. PMLR, 2017.\n\n[2] Blair, Charles. \"The computational complexity of multi-level linear programs.\" Annals of Operations Research 34 (1992).\n\n[3] Chen, Richard Li-Yang, Amy Cohn, and Ali Pinar. \"An implicit optimization approach for survivable network design.\" 2011 IEEE network science workshop. IEEE, 2011.\n\n", " The paper explores few-task meta-learning that attempts to ease the difficulty and cost of massive task construction. It challenges the existing few-task meta-learning approaches that heavily rely on the Manifold-Mixup-based task augmentation, which is hard to generalize to cases beyond computer vision. In this case, the author introduces a set-value neural net (Set transformer) as the tool to augment the tasks. The author provides a theorectical ground that Set transformer is able to control the Rademacher complexity for better generalization. 5 dataset and 3 image-based FS benchmarks have been used to evaluate the method. 1. The paper is well-organized and well-written.\n2. The method seems simple but quite general and effective across many scenarios.\n3. Theoretical results significantly motivate the method.\n4. Massive empirical studies and the results are convincing. The paper may deserve a higher rating score if the questions below have been fully addressed.\n1. Only Set-transformer has been considered as the implementation of meta-interpolation. Will other neural set functions be able to improve the few-task FS performance? It would be more convincing if the author can provide more empirical analysis on how to choose the set neural networks, e.g., change the architecture of the Set-transformer or use other types of set neural networks (for instance, Deep Set).\n2. The method is claimed to be good at the field beyond vision. How is the performance in solving NLP low-resource problems? \n3. Please refer to the first statement in limitation. 1. Distinct from the typical Mixup-based task augmentation baselines, the method is only implemented (only available?) in metric-based meta-learning, more precisely. ProtoNet. 
The author should provide more discussion of why meta-interpolation is limited to metric-based meta-learning; otherwise, provide the empirical performance of other meta-learning frameworks (e.g., MAML and others) + meta-interpolation and further compare them with the typical Mixup-based task augmentation baselines.\n2. The method is a little bit incremental.", " The paper introduces a new task augmentation method for meta-learning with few training tasks available. The proposed method employs a set-to-set function (instantiated as a Set Transformer) that performs interpolation of features from the support set to create a new task. The parameters of the set transformer are learned via bilevel optimization, using a meta-validation dataset for tuning the parameters (considered as hyperparameters) to achieve generalization. The paper performs experiments on eight datasets to present the effectiveness of the proposed method, Meta-Interpolation. ### Strengths\n- The paper presents extensive experimental results across diverse datasets.\n- The idea of employing a meta-learnable set-to-set function for feature interpolation is novel.\n\n### Weaknesses\n- The number of parameters of the set transformer may be large enough that the proposed method may be sensitive to the size of the meta-validation dataset. In fact, I noticed that the ratio of meta-training:meta-validation is quite high (e.g., 12:16 in the case of Mini-ImageNet-S).\n- The proposed method is applied to only prototypical networks. - What is the sensitivity of the proposed method to the meta-validation dataset size, compared to other works, such as MLTI [48] that also performs feature interpolation?\n- At which layer is the set transformer located? Is the location randomly sampled, similar to MLTI? If not, why so? Could you provide an ablation study on this?\n- Why is the proposed method not applied to MAML? The similar work MLTI [48], which also performs feature interpolation, has shown effectiveness on MAML as well. I believe providing experimental results on MAML will strengthen the paper. While the paper provides limitations, I believe there are more limitations to discuss:\n- the possible need for a large meta-validation dataset (such a high demand may break the assumption of few tasks). \n- Validated only for prototypical networks", " To tackle meta-overfitting in few-task meta-learning, the authors propose Meta-Interpolation, based on a set transformer, to generate domain-agnostic tasks. They assign a new class k by re-arranging the K classes in two tasks, and the new class k is the k-th class in the two new permutations. Then, the corresponding support set is constructed with the interpolating function using bilevel optimization. Since the interpolating function is optimized to minimize the loss on the validation tasks, the proposed method can achieve task augmentation for each specific domain. The effectiveness of Meta-Interpolation is proved both experimentally and theoretically. Pros:\n1. The task of task generation is interesting.\n2. The effectiveness of the proposed method is proved both experimentally and theoretically.\n\nCons:\n1. The reasonability of using the interpolating function to generate new support sets, while keeping the query set unchanged, is unclear.\n2. The way to generate new classes is somewhat hasty. 1. Please analyse the principle of generating new classes by randomly aligning the category orders in two tasks. Since its prototype contains information of two classes, which may be quite different.\n2. 
Why change the support set and keep the query set unchanged? If two new tasks contain the same query set but different support sets, then how can one avoid the optimized support sets collapsing into the original one in task 1?\n3. Does this augmentation make task 1 and task 2 more similar?\n4. More training details should be provided. For example, how many new tasks do you generate? Do the generated tasks make the comparison with other methods unfair (e.g., is the model updated more times during one training epoch, since more tasks are optimized in one epoch)? And at which layer do you perform augmentation?\n5. An ablation study on the number of training tasks versus the performance of the proposed method, and on the layer at which to implement augmentation, would be helpful. An ablation study on the number of training tasks and the performance of the proposed method may better show the practical applicability.", " This paper presents a method for combining tasks in meta-learning when original base-class domain data is scarce. At meta-training time, a set transformer is used to combine support set features from two random tasks, while comparing to queries from just the first. The set transformer can therefore use the second random support set to apply random augmentations to the first, or not, as beneficial for the application. Since the meta-training data is limited and easy to overfit, the random augmentations might never be used when trained jointly --- the transformer parameters are instead learned on a validation set with a separate set of classes, using bilevel optimization with HyperGrad. The method is evaluated on datasets in multiple domains, including chemical, text, speech and images, finding consistent improvements.\n This is an interesting approach that furthers work in task augmentation, extending beyond interpolation to learned mixing. It's a little bit complex, using a set transformer, a separate class split and bilevel optimization, and it's not entirely clear all components are necessary in their current form --- however, there are ablations showing each contributes, and particularly that a separate optimization is required. I have a few questions below on how these might be able to be simplified, but also think the system is reasonable as-is to demonstrate the effectiveness of the approach.\n\nThere is also a theoretical section, but it was very dense and a little outside my expertise; this claims to show a reduction in Rademacher complexity in simplified settings, but I didn't follow it in detail. The end result of this section also seems to rely on reducing the upper bound by decreasing the rank of a covariance-like matrix of task vector differences, though with no discussion on how or why its rank might be reduced.\n\nOverall, the method is explained well and empirical results demonstrate its effectiveness, with particularly strong improvement in ESC-50. The degree to which all components are needed in their current form, or whether they can be simplified, is unclear. I would have liked to see additional ablations on the form of the set transformer, random mixing vector selection, and validation set sizes. But the paper does present ablations showing all components are indeed needed together at a high level to justify the approach.\n Additional questions:\n\nSince original base data is scarce to begin with, splitting out additional classes into a separate validation set to train the set transformer seems like a lot of data to take away from training the feature model parameters. 
Is there a way to continue using these for both features and set transformer, maybe changing the held-out classes used in hypergrad periodically each outer loop?\n\nHow important is the second support set for use in the transformer? What if random vectors are input to the set transformer instead of task 2 vectors?\n\nSince there is just one set of set transformer params $\\lambda$, does this limit the amount of augmentation or interpolation? If the same data elements and permutations are used twice with same lambda params, the set transformer will output the same features both times, unless there is another source of randomness in the transformer that I missed. Does this limit the approach at all? Fig 3b shows a likely considerable amount of interpolation using several combinations of points, but could even more be generated from each pair using additional random sampling either of the transformer params or auxiliary input?\n\n\nl.155 \"interpolating either support or query sets achieves higher loss than interpolating both\" --- reading this sentence alone (before sec 5), it's ambiguous if what is meant by \"higher loss\" is better or worse. Reading this section, this question was very much on my mind, so seeing that it would be addressed by this sentence was great, but it wasn't clear which way it would be resolved.\n\nalg.2: some small typos and inconsistencies, lines 4,6 L_v suggests this is the loss but is already the grad, would be more consistent to accumulate a sum of losses in L_v and then set v_1 <- d/d\\theta L_v. also \"single\" is misspelled in the subscript\n\n -" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "zZnQ16hWWv", "Y43k845vRK", "Q2o2uZ9rve0o", "Nw5Svgwz2WS", "ENmghKRsLbx", "dwPK3cK2SvG", "tH1o9E7tx0r", "_pF6TwwBIT", "z3DYT9UBfZ6", "L3B4TqnXHRyY", "nips_2022_78T4K99jvbE", "nips_2022_78T4K99jvbE", "i0XqiG-bvM", "Xu-XiIShmQ", "4sXDdLvo3FK", "Xu-XiIShmQ", "byYCaa__H16", "i0XqiG-bvM", "nips_2022_78T4K99jvbE", "nips_2022_78T4K99jvbE", "nips_2022_78T4K99jvbE", "nips_2022_78T4K99jvbE" ]
nips_2022_WyiM4lDJOcK
How To Design Stable Machine Learned Solvers For Scalar Hyperbolic PDEs
Machine learned partial differential equation (PDE) solvers trade the robustness of classical numerical methods for potential gains in accuracy and/or speed. A key challenge for machine learned PDE solvers is to maintain physical constraints that will improve robustness while still retaining the flexibility that allows these methods to be accurate. In this paper, we show how to design solvers for scalar hyperbolic PDEs that are stable by construction. We call our technique 'global stabilization.' Unlike classical numerical methods, which guarantee stability by putting local constraints on the solver, global stabilization adjusts the time-derivative of the discrete solution to ensure that global invariants and stability conditions are satisfied. Although global stabilization can be used to ensure the stability of any scalar hyperbolic PDE solver that uses method of lines, it is designed for machine learned solvers. Global stabilization's unique design choices allow it to guarantee stability without degrading the accuracy of an already-accurate machine learned solver.
Reject
Thank you for your submission to NeurIPS. All four reviewers are enthusiastic about this work, though three of the four had major concerns with the actual submission. In response, the authors carried out a major revision within a very short turn-around, completely rewriting most of the paper. All four reviewers are highly appreciative of the authors' responsiveness in replying to the reviews and in submitting new revisions that take reviewer comments into account. Unfortunately, some concerns remain even after the author-reviewer discussion period closed. Broadly, the three reviewers all felt that there had been too many edits to the paper to be able to verify/falsify the newest content in full detail. The new revision is sufficiently different as to necessitate a fresh set of reviews, in a way that the conference format of NeurIPS is not able to facilitate. Overall, the core idea is interesting and potentially very useful. The paper studies a critical problem typically overlooked by the current trend of combining ML with PDE solvers, and it is transparent and upfront about its limitations. The paper asks a good question that can inspire important follow-up work. I highly encourage the authors to take the reviewer comments into account, to expand the paper with a complete, detailed experimental section and comparisons against other ML-based approaches, and to resubmit to an upcoming ML conference.
train
[ "gsusE7k06Dy", "CcsPz7HVZsV", "Zclbg7AOamC", "M-N2QvWURzA", "Y_zKK_pCb7P", "DY2CZPtxcW_k", "d08D2iszg9c", "cSMUvOWijNL", "dxhSKRHoS8", "YaLGT7y0U7d", "Bs5jrWE64V", "j7tad74ze1P", "PuKMB9IkWr", "6x3bzz_Dwur", "9oh7uFZX1_7" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Reviewers,\n\nWe have submitted a second revised paper. \n\nOur paper now contains an experimental verification of the claim that global stabilization \"can be used to stabilize machine learned PDE solvers without degrading the accuracy of an already-accurate solver.\" See lines 247-251 in section 6.1, figure 3, and appendix D.\n\nOther revisions include:\n\n--Clarifying the theoretical argument in section 6.1. See lines 234-246 and Appendix C.\n\n--Defining 'spurious oscillations' and 'numerical diffusion'. See lines 106-108.\n\n--Discuss the violation of finite propagation speed (i.e., causality). See lines 340-351.\n\n--Generalize equation (12). Now vorticity correlation takes less of a hit in figure (5). See appendix E.\n\n--Add an appendix generalizing to non-periodic boundary conditions and non-uniform grid spacing. See appendix B.\n\n--Non-supported assertions have been removed.\n\n\nThanks, The Authors", " Thank you for the response and the careful revision. I have read all of them as well as the other reviews, and I plan to keep my score unchanged.", " Thanks for your suggestions on how to continue to improve the paper.\n\n--Lines 79/80: We will include a similar high-level comment in the August 9th revision.\n\n--Line 108: We see now that we mischaracterized your statement. Sorry about that. We agree that a large stencil should be classified as a \"local\" phenomenon, and with your point that a more thorough discussion of this fact would improve the presentation. We will include this in the August 9th revision.\n\n--\"It would be great if you could point me to the section/equation in the paper where such a guarantee can be found\": The relevant section is lines 355-362 in section 9. \n\nOur intention was to use section 6.1 to explain that while other methods for generating stability add a large amount of numerical diffusion, global stabilization adds the minimum amount of numerical diffusion necessary to stabilize the solution. We also wanted to point out that this is only possible if we perform a global calculation of the L2-norm, which explains why we chose to allow the solution to violate causality.\n\nOur intention was then to use lines 355-362 to complete the argument:\n(A) Global stabilization is only added if the algorithm violates the non-increasing L2-norm property.\n(B) A highly accurate ML solver is unlikely to frequently violate this property within the training distribution.\n(C) Even if it does violate the property, the additional numerical diffusion is the minimum necessary to correct the violation. \nThus, \"we can expect the effects of global stabilization to be infrequent, small, and applied only when necessary.\" (A) is in algorithm 1. (B) is justified in appendix B. (C) is discussed in section 6.1.\n\nNow, if you are a very astute observer (and it seems like you are), you will point out that you it is possible to think of examples of PDEs and initial conditions where (B) is not true. Thus, we should do a better job of spelling out exactly when (B) is going to be true and when it might be violated. 
\n\nThe important things for the reader to understand are that:\n(1) We assume that the training data is given by the coarse-grained exact solution.\n(2) For non-linear f(u) the solution develops high-$k$ structures or modes which coarse graining integrates out, meaning that the training data will have non-increasing discrete L2-norm with very high probability.\n(3) For linear $f(u)$, i.e., the advection equation, the solution doesn't develop high-$k$ structures, which means that for some initial conditions the discrete L2-norm of the training data will oscillate around a fixed value. \nIn other words, only for linear f(u) and only for some initial conditions is (B) likely to be violated. \n\nWe will move the theoretical argument to section 6.1, clarify that (B) is true for non-linear $f(u)$ but might not always be satisfied for linear $f(u)$, as well as point out that for linear equations it may be better to set $\\frac{d\\ell_2^{\\textnormal{new}}}{dt}=0$ rather than $\\frac{d\\ell_2^{\\textnormal{new}}}{dt}\\le 0$.", " A) Although our method doesn't maintain a discrete analogue of the entropy inequality in each grid cell, neither does the MUSCL scheme (reference [36], equation 3.16). Thus, we don't think this is a weakness of our scheme.\n\nB) It is not a factual error to say that the solution is inaccurate, but it is a factual error to say that \"accuracy deteriorates\" when in fact accuracy improves. Eliminating spurious oscillations from the unstable and inaccurate centered flux would require designing a TVD scheme. An accurate machine learned flux predictor would probably not introduce spurious oscillations.\n\nRegarding (A) and (B), we believe our paper would be substantially weaker if our method satisfied a discrete entropy inequality or was TVD. We allude to this in lines 348-354, writing that \"designers of numerical methods must determine which properties of the continuous system should be preserved by the discrete system and which properties either cannot be preserved or degrade the accuracy of the discrete system.\"\n\nC) We misunderstood your comment. Sorry about that. Glad you agree the equation is in fact exact.\n\nD) We can include an appendix which derives the expressions for non-periodic boundary conditions and non-uniform grid spacing. We will remove the sentence \"Our results can easily be generalized to the case where the grid spacing is non-uniform.\"\n\nE) It seems like you want us to remove the word \"novel\" from line 123, which we are happy to do.", " After reviewing the response, I do not believe the authors have addressed the substance of my more significant concerns.\n\n\tFactual errors:\n\tAR: A) \"Conserves properties (only?) on a global scale\": In continuous systems, conservation is a global invariance that applies to closed systems. There is no such thing as 'local' conservation of, e.g., energy. Instead, you have continuity equations which describe the advection of, e.g., energy density. In discrete systems, conservation is also a global property, there is no such thing as 'local' conservation. For example, the discrete conservation of mass is derived by summing over all N grid cells (see lines 90-91 of revised paper). Instead, you have discrete continuity equations. Perhaps what you mean to say is that a discrete analogue of the entropy inequality is not satisfied for each 'local' grid cell but is satisfied globally. We do not think this is a weakness. 
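To make the mechanism under discussion concrete, the following is a schematic sketch of a globally stabilized method-of-lines right-hand side, as we read it from the exchange above (an illustration assuming a periodic 1D grid; the paper's actual algorithm 1 may differ in detail):

```python
import numpy as np

def stabilized_rhs(u, rhs_ml, dx):
    """Wrap any estimate rhs_ml(u) of du/dt with a global L2 check.
    If the proposed update would increase the discrete L2 norm, add the
    minimal constant-coefficient diffusion that enforces d(l2)/dt <= 0."""
    f = rhs_ml(u)
    dl2_dt = np.sum(u * f) * dx                      # rate of 0.5*sum(u^2)*dx
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    d_diff = np.sum(u * lap) * dx                    # <= 0 on a periodic grid
    if dl2_dt > 0.0 and d_diff < 0.0:
        nu = -dl2_dt / d_diff                        # smallest sufficient coefficient
        f = f + nu * lap                             # now d(l2)/dt = 0
    return f

# e.g., an intentionally unstable anti-diffusive right-hand side gets corrected:
u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 64, endpoint=False))
bad_rhs = lambda v: -(np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / (1.0 / 64) ** 2
f = stabilized_rhs(u, bad_rhs, dx=1.0 / 64)
print(np.sum(u * f) <= 1e-9)                         # True: norm non-increasing
```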
We argue in section 7 that violating this condition is necessary to obtain an accurate solution.\n\t\n\nReviewer Response (RR): Yes, the issue is enforcement of continuity and entropy constraints on every grid.\n\t\n\n\tAR: B) \"accuracy deteriorates in the presence of sharp gradients in the solution (for example, shocks in the flow)\": Perhaps you are misinterpreting figure 1. Figure 1 demonstrates that if you apply global stabilization to a numerical method which is unstable and highly inaccurate, you will get a result which is stable and less inaccurate, but still inaccurate. As we make clear in section 4 of the revised paper, global stabilization adds diffusion to the solution. This will not degrade accuracy near shocks, in fact shock-capturing methods all add numerical diffusion near shocks.\n\t\nRR: Unless I misunderstand something, these high-frequency oscillations near the shocks, as seen in the figure 1, are unwanted and \"unphysical\". Hence, I described it as inaccurate because there are numerical schemes that do not have this problem. You could discuss whether an improvement can be made to eliminate these spurious oscillations, however, the current paper does not yet have that. \n\t\n\n\tAR: C)\t\"Equations 3b cannot be exact since flux at cell boundaries can still vary in a direction parallel to the cell boundary\": Flux is perpendicular to a surface. The flux parallel to a surface is by definition zero. Equation 3b is exact; it is simply a discrete statement of the continuity equation for a rectangular cell.\n\t\nPerhaps the authors have not understood the issue. I meant to say that 'flux that is perpendicular to the face' can vary along the edge. For example, if you consider an edge [0,1] then the flux in the interval [0,0.5] can be pointing inwards, whereas the flux in the interval [0.5,1] can be pointing outwards. Regardless, the paper now states averaged flux which is fine i think.\n\n\tAR: D) \"limited application\": While you correctly state that the original paper \"does not explain how\" to extend the method to (a) higher dimensions, (b) non-periodic boundary conditions, and (c) systems of hyperbolic PDEs, you incorrectly conflate \"not explain[ing] how\" with having a \"limited application\". In other words, just because we do not explain how to extend the method to (a) (b) and (c) does not mean that our method cannot be extended to (a) (b) and (c). Students familiar with FV methods will be able to derive (a) by deriving a 3D analogue of the FV update equations 3a and 3b then computing the rate of change of the L2 norm. ............ Nevertheless, as we argue in section 8 of the revised paper, \"it is standard practice in the numerical analysis community to first use the scalar conservation law eq. (1) to introduce a new method before later extending the method to systems of PDEs\". See, for example, Cockburn and Shu's papers introducing the Discontinuous Galerkin method.\n\t\nRR: Perhaps 'Limited application' is not right choice of words; 'the application shown in the paper is limited to a specific pde' is more accurate. With that being said, unless some amount of explanation or proof is provided 'blanket statements' should be avoided. Assertions, made in the paper, should either be self-explanatory or supported by a proof/reference.\n\t\n\tAR: Finally, you conclude that there is a 'lack of novelty' due to this being 'a standard predictor-corrector approach'. 
Even if you conclude that our technique is not novel, as stated in the reviewer guidelines a novel combination of well-known techniques can be valuable, especially when doing so makes progress on an important problem.\n\t\nRR: 'Valuable' and 'novel' are not the same. The 'lack of novelty' remark was made because the paper claims the techniques presented as 'novel' which is not the case. It would be better to focus on the contributions made by the paper and the value they add.\n\t", " Thank you for your clarifications. I appreciate the improved presentation in the most recent upload, which resolves a sizeable subset of my specific questions.\n* I like the new Section 6.1, but I do not think it elaborates on how much accuracy is traded for stability. In the rebuttal, you mention a theoretical argument for stabilising machine learned solvers without degrading accuracy; it would be great if you could point me to the section/equation in the paper where such a guarantee can be found. It sounds like a powerful statement, but I cannot seem to find it. What am I missing?\n* Lines 79/80: I think it would be a good idea to explain numerical diffusion and spurious oscillations in the paper, because not every reader will be familiar with those terms. (In my view, the high-level explanation you used in the rebuttal together with a reference would be plenty.) But ultimately, it is of course the authors' decision what to discuss in the paper.\n* Line 108: Thanks for your reply. In my review, I did not call this a limitation but a _potential_ limitation which might warrant more discussion. I kind of agree with you and reviewer sQzi that there are examples which also seem to break causality (even though I disagree with the example sQzi gives -- I would still consider a large stencil to be a local phenomenon). Nevertheless, it is not extremely common, especially in the sense that with global stabilisation, a formerly \"causality-preserving\" method would not be \"causality-preserving\" any more. Therefore, I think a more thorough discussion of this fact would improve the presentation of the paper.", " Dear reviewer sQzi,\n\nThanks for a truly excellent review with very insightful questions. Your comment about “the most general possible solution” is particularly insightful and helpful.\n\nWe have submitted a revised paper. The revised paper has a more clear and rigorous presentation, with significant changes to sections 2, 3, and 6. We believe that the revised paper addresses all of your concerns, except your concern that \"the paper has no explicit experimentation section.\" We will update the paper by August 9th.\n\n\nWe agree with almost all of your comments. The only comment we think is incorrect is the comment “looking at their results in figure 3, it is certainly the case that standard measures of performance such as vorticity correlation take a large hit. This is the greatest weakness of the paper”. Sorry if this was confusing. Figure 3 is not related to the main discussion, we have moved it to an appendix in the revised paper. The proper conclusion to draw from figure 3 is instead that “it is often not even desirable for a numerical method to inherit the properties of the continuous equation” (see section 9 of revised paper).\n\nHowever, your comment (implicitly) raises an important question raised also by reviewer 9qVo: how will global stabilization influence the accuracy of a machine learned numerical method? The revised paper is more clear on this question (see sections 6.1 and 9). 
We have a theoretical justification for our argument in sections 6.1 and 9, but we agree it would be easier \"to verify its efficacy\" if we also had an experimental demonstration of our claim. We will work on updating the paper with experiments by August 9th.\n\n\nQuestions:\n\nQ1: Excellent suggestion. We have rewritten sections 2 and 3 to more rigorously and clearly define the concept of ‘stability’.\n\nQ2: You are correct, this is not strictly true. However, global stabilization cannot be applied to in the discrete time case. The problem is this introduces a quadratic term proportional to $(dt)^2$ (see $\\langle N | N\\rangle$ term in line 291 in the revised paper) which I don't know how to cancel via transformation of $N$.\n\nQ3: Nothing can guarantee perfect accuracy, except by using a convergent method and taking the limit $dx \\rightarrow 0$. Fundamentally, the issue is that you have an infinite degree-of-freedom system which you are solving using a finite number of degrees of freedom. Some information will be lost, so some accuracy will be lost. Classical numerical methods typically get a sufficiently accurate solution by using (a) a convergent numerical method, (b) some kind of hand-crafted interpolation function, and (c) a sufficiently small grid spacing $dx$. Hybrid machine learned numerical methods try to outperform standard numerical methods by replacing (either explicitly or implicitly) the hand-crafted interpolation function with some sort of statistical interpolation function and using large $dx$. Thus, for a machine learned numerical method, accuracy comes down to performing accurate statistical inference (i.e., regression) under uncertainty. We attempt to clarify by adding the phrase \"which consistently make accurate predictions about the time evolution of the solution\" to line 331.\n\nQ4: The reason you would choose this expression is because it has a simple physical interpretation, but as you point out it is not the most general expression. See sections 4 and 6 of the revised paper.\n\nQ5: The red solution has blown up and the magnitude is too large to be seen in figure 1 at $t=0.5$. We modify the caption, and explicitly state that the red solution blows up while the discrete $\\ell_2$-norm of the blue solution is conserved.\n\nQ6: This is a tricky question. The answer in the text is \"because machine learned PDE solvers solve discrete equations that are designed to approximate $\\chi^{\\textnormal{exact}}_{i,j}$\". To expand on this a bit further: we have a loss function to minimize as well as hard constraints which we force our algorithm to satisfy. If our hard constraints are incompatible with the result that minimizes our loss function, then we are necessarily going to make errors at each timestep. For time-dependent PDEs, even small errors can accumulate into big errors. The exception would be if somehow the errors cancelled out, but for all the properties we've looked at the errors don't cancel.\n\nQ7: Interestingly, reviewer 9qVo has almost the opposite concern, they see this limitation as “potentially problematic”. We are inclined to agree with you: we don’t think this is a very concerning limitation. In the revised paper, we have removed this limitation from section 8. We discuss this further in section 6.1. We can recognize that this is not a limitation when we see that global stabilization simply is adding a constant-coefficient diffusion term to the solution (see section 4), where the diffusion coefficient depends on the solution over the entire domain. 
Adding numerical diffusion when it is needed is not a limitation.", " Dear reviewer 6knQ,\n\nThanks for the very helpful review. We agree with all of your comments and your suggestions for improving the paper.\n\nWe have submitted a revised paper. I suspect you will find that the revised paper is more rigorous, easier to follow, and friendlier to readers who do not have advanced background knowledge of PDEs and ML-based solvers. I hope you will agree with us that the presentation of the revised paper is now much better.\n\nIf you feel inclined, you might read the comments of the other reviewers, who have all raised intelligent and important concerns. I hope you will agree with us that the revised paper satisfactorily addresses these concerns. Nevertheless, we agree with reviewer sQzi and reviewer 9qVo that our paper would benefit from an experimental verification of the claim that global stabilization \"can be used to stabilize machine learned PDE solvers without degrading the accuracy of an already-accurate solver.\" We will add an experimental verification by August 9th.\n\n\nQuestions:\n\nGood questions and suggestions. We have updated the revised paper accordingly.\n\n\"Is there any theoretical analysis on the numerical diffusion introduced by Eqn. 5?\" Thanks for asking this question. Setting $\\beta=1$ is equivalent to adding a spatially constant diffusion coefficient to the solution (see section 4 of revised paper).", " Dear reviewer 9qVo,\n\nThanks for a truly excellent and extremely thorough review. We agree with all your comments. \n\nWe have submitted a revised paper. The revised paper has a more clear and rigorous presentation, with especially large changes to sections 2, 3, and 6. We believe that the revised paper addresses all of your concerns, although we do not define 'spurious oscillations' or 'numerical diffusion'. We will also update the paper by August 9th to add an experimental section.\n\n\nLimitations:\n\n\"the biggest limitation of the method seems to be understanding how much accuracy... is traded for improved stability.\" We agree that this point is crucial but poorly explained. The revised paper is more clear on this topic (see sections 6.1 and 9). \n\nAlthough we have a theoretical argument for why global stabilization \"can be used to stabilize machine learned solvers without degrading the accuracy of an already-accurate solver\", we think that the paper would be more convincing if it had an explicit experimental demonstration of this claim. We have not had time to perform these experiments yet, but we will work on updating the paper with these experiments by August 9th.\n\n\nQuestions:\n\nMost of your questions have been clarified in the revised paper. Some questions require further explanation:\n\nLines 79/80: We have not defined these terms in the revised paper, but have cited a reference about numerical diffusion. We can define these terms in the August 9th resubmission if you still would like us to. The issue is that \"spurious oscillations\" and \"numerical diffusion\" are both elementary and fundamental concepts in numerical analysis, but neither concept has a rigorous definition or a simple-to-digest explanation. Spurious oscillations relate to a violation of the monotonicity property of the solution, which would require recursive explanation. Likewise, numerical diffusion is a topic that shows up again and again in numerical analysis but does not admit a simple explanation. 
A simple but non-rigorous explanation is that numerical diffusion is (explicit or implicit) diffusion added to a high-order method. Usually, numerical diffusion is used to ensure that a stability property is preserved. \n\nLine 108: Interestingly, reviewer sQzi has almost the opposite concern, they ask \"is it such a limitation\"? We are inclined to agree with reviewer sQzi: we don’t think this is a very concerning limitation. In the revised paper, we have removed this limitation from section 8. We discuss this further in section 6.1. We can recognize that this is not a limitation when we see that global stabilization simply is adding a constant-coefficient diffusion term to the solution (see section 4), where the diffusion coefficient depends on the solution over the entire domain. Adding numerical diffusion when it is needed is not a limitation.\n\nFigure 2(b): We will fix this in the August 9th resubmission by changing the dark blue solution to green.", " Dear reviewer ee6a,\n\nThanks for the helpful review. We have submitted a revised paper. The revised paper has a more clear and rigorous presentation. We believe that the revised paper addresses all of your concerns, except your concern that \"there [is not] a guideline to obtain such criteria for [systems of hyperbolic] PDEs\".\n\nYou ask good questions and we agree with many of your comments. There are, however, a couple factual errors. As suggested by the Program Chairs, we will focus on the factual errors then answer your questions.\n\n\nFactual errors:\n\n1) \"Conserves properties (only?) on a global scale\": In continuous systems, conservation is a global invariance that applies to closed systems. There is no such thing as 'local' conservation of, e.g., energy. Instead, you have continuity equations which describe the advection of, e.g., energy density. In discrete systems, conservation is also a global property, there is no such thing as 'local' conservation. For example, the discrete conservation of mass is derived by summing over all N grid cells (see lines 90-91 of revised paper). Instead, you have discrete continuity equations. Perhaps what you mean to say is that a discrete analogue of the entropy inequality is not satisfied for each 'local' grid cell but is satisfied globally. We do not think this is a weakness. We argue in section 7 that violating this condition is necessary to obtain an accurate solution.\n\n2) \"accuracy deteriorates in the presence of sharp gradients in the solution (for example, shocks in the flow)\": Perhaps you are misinterpreting figure 1. Figure 1 demonstrates that if you apply global stabilization to a numerical method which is unstable and highly inaccurate, you will get a result which is stable and less inaccurate, but still inaccurate. As we make clear in section 4 of the revised paper, global stabilization adds diffusion to the solution. This will not degrade accuracy near shocks, in fact shock-capturing methods all add numerical diffusion near shocks.\n\n3) \"Equations 3b cannot be exact since flux at cell boundaries can still vary in a direction parallel to the cell boundary\": Flux is perpendicular to a surface. The flux parallel to a surface is by definition zero. 
Equation 3b is exact; it is simply a discrete statement of the continuity equation for a rectangular cell.\n\n4) \"limited application\": While you correctly state that the original paper \"does not explain how\" to extend the method to (a) higher dimensions, (b) non-periodic boundary conditions, and (c) systems of hyperbolic PDEs, you incorrectly conflate \"not explain[ing] how\" with having a \"limited application\". In other words, just because we do not explain how to extend the method to (a), (b), and (c) does not mean that our method cannot be extended to (a), (b), and (c). Students familiar with FV methods will be able to derive (a) by deriving a 3D analogue of the FV update equations 3a and 3b, then computing the rate of change of the L2 norm. As we explain in the revised paper, the method can be extended to (b) by keeping track of the (known) fluxes through the domain boundary. The energy method can also be extended to some systems of hyperbolic PDEs, though we do not include \"a guideline for obtaining such criteria\". Nevertheless, as we argue in section 8 of the revised paper, \"it is standard practice in the numerical analysis community to first use the scalar conservation law eq. (1) to introduce a new method before later extending the method to systems of PDEs\". See, for example, Cockburn and Shu's papers introducing the Discontinuous Galerkin method.\n\n\nQuestions:\n\nA) Good point. Reviewer sQzi has a similar question, stating that equation 5 \"does not look to be most general expression\". We have updated sections 4 and 5 to be more general.\n\nC) The discrete energy increases for the no-damping case for the same reason the energy decreases in the MUSCL 128x128 case: numerical error. Although there is not a clear physical interpretation here, I think the best way to understand the increase in energy is that high-k oscillations in vorticity space lead to high-magnitude variations in velocity space, which lead to an increase in the energy.\n\nD) Good question. Setting $\\beta=1$ has \"a simple physical interpretation: [it] correspond[s] to the addition of a spatially constant diffusion coefficient everywhere in space.\" See section 4 of the revised paper.\n\n\nFinally, you conclude that there is a 'lack of novelty' due to this being 'a standard predictor-corrector approach'. Even if you conclude that our technique is not novel, as stated in the reviewer guidelines a novel combination of well-known techniques can be valuable, especially when doing so makes progress on an important problem.", " Reviewers,\n\nThank you for the excellent and detailed reviews. We are very impressed by the quality of the reviews.\n\nAs you can see, we have submitted a revised paper. We hope that you would please read the revised paper, especially sections 2, 3, 6, and 9. We hope you will agree with us that the updated paper has an improved presentation and that it addresses your concerns.\n\nIn our opinion, the remaining weakness of the revised paper is that we only have theoretical justification for our claim that global stabilization \"can be used to stabilize machine learned PDE solvers without degrading the accuracy of an already-accurate solver.\" This claim would be more convincing if it also had experimental evidence to support it.\n\nTo address this remaining weakness, we plan to submit another revision of the paper by August 9th which tests our claim experimentally. We will run experiments using a machine learned solver for the 1D advection equation (plus a small diffusion term). 
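To make the planned setup concrete, here is a minimal sketch of a globally stabilized forward-Euler finite-volume step for 1D periodic advection. All names are ours and hypothetical, the flux predictor stands in for any learned model, and the correction shown is one simple way to enforce $\\sum_j u_j (f_{j+1/2} - f_{j-1/2}) \\geq 0$; it need not coincide with the paper's eq. (5):

```python
import numpy as np

def stabilized_advection_step(u, flux_predictor, dt, dx):
    """One forward-Euler finite-volume step with a global flux correction (illustrative sketch).

    u              : cell averages on a periodic 1D grid, shape (N,)
    flux_predictor : any callable (e.g., a learned model) returning interface fluxes,
                     with f[j] standing for f_{j+1/2}, located between cells j and j+1
    """
    f = flux_predictor(u)
    du = np.roll(u, -1) - u   # u_{j+1} - u_j on the periodic grid
    S = -np.sum(du * f)       # equals sum_j u_j (f_{j+1/2} - f_{j-1/2}) by summation by parts
    denom = np.sum(du * du)
    if S < 0.0 and denom > 0.0:        # stability condition S >= 0 violated
        f = f + (S / denom) * du       # S/denom < 0: a spatially constant diffusive correction
    return u - (dt / dx) * (f - np.roll(f, 1))   # du_j/dt = -(f_{j+1/2} - f_{j-1/2}) / dx
```

With the correction active, the continuous-time rate of change of the discrete L2 norm is zero by construction; as discussed in the response to Q2 above, a finite step size still contributes an $O((dt)^2)$ term that this correction does not remove.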
The 1D advection equation is a very important model equation for fluid-like equations that is \"embarrassing[ly]\" difficult to solve accurately. We expect the solver to be quite accurate and stable within the training distribution, and for these experiments to show that global stabilization has only a small impact on the accuracy of the solver.\n\nThanks,\nThe Authors", " The paper presents a set of sufficient criteria that guarantee the stability of numerical schemes used to solve scalar hyperbolic PDEs with periodic BCs and presents a predictor-corrector algorithm to implement such schemes. The prediction step is performed by a Machine Learned function (can also be other functions? such as Method of Lines) and finds the values of fluxes at desired locations to propagate the flow. The corrector step modifies the predicted flux values to ensure \"sufficient criteria for stability\" are satisfied.\nThe approach was applied to two examples, and the results show that this approach achieves \"stabilization of unstable numerical schemes\" and \"reduction of numerical damping\" while globally preserving conserved properties. Strengths:\n\na) The algorithm (and the predictor-corrector approach) presented is relatively simple to implement in various numerical schemes\n\nb) the results seem to indicate improvement in stability and numerical damping effects with almost no computational cost increase\n\nWeakness:\n\na) Hard to generalize to other systems\n\nb) Conserves properties (only?) on a global scale, and accuracy deteriorates in the presence of sharp gradients in the solution (for example, shocks in the flow).\n\n\n A) Is there a way to alter C in equation (5) to be a function of index j and replace it with C_j in the corrector step to allow for local conservation of properties while preserving discrete L2-norm?\n\nB) Equations 3b cannot be exact since flux at cell boundaries can still vary in a direction parallel to the cell boundary. What does ``exact'' mean in this context?\n\nC) Why does the discrete energy of the system increase in the second numerical example in the globally stabilized scheme with no damping? What is the source of this energy coming into the system?\n\nD) How do the 'beta'-parameters in the corrector step influence the performance/accuracy of the numerical scheme? The application of results seems limited - there does not seem to be an explanation of how \"sufficient conditions for stability\" (see section 2) are derived, nor is there a guideline to obtain such criteria for other PDEs and boundary conditions. While the paper states this method could be extended for hyperbolic PDEs with a source term, higher spatial dimensions, and other boundary conditions but does not explain how. In that sense, the application is limited and specific to a particular PDE. Further, extension to vector-valued hyperbolic PDEs does not seem trivial as sufficient conditions are likely to change.\n\n\nSoundness:\nThere are no apparent errors in the math; however, the main results of the paper are not compartmentalized into \"theorems and proofs.\" Hence the exact limitations of the method are not explicitly stated anywhere. Furthermore, all the summation terms are missing index limits. 
Therefore actual treatment of the boundary terms and correction applied to fluxes at the boundary is not clear (the correction to fluxes is probably the same for interior and boundary since periodic boundary conditions on an \\R-interval are the same as no boundary conditions on a spherical manifold; however, such details are missing in the paper).\n\nPresentation:\nThe paper's content is presented concisely and to the point; however, repeated use of technical terms without ever defining them leads to a lack of clarity. Many statements made are inadequately explained or supported. For example, the last sentence of the introduction, missing summation index limits in all the equations, last paragraph in section 3, second paragraph in section 5, etc.\n\nContribution:\nLack of novelty (a standard predictor-corrector approach), lack of mathematical rigor in the presentation, and limited application (only hyperbolic PDEs with periodic BCs, Method of lines, scalar PDE) make the overall contribution to be marginal.", " The paper is concerned with designing stable solvers for scalar conservation laws on rectangular domains with periodic boundary conditions.\nThe central idea is to translate a stability condition into the requirement that in the continuous-time limit, the discrete $L^2$-norm of the solution must be non-decreasing.\nThis is equivalent to requiring $\\sum_j u_j(f_{j+1/2} - f_{j-1/2}) \\geq 0$ ($u_j$ and $f_{j\\pm1/2}$ are discretisations of the variables in the PDE).\nNow the main idea of the paper is to rewrite this inequality condition with summation-by-parts and to identify what needs to be added to each $f_{j+1/2}$ for the condition to always hold.\n\n I appreciate the idea presented in this work, and I like the honest list of the limitations of the proposed method.\nBut I am unsure about the motivation/premise behind the outlined algorithms, I struggle with the lack of mathematical rigour, especially in the introductory Sections 2 and 3, and I have too many questions about the proposed scheme to be able to recommend acceptance.\nI think this paper should be rejected.\nMore details on what I consider strengths and weaknesses are below.\n\n### Strengths \n\n* Machine-learned PDE solvers are a subject that seems to be of interest to the Neurips community\n* Designing a stable finite volume solver is an important task for PDE simulation\n* The general idea is simple and could be implemented by anyone that wishes to adopt the proposed method\n* Related work and limitations are assessed thoroughly\n\n\n### Weaknesses\n\n* The paper claims to be concerned with _machine-learned solvers_, but the content of the paper is independent of _learning_ solvers: the results apply to any algorithm that predicts the flux $f_{j + 1/2}$. \nThe two experiments do not learn PDE solvers but consider a-priori-chosen schemes: centred flux, and the _already-stable_ MUSCL scheme.\nI am not sure whether this lack of a connection to machine learning makes the results less or more interesting. 
But given the venue and the paper's title and abstract, the disparity between mentioning \"machine-learned PDE solvers\" frequently on the one hand, and not connecting to learning PDE solvers, on the other hand, makes the message of the paper slightly confusing.\n* The motivation for this paper is stated in line 88f.: \"While machine-learned FV solvers are capable of ensuring stability via classical techniques [27, 25], we anticipate that such choices of flux will be too restrictive and too diffusive to outperform state-of-the-art numerical methods.\" \nIt seems like the submitted paper provides a solution to an _anticipated_ problem. There is no evidence, experimental or referenced, that shows how \"such choices of flux will be too restrictive and too diffusive\" is an actual concern.\nIn my view, this limits the expected impact of this work significantly.\n* The readability of the paper is inhibited by a lack of rigour in technical components, especially in Sections 2 and 3. I understand that the authors attempt to keep the mathematical load on the reader as light as possible, but it makes the introduction difficult to follow, sometimes even difficult to verify/falsify. Some issues are listed in the questions below. \n* Line 25: \"and as a result, often do not maintain stable solutions\". I would appreciate a reference that reveals/discusses this issue.\n* Line 31: \"the answer to this question\" Which question? \n* Lines 24/25 and 27-34: I would appreciate a more formal definition of stability early in the paper. I find the exposition in 27-34 too high-level to be able to falsify/verify its statements.\n* Lines 40f: \"We note that the method can also be used when the right hand side (RHS) of eq. (1) is nonzero by treating the right hand side terms using classical techniques and applying global stabilization only to the divergence terms\": please elaborate. If necessary, in an appendix. The statement, taken as is, is difficult to falsify/verify.\n* Line 52: what is $S$? Please define all terms in the equation.\n* Line 54f: \"we can see that the discrete L2 norm is non-increasing\": Please define these terms more rigorously; the paper only states $d/dt \\int u^2/2 dx \\leq 0$.\n* Lines 54f: \"we now list four conditions sufficient for ensuring that a solver of eq. (1) is stable:\" Please embed the four conditions that you list into the literature. Does everyone mean these four conditions when discussing stability? Some references would help the understanding.\n* Line 57: \"the integral of the solution (...) is conserved\": this statement would be easier to follow if you wrote down the integral, because it is not obvious whether the integral is over time, space, or both.\n* Line 65: \"Integrating eq. (1) over a grid cell\" Please elaborate/formalise. For someone that does not already know finite volume schemes, I expect that it would be difficult to follow.\n* Line 67: Please define $f_{j+1/2}$ prior to Equation (3).\n* Lines 79/80: Please elaborate on the terms \"spurious oscillations\" and \"numerical diffusion\". \n* Line 96f: \"Thus, eg. (4) will be satisfied if the following transformation is made\" Could you please write down the proof of this statement somewhere? This might be the most central contribution of the paper, it should not be up to the reader to verify it.\n* Eq. (5): what does the $\\equiv$ symbol mean? 
Please define it.\n* Line 108: \"which violates the physical property...\" The paper mentions this in Section 8, and I also find it potentially problematic if each $f_{j + 1/2}$ depends on the solution over the entire domain because it seems to differ quite drastically from conventional method-of-lines- or finite-volumes-solvers. \nI would appreciate a more thorough discussion of this possible drawback.\n* Figure 1: I am not able to deduce from the figure how \"global stabilization ensures that the discrete L2 norm is conserved.\"\n* Figure 2(b): the two different shades of dark blue are incredibly difficult to distinguish.\n* Lines 168f: \"that adjusts the (...) output if that output moves the solution towards instability\": what is the price in accuracy? I find it difficult to believe that adjusting for stability does not cost accuracy. This aligns with the statements made in Lines 255f, but is there any way of understanding or at least quantifying this trade-off?\n* Eq. (10): please define the notation $\\langle N \\mid 1 \\rangle$. I suppose it refers to an inner product?\n In my view, the biggest limitation of the method seems to be understanding how much accuracy, respectively general reliability of the solver is traded for improved stability. This is mentioned in Section 8, but not discussed very thoroughly (but other, more minor limitations are discussed thoroughly).\n", " This paper studies the scalar hyperbolic PDE problems on a uniform grid with periodical boundary conditions. The paper proposes a numerical finite-volume solver that guarantees to generate stable solutions to the PDEs by construction. The key observation is to notice that the stability of the solution can be enforced by properly reconstructing the cell boundary flux so that it ensures a non-increasing L2 norm of the solution. Therefore, it proposes a correction scheme for the cell boundary flux called “global stabilization”. While the main result is theoretical, one can properly combine it with machine learning solvers to generate a stable solver. Strengths:\n- The paper studies a very critical question that, in my opinion, is a bottleneck of many existing machine learning solvers for PDEs.\n- The mathematical insight behind the proposed solution is convincing.\n- I also appreciate that the proposed numerical scheme stabilizes the solution and removes numerical diffusion at the same time.\n\nWeaknesses:\n- The paper requires advanced background knowledge of PDEs and is not very friendly to readers unfamiliar with numerical methods for PDEs. I think having more illustrations will be nice, e.g., an inset around Eqn 3 that displays a uniform grid with locations of each variable u and f.\n 1. Line 71: Is the last equation = 0 because of the periodic boundary conditions (so that f_0 and f_last are known to be the same)?\n\n2. Although I can see that Eqn. 5 guarantees the L2 stability, I cannot see how it is derived based on the explanation in lines 95-96. I’d appreciate it if more intuition can be provided when deriving Eqn. 5.\n\n3. On a related note, lines 109-110 mentioned Eqn. 5 can be used for both stabilizing a solver and reducing numerical damping and presented two numerical experiments to support them. Is there any theoretical analysis on the numerical diffusion introduced by Eqn. 5?\n\n4. Because of 2 and 3, I have found Sec. 5 a bit difficult to follow. 
Maybe a better way is to first state the goal is to come up with a correction scheme on f so that it stabilizes the solver and reduces numerical damping, then explains how this goal leads to a derivation of Eqn. 5.\n I really like the fact that the authors are upfront about the limitations of the proposed approach, especially by pointing out many assumptions that they made about the PDE system and numerical schemes. To me, these issues (assuming periodical boundary conditions, regular grids, limited PDE types, etc.) are not deal-breakers but rather show the potential of inspiring follow-up papers.", " I thank the authors for an interesting read.\n\nRollout stability of PDE solvers for time-dependent hyperbolic PDEs is a property baked into classical solvers, yet it is unaddressed in the machine learning for PDEs community. As such it is a common problem for learned solvers that they are sometimes unstable. When and under what conditions this instability arises is poorly understood and often swept under the table.\n\nThis submission seeks to resolve that with a technique that the authors call `global stabilization’. They introduce an L2-stability criterion, which states that the L2 norm of the solution is non-increasing over time. This is a well-known property of hyperbolic PDEs in conservation form. The beauty of this is that enforcing this property directly ensures that stability is met, since stability is defined as the solution blowing up and this cannot happen if solutions are norm non-increasing in time.\n\nThe technique is general and works as an ad hoc projection step to be used on the output of any model, but it is also handcrafted and there is no guarantee that it is the most general such stabilization method.\n The paper tackles a very important and much overlooked issue in the ML for PDE solving literature. I welcome this contribution and see it as a positive aspect of the submission.\n\nThe paper boasts a nice related work section, which situates in the landscape of contemporary ideas adequately.\n\nThe limitations section of the paper is very open and honest. Furthermore it opens a lot of interesting questions, such as, how do we make accurate ML solvers? This makes it particularly transparent for a machine learning paper. I thank the authors for this addition.\n\nThe technique is general and works as an ad hoc projection step to be used on the output of any model, but it is also handcrafted and there is no guarantee that it is the most general such stabilization method. As such, as the authors admit themselves, while the method achieves the desired stability, it does not guarantee accuracy and looking at their results in Figure 3, it is certainly the case that standard measures of performance such as vorticity correlation take a large hit. This is the greatest weakness of the paper.\n\nFrom what I can make out, the paper has no explicit experimentation section. I think this makes it hard to verify its efficacy.\n I think a short discussion on different kinds of stability would be helpful. 
The kind focussed on in this submission is “eigenvalue stability” (aka weak stability, absolute stability, time-stability, practical stability, or P-stability) where the timestep is kept finite and time is allowed to tend to infinity as opposed to simply “stability” (aka zero-stability, D-stability, Lax stability, or Lax-Richtmyer stability) where time is kept fixed and the timestep tends to zero.\n\nConditions 2 and 3 on page 2 state that mass conservation and discrete L2-norm non-increase require the continuous time limit. Is this strictly true? One could discretize time again and these (in)equalities still hold.\n\nWhile the four conditions on page 2 are sufficient, they do not guarantee accuracy (and therefore convergence). (After reading through the end I note here that this point is acknowledged in the limitations section). What is missing to guarantee accuracy?\n\nEquation 5 is just one way to satisfy the L2-stability condition, but it does not look to be the most general expression that could be used here. Why did you choose this expression? Am I perhaps wrong and is this the most general expression that could be used?\n\nFigure 1: it is not clear from the figure that at t=0.5 the red solution is less stable than the blue solution.\n\nLines 158 - 160: I see that there is a choice to be made here as to whether the ML solver should model the properties of the continuous or the discrete equations. To me it does not necessarily have to be one or the other. Is there a natural reason why you assert it must be that the ML solver must inherit the properties of the discrete equations only?\n\nIs it such a limitation that global stabilization breaks causality? In many numerical systems it often happens that discretization breaks notions of locality. For instance, computing high order derivatives at a point requires that we use very large stencils, despite the fact that derivatives are defined entirely locally.\n I believe that the authors have addressed most of the limitations of this work. The one remaining one is that I do not believe the proposed technique to be the most general possible solution and as such it may be that ML solvers will lose functional expressivity on some level." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "nips_2022_WyiM4lDJOcK", "cSMUvOWijNL", "DY2CZPtxcW_k", "Y_zKK_pCb7P", "j7tad74ze1P", "dxhSKRHoS8", "9oh7uFZX1_7", "6x3bzz_Dwur", "PuKMB9IkWr", "j7tad74ze1P", "nips_2022_WyiM4lDJOcK", "nips_2022_WyiM4lDJOcK", "nips_2022_WyiM4lDJOcK", "nips_2022_WyiM4lDJOcK", "nips_2022_WyiM4lDJOcK" ]
nips_2022_pl279jU4GOu
Convergence beyond the over-parameterized regime using Rayleigh quotients
In this paper, we present a new strategy to prove the convergence of Deep Learning architectures to a zero training (or even testing) loss by gradient flow. Our analysis is centered on the notion of Rayleigh quotients in order to prove Kurdyka-Lojasiewicz inequalities for a broader set of neural network architectures and loss functions. We show that Rayleigh quotients provide a unified view for several convergence analysis techniques in the literature. Our strategy produces a proof of convergence for various examples of parametric learning. In particular, our analysis does not require the number of parameters to tend to infinity, nor the number of samples to be finite, thus extending to test loss minimization and beyond the over-parameterized regime.
Accept
This paper proposes a new method for proving the convergence of gradient flow to zero loss by leveraging Rayleigh quotients to establish KL inequalities, a strategy that can apply even without overparameterization. The reviewers found the paper to be well written and generally easy to follow, despite a few concerns about burdensome notation. The discussion highlighted a few minor technical issues which were addressed in the revision. Overall the consensus is that this paper provides valuable new tools and the results will be of interest to the theory community, so I recommend acceptance.
train
[ "Q7EkaphhNL", "Q_zvBRkvN-", "OVbMSUNA9_m", "aHJBKvysxAP", "I1lszOaskNd", "2pYUFS41w8i", "2b4FxF5P-JY", "WJi4wM5oE32" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I agree with that there is a gap from the presented results to deep networks but it is still a very insightful work. I am looking forward to your follow-up version. ", " The comments/concerns/suggestions have been reflected in the rebuttal version. Also thanks for highlighting the updates for easier follow-up.\n\nImportant notice:\nYou should double-check page limit but I don't think you can go over 9 page limit in rebuttal, and it may cause rejection!\n", " Thank you for your time and (very) thorough reading of all proof details.\nWe have updated our submission to include your corrections (pdf in updated supplementary material, changes in red).\n\nWe have fixed all the typos and mistakes you found.\nA discussion of previous results on logistic regression has also been added to Section 4.3, and a reference with several examples of use of KL inequalities to Section 3.\n\nRegarding your first question on $\\mathcal{L}(\\theta) - \\varepsilon$, the issue was that we operated under the (unstated) assumption that $\\mathcal{L}(\\theta) \\geq \\varepsilon$ corresponding to non-null loss in Proposition 3.1, which was a mistake on our part. For succinctness and readability, we have modified the inequality to use the positive part $\\left(\\mathcal{L}(\\theta) - \\varepsilon\\right)_+$ which is null when $\\mathcal{L}(\\theta) \\leq \\varepsilon$, and thus fixes the issue with minimal changes.\n\nThe property $\\theta(\\mathbb{R}_+) \\subseteq \\mathcal{B}(\\theta_0, R)$ was not a claim but an assumption of our statement. In the final version, we have removed this assumption and strengthened our proposition, which should lift this concern as well. We have modified the proof to give a convergence rate as requested, and assume you will find the idea very unsurprising for it almost exactly matches your suggestion.\n\nWe have added a discussion of $\\kappa$-isolation and $\\varepsilon$-separation in Appendix A.5.5, please let us know if it answers your questions.\nThe property you singled out on L590 of the first submission is indeed not a direct consequence of the lemma, but is easily checked by the same proof technique.\nWe have also added the missing reference on extraction of sequences converging almost surely. Convergence of positive variables in expectation to zero implies convergence in $L_1$, which in turn implies convergence in probability, from which almost sure convergence follows up to extraction.\n\nWe are sorry for the readability issues in Sections 4.2 and 4.5.\nWe hope the idea for the convergence on the lemniscate was clear despite your trouble with Prop 4.2, for we found this low-dimensional counterexample enlightening. The periodic signal recovery on the other hand is significantly harder to follow, but demonstrates that a good understanding of the problem (materialized here by knowledge of an approximate singular vector decomposition of the network map differential, leveraged with Prop 3.6) is key to reach proofs in the regime with near-optimal parameter count, which seems to be the most interesting. This could also serve as a bridge to signal processing results, to show that this line of work is not restricted to deep learning. Any suggested changes or general ideas you may have to improve the proposition readability or high-level exposition of these two sections is of course very welcome.\n", " \nThank you for your time and dedication to this submission. 
We have updated it to address the concerns you raised (pdf in updated supplementary material, changes in red).\n\nWe will discuss the major problem (Weakness 1) last.\nFirst, thank you very much for the references on the convergence speed of the logistic regression, we have updated Section 4.3 to properly credit these works, together with comments on the differences between these bounds and the result presented. We are not aware of any more precise results on the convergence of logistic regression than those newly inserted. Please let us know if you think of any other relevant results that we have also missed, we will further update this section if this is the case.\nThe consistency with the $\\mathcal{O}(1/t)$ asymptotic behavior is checked in Appendix A.5.4, the link between the (unquantified) separability assumption and our formulation is now discussed in Appendix A.5.5.\nWe have also added an explicit comment on $(1/n)$-isolation for finite datasets of size $n$ after Proposition 4.5.\n\nRegarding the general readability of the paper, we have added a notation table in Appendix A.1, we hope this helps with the reading.\nWe found further simplification of notations in early sections hard to incorporate. Since the extension to infinite data is not standard, it requires some care in the definitions to have well-formed statements.\nIn particular, the use of the $\\mathcal{D}$-seminorm, though hard to get used to, makes much more explicit the geometric intuition in the following, with functions treated as vectors with angles between them for instance.\nWe believe this choice of notation is an opportunity to steer the discussion towards more geometrical arguments that will ultimately be easier to understand.\nChanges we could consider to lighten notations are inlining the definition of the $\\mathcal{D}$-seminorm everywhere to remove that definition, at the risk of erasing the geometric intuition and lenghtening some statements, along with the removal of the primal definition of $K_\\theta$ and discussion of the difference with the dual $K_\\theta^\\star$, to shorten that definition.\nIf you feel that these (or other changes you may suggest) would significantly improve the readability of this submission, please let us know, but please also keep in mind that the page limit and the conciseness that it implies are a major constraint of ours.\nAny suggestions regarding specific simplifications or preferred choices of notation for the later sections are also welcome.\n\nRegarding the major limitation, identified as Weakness 1 in the review, we have strengthened Proposition 4.6 to lift the initialization-ball assumption. We can now prove (with high probability) convergence to an arbitrarily low loss value under no additional assumption. We believe this clears the concern regarding concrete new results on non-linear networks. \nIt remains that this work is restricted to continuous-time noiseless differentiable optimization, as observed in the review, but we argue that this is a good first step towards a stronger theory.", " \nThank you for your time and thoughtful comments.\nWe have updated our submission to address your concerns (pdf in updated supplementary material, changes in red).\n\nRegarding minor concerns, we have added a notation table in Appendix A.1, and hope this helps with the readability, please let us know if you find that other shorthands deserve to be added to this table or need more attention. 
We took $\\mu$-uniform-conditioning of ${K_\\theta^\\star}$ to mean that the lowest eigenvalue is at least $\\mu$, i.e. ${K_\\theta^\\star} \\succeq \\mu$ (as positive semi-definite matrices). We have removed this term from the introduction to avoid misunderstandings, and replaced it with a more explicit description.\n\nThe question of the gap from presented results to deep networks is harder to address.\nAs a first and high-level answer, we use the term \"deep learning\" in this sentence in a broader sense, to mean essentially gradient-based iterative training of machine learning models in the massive-data regime, which is a looser definition but arguably compatible with a more modern use of the term, although not directly connected to the layer-depth of the model which initially led to this choice of terminology. In this broad sense, we believe that any theory powerful enough to explain the vanishing optimization error of deep learning will have to recover nearly-trivial systems such as the lemniscate as subcases. This example shares some properties with layer-deep architectures that rule out both parameter-convex and definite-NTK theories. We argue not that this approach will surely lead to convergence proofs of deep architectures, but only that previous approaches won't and show why. Our approach is then in a way incomplete by construction, but can be viewed as the easy case that later theories must recover as a subcase to stand a chance, which substantially shrinks the search space for such theories.\n\nRegarding a stricter interpretation of the term \"deep\" learning, and the road to transformers and the likes,\nwe still have high hopes for this line of work.\nWe have started exploring some ideas to extend the result from two-layer networks to deeper multi-layer perceptrons, but don't have fully fleshed-out results at the moment that could be added to this submission. Should this submission be accepted, we will consider extending it with more examples in the direction you suggest in a follow-up journal version, to strengthen this argument.", " This paper proposes a new insight in proving the convergence of the training of deep learning. Taking advantage of the notion of Rayleigh quotients, the authors show that KL inequalities for a broader set of neural network architectures and loss functions can be obtained. Moreover, several illustrative examples are provided to demonstrate their idea. In particular, their analysis does not require over-parameterization. My detailed comments are given as below.\n\nStrengths:\n\n1 This is a theoretical article on the convergence of the GD for deep learning. This topic has been intensively studied recently. Most of previous works require the over-parameterization. But this work is different, and it presents a new strategy in proving the convergence based on the Rayleigh quotients and the KL inequalities. This innovative idea has intrigued me, and I feel that this article contributes significantly to the understanding of training dynamics of NNs.\n\n2 One of main limitations of previous works is that their arguments require the positiveness of the NTK which further requires the over-parameterization. This paper points out that one can bound the Rayleigh quotient of the gradient instead of lower bounding the eigenvalues of the NTK. Furthermore, the authors show the connection between the Rayleigh quotient and the KL inequalities which ensure the convergence.\n\n3 This paper is very well written and easy to read. 
Several case studies are presented in Sec 4. These cases are common and representative, and make their ideas very convincing. Moreover, the proof seems clean. Although I did not thoroughly examine the proof, it provided me with several fresh perspectives.\n\nWeaknesses:\n\nThis paper argues that the proposed new strategy can be used to ``prove the convergence of deep learning architectures to a zero training loss\". However, only two-layer NNs are discussed in the case studies. Can you provide some cases to support your argument on ``the deep architectures\"? Minor issues:\n\n1 A table of notations is preferred.\n\n2 What does the ``uniform conditioning\" (in Line 19) mean? YES", " This paper presents a strategy to prove the loss convergence of gradient flow for training loss of the form $\\ell(F(\\theta))$, where $F$ is the vector consisting of neural net outputs on each data point, and $\\ell$ maps the outputs to the final loss value (e.g., $\\ell$ can be the MSE to target values). Previously, NTK-based analysis assumed that the smallest eigenvalue of the NTK matrix $K = DF \\cdot DF^\\top$ is bounded from below and then proved the loss convergence by establishing the PL condition under this assumption. In this paper, the authors avoid the assumption on the smallest eigenvalue and instead point out a proof strategy based on Rayleigh quotients of the NTK matrix and the KL condition. This strategy is then applied to prove loss convergence bounds for linear regression and linear logistic regression, and some partial results are also given for a two-layer net with squared loss. ### Strengths:\n\n1. The proposed proof strategy does not require the smallest eigenvalue of the NTK matrix to be lower bounded, allowing analysis when the number of data points is larger than the number of parameters.\n2. Applying the proposed proof strategy gives an alternative way to understand the loss convergence bound for linear regression.\n\n### Weaknesses:\n\n1. While it sounds more promising to establish loss convergence bounds beyond the over-parameterized regime, this paper does not prove any concrete theorem on nonlinear neural nets with fewer parameters than the number of data points. In Sections 4.4 and 4.5, the authors give some partial results establishing loss convergence under the assumption that the gradient flow does not move far from the initial point. This assumption weakens the strength of the results because it seems to be a more interesting and important question why gradient flow does not tend to move far away in search of a better solution. In other words, it is unclear how useful the proposed proof strategy is in terms of overcoming the various difficulties in analyzing loss convergence beyond the over-parameterized regime.\n2. The results on linear logistic regression are not well contextualized relative to prior works. The authors claim that the loss convergence bound in this case is new (Line 97), but it is unclear how it compares with previous works such as the $O(1/t)$ bound in Theorem 5 of [1] for binary classification (see also Appendix E for the multi-class case). These two bounds should be the same when $t \\to \\infty$, so the authors should compare them carefully in the non-asymptotic case.\n3. The presentation of the current paper is not very understandable, since there are too many notations and the notations vary in different settings. I spent hours reading this paper to understand and memorize all these notations just to understand the basic idea. 
One main issue is that the definitions in Sections 2 and 3 include too many technical details that are unrelated to the idea of bounding Rayleigh quotients, e.g., the concept of D-seminorm is somewhat unnecessary because one can just replace the D-seminorm of $\\nabla \\ell_f$ with an expectation over the dataset, which is more accessible to the readers. To improve the presentation, I recommend the authors to either reduce the number of notations or add a warmup section that illustrates the proof strategy in a simple case (e.g., linear regression) before introducing the general setup.\n\n[1]. https://www.jmlr.org/papers/volume19/18-188/18-188.pdf 1. I wonder if the authors have some concrete examples showing that loss convergence bounds can indeed be deduced for nonlinear neural nets beyond the over-parameterized regime (without extra assumptions).\n2. How does the loss convergence bound in the linear logistic regression case compare with prior works? How much is known about the loss convergence rate of linear logistic regression in the literature?\n3. When does the isolation assumption hold in Proposition 4.5? The major limitation has already been mentioned in Weakness 1. Besides that, this paper also has some limitations that are common in many optimization papers: (1) the dynamics of SGD and gradient flow can be very different; (2) the loss function can be non-smooth.", " This work investigates the convergence of gradient flow to zero training (and test) loss. The main idea is to derive Kurdyka-Lojasiewicz inequality which is accomplished by lower bounding Rayleigh quotients, and several showcases of common learning problems have been analyzed by this technique. Overall the paper is well-written and the main messages are easy to follow. However I had problems in understanding section 4.5 and its proofs, as well as prop 4.2, and couldn't check their correctness. Other results, except for prop 4.6 which might need slight modification discussed later, seem to be correct. The main weakness can be seen in discussing the conditions under which prop 4.6 and prop 4.5 hold, as addressed in below.\n\nI like the argument that Prop 3.1 makes the loss evolution more tractable using the function $\\phi$. Indeed in line 164 you provide $\\phi$ when PL condition is satisfied and further in section 4.3 provide example for multi-class logistic regression. In Prop 3.3 you argue that functional-space KL inequality is easier to obtain and together with Rayleigh quotient bound they imply parametric-space KL inequality, hence able to use the convergence result of Prop 3.1. The techniques employed throughout Prop 3.4-3.6 to split the Rayleigh quotient bound seems powerful and its first simple use is shown in linear models with quadratic loss in Prop 4.1.\n\nI would also like to know about any previous work that analyzes Cross-entropy minimization with linear models, assuming you are not the first paper to consider it. \n\nOverall the references include major important works in related areas, and some of them deal with Gradient Descent and overparameterized models. Please pinpoint the works that do not have overparametrized assumptions and if possible, compare your work to them. For example in line 150, you mention that \"this was introduced as an extension to PL inequality for linear convergence ...\". Please include as many references as possible that revolve around using inequalities in Kurdyka [1998]. One suggestion is to defer the proof of prop 4.1 to appendix so that you have more space. 
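To make the role of $\\phi$ concrete, here is the generic computation that such a statement packages, in our notation (a sketch; the paper's Prop 3.1 may be phrased differently). Under the gradient flow $\\dot\\theta_t = -\\nabla\\mathcal{L}(\\theta_t)$, a KL inequality of the form $\\lVert \\nabla\\mathcal{L}(\\theta) \\rVert^2 \\geq 1/\\phi'(\\mathcal{L}(\\theta))$ with $\\phi$ increasing yields $\\frac{d}{dt}\\,\\phi(\\mathcal{L}(\\theta_t)) = -\\phi'(\\mathcal{L}(\\theta_t))\\,\\lVert \\nabla\\mathcal{L}(\\theta_t) \\rVert^2 \\leq -1$, so $\\phi(\\mathcal{L}(\\theta_t)) \\leq \\phi(\\mathcal{L}(\\theta_0)) - t$, and inverting $\\phi$ gives an explicit rate. For instance, the PL case $\\lVert\\nabla\\mathcal{L}\\rVert^2 \\geq 2\\mu\\,\\mathcal{L}$ corresponds to $\\phi(u) = \\frac{1}{2\\mu}\\log u$ and hence $\\mathcal{L}(\\theta_t) \\leq \\mathcal{L}(\\theta_0)\\,e^{-2\\mu t}$, while $\\lVert\\nabla\\mathcal{L}\\rVert^2 \\geq \\mu\\,\\mathcal{L}^2$, i.e. $\\phi(u) = -\\frac{1}{\\mu u}$, gives $\\mathcal{L}(\\theta_t) \\leq (\\mathcal{L}(\\theta_0)^{-1} + \\mu t)^{-1} = \\mathcal{O}(1/t)$ (compare the $\\phi(u)=-\\frac{1}{u}$ suggestion in question 1 below).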
Typo:\nline 319, \"be compact\". line 703, \"has a ...\". line 730, equation (1), it should be \"l.h.s $\\leq \\frac{\\eta}{2}$\". in equation just before line 737, it might be $\\frac{\\eta}{2} + \\frac{R}{\\sqrt{k}}$, which yields $\\sqrt{k}\\geq 2R/\\eta$. line 360, \"there there\". line 545, there might be an error in calculating the first component of the gradient.\n\nQuestions and suggestions:\n\n1- The proof and statement of Prop 4.6 need revision. First, I can't find a proof that $\\theta(\\mathbb{R}_+)\\subseteq \\mathcal{B}(\\theta_0,R)$ if $\\theta$ is a gradient flow, and in fact you just assume this condition holds in line 745 without proving it and further in line 328 you defer its proof to future work. Please indicate what is the difficulty in proving it. Most importantly, in the beginning of page 27, the conclusion in the 3rd line $\\cdots \\geq \\frac{1}{\\lVert \\nu_0 \\rVert^2}(\\mathcal{L}(\\theta_0) +0 - \\epsilon)^2 $ does not seem correct because $a\\geq b$ does not imply $a^2 \\geq b^2$. However, it seems that there is a fix for that because you just need to consider $\\lVert \\nabla \\mathcal{L}(w,a) \\rVert_2 \\geq \\cdots \\geq \\frac{1}{\\lVert \\nu_0 \\rVert}(\\mathcal{L}-\\epsilon)$. Even with this change, the argument in line 751 needs explanation on how you use prop 3.1 to show $\\overset{\\sim}{\\mathcal{L}}(\\theta)$ converges to zero. One way I can think of is to use $\\phi(u)=-\\frac{1}{u}$, which is a strictly increasing differentiable function on the positive real numbers with $\\phi'(u)=\\frac{1}{u^2}$, and it gives $\\lVert \\nabla \\tilde{\\mathcal{L}}(\\theta) \\rVert^2 \\geq \\mu \\tilde{\\mathcal{L}}(\\theta)^2$, which is true here as you assume $\\mathcal{L}(\\theta_t)\\geq \\epsilon$. Hence by showing that $\\mathcal{L}(\\theta_t)$ converges to $\\eta\\leq \\epsilon$, it follows that $\\mathcal{L}(\\theta_t) - \\epsilon$ is negative for large enough $t$, and hence $a \\geq b$ does not imply $a^2 \\geq b^2$ with $a=\\lVert \\nabla \\mathcal{L}(\\theta) \\rVert$ and $b=\\kappa (\\mathcal{L}(\\theta) - \\epsilon)$. Finally, I would like to know about the rate of convergence obtained by using Prop 3.1. It seems to me Prop 3.1 can only be used to show a contradiction, i.e. $\\eta\\leq\\epsilon$, and I'm wondering if $\\lVert \\nabla \\mathcal{L}(\\theta) \\rVert \\geq \\kappa (\\mathcal{L}(\\theta) - \\epsilon) $ can be used with prop 3.1 to give any rate of convergence.\n\n2- (When) is the $\\kappa$-isolated condition in line 307 satisfied? I couldn’t find how it is proved anywhere in the Proof section, and stating under which conditions it is satisfied seems crucial. Also, please discuss how fast the convergence rate of prop 4.5 is.\n\n3- It is true that the existence of a \"non-zero\" $\\epsilon$-separating ray leads to a unique label in the softmax classifier, and it is also interesting that a separating ray with zero loss implies Dirac labels. Please cite any existing work that considers this separating ray assumption, assuming you are not the first one.\n\n4- In line 590, is the conclusion $\\nabla \\ell(u): x\\rightarrow \\nabla \\ell_x(u(x))$ a result of Lemma A.1?\n\n5- In line 583, please explain more thoroughly how \"convergence of positive r.v.s in expectation to zero leads to some subsequence converging a.e. to zero\".\n\n The limitations concerning the conditions under which prop 4.6 holds are discussed in lines 328-331. However, I couldn't find how the $\\kappa$-isolated condition in line 307 and the $\\epsilon$-separating ray condition in line 309 are satisfied. It is strongly suggested to discuss these in the appendix." ]
[ -1, -1, -1, -1, -1, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "I1lszOaskNd", "OVbMSUNA9_m", "WJi4wM5oE32", "2b4FxF5P-JY", "2pYUFS41w8i", "nips_2022_pl279jU4GOu", "nips_2022_pl279jU4GOu", "nips_2022_pl279jU4GOu" ]
nips_2022_M_WuaKoaEfQ
A Quadrature Rule combining Control Variates and Adaptive Importance Sampling
Driven by several successful applications such as in stochastic gradient descent or in Bayesian computation, control variates have become a major tool for Monte Carlo integration. However, standard methods do not allow the distribution of the particles to evolve during the algorithm, as is the case in sequential simulation methods. Within the standard adaptive importance sampling framework, a simple weighted least squares approach is proposed to improve the procedure with control variates. The procedure takes the form of a quadrature rule with adapted quadrature weights to reflect the information brought in by the control variates. The quadrature points and weights do not depend on the integrand, a computational advantage in case of multiple integrands. Moreover, the target density needs to be known only up to a multiplicative constant. Our main result is a non-asymptotic bound on the probabilistic error of the procedure. The bound proves that for improving the estimate's accuracy, the benefits from adaptive importance sampling and control variates can be combined. The good behavior of the method is illustrated empirically on synthetic examples and real-world data for Bayesian linear regression.
Accept
This paper proposes a novel method to perform Monte Carlo integration combining control variates and annealed importance sampling. All reviewers agreed the algorithm was of interest, the theoretical evidence was strong, and the experimental results were sufficiently convincing, so there was a consensus on acceptance. (As a minor aside, I would encourage the authors to consider the font sizes in their figures.)
val
[ "sVLPlvPUyQc", "t_bP5zcnOWr", "vqdZTE7vIjA", "pRBs2uVDYL", "tYQk2fZk0iUa", "DxiCgFnivyr", "5NvdPn9BS5D", "6IehXvQ3aGG", "QHrUBibw5qN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed response! I appreciate the updated figures and the discussion about the choice of control variates and have increased my score to a 7. ", " Thanks so much for your response! I must admit i made a typo in my initial review: I meant Bayesian **logistic** regression. I think an example like that would speak more to how this could affect real use cases!", " I thank reviewers for their time in responding to my concerns and questions. I did not have many points that required improving, and therefore I remain positive about this paper. I suggest acceptance.", " We would like to warmly thank the reviewer for the valuable feedback, suggestions and the overall appreciation of our work, especially on the theoretical side. \n\nConcerning the general idea of combining AIS and CV: indeed, our methodology combines the two techniques in a more or less independent (although sequential) manner. Still, it is an interesting remark and challenge to seek for control variates that match the sampling policy well. A possible starting point is the parameter $\\tau$ of Remark 1, which encodes the residual variance when combining the importance weights with the linear regression residuals. This quantity plays a central role in the theoretical analysis and makes the link between AIS and CV. Thank you for pointing out this direction of further research.\n\nFollowing your comment, the y-axes of the different figures now share the same cutoff value, facilitating visual comparison -- please check the rebuttal version.\n\nConcerning a baseline of just CV estimators in the numerical experiments: please note that a standard CV estimate requires access to samples from the target distribution. Given such access, it may not be relevant to use Adaptive Importance Sampling or Monte Carlo Markov chains. If we were to compute a CV estimate given the analytically available Gaussian posterior distribution, for example, we would probably take the family of Hermite polynomials since they form an orthonormal basis with respect to the Gaussian measure. Instead, our experiments are cast in a Bayesian spirit, the task being to compute an integral with respect to a posterior distribution from which one cannot sample directly. \n\nThe reviewer is right concerning the \"virtually zero\" variance: this is a CV effect, the integrand lying close to its projection onto the linear space spanned by the control variates (see also Proposition 2). \n\nWe thank the reviewer again for the constructive feedback, the positive appreciation, and the time and effort spent reviewing our work.", " We would like to sincerely thank the reviewer for the valuable feedback, the suggestions and the overall appreciation of our work. We address the different points below.\n\nThe thought-provoking notion of 'adversarial' control variates, defined perhaps in terms of a min-max game and some appropriate rule, represents an interesting avenue for further research as (to the best of our knowledge) control variates nor AIS procedures have ever been compared under adversarial attacks. Still, in terms of asymptotic variance of the error term, it never harms to add linearly independent control variates (Assumption 2). In practice, however, choosing too many control variates may result in an ill-conditioned empirical Gram matrix and/or in overfitting. The OLS solution could become unstable, requiring some kind of regularization, such as the LASSO. 
\n\nConcerning the variance oddity in the boxplots of Figure 2, please note that it is a visual effect caused by the logarithmic scale. We apologize for the confusion. As expected, the AISCV methods do have a smaller variance than AIS as shown by the novel Figure 5 in the appendix drawn at a linear scale (please check the rebuttal revision). At a logarithmic scale, however, the smaller error values of the AISCV method (around 1e-5) look as if they are spread more widely than the larger error values of the AIS method (around 1e-3). For readability and clarity, Figure 2 now only presents the mean squared error, still on logarithmic scale.\n\nRegarding the choice of the control variates, the theoretical requirement in Assumption 2 is not hard so satisfy and is guaranteed for the construction methods in Section 5. Of course, choosing control variates in some optimal way given the integrand and the target density can indeed be quite difficult, and it could very well occur that the obtained variance reduction does not justify the additional computational expenses. \n\nWe thank the reviewer again for the positive feedback and for the time spent reviewing our paper.", " We would like to sincerely thank the reviewer for the valuable feedback, enthusiasm and suggestions.\n\nThe framework of Bayesian Linear Regression with real-world data was chosen so that the value of the integral of interest is analytically available when computing mean squared errors. Based on the suggestion, we have added in the supplement a state-of-the-art Monte Carlo Markov chain called NUTS (No-U-Turn Sampler), which is a self-tuning variant of Hamiltonian Monte Carlo (HMC).\n\nThank you for spotting the typo: on l.126, it should indeed be $\\mathbb{E}_f[g - \\beta^\\top h] = \\mathbb{E}_f[g]$. \n\nA limitation of the combined AIS-CV approach is that it requires the user to make quite some design choices, notably the sampling policy for the AIS part and the control variates for the CV part. These culminate into the factor $\\tau$ in Remark 1, which appears prominently in the error bound in Theorem 1 and which can be interpreted roughly as the variance of $w\\epsilon$, where $w$ is the importance weight -- well behaved when the policy is well-chosen in relation to the target density -- and where $\\epsilon$ is some residual function -- well behaved when the control variates are well-chosen with respect to the integrand. Further, choosing too many control variates may result in an ill-conditioned empirical Gram matrix and/or in overfitting. The OLS solution could become unstable, requiring some kind of regularization, such as the LASSO. If we are given the occasion, we will add a remark in the paper.\n\nWe thank the reviewer again for the careful reading of our paper, the positive appreciation, and the interest in our research.", " In this paper, the authors combine adaptive importance sampling (AIS) with control variates (CV) leading to a novel Monte Carlo estimator which they call AISCV. Non-asymptotic bounds were developed for the novel estimator and the experiments section demonstrated the superiority of AISCV versus a standard AIS estimate. # Strengths\nI can not express how cool the proposed approach is. While standard CV estimators require $\\beta$ to be recomputed for every target function $g$, AISCV sidesteps this completely! Instead---just like AIS---quadrature weights are computed using the particles and their corresponding importance weights, where, crucially, the quadrature weights **are not a function of** $g$! 
Thus, this process is done once and can be used for any $g$. Moreover, the bounds on the AISCV estimate are very impressive as well!\n\n# Weaknesses\n\nWhile the experiments section clearly demonstrated the superiority of AISCV, I wish a more practical experiment was done. AISCV was motivated by wanting to compute posterior expectations. Thus, it was disappointing that the only example done was Bayesian linear regression where the posterior is available analytically. Instead, it would have been more compelling if the AISCV estimate was compared against Bayesian linear regression, which can be sampled by using HMC for instance. \n Below I will list general questions/comments\n\n1. In line 126, I think there is a typo. The paper says that $\\mathbb{E}_f[g - \\beta^\\top h] = 0$ but it should be $\\mathbb{E}_f[g - \\beta^\\top h] = \\mathbb{E}_f[g]$. There is no section in the paper describing the limitations of the approach. While presumably the limitations of AISCV are similar to AIS and CV, it would be beneficial to state them in the paper.", " This paper proposes a novel method for approximating intractable integrals that combines adaptive importance sampling and control variates. The authors prove a probabilistic bound on the error of their proposed estimator and demonstrate its superiority to the simple AIS estimator. Overall, I found this paper an enjoyable read although I confess, many of the technical details surrounding the theoretical results escaped me. \n\n1. In terms of originality, the primary proposed idea is not particularly inspired, in that it combines two commonly-used, existing methods. \n2. The level of analysis, both theoretical and empirical, is compelling. \n3. The paper is extremely well written and clear. \n4. The work seems likely to be significant as the task of estimating integrals is commonplace and the proposed method appears sufficiently general so as to be widely applicable. Could it ever be the case that poorly (or even adversarially) constructed control variates result in unboundedly worse performance than AIS? Or is Assumption 2 a sufficient safeguard against this kind of behavior? \n\nCan the authors speak to why their AISCV methods in Figure 2 demonstrate a higher variance than AIS in this setting? It seems odd to me that control variants, a method originally proposed as a variance reduction technique, could cause this behavior. Related to my questions above, it seems that the use of control variates introduces a whole host of design choices that have the potential to result in poor performance. While the theoretical results speak to what requirements the control variates need to satisfy, in practice it seems like this could result in undesirable behavior if the requirements are not met.", " The authors present AISCV estimator that combined Adaptive Importance Sampling (AIS) and control variates (CV). The paper presents a theoretical proof of its convergence rates and presents practical considerations that are really helpful. The theoretical side of this paper is extremely well executed. Everything is carefully described and seems technically sound (I didn't carefully check details in the proof of Theorem 1, but seems valid based on what we know from AIS and CV. I did check the outline of the proof). \n\nI would like the authors to reflect on if AISCV is just a combination of AIS and CV, or if they think it is greater than \"the sum of its parts\". 
\n\nCould you Figure 1(a) and 1(b) have the same cutoff on the y-axis for easier visual comparison between input dimensions. Same where appropriate in Figure 2.\n\nThis paper has enough quality to be accepted to NeurIPS. The drawdown is, I think, that the short format of NeurIPS is making this paper poorer than it could be in a longer format. The experimental section is likely a bit sub-par to many other papers at this venue. Yet, the authors have more experiments in the appendix that I did not read carefully.\n Would it be worth having a baseline of just CV estimators? It is interesting to see how big the reduction is from CV -> AISCV. Especially, how would it compare on Figure 3. My thinking is the \"virtually zero\" variance is a CV effect rather than an AISCV effect. Is this correct? The authors did not address this issue. I do not see any reason why the authors would need to address it either, the paper does not touch any sensitive area of society." ]
[ -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "tYQk2fZk0iUa", "DxiCgFnivyr", "pRBs2uVDYL", "QHrUBibw5qN", "6IehXvQ3aGG", "5NvdPn9BS5D", "nips_2022_M_WuaKoaEfQ", "nips_2022_M_WuaKoaEfQ", "nips_2022_M_WuaKoaEfQ" ]
nips_2022_s0AgNH86p8
TransBoost: Improving the Best ImageNet Performance using Deep Transduction
This paper deals with deep transductive learning, and proposes TransBoost as a procedure for fine-tuning any deep neural model to improve its performance on any (unlabeled) test set provided at training time. TransBoost is inspired by a large margin principle and is efficient and simple to use. Our method significantly improves the ImageNet classification performance on a wide range of architectures, such as ResNets, MobileNetV3-L, EfficientNetB0, ViT-S, and ConvNext-T, leading to state-of-the-art transductive performance. Additionally, we show that TransBoost is effective on a wide variety of image classification datasets. The implementation of TransBoost is provided at: https://github.com/omerb01/TransBoost .
Accept
This paper initially received mixed opinions. After intensive author-reviewer and reviewer-reviewer discussions, all reviewers converged and recommended acceptance. AC recommends accepting the paper.
train
[ "INoXbQ8Zi-S", "T-WaMK4cdd", "_Lp9ndV_ESR", "N1GxPSnRO8a", "Lz4B6TRwMFX", "gmHCbc5x6Xa", "ZrTJYDgzis", "l37040edpk_", "lbrhEu-PIwY", "Fq1ZdERlH-s", "KyIvPRrZs1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After reading relevant materials, I realize that the unlabelled test data are usually used in transduction learning. The experimental results in Table 1 also shows that TransBoost outperforms self-supervised learning methods such as SimCLRv2 in transductive setting, which demonstrate its superiority. The practical value of transduction learning is clear in few-shot learning where only a few labeled samples are given. However, the benchmarks used in this paper, such as ImageNet, CIFAR10, el al., have a large number of labelled training data, it is unnecessary for us to use (unlabelled) test data. Anyway, this paper demonstrates the effectiveness of transduction learning in classification, even better than SSL methods. Hence, I would like to raise my rating to 5.", " After considering comments by other reviewers, authors, and rebuttals, I believe that the submission has demonstrated enough merit for acceptance. I also believe that the community would benefit from the work.", " While I generally avoid directly weighing on other reviewers' comments, I believe that in this case, it's necessary. Transductive learning is a well-established problem domain, with extensive prior literature, especially in the field of few-shot learning. Furthermore, I am personally aware of industrial settings where transductive learning would be beneficial (such as the case where large unlabelled sets of images are available and we need to produce labels on all instances but want to label only a proportion of them). The authors have also clearly discussed bias issues that may arise.\n\nTherefore, I find it unfair to completely reject the submission on the basis of the problem domain not being convincing enough. I would be happy to discuss the specific merits of the method, the experiments, the limitations, and other factors in generating my final reading. However, the problem domain concerns do not provide grounds for rejecting the paper in my opinion.", " Unfortunately (to us) your view and beliefs on transductive learning are wrong.\nIt is recommended that you read the Wikipedia entry:\nhttps://en.wikipedia.org/wiki/Transduction_(machine_learning).\nThis page provides some basic but detailed information on transductive learning, as well as an example problem to illustrate some of the unique properties of transduction against induction.", " Thank you for your detailed rebuttal. Although the authors have clarified the concept of overfitting and biasing, I still have doubt whether it is meaningful to use test data for image-pair learning. In my view, the performance can be definitely (with appropriate usage) improved when test data is used in training stage. Note that transduction learning typically have source domain and target domain, which is often the case in practice. However, test set is often used to test the performance of a model and does not exist in true scenario. I decide to keep my rating unchanged.", " Thank you for your positive review! We appreciate your feedback and are very pleased that our work inspires you. As well, we're curious about your suggestions, like whether TransBoost is more applicable to some architectures than others and what rules TransBoost follows from a theoretical perspective in order to understand how to maximize its performance. We'll leave these questions for future work, as we also mentioned in Section 6 (Concluding Remarks) for some related research questions.", " Thank you for your review and your suggestion. 
We will do our best to explain in detail why transductive learning is a meaningful task and detail various applications for TransBoost.\n\nFirst, let's briefly discuss transductive learning. Transduction was first introduced by Vapnik [1] in the context of statistical learning theory. In relation to classical machine learning, it has been extensively examined in the literature, including the empirical work [2] that applied TSVM to various datasets of text classification to demonstrate accuracy gains compared to standard SVM. The work in [2] first demonstrated that a transductive algorithm could boost the performance of an inductive models (not neural models). As far as deep learning is concerned, transduction has only been briefly addressed. Therefore, we developed TransBoost to take advantage of transduction in deep learning, and we demonstrate overwhelming and consistent improvements (relative to standard inductive deep learning) that should be exploited whenever transductive learning is applicable.\n\n[1] Vladimir Vapnik. The nature of statistical learning theory. Springer science \\& business media, 1999.\n\n[2] Thorsten Joachims et al. Transductive inference for text classification using support vector machines. In Icml, volume 99, pages 200–209, 1999.\n\n*\"The task of deep transductive learning has a severe issue of overfitting target test set\"*\n\nSorry for the misunderstanding. Overfitting is perhaps the wrong word here and the correct one is probably biasing. Indeed biasing the model towards the given (unlabeled) test set is precisely what transductive learning is about.\n In the transductive setting we are given both a labeled and an unlabeled set at training time and the goal is to predict only the labels of the given unlabeled set (the test set) as accurately as possible, without caring about any unseen instances. In section 2 (Problem Formulation), we state that our target objective is only the loss on the given test set (Equation 1). Perhaps this is what ignited your concern about overfitiing/biasing. However, the legitimate biasing in transduction enables improved performance.\n \n*\"Although Transboost does not use the true label of the test set, it implicitly uses pseudo labels, i.e. similarity of image pairs. In this sense, the performance would boost a lot in any deep models as shown in Fig.1.\"*\n\nFirst we should all agree that using pseudo labels is a perfectly valid ingredient.\nHowever, observe that simply using pseudo labels is not enough in order to achieve the accuracy gains that we presented in Figure 1.\nFirst, we have preliminary results on ResNet architectures that show that simply training each model with both labeled data and unlabeled data with its pseudo labels via the standard cross entropy does not provide any accuracy gain (the performance remains the same). Much of the power of transductive learning is achieved through analyzing the given test set as a group. Thus, we designed TransBoost to take advantage of the test set as a group instead of one instance at a time as in inductive settings. In Section 5.4 we empirically demonstrated this idea. We observed that whenever TransBoost uses the entire training set, its performance increases as the test set size grows.\n\n*\"I would like to suggest the authors define a meaningful task. Eqn.(2) borrows ideas from TSVM and it would have a positive effect in test time training\"*\n\nWe appreciate your suggestion. 
The Test-Time Training approach is indeed an interesting application for TransBoost to improve performance under distribution shifts. We'll add a discussion about it in Section 6 (Concluding Remarks) in the final version of the paper. Thank you for the good comment.\nMeanwhile, An appendix section with detailed elaborations on TransBoost applications has been added to a revised paper and prioritized as Appendix A. In general TransBoost is particularly useful when we are able to accumulate a test set of instances and then finetune a specialized model to predict their labels, as we demonstrated in our paper. There are numerous applications for this setting. Our paper briefly mentioned medical diagnosis (Introduction, Paragraph 3) and now in Appendix A we elaborate on additional applications in various fields: medicine, fintech, targeted advertising, homeland security, and data analytics (see Appendix A for more information). Using TransBoost, a user can achieve more accurate predictions in each task while using an efficient and simple framework on top of existing inductive (standard) models.", " Thank you for your review and your suggestions. We are committed to address all the questions and concerns raised in your review.\n\n*\"The paper is not in the submission format, there is no line number.\"*\n\nThanks, you are correct, we have added the line numbers and uploaded a revised paper.\n\n*\"Significance: the author doesn't provide a strong motivation for why their work is important\"*\n\nWe agree with this concern and thank you for this good comment. Indeed, we have not sufficiently motivated deep transductive learning and haven't elaborated on useful applications for TransBoost. An appendix section with detailed elaborations on TransBoost applications has been added to a revised paper and prioritized as Appendix A.\nIn general TransBoost is particularly useful when we are able to accumulate a test set of instances and then finetune a specialized model to predict their labels, as we demonstrated in our paper. There are numerous applications for this setting. Our paper briefly mentioned medical diagnosis (Introduction, Paragraph 3) and now in Appendix A we elaborate on additional applications in various fields: medicine, fintech, targeted advertising, homeland security, and data analytics (see Appendix A for more information). Using TransBoost, a user can achieve more accurate predictions in each task while using an efficient and simple framework on top of existing inductive (standard) models.\n\n*\"in the 3rd paragraph in Section 4, what does it mean: we incentivize test sample pairs that are likely to be different in their classes to also be different in their empirical class probabilities while preserving the prior knowledge of $f_\\theta$?\"*\n\nThe following statement was made in this section to give an overview of TransBoost's process after it was introduced formally. Let us elaborate on this intuition.\nTransBoost loss in Equation 2 consists of three components - a confidence function $\\kappa$, a selection function $\\delta$, and a similarity function $\\mathcal{S}$. Informally, it was designed to distinguish representations of pairs that are likely ($\\kappa$) to belong to different classes ($\\delta$), but appear to be similar ($\\mathcal{S}$). 
For example, if we have a pair of unlabeled test samples where one is predicted to be \"Dog\", and the other is predicted to be \"Cat\", TransBoost will calculate its loss based on their similarity in their class probability vectors, so that when Dog and Cat have similar class probability vectors (as defined by Equation 4), they will get a higher loss than when Dog and Cat have dissimilar class probability vectors. As an intuition, we wrote: \"preserving the prior knowledge of $f_\\theta$\" to emphasize that we are given a pretrained inductive model that performs meaningfully on the test set, so our method preserves the accuracy of the given model on the labeled training set while boosting the performance on the test set.\nAnyway, thank you for your note, we will fix this in the revised paper.\n\n*\"(a) How is the $p(x|f)$ calculated with SVM?\"*\n\nWe agree to add an appendix section that includes these technical details. Given a test point $x\\in\\mathbb{R}^2$ we calculated its binary class probability vector $p(x|f)$ via standard logistic regression. Formally, Let $w\\in\\mathbb{R}^2$ and $b\\in\\mathbb{R}$ denote the SVM's coefficients and bias term after training. We first calculated its linear score $z=w^Tx+b$. Then, we applied the logistic function $f(z)=\\frac{1}{1+\\exp(-z)}$ to calculate the probability score for the instance belonging to the first class (the probability score for the instance belonging to the second class is $1-f(z)$).\n\n*\"(b) Although TransBoost loss is lower in T-SVM, does the decision boundary of T-SVM provides better separation of the test instances? So what is the distribution of test instances?\"*\n\nThe purpose of this toy example is to quantitatively demonstrate why training with the TransBoost loss encourages large margins on a test set (similar to TSVM). In this example, we synthesized the labeled training points and unlabeled test points without considering their real distribution (because it's synthetic). Therefore, this example does not claim anything about the model's performance when a large margin principle is taken into account on the test set, but only illustrates that TransBoost encourages this behavior.\nIn classical machine learning, large margin methods were extensively researched. A relevant example may be the work [1] that demonstrated impressive accuracy gains when TSVM was compared with SVM on several binary text classification datasets. We believe this work can answer your question on how large margins can help the test set perform better (Figure 5 in [1]).\n\n[1] Thorsten Joachims et al. Transductive inference for text classification using support vector machines. In Icml, volume 99, pages 200–209, 1999.\n\nThank you for the other minor comments, we fixed them and uploaded a revised version.", " The paper proposes TransBoost to deal with deep transductive learning. The method follows the large margin principle and includes the similarity and confidence of two instances in the loss function. The effectiveness of this method is examined by using a variety of pre-trained neural networks, datasets, and baseline models. It shows good performance compared to inductive classification as well as versus SSL and t-FSL methods Strengths:\nIt provides extensive experiments on multiple neural network architectures and datasets and shows good improvement.\n\nWeakness:\nThe paper is not in the submission format, there is no line number. 
\nClarity: some paragraphs are hard to read.\nSignificance: the author doesn't provide a strong motivation for why their work is important 1. Some paragraphs are hard to read. For example, in the 3rd paragraph in Section 4, what does it mean: we incentivize test sample pairs that are likely to be different in their classes to also be different in their empirical class probabilities while preserving the prior knowledge of $f_{\\theta}$?\n2. The toy example in Section 4.2 is unclear:\n(a) How is the p(x|f) calculated with SVM?\n(b) Although TransBoost loss is lower in T-SVM, does the decision boundary of T-SVM provides better separation of the test instances? So what is the distribution of test instances?\n3 Typo?\n(a) Section 5.5: separating the different -> separating the difference\n(b) Figure 2: ... example indicating ... -> ... example indicates ... N/A", " This paper investigates deep transductive learning where both a labeled training sample and an unlabeled test sample are provided at training time. The goal of deep transductive learning is to achieve high accuracy on the test set. To this end, the authors propose TransBoost as a procedure to fine-tune any deep neural model using a labeled training sample and an unlabeled test sample. Extensive experiments are conducted to verify the effectiveness of the proposed method.\n **Pros:**\n1. The paper is well-written and well-organized.\n2. Investigating conventional statistical machine learning in the context of deep learning is interesting and meaningful.\n3. Extensive experiments are conducted to verify the effectiveness of the proposed task and method.\n\n**Cons:**\n1. The task of deep transductive learning has a severe issue of overfitting target test set. Although Transboost does not use the true label of the test set, it implicitly uses pseudo labels, i.e. similarity of image pairs. In this sense, the performance would boost a lot in any deep models as shown in Fig.1.\n Please see details in Cons. I would like to suggest the authors define a meaningful task. Eqn.(2) borrows ideas from TSVM and it would have a positive effect in test time training [1]\n\n[1] Sun et al. Test-Time Training with Self-Supervision for Generalization under Distribution Shifts. ICML 2020.\n Yes", " Paper proposes TransBoost, a transductive learning loss function that improves performance in transductive image classification in domains where large datasets and test sets are available. The work draws inspiration from TSVM and proposed a transductive loss regularization term that discriminates unlabelled examples that have similar class probabilities but shouldn't. Extensive experiments demonstrate that TransBoost improves performance on various architectures, datasets, and settings. Ablation studies provide interesting insights into the behaviour of TransBoost under different conditions. Strengths:\n- Paper is incredibly well-written and very easy to follow.\n- TransBoost is an interesting and relatively novel idea. It effectively bridges the gap in performance from t-FSL methods that have been widely developed. The algorithmic choices are sound and well-motivated.\n- Experiments are extensive and provide sufficient evidence to establish state-of-the-art performance.\n\n(Small )Weaknesses:\n- Certain results suggest that TransBoost is more applicable to some architectures than others. Although performance improvements are universal, some methods improve much more, and some even surpass their otherwise more inductively powerful counterparts. 
This suggests that TransBoost is algorithmically better suited for certain architectures, but the reasoning is primarily speculated. It would be useful to see further analysis as to why for instance, Transformer based architectures benefit from TransBoost?\n- One would presume that alternatively modifying the loss function to group together similar samples would benefit performance, albeit not to the same extent. However, the observation that it degrades performance may suggest that transductive learning can produce fragile optimization manifolds that should be carefully designed. Further elaboration on this front could be useful. Suggestions are included in the above sections. An overall strong submission that could be further strengthened by additional analysis of TransBoosts uneven behaviour with respect to architectures. Yes, the authors provide an in-depth analysis of the technical limitations of the work and provide directions for future research. There is, however, no explicit discussion of societal impacts. I believe the work could benefit from a concise analysis of its societal impact and if any could potentially be negative." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "N1GxPSnRO8a", "gmHCbc5x6Xa", "N1GxPSnRO8a", "Lz4B6TRwMFX", "ZrTJYDgzis", "KyIvPRrZs1", "Fq1ZdERlH-s", "lbrhEu-PIwY", "nips_2022_s0AgNH86p8", "nips_2022_s0AgNH86p8", "nips_2022_s0AgNH86p8" ]
nips_2022_lHj-q9BSRjF
Data Distributional Properties Drive Emergent In-Context Learning in Transformers
Large transformer-based models are able to perform in-context few-shot learning, without being explicitly trained for it. This observation raises the question: what aspects of the training regime lead to this emergent behavior? Here, we show that this behavior is driven by the distributions of the training data itself. In-context learning emerges when the training data exhibits particular distributional properties such as burstiness (items appear in clusters rather than being uniformly distributed over time) and having a large number of rarely occurring classes. In-context learning also emerges more strongly when item meanings or interpretations are dynamic rather than fixed. These properties are exemplified by natural language, but are also inherent to naturalistic data in a wide range of other domains. They also depart significantly from the uniform, i.i.d. training distributions typically used for standard supervised learning. In our initial experiments, we found that in-context learning traded off against more conventional weight-based learning, and models were unable to achieve both simultaneously. However, our later experiments uncovered that the two modes of learning could co-exist in a single model when it was trained on data following a skewed Zipfian distribution -- another common property of naturalistic data, including language. In further experiments, we found that naturalistic data distributions were only able to elicit in-context learning in transformers, and not in recurrent models. Our findings indicate how the transformer architecture works together with particular properties of the training data to drive the intriguing emergent in-context learning behaviour of large language models, and indicate how future work might encourage both in-context and in-weights learning in domains beyond language.
Accept
This paper poses and analyses an interesting question -- do statistical properties of the training data affect emergent behavior (e.g. in-context learning) in Transformers? The study is novel since it probes the properties of the data itself, as opposed to most existing work that has studied the effect of model architectures and training algorithms on model capabilities. The paper shows that properties like Zipfian distribution and burstiness are useful for eliciting in-context learning (two properties that are widely found in natural languages, where in-context learning has had a rapid emergence). All the reviewers appreciated the originality and timeliness of this paper, and experiments provide interesting conclusions. Overall, I believe the paper will spark future inquiry into data distributional properties. I recommend the authors pay attention to Reviewer D2K5's comment about clarifying the definition of "burstiness" in the writing and making it more consistent with the experiments performed.
train
[ "RAq5Jj7XAUW", "cGsRNvCXNKw", "aHEj2yR3sDh", "O49jV3rf69-", "aruS1CmsNi", "dbEsYMqzcJV", "YerysSgAp3C", "mVHiGW_khIP2", "0dQDB7H5iX8", "7IdtocvytDY", "fCuIobnorb2", "oQ6EtaGfSQh", "FAqR025Bgfo", "TTXMQ8I87q6", "kC_ZtRDUX2k", "2Zdk3a3EpS_", "KhcjQCOOSm-", "lVQ4ErKPTrR" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for your engagement and helpful comments through this process -- our paper is now significantly clearer, thanks to your help. And yes, we do hope that this work can ignite a new chain of research going forward! We think that answering these questions is important both for understanding current models and for developing new models with in-context learning abilities.\n\nBest regards,\n\nThe authors", " We appreciate your effort in critically evaluating our paper. Together with the other reviewers it has helped us to improve it. Thank you!", " I don't have any further questions, and thanks for addressing one of my clarity concerns. I still see this as an Accept, well done.", " Dear reviewer,\n\nThank you again for your review -- please let us know if you have any remaining questions, so that we can address them before the deadline tomorrow. Alternatively, if you feel that your original concerns about clarity are addressed, we would appreciate your updating your evaluation to reflect that.\n\nThank you so much,\n\nThe authors\n", " Dear reviewer,\n\nThank you again for your review -- please let us know if you have any remaining questions or concerns, so that we can address them before the deadline tomorrow. Alternatively, if you feel that your original concerns are addressed, we would appreciate your updating your evaluation to reflect that.\n\nThank you so much,\n\nThe authors", " Thank you for the insights and clarifications!", " Thank you for clarifying the points. Now I understand more about what this paper is trying to do. This paper indeed opens up a really important research direction -- where do the in-context learning ability come from -- and provides some analysis towards an answer, in-context learning ability comes from the data distributional properties. To fully support this answer (e.g., which property in the data distribution allows the in-context learning, and if any of the property interacts with the structural choices), tens of subsequent papers are needed. I'll update my score. \nBest regards, \nReviewer EeoY", " [[ RE: How does in-context learning work together with few-shot learning? Did you sample multiple demonstration sequences and run few-shot learning? Compared to few-shot learning (using just one image as input), this setting has a lot of computational overheads. Compared to in-context learning (without changing model parameters), this approach is limited to much smaller models. What are the benefits of in-context few-shot learning? ]]\n\nWe are not completely certain that we understand this question, but we think that there may potentially be a misunderstanding of the definitions of few-shot learning and in-context learning. Hopefully the more explicit definitions added to the introduction will ameliorate this. Few-shot learning generally involves learning from a *few* image-label pairs, rather than one. In-context learning is a specific type of few-shot learning, where there are no gradient updates, and where the model output is conditioned on a \"context\" that is provided at evaluation.\n\nIn the second part of your question, we believe you are pointing out that few-shot learning *with* gradient updates is an approach that is limited to smaller models, perhaps because gradient updates are expensive for a larger model? This is true, and it is very interesting to consider your question about the benefits of in-context learning vs few-shot learning with gradient updates. In-context learning is beneficial because it is e.g. 
far less computationally expensive, and able to deal with non-uniform sampling across sequences. Note that our paper is not intended to advocate for in-context learning, but rather to explain the factors that drive this interesting emergent phenomenon. \n\n[[ RE: Sec 3.1: Doesn't bustiness overlap to some extent with \"a large number of rarely occurring classes\"? ]]\n\nWe vary these distributional properties independently in our experiments. The number of rarely occurring classes is a property describing the number of possible images that can appear; but given a fixed distribution across all images, it is possible to make that distribution bursty (by packing images into the same sequence) or distributed. As a toy example, if the distribution were “a” x 2 and “b” x 2, this could be made into bursty sequences “aa” and “bb”, or into non-bursty sequences “ba” and “ab”. The frequency of \"bursty\" sequences can be controlled independently of the number of classes.\n\nWe hope that these clarifications address your questions. Thank you again for your review, and please do let us know if you have any additional questions.\n\nBest regards,\n\nThe authors\n", " Dear Reviewer,\n\nThank you for your thoughtful review of our paper, and for your evaluation that our paper is novel, interesting, and important. \n\nThank you also for your pointers for clarification. We appreciate the opportunity to further improve our exposition, and we have addressed your comments below and with updates to the paper.\n\n–\n\n[[ RE: A definition for burstiness ]]\n\nThe fields of statistics, computer, and linguistics have proposed many ways to quantify burstiness, e.g. as described in the references included in the introduction, but there is unfortunately no single standard definition. A small sampling of of these include: a \"numerical discrepancy\" (the difference in counts between the current context and the mean), the Fano factor (a ratio between the variance and mean of counts), or the degree of skew when fitting an exponential distribution to inter-event intervals. Rather than choosing a restrictive definition, we wanted to emphasize the general phenomenon, which is observed across many natural domains. We therefore described burstiness in the introduction as simply \"a distribution that is not uniform across time, instead tending to appear in clusters\", in order to capture this general phenomenon. Wikipedia states, similarly, that \"In statistics, burstiness is the intermittent increases and decreases in activity or frequency of an event.\" For our experiments, we use the functional definition that bursty sequences have multiple occurrences of the query category in the context, chosen to reflect one type of burstiness that can be observed in e.g. language (within a single context window). We have added emphasis on these points in the section \"The training data\". Even in our particular instantiation, however, burstiness is a continuous feature of the overall dataset, which we vary by increasing or decreasing the proportion of \"bursty\" sequences. We have clarified these points in the paper. We have also added emphasis on the wide variety of possible ways to quantify burstiness, and additional references.\n\n[[ RE: Definitions for \"in-weights\" vs \"in-context\" learning ]]\n\nThe definitions for these terms can be found in the main text, rather than in the figure captions. We previously defined \"in-weights learning\" as \"information that is stored through slow, gradient-based updates\". 
We defined \"in-context learning\" as \"few-shot learning from a few examples without the need for any weight updates … referred to as 'in-context learning', as the output is conditioned on the context\". We believe that these are in line with standard definitions for these terms. However, we have now rewritten the first paragraph of the introduction to make these definitions much more explicit:\n\n\"Large transformer-based language models show an intriguing ability to perform in-context learning \\citep{brown_language_2020}. This is the ability to generalize rapidly from a few examples of a new concept on which they have not been previously trained, without gradient updates to the model. In-context learning is a special case of few-shot learning in which the output is conditioned on examples from a 'context', and where there are no gradient updates. It contrasts with 'in-weights' learning, which is the standard setting for supervised learning – this is slow (requiring many examples), and depends on gradient updates to the weights.\"\n\nWe have also emphasized the defined terms in bold, to draw readers' attention and make them easier to refer back to. Please do let us know if further clarification is needed.\n\n[[ RE: \"Bursty sequences can be solved by in-weights learning or in-context learning\". So the non-bursty sequences can not be solved by either in-weights learning or in-context learning? ]]\n\nNon-bursty sequences can only be solved by in-weights learning.\n\n(continued below in part 2)", " \n> I assume \"burstiness\" refers to the so-called Taylor's law (bias as time series data), but can it be expressed in short training data of eight examples? Isn't it simply the uniformity of the frequency (cf. time series) that is changing...?\n\nAs described in the introduction, burstiness is \"a distribution that is not uniform across time, instead tending to appear in clusters\". In statistics, computer science, and linguistics, there exist many proposed methods for quantifying burstiness, e.g. as described in the references in the introduction. A small sampling of examples include: a \"numerical discrepancy\" (the difference in counts between the current context and the mean), the Fano factor (a ratio between the variance and mean of counts), or the degree of skew when fitting an exponential distribution to the inter-event intervals. Taylor's law can indeed lead to one type of such burstiness, though it is only one instantiation of it. Rather than choosing a restrictive definition, we wanted to emphasize the general phenomenon, which is observed across many natural domains. We therefore used the simple and general definition stated in the introduction. For the experiments, we use the simple functional definition that bursty sequences have multiple occurrences of categories within a context window, chosen to reflect one type of burstiness that can be observed in e.g. language (within a single context window). We have added emphasis on this point in the Experimental Design section. Finally, it is indeed the case that burstiness implies nonstationarity (non-uniformity in frequency). 
We mentioned this in the introduction, though we call it \"a distribution that is not uniform across time\" (let us know if any alternate wording might be preferred).\n\nWe hope that we were able to address your questions through these explanations, additional analyses, and updates to the paper – please let us know if you have any remaining questions at all.\n\nBest regards,\n\nThe authors\n", " Dear Reviewer,\n\nThank you for your review. We were humbled to read that you found our research \"fascinating\", and that you appreciated the new perspective on an \"important issue\". Across the ML community, there has been widespread recognition that in-context learning is an important ability with many potential applications. However, this phenomenon is still poorly understood. In this work, we've provided clear evidence of factors that drive its emergence. We believe that the conclusions we reveal can be highly impactful – for understanding in-context learning in language models, and for potentially eliciting this useful ability in new domains.\n\nThank you also for your questions regarding the dependent and explanatory variables – we have clarified below and in the paper, and we hope that these clarifications and additional experiments address your concerns.\n\n–\n\nOur experiments evaluate the effects of the distributional properties of training data on two primary dependent variables: performance on in-context learning, and performance on in-weights learning. Regarding the three variables you pointed to:\n\n(1) binary vs multi-class classification. \n\nAcross training and both kinds of evaluation (in-context and in-weights), the model structure is the same – it is trained to predict one of a large number of labels (e.g. 1600 labels). For in-context evaluation, we reported accuracy based only on the two labels seen in context. This form of evaluation provides a more sensitive measure of the model’s in-context learning — for example, in full multi-class evaluation it is possible to achieve above-chance performance simply by randomly selecting from one of the labels in context. Also, the two-choice evaluation ensures that all experimental conditions have the same level of chance – otherwise, for the \"number of training classes\" and \"dynamic meanings\" experiments, chance level and difficulty would vary with the number of possible output labels.\n\nHowever, we do appreciate that this leads to an additional difference between the two evaluation conditions, as you point out. Therefore, we have now also computed and reported the in-context evaluations using full multi-class evaluation, while keeping in mind the caveat about chance levels changing with different output sizes. These results have been added in Appendix C.4. When computed in this way, the results do not change much. Given this full set of results on both two-choice and multi-class evaluation, we do not believe that this variable is a confounding factor.\n\nAs a side note, the multi-class evaluation did uncover one interesting new result for the models trained on Zipfian distributions (we may pursue this in future work, and so are particularly glad that you brought up this question!). Feel free to read more in Appendix C.4 if you are interested.\n\n(2) no model updates vs model updates\n\nLearning image labels by relying on gradient updates (in-weights learning) and learning without model updates (in-context learning) are precisely our two dependent variables in this work. 
This factor is not a confound in our experiment – it is intrinsic to the phenomena we are attempting to measure! We have re-written the first paragraph of the introduction to make this much more explicit.\n\n(3) classification on holdout vs training classes.\n\nThis is a good point. We made the decision to use a more standard form of evaluation by measuring in-context learning as the ability to learn an entirely held-out visual category (without model updates) . However, learning a new label binding for an otherwise familiar category is an equally valid form of in-context learning. We think the former is strictly more impressive than the latter, but to make things extra clear we have added results for the latter case (so that now, both in-weights learning and in-context learning can be measured based on 'familiar' classes from the training data). These results are reported in Appendex C.3. As you might predict, in-context learning emerges slightly more readily if defined in this way vs. the stricter definition we adopted originally. Otherwise, we observe very little difference in our empirical outcomes.\n\n(continued in part 2 below)", " > I am genuinely surprised that the LSTM and RNN models failed so terribly and am curious if the authors have a take on this.\n\nThis is one of the directions we are most excited to pursue in future work. We were a bit surprised at these results as well, and hence performed the large hyperparameter search, and investigated both LSTMs and vanilla RNNs. But indeed, our results are in line with previous work, which has shown that recurrent architectures can only meta-learn to perform few-shot learning when the RNNs are complemented with additional architectural components, e.g. an additional memory component (Santoro et al, 2016) or an additional metric-learning component (Vinyals et al, 2016). \n\nWe believe that recurrent models perform worse than transformers at few-shot learning potentially because of a combination of factors. Most importantly, perhaps, recurrent models lack the attention mechanisms that allow them to easily compare query items with items from context. Instead, they must pass information forward through the bottleneck of the hidden state, making it harder to bring forward all of the necessary information. They also suffer, as is well known, from exploding and vanishing gradients, which exacerbate the issue. We find this to be a very interesting question, and hope to investigate it in our continuing work.\n\nThank you again for your thoughtful and careful review – we very much appreciate it.\n\nBest regards,\n\nThe authors\n", " Dear Reviewer,\n\nThank you so much for your thoughtful review. We appreciate the effort you made to really understand the questions and concepts that we probe in this work. We have now rewritten the first paragraph of the introduction to make our definitions of in-context and in-weights learning much more explicit. We have also added bold font to the defined terms, to draw readers' attention and to make them easier to refer back to. Please let us know if any ambiguities remain:\n\n\"Large transformer-based language models show an intriguing ability to perform in-context learning \\citep{brown_language_2020}. This is the ability to generalize rapidly from a few examples of a new concept on which they have not been previously trained, without gradient updates to the model. 
In-context learning is a special case of few-shot learning in which the output is conditioned on examples from a 'context', and where there are no gradient updates. It contrasts with 'in-weights' learning, which is the standard setting for supervised learning – this is slow (requiring many examples), and depends on gradient updates to the weights.\"\n\nThank you also for your questions, which we found very interesting. We answer below:\n\n> How would you construct a language dataset to test some of these hypotheses instead of a dummy omniglot one? That feels like a missing experiment here because you do lots of variations, but do not include anything that gets at the environment that the LLMs are operating in. Could you create a language dataset that wasn't bursty? In my mind, this would be testing what happens when you use a real-world dataset that followed the uniform IID concept.\n\nThis is an interesting topic, and one we have thought carefully about. Our aim with these experiments was to factor out properties of data distributions that could promote the emergence of in-context learning, and also to test these properties in a domain outside of language – only by satisfying these two desiderata can we clearly show that it the data distributional properties themselves, rather than any other aspect of language, that drive in-context learning. This drove our decision to perform our experiments on the Omniglot image dataset, in addition to Omniglot being a standard dataset for few-shot or in-context learning. We would also argue that since the properties we investigated are well documented for natural language, they are in fact integral to the environment that LLMs are operating in.\n\nAt the same time, our experiments do not provide causal evidence that these are precisely the properties driving in-context learning in LLMs, and we are careful to say only that our results hint at that interpretation. As you can imagine, given that these properties are inherent to natural language, we cannot remove these properties without breaking many other properties of language as well (syntax etc), and so that kind of ablation experiment on language cannot be done as far as we could tell. Nonetheless, we believe that it would be very interesting for future work to investigate the emergence of in-context learning in settings that replicate other aspects of language and language modeling, as mentioned in the discussion – e.g. having symbolic inputs, and training on next-token or masked-token prediction. We believe that another exciting future test of these ideas will be through the application of these ideas to larger-scale vision problems and reinforcement learning problems, as described in the answer to the next question.\n\n> What would it take to do the same experiments in vision like you point to in the discussion?\n\nAlthough we did perform our experiments on a small image dataset, it would be very interesting to apply the lessons of this work to applied vision work, or to larger vision datasets. We believe that there are two exciting potential routes worth investigating:\n\n(a) Adjust the distributions of vision datasets to reflect the properties found, in this work, to drive in-context learning.\n\n(b) Move to more naturalistic data, which will naturally reflect such properties – this seems to be a promising trend that is happening in the field in any case, and is reflected in the transition e.g. 
from ImageNet-style datasets a decade ago (with classes curated to be equal in size) to massive \"in-the-wild\" datasets like Ego4D, which are not curated to be artificially uniform in the same way. We look forward to seeing the results of training on these more natural datasets.\n\n(continued in part 2 below)", " Dear reviewer,\n\nThank you so much for your review and for your recognition of the value of this research – we have put in a lot of time and effort into this work, and we very much appreciate the acknowledgement.\n\nTo address your questions:\n\n> How might the critical effects of the Zipfian exponent of the distributions be explained?\n\nThis is a very interesting question – our hypothesis is that a Zipfian distribution has a combination of common and rare classes that each induces one type of learning. Common classes could be retained in weights because they were presented frequently enough to benefit from the kind of gradual learning that gradient descent performs. At the same time, the long tail of rare classes cannot be learned via gradient descent because they are each seen so infrequently – thus, the model must also acquire in-context learning abilities to handle these rare classes.\n\nIn more detail, we believe that there may be two important factors: the dynamics of weight changes (which are determined by the optimizer, batch size, etc) and the Zipf exponent. Together, these factors establish the following:\n\n1) The probability that a particular class is present in each subsequent batch. In the ideal supervised learning case, a particular class is observed in each batch, allowing its class label to be learned in the model's weights.\n\n2) The amount that weights change from update to update, and how these changes contribute to catastrophic forgetting. For example, if it takes N batches before a particular class is seen again (see previous point), and the weights change by M, then it might be impossible for that class to be learned in the weights. What N and M are, we do not know.\n\n3) The probability that a (catastrophically) forgotten class is present in each batch. In the ideal meta-learning case, each batch contains classes whose labels are forced to be unknowable (and hence, are functionally catastrophically forgotten).\n\nWe predict that the probabilities of (1) and (3) need to be high to see the effects we observed. The particular conditions that make this so, conditioned on (2), are probably highly context dependent. We are very interested in exploring these questions and predictions in future work.\n\n> What are the color codings of the curves in Fig 7 & Fig 8? Are they just plotting the individual runs for different seeds, or perhaps for different hyperparameter settings?\n\nThe colors are indeed indicating different runs, but they are not informative in any way beyond that. Thank you for pointing out this potential ambiguity – we have added clarification in the figure captions.\n\n> RE: Details of the hyperparameter search.\n\nThe sweeps over max learning rate and num warmup steps were 15 log-uniform random samples over the ranges [1e-5, 0.1] and [1, 10000] respectively, rather than just two samples for each hyperparameter. We have clarified this in the text.\n\nWe performed 15 runs each for\n* the transformer with 2 layers\n* the transformer with 12 layers\n* the vanilla RNN with 2 layers\n* the vanilla RNN with 12 layers\n* the LSTM with 2 layers\n* the LSTM with 12 layers\nI.e. 90 runs total. 
We have clarified this in the text as well.\n\nThank you again for your review and comments – they are much appreciated.\n\nBest regards,\n\nThe authors\n", " The authors pointed out that the reason why in-context learning (few-shot learning without parameter updates) can be achieved with large left-to-right (causal) language models is due to the data distribution.\n\nCompared with in-weights learning (learning with parameter updates), the authors found that in-context learning is superior when the training data has the following characteristics: (1) burstiness (degree of unevenness as time series data), (2) large number of classes or heavy-tailed distribution (3) tokens have dynamic meaning. Also, they found that these characteristics are not valid in the recurrent architecture, but are more pronounced in the Transformer architecture. ### Strengths\n\nIt is fascinating to see research that provides an analysis from a new perspective on distributional properties of the training data to the important issue of realizing few-shot learning with causal language models.\n\n### Weaknesses\n\nThe experiments are not controlled.\n\nThe three dependent variables are different in in-context learning and in-weights learning: (1) binary classification <-> multi-class classification (2) no model update <-> model update (3) classification for holdout classes <-> classification for trained (or common) classes. Therefore, we do not know which explanatory variable (burstiness, heavy-tail, dynamic semantics) works for which dependent variable. It must be said that the scientific findings are unclear. I assume \"burstiness\" refers to the so-called Taylor's law (bias as time series data), but can it be expressed in short training data of eight examples? Isn't it simply the uniformity of the frequency (cf. time series) that is changing...? Limitations (future directions) are carefully described in Discussion section.", " This work performs a detailed investigation of the factors that promote different types of learning in transformers: in-context vs. in-weights. The results delineate which aspects of the training distribution promote one form of learning or the other, or even both together. Further experiments find that recurrent models do not demonstrate the same learning abilities as the transformers studied here. This is an elegant and important paper. The investigations and their results are presented in a clear and compelling way, making for a real page-turner! \n\nThe work makes several contributions. The two types of learning studied here (in-context and in-weights) are neatly defined and explained. The experiments demonstrate how in-context learning is promoted by several specific properties of the training data: burstiness, number of classes, and the Zipfian exponent of their marginal distributions. Especially exciting are the results indicating that a Zipfian marginal distribution with exponent of 1 over classes promotes a combination of both in-context and in-weights learning, which are otherwise at odds with each other in most experiments.\n\nThe comparisons between transformers and RNNs is of added benefit. 
The broader implications of these results, such as for application to RL, are especially exciting.\n\nOverall, this work begins to dispel the mysteries behind the excellent empirical performance of transformers, and should lead to many more enlightening studies.\n How might the critical effects of the Zipfian exponent of the distributions be explained?\n\nWhat are the color codings of the curves in Fig 7 & Fig 8? Are they just plotting the individual runs for different seeds, or perhaps for different hyperparameter settings?\n Line 232 says: “We performed a comprehensive hyperparameter search for all models (see Appendix for details).” But the appendix mentions only “15 runs for each architecture” over three hyperparameters with 2 values each. This seems like a fairly shallow hyperparameter sweep. Unless perhaps each hyperparameter combination was counted as a separate architecture, making 8 * 15 = 120 runs for the transformer alone, etc. ", " This paper asks a range of questions around what affects in-context vs in-weight learning in modern neural networks, focusing on Transformers, with RNNs and LSTMs serving as comparisons. In totality, the experiments evaluate:\n\n1. In-context vs In-weight based on burstiness of the data.\n2. In-context vs In-weight based on number of training classes in the data.\n3. In-context learning based on the number of labels per class.\n4. In-context vs In-weight based on within-class variation.\n5. How a Zipfian distribution in the data affects In-context and In-weight learning.\n6. How In-context learning compares across Transformers, RNNs, and LSTMs.\n\nThe paper asks these questions in the context of the recent strong LLM results. How can it be that these models seem to be exhibiting both in-context and in-weight learning? They then proceed to examine why this is and try to parse apart the causes by examining artificial data and comparing model classes. Originality:\n\nThis paper is original wrt the questions it asks. The methods for answering them (Omniglot, simple plots, etc) are not new, but they don't need to be. \n\nClarity:\n\nWhile the writing was good, the paper was tougher to follow than I would have expected. I think it would get a huge improvement from a graphical interpretation of in-context vs in-weight learning. These are not terms I've seen before (and I've worked in FSL) and in-weight learning wasn't really ever properly defined in the paper. As far as I can tell, in-context learning was only roughly defined and that was on page 1 in relation to meta-learning. I had to piece together what I think is the definition from the rest of the work. I'm not sure if others you've shown this to have had the same problem, but just setting aside some space near the beginning to define what you mean by this would help a lot. Otherwise, I was left to wonder what actually fits into this view and what doesn't.\n\nQuality:\n\nThe quality is high. Once I understood what was going on with their interpretation of in-context vs in-weight learning, the results were pretty interesting. I liked the Zipfian distribution experiment a lot because it was one of the questions I would have had about doing this on language. I am genuinely surprised that the LSTM and RNN models failed so terribly and am curious if the authors have a take on this.\n\nSignificance:\n\nThe paper's experiments are intriguing and point to some insights into why LLMs are working so well. 
I'm not familiar enough with this part of the literature to know whether these align with other results, but to me this is new and good stuff asking worthwhile questions with thought-provoking results. This strikes me as significant for computer vision as well because of the questions about whether it's worthwhile to shift to different paradigms of data than the prevailing uniform IID approach. 1. What is your actual definition of in-context vs in-weight learning?\n2. How would you construct a language dataset to test some of these hypotheses instead of a dummy Omniglot one? That feels like a missing experiment here because you do lots of variations, but do not include anything that gets at the environment that the LLMs are operating in. Could you create a language dataset that wasn't bursty? In my mind, this would be testing what happens when you use a real-world dataset that followed the uniform IID concept.\n3. What would it take to do the same experiments in vision like you point to in the discussion? As evidenced by the lengthy discussion, the authors understand the paper's limitations and what it points to. ", " Recently, the emergence of in-context learning has raised questions about the cause of this learning behavior. This paper considers in-context few-shot learning for image data. In this setting, the paper finds that in-context learning emerges when the training data is bursty. In addition, when the data follows a skewed Zipfian distribution, both in-context learning and in-weight learning can co-exist in a single model.\n\nUpdates after seeing other reviews: I underestimated the core contributions of this paper, both in the novelty of the perspective and the setting of empirical evaluations. However, I still think this paper will benefit from more rigorous descriptions of some key terms. I increased the score to 5 (confidence rating 3). **Strengths**\n\n- The study of the interaction between in-context learning and in-weight learning is novel.\n- The inquiry into the reasons that lead to in-context learning is important.\n- The experiments show some interesting findings.\n\n**Weaknesses**\n\n- Formatting: The page limit is 9 pages right? Lines 314-317 appear on page 10.\n- Three key concepts discussed in this paper, in-context learning, in-weight learning, and burstiness, lack quantified definitions. Please refer to my comments below. - It appears that burstiness is treated as a binary feature (defined in Figure 1 caption, lines 5-7). This is not a rigorous definition, and would definitely benefit from additional elaboration. \n- Similarly, \"in-weight learning\" and \"in-context learning\" are also defined in the Figure 1 caption (lines 6-7), which are not quantitative definitions. The in-weight vs. in-context learning effect is mentioned a lot in this paper. I think a less ambiguous definition than just \"learning labels across sequences\" or \"referring back to the context\" would be necessary.\n- \"Bursty sequences can be solved by in-weights learning or in-context learning\". So the non-bursty sequences cannot be solved by either in-weights learning or in-context learning?\n- How does in-context learning work together with few-shot learning? Did you sample multiple demonstration sequences and run few-shot learning? Compared to few-shot learning (using just one image as input), this setting has a lot of computational overhead. Compared to in-context learning (without changing model parameters), this approach is limited to much smaller models. 
What are the benefits of in-context few-shot learning?\n- Sec 3.1: Doesn't burstiness overlap to some extent with \"a large number of rarely occurring classes\"? The limitations are adequately addressed." ]
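Since several of the reviews above hinge on what "bursty" and "Zipfian" mean operationally, the following minimal sketch illustrates one plausible construction (a hypothetical toy, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def zipf_probs(num_classes: int, alpha: float) -> np.ndarray:
    """Marginal class probabilities p(k) proportional to k^(-alpha);
    alpha = 0 is uniform, alpha = 1 is the Zipfian regime."""
    w = np.arange(1, num_classes + 1, dtype=float) ** (-alpha)
    return w / w.sum()

def context_labels(probs: np.ndarray, length: int = 8, bursty: bool = True):
    """Class labels of one context sequence. 'Bursty' sequences reuse a
    couple of classes within the sequence; non-bursty ones draw i.i.d."""
    if bursty:
        support = rng.choice(len(probs), size=2, replace=False, p=probs)
        return rng.choice(support, size=length)
    return rng.choice(len(probs), size=length, p=probs)

probs = zipf_probs(1600, alpha=1.0)
print(context_labels(probs, bursty=True), context_labels(probs, bursty=False))
```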
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 9, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 3 ]
[ "YerysSgAp3C", "aHEj2yR3sDh", "O49jV3rf69-", "KhcjQCOOSm-", "kC_ZtRDUX2k", "TTXMQ8I87q6", "mVHiGW_khIP2", "0dQDB7H5iX8", "lVQ4ErKPTrR", "fCuIobnorb2", "kC_ZtRDUX2k", "FAqR025Bgfo", "KhcjQCOOSm-", "2Zdk3a3EpS_", "nips_2022_lHj-q9BSRjF", "nips_2022_lHj-q9BSRjF", "nips_2022_lHj-q9BSRjF", "nips_2022_lHj-q9BSRjF" ]
nips_2022_WUMH5xloWn
Automatic differentiation of nonsmooth iterative algorithms
Differentiation along algorithms, i.e., piggyback propagation of derivatives, is now routinely used to differentiate iterative solvers in differentiable programming. Asymptotics is well understood for many smooth problems but the nondifferentiable case is hardly considered. Is there a limiting object for nonsmooth piggyback automatic differentiation (AD)? Does it have any variational meaning and can it be used effectively in machine learning? Is there a connection with classical derivatives? All these questions are addressed under appropriate contractivity conditions in the framework of conservative derivatives, which has proved useful in understanding nonsmooth AD. For nonsmooth piggyback iterations, we characterize the attractor set as a set-valued fixed point which remains in the conservative framework. This has various consequences, in particular almost everywhere convergence of classical derivatives. Our results are illustrated on parametric convex optimization problems with forward-backward, Douglas-Rachford and Alternating Direction Method of Multipliers algorithms as well as the Heavy-Ball method.
Accept
The reviewers agreed that the paper has solid and novel technical contributions. Nevertheless, please consider elaborating more on the background of the techniques used in the revision, so that the paper is more self-contained.
train
[ "Y2jcv4oTqqC", "Z2zaM65baIY", "7tDXRRj7-zit", "PVMji9XTYwK", "7FNjAW-82Go", "9-RY--2VZKU", "q0kmzAL6YHu", "ow-Phwz5Gk", "0bQlpiavAJK" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for his positive feedback and his relevant questions.\n\n**A comment on the difference between the convergence speed of Piggyback iterations for the mentioned optimization algorithms is missing.**\n\nYes definitely, we will add a discussion in Section 4.1 along the following lines:\n\n- Consider for instance a sum composite problem: take a $\\mu$ strongly convex $f$ with $L$-Lipschitz gradient, and $g$ convex and consider Forward-Backward recursion (FB) and Douglas-Rachford recursion (DR) (we ignore ADMM which deals with different types of problem and which is anyway an instance of DR). Corollary 3 suggests that the asymptotic linear convergence factor can be chosen as closed at desired to $\\sqrt{\\rho}$ where $\\rho$ is the contraction factor of the algorithm. Actually we suspect that the square root is an artifact and could be removed, but do not have a proof yet. \n\nTherefore one needs to compute the contraction factor of each algorithms. This results in the following, for FB and DR iterations, set $\\tau = L/\\mu$ the condition number of $f$\n- FB: choosing $\\alpha = 2 / (L + \\mu)$ results in contraction factor $\\rho_{FB} = (\\tau - 1) / (\\tau + 1)$\n- DR: (see for example [25] proposition 3), choosing $\\alpha = 1 / \\sqrt{L \\mu}$ results in contraction factor $\\rho_{DR} = (\\sqrt{\\tau} - 1) / \\sqrt{\\tau} + 1) < \\rho_{FB}$\n\nCorollary 3 suggests asymptotic linear convergence factor of order $\\sqrt{\\rho_{FB}}$ for Forward Backward piggy back derivatives and $\\sqrt{\\rho_{DR}} < \\sqrt{\\rho_{FB}}$ for Douglas Rachford piggy back derivatives. The obtained rate is asymptotically faster for DR than FB. Note that the limiting conservative gradients need not to be the same.\n\n**What do the authors think of the difference in using Automatic and Implicit Differentiation for non-smooth fixed-point iterations and equations respectively**\n\nThis is a very relevant question which was part of our interrogations while writing this manuscript and for which we do not have an answer yet. \n\n**\"the computational efficiency of the algorithms in their setting\":**\n\nLet us sketch a possible answer. Assume that we perform $K$ iterations of the algorithms. Denote by $cost$ the computational cost, we have $cost(x_K) = O(K cost(F))$. Let us also assume that $cost(J_F) \\leq a cost(F)$ for a small constant $a$ (cheap derivative principle). Denote by $c(p,m)$ the cost of $p \\times p$ matrix multiplication with $p \\times m$ matrix. It is $p^2 \\times m$ for the naive multiplication algorithm, and smaller for smarter algorithms, as Strassen's method. We use the naive algorithm for simplicity. Denote by $J_{imp}$ the conservative Jacobian obtained by algorithmic differentiation.\n\n- Using the recursion (PB) $cost(J_{x_K}) = O( (1+a) K cost(F) + K p^2 \\times m)$ and the memory footprint is of the order of $p\\times m$ for the current Jacobian matrix $J_k$, $k = 0,\\ldots, K$.\n\n- Using the forward mode JVP in Algorithm 1 $cost(J_{x_K} \\dot{\\theta}) = O( (1+a) K cost(F) + K p \\times m)$ and the memory footprint is of the order of $p$ for the current iterate $x_k$, $k= 0,\\ldots, K$. 
Note that computing $w_K^T J_{x_K}$ using the forward mode JVP algorithm essentially requires computing the full recursion (PB) since $J_{x_K} \\dot{\\theta}$ will typically be \"one coordinate\" of $w_K^T J_{x_K}$ in the form $w_K^T J_{x_K}\\dot{\\theta}$.\n\n- Using backward mode VJP in Algorithm 1 $cost(w_K^T J_{x_K}) = O( (1+a)K cost(F) + K p\\times m)$ and the memory footprint is of the order of $Kp$ for the $K$ iterates $x_0\\ldots, x_K$.\n\n- Using implicit differentiation, $cost(J_{imp}(x_K)) = O((K+a) cost(F) + p^3 + p^2m)$ where the cube is for matrix inversion and the memory footprint is of order $p$ to store the last iterate $x_K$.\n\nThis (very rough) complexity evaluation suggests that the backward mode costs less time when $Km \\leq p^2$, that is, when the dimension is large with respect to the geometric average $\\sqrt{Km}$. This is the case when there is a small number of parameters $m$ or a small number of iterations $K$. This also happens when $K \\leq p$, i.e., the number of iterations is smaller than the number of dimensions. \nOverall, the backward mode is less memory efficient and could be more time efficient in some regimes, when the number of parameters and the number of iterations are small. \n", " We thank the reviewer for his positive feedback and provide a detailed response to his comments below:\n\n**I believe the authors should have provided more references to works actually using piggyback derivatives of nonsmooth algorithms in different contexts, to give precise illustrations beyond the toy examples used in the experiments. This would strengthen the impact of the paper and point to practical cases for which we now have guarantees.**\n\nIndeed, we will point to more references in the introduction; please do not hesitate to provide references we have missed, this would be of great help. \n\nAs for what we are aware of, reference [8] provides a direct application to hyperparameter tuning in inverse problems, while the special case of LASSO is analyzed in [7]. Further relevant references include for example \n- Hurault, Leclaire, Papadakis, Gradient step denoiser for convergent plug-and-play, ICLR 2022.\n- Deledalle, C. A., Vaiter, S., Fadili, J., & Peyré, G. (2014). Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection. SIAM Journal on Imaging Sciences, 7(4), 2448-2487.\n- Ochs, P., Ranftl, R., Brox, T., & Pock, T. (2016). Techniques for gradient-based bilevel optimization with non-smooth lower level problems. Journal of Mathematical Imaging and Vision, 56(2), 175-194.\n- Bertrand, Q., Klopfenstein, Q., Massias, M., Blondel, M., Vaiter, S., Gramfort, A., Salmon, J. (2022). Implicit differentiation for fast hyperparameter selection in non-smooth convex learning. Journal of Machine Learning Research, 23(149), 1-43.\n\n**The presentation is clear enough to understand the general ideas, but the paper is not exactly self-sufficient. In particular, I believe one must already be very familiar with concepts developed in [12] to understand the tools that must be leveraged. Maybe it would have been nice to include a crash-course exposition to these concepts in the appendices.**\n\nIndeed this is clearly missing. We will add more material to explain the most relevant aspects of [12], emphasizing intuition and key results.\n\n\n**Is it possible to find a case of practical interest where the heavy ball algorithm (or other momentum algorithms) will fail ? 
I feel like, from the example given, that in most cases the cost functions are regular enough for this not to happen, but I would appreciate being proven wrong.**\n\nFor a more regular, $C^2$ objective $f$, the derivatives of the heavy ball algorithm will converge under strong convexity. This was detailed in [34] and is actually a consequence of [24]. So indeed, more smoothness provides convergent derivatives. It is true that our example is very specific and does not really correspond to any practical situation. We see this example as a theoretical limitation, related to a very bad alignment, but indeed, this behavior may not be generic, and in particular, convergence of derivatives may occur in most practical settings for the heavy ball. This is indeed an interesting question for which we do not have any strong opinion yet. \n\n\n**How hard would the analysis get in the case where there are multiple fixed points of F ? E.g. when minimizing a nonconvex function ? I understand the analysis would get much more complex, and is probably out of the scope of the paper, but this could have been discussed a bit.**\n\nThis is a very relevant question, we have several elements of an answer:\n- Assumption 1 may be required to hold only locally (Remark 1), which would allow relaxing convexity to some extent (but not uniqueness of the fixed point).\n- Multiple fixed points may cause a major difficulty: the fixed point mapping may become discontinuous with respect to initialization $x_0$. The conservative gradient framework developed in [12] can handle locally Lipschitz functions but not discontinuous functions (as integration along absolutely continuous curves cannot hold). This is a major difficulty which would require taking a completely different path. We do not see any direct generalization to this setting.\n- An intermediate case would be to consider multiple fixed points which have a continuous dependency on initial conditions of the algorithm. However verifying such an assumption in practice may be quite hard, so that the resulting theory would apply only to very specific problems (potentially close to convex). An example of a step in this direction is given in (Non-Convex Bilevel Games with Critical Point Selection Maps, Arbel and Mairal, 2022) where Section 4 describes convergence of derivatives of the continuous-time gradient flow in a setting where implicit differentiation does not apply.\n\n", " **\"the correct evaluation of the derivative of the fixed-point (or something \"reasonable\" when that does not exist) in a more general setting\"**\n\n- The set valued map $x \\rightrightarrows J_{imp}(x)$ has a closed graph and therefore $gap(J_{imp}(x_k), J_{imp}(\\bar{x}))$ will converge to $0$ just as $gap(J_{x_k}, J_{fix})$ converges to $0$. We do not have quantitative ways to estimate the difference between the two quantities.\n- Intuitively $J_{imp}(\\bar{x}) \\subset J_{fix}$ and the second set may be easier to converge to (this is just a guess).\n- More formally, automatic differentiation ensures that for all $k$, $J_{x_k}$ is a conservative Jacobian for $x_k$. One consequence is that for almost all $\\theta$, and for all $k$, $J_{x_k}(\\theta)$ is equal to the classical Jacobian. No such property holds for $J_{imp}(x_k)$, and beyond its convergence, for a fixed $k$ it does not carry variational meaning. This might have consequences, for example for the Lasso hyperparameter tuning problem with the forward-backward algorithm (e.g. 
in [7]), implicit differentiation would work accurately after identification of the support of the solution (active manifold), but before such an identification, nothing can really be said about the meaning of $J_{imp}(x_k)$ while $J_{x_k}$ retains some meaning (it is conservative for $x_k$).\n\n\n**If the answer to the previous question is that Implicit Differentiation behaves at least as well as Automatic Differentiation then why even use Automatic Differentiation since Implicit Differentiation is memory efficient.**\n\nWe agree with the remark of the reviewer and we have two comments to make:\n- The previous analysis suggests that for moderate values of $K$ or $m$ the computational time of backward AD is more favorable. For instance, well-conditioned problems generally yield fast convergence so it is likely that they form a favorable class for fast PB processes.\n- Implicit differentiation does not have a variational meaning for a fixed $K$ while this is the case for automatic differentiation. This could be interesting in a \"sketchy calculus\" context or when constraint identification plays a significant role.\n\nMore speculatively, convergence of automatic differentiation may occur in situations where the implicit function theorem does not apply due to lack of invertibility (with a potential dependency on initial conditions). An example in this direction is given in (Non-Convex Bilevel Games with Critical Point Selection Maps, Arbel and Mairal, 2022) where Section 4 describes convergence of derivatives of continuous time gradient flow in a setting where implicit differentiation does not apply.\n\nThese are very qualitative and high level responses and we consider that the question raised by the reviewer is essentially open in a nonsmooth context and requires further investigation.\n\n\n**Minor Typos and other Confusions**\n\nThanks for catching these typos, in particular, we will define the initialization more precisely as also pointed out by reviewer nZjc.", " We thank the reviewer for his positive assessment and critical feedback. A point by point reply is found below.\n\n**One small weak point is, according to my personal impression, that the paper is partially not self-contained. For example the inputs of algorithmic differentiation in Section 3.3 could at least be briefly explained.**\n\nThanks for pointing this out; this section is indeed a bit fast. We will add a more precise description of input quantities: in a compositional model $\\dot{\\theta}$ is the gradient of an inner function $\\theta(\\lambda)$ controlling the algorithm parameters $\\theta$ through another real variable $\\lambda$, for example a hyperparameter. The goal is to combine $\\frac{\\partial x_k(\\theta)}{\\partial \\theta}$ and $\\frac{\\partial \\theta}{\\partial \\lambda}$ in a forward path to implement the chain rule and obtain the total derivative $\\frac{\\partial x_k(\\theta(\\lambda))}{\\partial \\lambda}$. On the other hand, in a compositional model, $\\bar{w}_k$ is typically the gradient of an outer loss function $\\ell$ evaluated at $x_k(\\theta)$. In this case the goal is to combine derivatives of iterates $\\frac{\\partial x_k}{\\partial \\theta}$ with $\\bar{w}_k = \\frac{\\partial \\ell(x_k)}{\\partial x_k}$ in a backward path to output the total derivative $\\frac{\\partial \\ell(x_k(\\theta))}{\\partial \\theta}$. 
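A minimal smooth sketch of the recursion (PB) and of both compositions just described (our own toy example, not the paper's code): here F is one gradient-descent step on a quadratic, so both partial Jacobians are available in closed form; in the nonsmooth case one would select an element of the conservative Jacobian instead.

```python
import numpy as np

p = m = 3
alpha = 0.1
H = np.diag([1.0, 2.0, 3.0])
theta = np.ones(m)

# F(x, theta) = x - alpha * (H x - theta); partial Jacobians of F:
dF_dx = np.eye(p) - alpha * H      # constant here; x-dependent in general
dF_dth = alpha * np.eye(m)

x, J = np.zeros(p), np.zeros((p, m))
for _ in range(300):
    x = x - alpha * (H @ x - theta)   # x_{k+1} = F(x_k, theta)
    J = dF_dx @ J + dF_dth            # piggyback update of dx_k/dtheta

# Fixed point: xbar = H^{-1} theta, hence dxbar/dtheta = H^{-1}.
assert np.allclose(J, np.linalg.inv(H))

# Forward path: chain with an inner map theta(lambda), lambda scalar.
dth_dlam = np.array([1.0, -0.5, 2.0])     # dtheta/dlambda (hypothetical)
dx_dlam = J @ dth_dlam                    # total derivative dx_k/dlambda

# Backward path: chain with an outer loss ell(x) = 0.5 * ||x||^2.
wbar = x                                  # dell/dx_k
dell_dth = wbar @ J                       # total derivative dell/dtheta
```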
\n\nSection 3.3 will be made more natural and intuitive using the above discussion.\n\n**How do you initialize $J_{x_0}(\\theta)$?**\n\nThis is indeed missing; we will add the following sentence after (PB) on page 3 and to all Corollaries of Theorem 2:\n\n$J_{x_0} \\colon \\mathbb{R}^m \\rightrightarrows \\mathbb{R}^{p \\times m}$ is any conservative Jacobian of $\\theta \\mapsto x_0(\\theta)$.\n\nNote that this is not required for the convergence to hold as the initial condition is progressively \"forgotten\" so that the limit does not depend on it. However it is important to make the initialization precise, and a conservative Jacobian is what was considered implicitly.\n\n**Additional questions:**\n\nThanks for catching the remaining typos, they will be fixed.", " We thank the reviewer for his positive assessment and critical feedback. A point by point reply is found below.\n\n**For assumption 1, how does this assumption compare with those in the smooth cases? For example, in [37], they assume the Hessian at optimum is positive definite and they use the gradient descent method. So it would be better to discuss the strength of assumption 1 in various situations where we assume the function is C^2 and fix the algorithm. Is Assumption 1 for the nonsmooth procedure stronger or weaker than that in existing papers?**\n\nIn the smooth case a natural candidate for being a conservative Jacobian is the classical Jacobian and our assumption is very close to what is used in the smooth case in [24,26,15] and more recently [37,34], but there are differences motivated by the surprisingly complex behavior of spectral radii of families of matrices. \n\n_The case of gradient descent:_ In [37], local smoothness and strong convexity ensure that the smooth counterpart of Assumption 1 holds locally for the gradient descent mapping. One difference is that the assumption in [37] is local, but this is also compatible with our analysis as explained in Remark 1, independently of smoothness. All in all, for gradient descent on a $C^2$ convex function, Assumption 1 with the classical Jacobian (and its local version, Remark 1) exactly corresponds to the setting of [37]. This will be made clearer after Assumption 1.\n\n_Beyond gradient descent:_ Our assumption is actually stronger than what is required in existing literature on abstract iterative schemes. Indeed in the smooth case, for a general algorithm $F$ it is shown in [24] that a bound on the spectral radius of the Jacobian is sufficient to ensure convergence. Spectral radius and operator norm are the same in the _symmetric case_ (the Jacobian of the gradient descent mapping is a Hessian), but not in general (e.g. the Jacobian of the heavy-ball mapping). On the other hand, in the nonsmooth setting, our heavy-ball counterexample shows that a uniform spectral radius bound on the Clarke Jacobian is not enough to ensure convergence of derivatives. The intrinsic reason for this discrepancy is as follows: for a single matrix $M$ with spectral radius $\\rho$, and any $\\epsilon > 0$, there exists an inner product and induced norm $\\Omega$ such that $M$ is $\\rho+\\epsilon$ Lipschitz. So up to a linear change of variables, spectral radius and operator norm are almost the same for a single matrix, and the analysis carries through in the smooth case because the (usual) Jacobian is a single matrix. However, for two matrices $M_1$, $M_2$ with respective spectral radius $\\leq \\rho$, such a norm, ensuring both $M_1$ and $M_2$ to be $\\rho+\\epsilon$ Lipschitz, may not exist; a small numerical illustration of this phenomenon is sketched below. 
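The following quick numerical check (not from the paper) exhibits the phenomenon: two matrices, each with spectral radius zero, whose alternating products blow up, so no single norm can make both of them contractions simultaneously.

```python
import numpy as np

A = np.array([[0.0, 1.5], [0.0, 0.0]])   # nilpotent: spectral radius 0
B = np.array([[0.0, 0.0], [1.5, 0.0]])   # nilpotent: spectral radius 0

def spectral_radius(M):
    return np.abs(np.linalg.eigvals(M)).max()

print(spectral_radius(A), spectral_radius(B))   # 0.0 0.0
P = np.eye(2)
for _ in range(10):
    P = B @ (A @ P)                             # alternate A then B
print(np.linalg.norm(P))                        # grows like 2.25**10
```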
Actually the existence of such a norm strongly relates to undecidable problems (see \"The boundedness of all products of a pair of matrices is undecidable\", Blondel and Tsitsiklis, Systems & Control Letters, 2000)! \n\nThis is the reason why Assumption 1 is on operator norm and not on spectral radius. This is sharp, as illustrated by the heavy ball example for which derivatives converge in the C^2 case (e.g. [37]), but not for less regular cases even when the spectral radii are <1. This discussion will also be reflected after Assumption 1 and in Section 5.\n\n**Some minor points: L131: ... L153: ...**\nThanks for catching these typos, they will be corrected.", " This paper introduced a theory for the piggyback iteration of nonsmooth functions. While existing asymptotic analysis for the piggyback AD procedure only works for twice continuously differentiable functions, with the conservative derivative framework, this paper provided the asymptotic convergence results for nonsmooth path-differentiable functions under a nonexpansive condition (in Assumption 1). They formally defined the notion of general Clarke Jacobian for piggyback and its fixed point, with which the convergence of an infinite piggyback iteration is proved in Corollary 1. For mappings with a Lipschitz gradient selection, they proved a non-asymptotic convergence rate. They also showed applications by applying the piggyback iterations to various methods for convex problems. The paper is well-written with a clear introduction to various new and existing notions in the conservative derivative framework. The problems studied are timely and important. The theoretical results are nontrivial and demonstrate wide extensibility, which also suggests important applications. The failure example discussed in Section 5 is interesting. It shows that even functions with a Lipschitz gradient cannot ensure the convergence of the piggyback procedure. For assumption 1, how does this assumption compare with those in the smooth cases? For example, in [37], they assume the Hessian at optimum is positive definite and they use the gradient descent method. So it would be better to discuss the strength of assumption 1 in various situations where we assume the function is C^2 and fix the algorithm. Is Assumption 1 for the nonsmooth procedure stronger or weaker than that in existing papers? Some minor points:\n\nL131: \"X\\in R^{p \\times n}\" -> \"X\\in R^{p \\times m}\"\nL153: \"this a\" -> \"this is a\" See above. Yes.", " The authors investigate the differentiation of iterates $x_k(\\theta)\\in \\mathbb{R}^p$ of nonsmooth optimization algorithms with respect to algorithm parameters $\\theta \\in \\mathbb{R}^m$. This task is motivated by the need to differentiate through the argmin of objectives that give rise to nonsmooth algorithmic iterations $x_{k+1} = F(x_k(\\theta), \\theta)$. Differentiation with respect to algorithmic iterates is frequently used because the argmin itself might for example be inaccessible to explicit differentiation or alternative approaches.\n\nAn important question is whether Jacobians with respect to algorithmic iterates converge to classical Jacobians at fixed points $\\bar x(\\theta)$. This has previously been answered for the case of smooth iterations and the authors consider the nonsmooth case in this paper. To that end, they first introduce conservative Jacobians that are set-valued and enable a recursive differentiation framework for nonsmooth iterations. 
Under assumptions, they show that the defined recursion converges to a nonempty and compact set of fixed points in $\\mathbb{R}^{p\\times m}$ that is a conservative Jacobian of $\\bar x(\\theta)$. The limit of conservative Jacobians of $x_k(\\theta)$ is therefore equal to a conservative Jacobian at $\\bar x(\\theta)$. In other words: Limit formation and differentiation can be exchanged here.\n\nAfterwards, the authors also provide a linear convergence result for a specific class of functions (where the iteration is a Lipschitz gradient selection function); they show that their framework is applicable to several splitting algorithms including numerical examples, and they finally provide a concrete counterexample using the heavy-ball algorithm, where the sequence of derivatives diverges although the iterates themselves are linearly convergent. I think that this is an interesting and well-written paper. To the best of my knowledge, the presented results for nonsmooth algorithms are novel and also relevant (from a theoretical point of view but also in view of a growing number of use-cases of algorithm unrolling). Moreover, the authors provide rigorous proofs of their results in the appendix. One small weak point is, according to my personal impression, that the paper is partially not self-contained. For example the inputs of algorithmic differentiation in Section 3.3 could at least be briefly explained. Line 77: Should it be $x\\in\\mathbb{R}^p$ rather than $x\\in\\mathbb{R}^n$?\n\nLine 89: Should it be $\\mathbb{R}^{m\\times p}$ rather than $\\mathbb{R}^{p\\times m}$?\n\nLine 110: How do you initialize $J_{x_0}(\\theta)$?\n\nLine 131: Should it be $X \\in \\mathbb{R}^{p\\times m}$ rather than $\\mathbb{R}^{p\\times n}$? Yes.", " In this paper, the authors consider a possibly non-smooth but structured iterative algorithm which additionally depends on some parameter $\\theta$, also in a possibly non-smooth but structured manner; the structure being defined in Assumption 1. They used the tools developed in [1, 2] to differentiate this parametric fixed-point iteration through the chain rule. This defines a (so-called) piggyback recursion, where the sequence being generated is set-valued. They show that the conservative Jacobian of the fixed-point $\\bar x (\\theta)$ of the algorithm satisfies a set-valued, affine fixed-point equation. They show the convergence of the resulting sequence to the conservative Jacobian of the fixed-point of the algorithm for all $\\theta$ with respect to the gap metric defined in Section 3.1. They also link the implicit differentiation of the fixed-point $\\bar x (\\theta)$, obtained using the conservative calculus in [3], to the limit of the set-valued sequence. They show linear convergence of this sequence under an additional Lipschitz-gradient-selection assumption on the algorithm mapping $F$. Finally, they consider instances of the iterative algorithm to solve a composite optimization problem, namely, Forward-Backward Splitting, Douglas-Rachford and ADMM. They put another assumption on the optimization problem (Assumption 2) and show that Assumption 1 is satisfied by the above listed algorithms. They also show the failure of convergence of the derivative sequence for inertial methods.\n\n[1] Jerome Bolte and Edouard Pauwels. Conservative set valued fields, automatic differentiation, stochastic gradient method and deep learning. Mathematical Programming, 2020.\n\n[2] Jerome Bolte and Edouard Pauwels. A mathematical model for automatic differentiation in machine learning. 
In Conference on Neural Information Processing Systems, Vancouver, Canada, 2020.\n\n[3] Jerome Bolte, Tam Le, Edouard Pauwels, and Antonio Silveti-Falls. Nonsmooth Implicit Differentiation for Machine Learning and Optimization. In Advances in Neural Information Processing Systems, Online, 2021. Strengths\n- The paper provides a fairly general result for non-smooth iterative algorithms. This is useful for instance in a bilevel framework since their result along with [1, 2] provides theoretical guarantees for correctly evaluating the gradient of the upper level problem by differentiating through the argmin of the lower-level problem by using Automatic Differentiation.\n\nWeaknesses\n- A comment on the difference between the convergence speed of Piggyback iterations for the mentioned optimization algorithms is missing. - What do the authors think of the difference in using Automatic and Implicit Differentiation for non-smooth fixed-point iterations and equations respectively in terms of\n - the computational efficiency of the algorithms in their setting, and\n - the correct evaluation of the derivative of the fixed-point (or something \"reasonable\" when that does not exist) in a more general setting.\n- If the answer to the previous question is that Implicit Differentiation behaves at least as well as Automatic Differentiation then why even use Automatic Differentiation since Implicit Differentiation is memory efficient.\n\nMinor Typos and other Confusions\n- Page 3, Line 94: \"...classes convex functions...\" -> \"...classes of convex functions...\".\n- Page 3, Line 110: $J_{x_1}$ was not defined.\n- Page 4, Line 132, Equation (6): \"$[A, B] \\in J$\" -> \"$[A, B] \\in \\mathcal J$\".\n- Page 4, Line 145: \"...Jacobian of the fixed-point of $F_\\theta$\" is misleading -> \"...Jacobian at the fixed-point of $F_\\theta$\" is more suitable perhaps.\n- Page 5, Line 153: \"...this a limit...\" -> \"...this is a limit...\".\n- Page 7, Line 220: \"...Proposition~1, as well Corrollar~3...\" -> Sentence does not look correct grammatically. See Weaknesses. Other than that, in my opinion their work is sound.", " In this paper, the authors present a theoretical framework based on nonsmooth analysis concepts to analyze the asymptotic behavior of « piggyback » derivatives for nonsmooth iterations, i.e. how we can define the automatic differentiation of the iterates of unrolled iterations of an algorithm with nondifferentiable updates. Cases of practical interest include most popular proximal algorithms : proximal gradient descent, Douglas Rachford splitting and the ADMM. The authors show that a notion of set-valued Jacobians can be defined for these algorithms, and that they coincide with classical Jacobians almost everywhere. What is more, they show that, under suitable assumptions, the fixed point admits a derivative in this sense, and the derivatives of the iterates converge towards the derivatives of the fixed point. They prove, again under suitable assumptions, that there is a linear convergence of the derivatives towards those of the fixed point. They also show that this does not apply to other optimization methods with momentum, the given example being the heavy ball methods for which Jacobian iterates can become unstable for an objective that is not regular enough. 
Strengths :\n\n- The proposed framework gives a precise meaning to procedures that are now frequently used in the practice of unrolling (potentially nonsmooth) iterative algorithms programmed in modern autodiff packages (Tensorflow, Pytorch, JAX…) and were shown to work and give good results when optimizing some of their parameters, but without a firm theoretical explanation of why it worked and whether the derivatives were well defined. In this sense, the work is a valuable theoretical contribution\n\n- The linear convergence of piggyback derivatives is proven for a number of classical algorithms of interest, and is experimentally confirmed through simple examples. A counterexample is given, showing that one must be careful with momentum methods (though I assume that in most practical cases the functions will be regular enough for this not to happen).\n\n\nWeaknesses :\n\n- I believe the authors should have provided more references to works actually using piggyback derivatives of nonsmooth algorithms in different contexts, to give precise illustrations beyond the toy examples used in the experiments. This would strengthen the impact of the paper and point to practical cases for which we now have guarantees.\n\n- The presentation is clear enough to understand the general ideas, but the paper is not exactly self-sufficient. In particular, I believe one must already be very familiar with concepts developed in [12] to understand the tools that must be leveraged. Maybe it would have been nice to include a crash-course exposition to these concepts in the appendices. - Is it possible to find a case of practical interest where the heavy ball algorithm (or other momentum algorithms) will fail ? I feel like, from the example given, that in most cases the cost functions are regular enough for this not to happen, but I would appreciate being proven wrong.\n\n- How hard would the analysis get in the case where there are multiple fixed points of F ? E.g. when minimizing a nonconvex function ? I understand the analysis would get much more complex, and is probably out of the scope of the paper, but this could have been discussed a bit. The authors state clearly the assumptions they need for their framework and discuss their relevance and connections to practical cases. Some important cases are not covered by the analysis, such as multiple fixed points (useful e.g. for nonconvex bilevel optimization), but this remark is quite minor since results in this case are scarce (to my knowledge) in the literature, and the analysis probably gets much harder. The connection to implicit differentiation is drawn and is interesting but unfortunately not equivalent to the piggyback differentiation presented here.\n" ]
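To ground the proximal case discussed in these reviews, here is a hedged sketch (our own toy construction, not the paper's experiments) of piggyback differentiation of forward-backward splitting (ISTA) for a Lasso problem with respect to the regularization weight; at the soft-threshold kinks, one element of the conservative Jacobian is selected:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2    # step size 1/L
lam = 0.3                                  # regularization weight

x = np.zeros(5)
J = np.zeros(5)                            # dx_k / dlam
for _ in range(500):
    u = x - alpha * A.T @ (A @ x - b)      # gradient step
    Ju = (np.eye(5) - alpha * A.T @ A) @ J # derivative of the gradient step
    active = np.abs(u) > alpha * lam       # soft-threshold selection
    x = np.where(active, u - alpha * lam * np.sign(u), 0.0)
    J = np.where(active, Ju - alpha * np.sign(u), 0.0)
print(x, J)  # J approximates d(xbar)/dlam once the support is identified
```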
[ -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 4, 2 ]
[ "ow-Phwz5Gk", "0bQlpiavAJK", "ow-Phwz5Gk", "q0kmzAL6YHu", "9-RY--2VZKU", "nips_2022_WUMH5xloWn", "nips_2022_WUMH5xloWn", "nips_2022_WUMH5xloWn", "nips_2022_WUMH5xloWn" ]
nips_2022_QW98XBAqNRa
Truncated proposals for scalable and hassle-free simulation-based inference
Simulation-based inference (SBI) solves statistical inverse problems by repeatedly running a stochastic simulator and inferring posterior distributions from model-simulations. To improve simulation efficiency, several inference methods take a sequential approach and iteratively adapt the proposal distributions from which model simulations are generated. However, many of these sequential methods are difficult to use in practice, both because the resulting optimisation problems can be challenging and efficient diagnostic tools are lacking. To overcome these issues, we present Truncated Sequential Neural Posterior Estimation (TSNPE). TSNPE performs sequential inference with truncated proposals, sidestepping the optimisation issues of alternative approaches. In addition, TSNPE makes it possible to efficiently perform coverage tests that can scale to complex models with many parameters. We demonstrate that TSNPE performs on par with previous methods on established benchmark tasks. We then apply TSNPE to two challenging problems from neuroscience and show that TSNPE can successfully obtain the posterior distributions, whereas previous methods fail. Overall, our results demonstrate that TSNPE is an efficient, accurate, and robust inference method that can scale to challenging scientific models.
Accept
This paper received generally positive reviews, with one reviewer originally weakly backing rejection but ultimately backing acceptance after substantial discussions, and the other three confidently backing acceptance from the off. Based on the reviewers' comments and my own assessments, I see the positives and negatives of the work as follows: Positives - Very strong writing and presentation - Good range of experiments - Key ideas seem relatively simple to use and deploy - Important problem area and seems to make decent progress on two known issues in the area (leakage and coverage tests) Negatives - The novelty of the work is quite limited: the core approach is arguably a special case of an approach that already exists (APT) and the paper is mostly combining known ideas rather than proposing anything especially original. Thus, though the paper does certainly have some new ideas and is potentially useful to the community, I do think it is quite incremental. - The use of SIR is worrisome and likely to cause theoretical and occasional practical failure cases, even if these have not especially manifested in the current experiments - The method introduces biases (from SIR and the truncation itself) that might be difficult to quantify and there is a lack of any notable theoretical guarantees (though the objective is quite clearly sound with epsilon=0 and no iteration of the truncation, this provides no guarantees for real-world settings that will have epsilon>0 and multiple rounds of truncation). - Some of the claims about efficiency and rejection rates could have been more clearly demonstrated and explained. On balance, I agree with the reviewers that the positives outweigh the negatives; this is well-polished and potentially useful work that will be of interest to the community. As such, I recommend acceptance. Minor comment - I believe that the suggestion that TSNPE “can scale to problems that were previously inaccessible to neural posterior estimation” is over claiming given the actual experiment results, given the fact that only one approach (with a single “out of the box” proposal) is actually compared to and the fact that the results are generally qualitative rather than quantitative. Moreover, given that TSNPE can arguably be seen as a special case of APT in the first place, the claim seems distinctly unreasonable. I would thus like to see this claim dropped or at least significantly toned down.
train
[ "W2kYHyZ-hKx", "4oGlaFmyboU", "WgUXsiz-c7", "nYv-XqoFXo-", "eIQ2nzFXvr5", "YGqa6yF7TV", "AW97Xe9MriA", "Z6gAB2DOITs", "-awbGEyQ2V", "1QKIoyI8ube", "Dgn8uwhFFjv", "O2kFOnEwoicV", "Ze_sDjh27Ov", "BrZbbyaYBfm", "YKzs3dvoBg0", "uz9I8Khx4An", "wYyx75RDsiE", "Dcq9ffwJ2Eh", "v0WHdo6uBvo", "1wqZ6A1pNB", "uncTSFyp0rQ", "Q61IVqqh_5n", "COnwU0dwLQ", "5xmwBTeSYv", "SevXSBJbY9L", "WWwvxu7P2I" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their immense efforts in reviewing our paper and are happy that the reviewer can now recommend acceptance!", " I will reply primarily to my concern about the assumption. I think your numerical and empirical arguments are strong and go along with how truncation has been justified in existing literature. I'm glad that you clarified when the assumptions can hold and also I should have read more closely to see the epsilon discussion that you pointed out.\n\nGenerally, I think you have put in a significant amount of work addressing my concerns which is appreciated. Also I think that the main points about the nature of the truncation are well explained. I will change my ratings to reflect this and recommend acceptance.\n\nCongrats!", " Given that the discussion period is about to end, we would be happy if the reviewer informed us whether our response convinces them of the soundness of our method, or, if not, what further evidence (in addition to the newly added figure and section in the manuscript) would be required to change their mind. We again thank the reviewer for taking the time to review our paper, we believe that their feedback has significantly strengthened our manuscript!", " We are happy that the AC finds our changes helpful. We agree that further spelling this out in the main paper will be useful for practitioners and we will do so if the extra page becomes available. Thank you for taking your time to assess our paper!", " Thanks for your reply, this is indeed helpful and I think the changes improve the paper. If the paper is accepted, I think it would be good to use the extra page available for the camera-ready to build on these a little further and the potential failure mechanism more directly in the main paper, but I appreciate there is limited space at the moment and there is no need to do this now.\n\n", " We thank the reviewer for taking their time to review our paper and and for appreciating our efforts in the revision. We are happy that the reviewer can recommend our paper for acceptance.", " We sincerely thank the reviewer for taking their time to review our paper and are happy that the reviewer recommends our paper for acceptance!", " We thank the AC for their interest in our work and for this insightful comment! We have added a new paragraph to the paper and we have added a supplementary figure in which we demonstrate that, indeed, SIR with too low values of K leads to a too narrow proposal distribution and, thus, a too narrow TSNPE posterior approximation (new supplementary Fig. 12). When run for a large number of rounds, this behaviour reinforces itself and can lead to divergence of TSNPE. In our benchmark and real-world experiments, we did not observe this behaviour (with K=1024 and 10 rounds), but we now specifically emphasise the importance of using tools to diagnose potential failures of TSNPE (e.g. SBCC) in combination with diagnostic tools for SIR (e.g. by inspecting the importance weights as we suggest in Supplementary Sec. 6.12). Finally, we agree that, in cases in which SIR fails (and rejection sampling cannot be applied due to its high rejection rate), sequential methods such as the ones mentioned by the AC provide a viable option to sample from the truncated proposal.", " We thank the reviewer for taking their time to respond and for the constructive feedback. We briefly address a few minor points and then discuss the assumption made in our proof of convergence. 
We hope that our response, as well as our new figure and changes to the manuscript, convince the reviewer to recommend our paper for acceptance.\n\n> In TMNRE the value of truncating the marginals is to make sampling from the prior simple\n\nWe agree that the fact that TMNRE proposals allow for quick sampling is a nice feature of these proposals. We have added this to the discussion:\n\n```truncating based on the marginals allows TMNRE to sample from the truncated proposal without rejection or SIR sampling.```\n\n> The critique about rejection steps in TMNRE confuses me.\n\nThe “critique about rejection steps” referred to the case in which the truncation is performed on the joint posterior and sampled with nested sampling. This is mentioned in the second to last paragraph in the discussion of the TMNRE paper. We are sorry about any confusion regarding this point. We agree that truncation based on the marginals does not require rejection sampling.\n\n> That's an issue with scalability.\n\nAs we write in the manuscript, we agree that rejection sampling is limited if the rejection rate is too high. However, even rejection rates of 99.9999% require significantly less computational cost for sampling than for running simulations (for many real-world simulators): As we discuss in Sec. 6.11, the cost of drawing 100 rejection samples at a rejection rate of 99.9% is only 0.012% of compute time as compared to running simulations. Thus, even for rejection rates of 99.9999%, sampling is significantly cheaper than simulating. In cases in which the rejection rate becomes prohibitive, we recommend using SIR.\n\nFinally, we note that our example with the highest dimensionality is 31D (Fig 5), not 20D. We emphasise that most problems tackled in the SBI literature have fewer than 10 parameters, and we are not aware of any work that scaled sequential neural-network based SBI to a simulator with more than 31 parameters. Thus, our results suggest that TSNPE is at least as scalable as other sequential SBI methods in problems with 31 dimensions or less.\n\n> If this assumption does not hold, then I think the proof does not either.\n\nWe thank the reviewer for clarifying their concerns about convergence of our algorithm. We agree that, if the assumption does not hold (i.e., if the truncated proposal at any round does not cover the posterior support), then the proof does not hold. As we had written in the original submission:\n\n```Values of ε > 0 yield a proposal prior which has smaller support than the current estimate of the posterior, e.g., using ε=1e-3 neglects 0.1% of mass from the approximate-posterior support. Thus, this approach can lead to errors in posterior estimation, e.g., to `under-covered' posteriors.```\n\nWe agree that we had not explicitly spelled out how this failure affects inference performance, and neither had we demonstrated this failure on a toy example. To address this, we have added a new supplementary section “Approximation errors due to truncation” in which we describe this failure mode in detail and demonstrate it on a 1D toy example (Fig. 11). 
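For reference, the ε-truncation under discussion can be sketched as follows; `log_q` and `sample_q` are hypothetical placeholders for a density estimator's interface, not the actual package API:

```python
import numpy as np

def hdr_indicator(log_q, sample_q, eps=1e-4, n=100_000):
    """Estimate the (1 - eps) highest-density region of q by the
    eps-quantile of its own log-probabilities under theta ~ q."""
    logp = log_q(sample_q(n))          # log q(theta) for theta ~ q
    tau = np.quantile(logp, eps)       # density cutoff defining the HDR
    return lambda theta: log_q(theta) >= tau

# The truncated proposal is prior(theta) * indicator(theta), which can be
# sampled by rejection sampling or SIR as discussed in this thread.
```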
We will move parts of this section into a dedicated “Limitations” paragraph in the discussion for the camera-ready version (as an extra page is available).\n\nOn benchmark and real-world problems, however, we do not expect the truncation errors to strongly influence inference quality: We extensively benchmarked different truncation thresholds ε > 0 (Fig 4, Appendix Fig 8 & 9) and across all benchmark tasks, we found that the exact choice of threshold does not significantly influence C2ST accuracy. This indicates that the error of our method (as well as other (S)NPE methods) is dominated by limited simulation-budgets and imperfect neural network convergence.\nOverall, while we agree that truncation might lead to errors in the tails of the posterior approximation (and we explicitly highlight this in the new supplementary section), our theoretical and empirical results demonstrate that the errors (with ε < 1e-3) are expected to only weakly affect inference quality.\n\n\n> The overconfidence exhibited by some of your plots makes me wonder if this is a symptom of the issue I raised in part B. [...] Isn't that what we see in Figure 6d?\n \nThe coverage plots shown in Figure 6d are after the first round (i.e., no truncation yet). Thus, truncation cannot be the reason for the observed overconfidence in this example. In general, while we agree that the errors induced by truncation should, in principle, be visible in the high-confidence regions of SBCC, the errors induced by ε=1e-4 are very small: Nominal coverage of 1 would correspond to 1-ε empirical coverage, which is 0.9999 for our choice of ε. Similar to what we wrote above, we expect errors due to limited simulation-budgets to dominate the errors observed in SBCC.", " Thanks for considering these points and making changes. Generally there isn't much to say on these topics. I think that the proof in 6.2 is clearer now, but I still wonder about its fundamental assumptions as I stated above. It would be great to hear about this from you.\n\nThank you for taking the time to make these changes. In general the paper is obviously well written and I think it does a fairer job at citing existing literature now. My concern lies primarily in the assumption I mentioned above. If this can be properly addressed I would certainly recommend acceptance. If not, I think it's important to reevaluate where the results of this algorithm are applicable. (my intuition would be only in the truncated region of the prior.)\n\nThank you!", " After looking at the examples, I think that you are correct. Those methods do indeed evaluate on a grid. I was wrong.\n\nI appreciate that you updated your algorithm on this matter. I disagree that doing MCMC with a ratio estimator is a fundamentally difficult or slow task, but in general I think you clarified your point well here.\n\nI'll skip to the last paragraph about coverage. The overconfidence exhibited by some of your plots makes me wonder if this is a symptom of the issue I raised in part B. If indeed you are using a mixture of proposal distributions which are not accounted for in the loss, I would expect that the posterior is overconfident in the tails. (tails are too shallow.) Isn't that what we see in Figure 6d?", " Dear authors,\nThank you for your thoughtful reply! It's clear you spent some time considering my thoughts and I appreciate it.\n\nI spoke with the AC and we agreed that it was too much to ask to cite other works in the abstract. The changes made in the new version and citations are satisfactory to me. 
However, I would request that you consider the following reply when characterizing TMNRE in the discussion.\n\nA) Indeed, in TMNRE the selection of the truncated prior and the computation of the \"final\" estimates are separated. The initial phase in TMNRE generates samples relevant to a certain observation and the second phase is to estimate arbitrary posterior marginals (including the joint) on the data generated in this truncated region. By truncating marginals, the data is generated in a \"product space\" of truncated priors and may take up more prior volume than by truncating the joint. The authors are correct that samples from this space are likely to be LESS simulation efficient (posterior accuracy / simulation count) than the ones drawn using their proposed method of truncating the joint; however, the proposed method for drawing from the joint still raises some concerns for me both in novelty and scalability.\n\nI claimed that TSNPE is a simpler proposal mechanism than TMNRE and I still believe this. The reason is that both of the experimentally investigated methods to draw from the truncated prior in TSNPE, rejection sampling and SIR, are completely applicable to TMNRE applied to a truncated joint. The critique about rejection steps in TMNRE confuses me because the whole point of TMNRE is to avoid rejection when sampling from the prior. In TMNRE the value of truncating the marginals is to make sampling from the prior simple. This is necessary when the parameter space is high dimensional. It merely involves the probability integral transform and changing the bounds of the interval. \n\nTSNPE does explore doing truncation in higher dimensions than TMNRE, but the experiments in TSNPE have at most 10-dimensional parameters in the toy problems and 20-dimensional parameters in the scientific application case. As seen in Figures 7-9 in the appendix, the acceptance rate of drawing from the truncated prior goes to nearly zero in TSNPE. That's an issue with scalability.\n\nGiven these points, I don't really see any \"major innovation\" in truncation methods in TSNPE.\n\nB) The use of the \"standard\" loss function in NPE without correction is, in my opinion, the primary advantage and notable point (innovation) of this paper. This is also what I claimed in my initial review. Correcting for the proposal distribution is a major problem in SNPE as it really produces overconfident posteriors.\n\nI was not careful with my words. \"Fundamental flaw\" is too harsh, especially without proper explanation. For that I apologize. Let me be specific now: I am concerned about your proof's assumption in appendix 6.2, since the HDR is not necessarily the support of the posterior. Consider a Gaussian model with a conjugate likelihood. The support of the posterior distribution is R but any HDR is a compact set.\n\nWhy did I say this is a fundamental flaw? If this assumption does not hold, then I think the proof does not either. I think it is likely that you are using a proposal distribution which needs to be accounted for in the estimate of the posterior, if you want the posterior to be accurate everywhere. I think this glosses over a really critical point of truncation: you cannot trust the estimate, except perhaps on the truncated region.", " Thanks to the authors for the updated paper version and the response, which I found very clear. In particular, I found the new experiments interesting and well-suited to address my questions.\n\nI've read the other reviews and the author's responses. 
Reviewer Afw3 made a good point about the novelty of the calibration diagnostic, but the authors addressed this well. The discussion about the similarities with TMNRE was interesting, but I share the authors' opinion that there are substantial differences between the methods and that TMNRE is sufficiently acknowledged and discussed.\n\nOverall, I think this paper presents a good contribution to the field and should be accepted. I am keeping my original score of 7. 
I thank the authors for their response. The toy model presented in Figure 1 is interesting and shows evidence of leakage. I appreciate the added figure in the Appendix of this toy model. After reading the authors' response to my review and the other reviewers, I keep my rating at a 7. After seeing Reviewer Afw3's review, I agree that the original submission oversold the novelty of their SBCC metric. However, the authors provided a good response by adding the relevant citations and editing the paper to reflect this. \n\nI disagree with Reviewer nSm9 that this method (TSNPE) is a simplified version of TMNRE. TSNPE selects the truncation threshold of the joint distribution based on the quantiles of the proposal density, while TMNRE truncates based on the estimated ratio relative to its maximum value. I find the TSNPE threshold more intuitive, because it directly corresponds to the probability mass that is truncated at each stage of the algorithm, and it makes sense even as the dimension of the posterior grows. Moreover, it also seems that TMNRE was positioned to tackle a different setting than TSNPE. It's unclear that TMNRE can handle a high-dimensional marginal posterior, since the highest marginal distribution they tested had $d=2$.
Dear Authors\n\nThank you very much for your submission and extensive efforts in replying to the reviews.\n\nI have just gone through the paper myself and wanted to raise a separate concern to those mentioned by the reviewers. To avoid giving you a heart attack, I should point out upfront that it is not something I think is a fatal flaw, but I am trying to gather as much information as possible for making the final decision and was hoping you might be able to provide some comments on it.\n\n> SIR seems to me to be a problematic and potentially unstable mechanism for sampling from the truncated prior. I'm sure that it will often work absolutely fine, but it introduces an unwanted feedback loop, in that the NPE at each iteration is used to define the truncated prior estimator that will in turn be used to update it in the next. It would thus be quite easy to have an initial underestimate of the posterior density in some region reinforce itself in a way that leads to a region of the true posterior being missed. This could be particularly problematic when we consider the fact that the TSNPE objective itself requires the approximation to be more dispersed than the true posterior. From a more theoretical perspective, the SIR estimator is biased with a bias that depends on phi itself (and which is not guaranteed to diminish with K if the tails of q are too light). This will mean that it is difficult to prove any convergence for the approach; indeed, I would expect it to sometimes diverge. I think this needs to be properly acknowledged as a potential limitation and I would encourage the authors to consider testing alternative, less problematic, sampling schemes as well, with nested sampling, adaptive multi-level splitting, or even resample-move SMC being potentially suitable candidates for this. 
Note that the instabilities I expect to see with SIR here are quite common in the adaptive importance sampling literature, and the approach is in effect working like an adaptive importance sampler, in that current approximations are driving the sampling in an adaptive way. This literature might thus be helpful for categorising the potential issues that might occur.\n\nAs I said, it would be good to hear your thoughts on this and, if you do think the concern is reasonable, whether there is anything you might be able to do to address/alleviate it.\n\nThanks\n\nAC
We thank the reviewer for their insightful comments and feedback, as well as for appreciating our \"well-written\" paper and our \"convincing\" experimental results. We address the remaining concerns below and hope that this will allow the reviewer to give a strong recommendation for acceptance.\n\n\n> As parameters used for simulation are not sampled from the prior, parameters with zero prior probability could be sampled. This phenomenon is called leakage.\n\nWe emphasise that this leakage can be a major limitation of APT (since leakage will often be 99.9999% of the posterior mass, as in the pyloric network example) and prevents its use in many real-world applications. TSNPE, thus, opens the possibility to apply simulation-based inference to models that have been inaccessible to APT and other SNPE methods.\n\n> The SBCC diagnostic is quite known in the context of NPE and NRE (for example, see \"Arbitrary Marginal Neural Ratio Estimation for Simulation-based Inference\", Rozet et al. 2021) and is a straightforward extension of SBC.\n\nWe thank the reviewer for pointing us to this work; we had indeed not been aware of it. We have de-emphasized the diagnostic tool in the abstract and introduction, and re-written the corresponding section in the manuscript. As suggested by the reviewer, we highlight that TSNPE can make use of a diagnostic which scales to high-d parameter spaces and does not require MCMC sampling.
We thank the reviewer for their insightful comments and for raising important discussion points. We have addressed all concerns and suggestions and believe that these changes have substantially improved our manuscript. We have added further analyses regarding (1) the impact of the information loss from using the truncated proposal rather than the posterior, (2) the error of SIR due to finite K, and (3) the computational cost of SIR and rejection sampling. We hope that these modifications clear remaining doubts of the reviewer and enable the reviewer to provide a strong recommendation for acceptance.\n\n> discuss why rejection sampling in TSNPE is more efficient than in APT.\n\nWe thank the reviewer for raising this crucial point. The rejection rate of APT is not bounded: as we demonstrated on, e.g., the pyloric network application, the rejection rate can reach 99.9999% after only 2 rounds. In contrast, the rejection rate of TSNPE depends on the volume of the approximate posterior as compared to the prior. Across all benchmark tasks, the rejection rate was at most 99.9% and thus multiple orders of magnitude smaller than the rejection rate of APT. If the posterior is particularly narrow, we recommend using SIR in TSNPE, which requires K samples for each posterior sample (rejection rate 1-1/K). For K=1024, SIR has a rejection rate of 99.9%, which is again orders of magnitude more efficient than rejection sampling in APT. 
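For concreteness, here is a minimal sketch of this SIR step (illustrative only, not our exact implementation; we assume `prior` and `q_flow` expose `.sample()` and `.log_prob()`, with `prior.log_prob` returning -inf outside its support, and `log_eps` denoting the HPR threshold):

```python
import torch

def sir_from_truncated_prior(prior, q_flow, log_eps, K=1024):
    # SIR target: prior restricted to the HPR {theta : log q_flow(theta) > log_eps};
    # proposal: K draws from the flow. K draws yield one accepted sample,
    # i.e. an effective rejection rate of 1 - 1/K.
    theta = q_flow.sample((K,))
    log_q = q_flow.log_prob(theta)
    log_w = prior.log_prob(theta) - log_q  # importance weights in log-space
    log_w = torch.where(log_q > log_eps, log_w,
                        torch.full_like(log_w, float("-inf")))  # mask outside HPR
    idx = torch.distributions.Categorical(logits=log_w).sample()
    return theta[idx]
```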
We discuss the computational cost of rejection sampling and SIR later in our response.\n\n> Could SIR be applied equally well to the APT posterior?\n\nThis is an interesting idea, but in cases in which APT exhibits large leakage, SIR will not help. SIR is a resampling algorithm, which, in the case of APT, first draws K samples from the unbounded approximate posterior and then resamples one of them. However, if all K samples from the unbounded approximate posterior are outside of the prior bounds, the resampled sample will again be outside of the prior bounds. \n\n> how much information from past rounds is lost by sampling proportional to the prior rather than proportional to the current posterior\n\nWe investigated this on the benchmark tasks and added the analysis to the paper: We compared APT with proposal being the truncated prior to APT with proposal being the previous posterior. Since both algorithms use the same loss function, it allows us to investigate the impact of the information loss, when using the truncated proposal as compared to the previous posterior, on the APT performance. We found that truncated APT shows similar or worse C2ST accuracy than standard APT but better performance than NPE, depending on the benchmark tasks (one task is still running). We note, however, that we cannot confidently say that this will also be the case on other models.\n\n> In the experiments, how big is the error of SIR due to finite K?\n\nWe have added two analyses to the paper and now explore the efficiency of SIR in three ways: 1) As we show in Fig 10, SIR does not affect the posterior on benchmark problems. 2) We have added an analysis that demonstrates that, with K=1024, proposal samples follow the truncated proposal distribution almost perfectly on 1D toy examples (Fig. 11). 3) We studied the variance of the importance weights of SIR by inspecting the effective sample size (ESS). We found that, across the sir, 2-moons, and bernoulli-glm tasks, the worst ESS was 8.311, which is still far higher than 1 (the number of resampled samples). The other tasks are still running and we will update the manuscript accordingly. All of these results indicate that SIR is expected to be a useful method for TSNPE. We now address these points in a new supplementary section “Accuracy of SIR”.\n\n> In the benchmark experiments, does the problem of APT posterior leakage occur?\n\nOn the benchmark experiments, leakage does not occur because the prior is transformed into unbounded space. This worked well on the low-dimensional benchmark tasks, but does not scale to high-dimensional problems (Fig 5).\n\n> In the experiments, do the methods that rely on rejection sampling or SIR have a substantial overhead in runtime from that?\n\nOn a GeForce RTX 2080 GPU, one can draw (or evaluate) 100k samples from the approx. posterior in 0.17 seconds. When running SIR with K=1024, this produces 100 samples from the truncated proposal in 2x0.17=0.34 seconds. Thus, on most real-world simulators, the simulation time will be orders of magnitude higher than the SIR sampling time. E.g., for the multicompartment model (30 seconds per simulation), SIR sampling takes up only 0.012% of the time required to run the simulations. 
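As a back-of-the-envelope check of these timings (a sketch only; the constants are the numbers quoted above):

```python
t_flow = 0.17                # s to draw or evaluate 100k flow samples (RTX 2080)
n_flow, K = 100_000, 1024    # flow draws per SIR pass; SIR draws per sample
t_per_sample = 2 * t_flow / (n_flow / K)   # one sampling pass + one log-prob pass
t_sim = 30.0                 # s per simulation of the multicompartment model
print(f"SIR overhead: {100 * t_per_sample / t_sim:.3f}% of one simulation")
# -> roughly 0.0035 s per sample, i.e. ~0.012% of one 30 s simulation
```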
We added a new supplementary section “Computational cost of rejection sampling and SIR” to the paper.\n\nFurther edits:\n- We have added several citations to the introduction.\n- We changed the abstract to “inferring posterior distributions from model-simulations”.\n- We changed several instances of the word “likelihood” to “posterior density”.", " We thank the reviewer for their constructive feedback and for appreciating the strong results on real-world tasks as well as our efforts to identify and fix potential failure modes, which “lends confidence” to our method. Below, we address the remaining concerns of the reviewer:\n\n> it would be reassuring to re-create the failure of APT observed in the more complex neurological applications in a simple setting.\n\nWe agree that showing this failure is important, and we demonstrate such a failure in Fig. 1. On a simple 1-dimensional toy example, APT leads to strong leakage. We have now added a supplementary figure which shows that this behaviour gets worse as even more rounds are run (see Appendix Fig. 7). This illustrates the instability of APT on a simple toy example and demonstrates why sampling in APT can become prohibitively slow.\n\n> would it work to form a defensive mixture by sampling from the untruncated prior with some probability?\n\nWe thank the reviewer for this interesting suggestion. In this case, the proposal is no longer a truncated proposal, but a mixture between the prior and the truncated proposal. Because of this, we would no longer be able to train with maximum likelihood and would have to adjust the loss accordingly. This would lead to a mixture between the loss employed by Lueckmann et al. 2017 and our loss (i.e., maximum likelihood). We believe that exploring such mixtures between loss functions will be an interesting subject and potentially a fruitful idea for future research.\n\n> Is the forward KL objective applicable to TSNPE?\n\nYes. Indeed, the loss-function of TSNPE is identical to minimizing the forward KL divergence between the true and approximate posterior and is, thus, support covering. We have highlighted this in the manuscript. This behaviour is highly desirable for TSNPE: since it relies on the proposal distribution being sufficiently broad, the mode covering behaviour of the forward KL confers a high degree of robustness to TSNPE.\nSummary: We thank the reviewer for their insightful comments and hope that our answers clarify any remaining doubts, so that they will hopefully be able to strongly endorse our manuscript.", " > Regarding rejection rate and SIR: Does it [SIR] affect the final posterior?\n\nWe explore the efficiency of SIR in three ways: 1) As we show in appendix Fig 10, SIR does not affect the posterior on benchmark problems. 2) We have added an analysis that demonstrates that, with K=1024, proposal samples follow the truncated proposal distribution almost perfectly on 1D toy-examples (Appendix Fig. 11). 3) As suggested by the reviewer, we have studied the variance of the importance weights of SIR by inspecting the effective sample size (ESS). We found that, across the sir, two-moons, and bernoulli-glm tasks, the worst ESS we observed (for K=1024) was 8.311, i.e., it was always significantly higher than 1 (the number of resampled samples). The results for other benchmark tasks are still running and we will update the manuscript accordingly. All of these results indicate that SIR is expected to be a useful and robust sampling method for TSNPE. 
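For reference, the ESS quoted here is the standard Kish effective sample size, which can be computed from the log-weights of one SIR pass as follows (a sketch; we assume unnormalized log-weights as in standard importance sampling):

```python
import torch

def effective_sample_size(log_w):
    # Kish ESS = (sum w)^2 / sum w^2, computed stably in log-space with
    # normalized weights: ESS = 1 / sum(w_norm^2). Equals K for uniform
    # weights and approaches 1 when a single weight dominates.
    log_w = log_w - torch.logsumexp(log_w, dim=0)
    return torch.exp(-torch.logsumexp(2 * log_w, dim=0))
```

An ESS well above 1 (here, at worst 8.311) indicates that the resampling step in SIR is not dominated by a single proposal draw.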
We now address these points in a dedicated supplementary section “Accuracy of SIR”.\n\n> The proof of convergence in Section 6.2 is difficult to follow. \n\nWe thank the reviewer for pointing this out. We have clarified the proof and made it more rigorous for data pooled from all rounds. We hope that this clarifies all doubts about the correctness of our method.\n\n> The statement about the region which is well calibrated in 6.12 is unclear to me.\n\nWe thank the reviewer for pointing this out. We have rewritten this paragraph in order to make it more understandable. In short, the issue arises because our proof only ensures convergence to the true posterior for the observation x_o, but not for other x. This poses a difficulty for applying SBCC (which computes the posterior for several x). We now describe two ways in which this issue can be circumvented.\n\n> In Section 3.3 the authors claim that having (expected) coverage implies that the HDR of q_phi(t | x_0) is a superset of p(t | x_0).\n\nWe agree with the reviewer that this is inaccurate. As with all SBC methods, the expected coverage only implies that the HPR is correct on average, but not for a particular observation. We have corrected this in the manuscript.\n\n> The simulation efficiency gained by truncation is marginal if the posterior is nearly as wide as the prior.\n\nWe thank the reviewer for raising this point. We agree that, if the posterior is as wide as the prior, our method will not be more efficient than NPE. However, this is also expected for other sequential methods such as SNPE (or SNLE, SNRE, TMNRE). Thus, we do not believe that it is necessary to further elaborate upon this point beyond mentioning it in the results section.\n\n> The limitations of computing coverage using the SBCC algorithms in high dimensions should be mentioned.\n\nAs we have argued above, this is not correct. Unlike the coverage methods suggested in TMNRE and the averting-a-crisis paper, SBCC scales well with dimensionality.\n\nFurther modifications:\n- We thank the reviewer for pointing out the typo in algorithm 2. We have updated its description.\n- We have added nested sampling as a possibility to sample from the truncated proposal.\n- We have clarified algorithm 3 with comments.\n- We have updated the term coverage to expected coverage.\n- We have removed the word “optional” from the computation of coverage as a part of TSNPE.\n
Relation of coverage tests to previous work\n\nThe reviewer writes that the diagnostic tools used in TMNRE and in the averting-a-crisis paper do not use a grid. We respectfully disagree with the reviewer on this point. We checked the code of TMNRE on github and found that it uses a histogram of samples for computing the coverage [here](https://github.com/bkmi/tmnre/blob/40d2bf2faf037a211923ce22130913ff996e88b7/tmnre/coverage/oned.py#L66). Such histogram-based approaches work well in 1D parameter spaces (as is done in TMNRE for the 1D marginals), but do not scale to high-dimensional parameter spaces. This highlights that TMNRE cannot easily be extended to diagnosing coverage of the full posterior (it would require expensive MCMC runs for multiple observations; the performance of this was never empirically investigated in the TMNRE paper). 
Similarly, the code of the averting-a-crisis paper [here](https://github.com/montefiore-ai/averting-a-crisis-in-simulation-based-inference/blob/a53f34ccfa2b520c58f46158e52e235766da681f/workflows/SNL/mg1/snl.py#L94-L98) shows that coverage is based on evaluating the posterior distribution on a grid which, again, does not scale to high-D parameter spaces (all examples they show are one-, two-, or three-dimensional).\n\nThe reviewer also claims that they “expect this [SBCC] to suffer from the curse of dimensionality”. However, this is not true. Independent of the dimensionality of the problem, the fraction of “proposed samples” (i.e., posterior samples) that are below the contour line is exactly the empirical coverage. Thus, SBCC does not suffer from the curse of dimensionality in the same way as histogram-based approaches (e.g., TMNRE) or grid-based approaches (e.g., crisis-paper) and, therefore, constitutes a clear advantage of TSNPE. We note that one could, in principle, evaluate the coverage of the full posterior also for likelihood(-ratio)-based approaches such as TMNRE without evaluating on a grid, but this would require MCMC sampling from the posterior given several observations, which is substantially slower. We now discuss this more clearly in the paper.\n\nOverall, we maintain that neither the coverage computation of TMNRE nor that of the crisis-paper provides any evidence that they can be evaluated quickly in high-dimensional parameter spaces, in contrast with SBCC. We have clarified these shortcomings of TMNRE and the crisis-paper in the manuscript and highlighted that TSNPE allows coverage computation of the joint posterior without evaluating on a grid or MCMC sampling.\n\nFurther discussion points:\n> In Figure 6 d [...] Truncating in this round would lead to an overconfident estimate of the HDR. \n\nWe agree that the posterior is rather underconfident. However, for truncation, the crucial regions of the coverage tests are the high-confidence regions (because these define the HPR for small epsilon and, thus, undercoverage in these regions would indicate that parameters are wrongly excluded from the support). In these regions, coverage is very good and we, therefore, expect a reasonably good estimate of the HPR. Nonetheless, we have emphasised this danger in the results section.\n\n> It should be made clear that computing coverage only in the final round does NOT tell the user if the algorithm truncated too aggressively since parameter samples are drawn from the already truncated prior.\n\nWe absolutely agree on this point. As spelled out in Algorithm 1, we compute coverage at every round.\n\n> Truncating is somehow \"dangerous\" when the first rounds of NPE miss a mode of the posterior, since it will not be represented in the subsequent rounds due to truncation. \n\nWe agree that this could be a problem and we have highlighted this in the discussion. We do, however, emphasise that the loss of NPE tends to be mode-covering (it is the forward KL). In addition, it is expected that sequential methods that do not rely on truncation will suffer from these issues as well.\n\n> It is possible to miss modes with this method and inaccurately remove them from the analysis. [...] Coverage alone does not stop this from happening, to my knowledge.\n\nAs discussed above, we agree that it is important to cover all modes. However, if the approximate posterior consistently misses one mode (for several observations), this would be reflected in the coverage diagnostic. 
The diagnostic is limited by the fact that SBC(C) only evaluates the coverage on average and does not make statements for particular observations. We have emphasised this problem of SBC(C) in the manuscript.", " We thank the reviewer for their constructive and detailed feedback. We have updated our paper based on your suggestions, and believe that this has significantly strengthened our paper, and thus hope that the reviewer can now recommend the acceptance of the improved paper. We do, however, respectfully disagree with two of the central criticisms of the reviewer, namely the relationship of our algorithm to TMNRE and our coverage tests to those in previous work: \n\nRelation to TMNRE:\n\nWe agree that our paper has close connections with TMNRE. We have further clarified this in the introduction and the corresponding paragraph in the discussion. In particular, we agree that we should have more strongly emphasised the contribution of TMNRE to using coverage as a diagnostic (see below). \n\nHowever, in our original submission, we cited and discussed the TMNRE paper—the reviewer is now requesting us to cite it in the abstract (!), to cite an additional workshop paper on it, as well as several papers applying TMNRE (all of which happen to have an overlapping set of authors). We are awaiting guidance from the AC as to which of these requests we should follow. \n\nThe reviewer claims that “the proposed method is merely the combination of NPE with a simplified version [...] of the proposal algorithm of TMNRE”. We disagree, and believe that this statement does not do justice to several central contributions of our work:\n\nA) Importantly, TMNRE, in contrast to our algorithm, is focused on truncating marginals (hence the ‘M’ in its name), which implies a substantial difference in the algorithms in particular in higher-dimensional problems. The TMNRE paper focuses on one- and two-dimensional marginals—and only makes a single, passing reference on how NRE (not NPE!) could be generalized to truncating joints (on page 6, “In this paper, we estimate the one- and two-dimensional marginals necessary for corner plots. We emphasize that the data already generated during the truncation phase can be reused to learn arbitrary marginals of interest.”).\n\nOther than this passing reference, if and how TMNRE would work in this mode was never investigated in the paper. Several key questions on how to use TMNRE to truncate the full posterior are not addressed in the paper, for example: is the value of the density threshold epsilon=1e-6 still justified in high-D parameter spaces? How well does nested sampling, as proposed by TMNRE, perform in high-d spaces (since TMNRE requires rejection steps, it might be substantially slow)? How can one extend their diagnostic tool to high-D spaces (see below)?\n\nThere can be big differences between looking at joint posteriors or marginals. If we had based our truncated proposals on the one-d marginals only, gains in simulation efficiency would have been orders of magnitude smaller: in the pyloric network model, truncation based on the marginals reduces the prior volume to 80% of the full prior (the marginals cover almost the full prior range), whereas truncation based on the joint reduces the prior volume to 0.06%. In other words, truncation based on the joint provides a speed-up of several orders of magnitude compared to truncation based on the marginals. 
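This volume argument can be illustrated on a small hypothetical example (a sketch with assumed numbers, not the pyloric model): for a strongly correlated Gaussian posterior inside a uniform prior on [-1, 1]^d, the box spanned by the 1-d marginal HPRs retains far more prior volume than the 99.9% joint HPR.

```python
import torch

torch.manual_seed(0)
d = 5
cov = 0.005 * torch.eye(d) + 0.04 * torch.ones(d, d)   # strong correlations
post = torch.distributions.MultivariateNormal(torch.zeros(d), cov)
theta = post.sample((200_000,))
log_eps = post.log_prob(theta).quantile(0.001)          # 99.9% joint HPR

prior_draws = 2 * torch.rand(1_000_000, d) - 1          # uniform prior samples
joint_frac = (post.log_prob(prior_draws) > log_eps).float().mean().item()
lo = theta.quantile(0.0005, dim=0)                      # per-dim 99.9% intervals
hi = theta.quantile(0.9995, dim=0)
marg_frac = ((prior_draws > lo) & (prior_draws < hi)).all(dim=1).float().mean().item()
print(f"joint-HPR fraction of prior volume:    {joint_frac:.1e}")
print(f"marginal-box fraction of prior volume: {marg_frac:.1e}")
```

Under these assumed numbers, the joint HPR keeps far less prior volume than the marginal box, qualitatively mirroring the pyloric figures above.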
We have added a discussion paragraph that more clearly describes the shortcomings of TMNRE and describes the main solutions provided by TSNPE.\n\nIn summary, while we agree that TMNRE and TSNPE share several features (and we have further emphasised this in the introduction and discussion), our work provides key innovations that had not been addressed in TMNRE. We have added further details to more clearly describe both the similarities, but also the major differences between the algorithms.\n\nB) Notably, NRE and NPE are distinct approaches, and they have complementary strengths and weaknesses—therefore, developing a truncated approach for TSNPE is important and a useful contribution. We demonstrate (and prove) that the use of truncated proposals allows us to perform SNPE without correction of the loss function, thus side-stepping problems of previous SNPE methods. The fact that one can use the loss function of NPE (instead of the atomic loss of APT) is a central contribution of our work and allows us to scale SNPE to new and impactful real-world inference problems, which have previously been inaccessible to any SBI approach. We have now improved our convergence proof which hopefully makes it clear how TSNPE works, and shows that the combination of truncated proposals with NPE is non-trivial and that solutions to these problems were not a subject in the TMNRE paper. \n\nWe address concerns regarding clarity and correctness of our method later in our response. We are confident that our approach does not contain a ‘fundamental flaw’. ", " We thank the reviewers for their extensive comments and insightful feedback on our manuscript. The reviewers highlight the relevance of our work, call our work an “important” (nSm9 & Afw3) advance in SNPE methods and appreciate that our work might have “impact in many areas of science” (ue8u). They also appreciate our “convincing” experiments (Afw3) and our efforts to diagnose possible failure modes of our method (which “lends confidence to the proposed method” (VtwR)). Finally, we are happy that the reviewers believe our paper to be “very well written” (Afw3), “quite a pleasure to read” (ue8u) and a “high-quality paper” (ue8u). We thank the reviewers for this highly positive feedback.\n\nIn addition to requests for clarification, the reviewers suggest a number of experiments to further strengthen our paper. To address this, we have added supplementary figures that:\n1) compare inference quality of APT with and without truncated proposals;\n2) show that SIR sampling has a sufficiently high effective sample size (ESS) to produce good samples from the truncated proposal;\n3) demonstrate that the computational overhead of rejection sampling and SIR is small compared to the cost of simulating.\n\nFinally, as suggested by reviewer nSm9, we also more clearly explain how our submission is substantially different from previous work (in particular “Truncated marginal neural ratio estimation”, TMNRE).\n\nWe hope that our responses address the remaining concerns of all reviewers and that they can recommend our paper for acceptance at NeurIPS. We thank the reviewers for taking their time to review our paper.\n", " The paper introduces a simulation-based inference method for increasing the simulation-efficiency of Neural Posterior Estimation using an alternative proposal distribution. The proposal is constructed by truncating the prior to the HDR of the previously-estimated posterior for a particular observation of interest. 
They call their method Truncated Sequential Neural Posterior Estimation (TSNPE).\n\nTSNPE:\n- Overcomes issues with unstable training and leakage of posterior mass outside of the prior associated with other sequential versions of NPE, namely so-called APT and SNPE-C.\n- Enables a specialized coverage test, dubbed Simulation-based Coverage Calibration (SBCC), which exploits the tractability of the approximate posterior and general empirical expected coverage testing on the truncated region due to \"local amortization.\"\n- Has its empirical performance shown on six benchmark tasks from the simulation-based inference benchmark as well as two complex neuroscience problems. The performance on benchmarks is not significantly distinguished from other methods. The previous versions of Sequential Neural Posterior Estimation cannot robustly identify the posterior on the neuroscience problems.\n\n## Originality\n### Strengths\n- The combination of existing truncated proposals with existing Neural Posterior Estimation (NPE) is novel.\n- Addressing the issue of posterior \"leakage\" into the region without prior support in APT / SNPE-C is an important advance in the development of NPE-related techniques.\n- Although SIR is not novel, using it to sample from the truncated proposal in this context is new to me.\n\n### Weaknesses\n#### Relation to TMNRE\nA core weakness of originality in the paper is that the proposed method is merely the combination of Neural Posterior Estimation with a simplified version (only rejection / SIR sampling from the prior; no iterative bounding of prior for faster proposal sampling) of the proposal algorithm of [Truncated Marginal Neural Ratio Estimation (TMNRE)](https://arxiv.org/abs/2107.01214), another deep-learning based and highly-effective simulation-based inference algorithm. This holds when the marginals are taken to be the entire set of parameters; a mode which is suggested in the TMNRE paper on page 6. Unfortunately, this current paper relegates the exceptional similarity with TMNRE to a paragraph which emphasizes that both techniques are locally amortized (agreed) and writes that their aims are orthogonal (a point on which I disagree). Furthermore, just as in the TMNRE paper, the current paper emphasizes empirical expected coverage testing in order to avoid truncating the prior too aggressively; however, the TMNRE paper is never cited in this context. To the reviewer's knowledge, the TMNRE paper is the first to use tests of coverage in this way and even uses [a very similar algorithm](https://github.com/bkmi/tmnre/blob/main/tmnre/coverage/oned.py) for computing coverage as the current paper's proposed SBCC that doesn't require integrating on a grid as the current paper claims (consult their code, which is available on github). Finally, relevant to TMNRE, several examples of applications of simulation-based inference to scientific problems are cited; however, relevant work applying TMNRE (which uses the truncated proposals suggested by the current paper) is not cited. For example: [Fast and Credible Likelihood-Free Cosmology with Truncated Marginal Neural Ratio Estimation](https://arxiv.org/abs/2111.08030) and [Estimating the warm dark matter mass from strong lensing images with truncated marginal neural ratio estimation](https://arxiv.org/abs/2205.09126). 
Also the workshop paper which predated TMNRE, [Simulation-efficient marginal posterior estimation with swyft: stop wasting your precious time](https://arxiv.org/abs/2011.13951), is not cited either.\n\nTo address this issue, I suggest the authors:\n- Clearly highlight the similarities between TMNRE and relationship to NPE. In particular, by emphasizing the similar methodology (in the case of parameters of interest = all parameters) applied to a different simulation-based inference technique. In my opinion, the connection is so close that it could be mentioned in the abstract along with the relationship to Blum and François [2010]. Also highlight the new key innovations contained in this paper.\n- Clarify that the SBCC algorithm (as best as I can understand it given the state of its clarity in the current draft, see Clarity) is quite similar to techniques used in the computation of coverage by TMNRE and include TMNRE in citations in the discussions of using coverage as a diagnostic tool. (A very similar algorithm is also used in the averting a crisis in sbi paper, see github code.)\n- Introduce the application / workshop paper citations suggested above.\n\n#### SBCC Algorithms\nThe current paper suggests that coverage testing has only been done by integrating on a grid in the cited papers; however, this is not the case. An example is shown above in the previous section for TMNRE. Here is an example from the [Averting a crisis in SBI](https://github.com/montefiore-ai/averting-a-crisis-in-simulation-based-inference/blob/a53f34ccfa2b520c58f46158e52e235766da681f/workflows/SNL/mg1/snl.py#L116-L129) paper. Both TMNRE and the crisis in SBI papers use a similar algorithm to SBCC, and the crisis in SBI paper computes coverage for the entire set of parameters. In addition to the stated similarity to simulation-based calibration, this shows that the SBCC algorithm is not novel; although its explicit enumeration is useful for the community. \n\nTo address the issue:\n- The paper should be updated to reflect that coverage is not generally computed by these other papers based on integrating on a grid.\n- The paper should be updated to reflect that the SBCC algorithm is not novel.\n\n**Caveat: Algorithm 2, which corresponds to SBCC, has several typos. If this means that I didn't understand how it was novel, I would change my opinion here. Either way these typos must be fixed (see Clarity).**\n\n#### Nested Sampling\nThe paper does not mention nested sampling, although truncation is an important part of the process. Please consider citing the following review https://arxiv.org/abs/2101.09675, and these papers:\n- https://projecteuclid.org/journals/bayesian-analysis/volume-1/issue-4/Nested-sampling-for-general-Bayesian-computation/10.1214/06-BA127.full\n- https://arxiv.org/abs/0809.3437\n- https://arxiv.org/abs/1904.02180\n- https://arxiv.org/abs/1506.00171\n\n\n## Quality\n### Strengths\n- Addressing the leakage issue with APT / SNPE-C is demonstrated.\n- The neuroscience tasks are made possible using the algorithm.\n- The use of coverage testing to determine whether or not it is safe to truncate is a good idea and in line with previous work on the subject (TMNRE / crisis in sbi).\n\n### Weaknesses\n- The improvement on the benchmark is marginal, but the benchmark does not usually produce a clear \"winner\" among methods. 
Perhaps this should be stated within the paper.\n- In Figure 6 d (and corresponding text) the use of ensembles does not produce a conservative estimator of the posterior, although it improves matters compared to single neural networks. Truncating in this round would lead to an overconfident estimate of the HDR. This is not mentioned in the paper, but it should be highlighted as exactly the failure mode to avoid!\n- It should be made clear that computing coverage only in the final round does NOT tell the user if the algorithm truncated too aggressively since parameter samples are drawn from the already truncated prior. In this setting, it is like computing the coverage of a different problem where the prior has been replaced by a truncated version.\n- The term coverage is generally misused throughout the paper and not well explained in the proposed SBCC. The paper computes the expected coverage. Coverage is a frequentist concept which characterizes the confidence sets, not the credible regions discussed in this paper. Expected coverage is discussed in the averting a crisis in sbi paper and implies a coverage test averaged over simulations from the proposal. (See questions)\n- Truncating is somehow \"dangerous\" when the first rounds of NPE miss a mode of the posterior, since it will not be represented in the subsequent rounds due to truncation. This is not sufficiently addressed and should have its own section. (See the limitations response prompt for more discussion.)\n- It looks like the parameter acceptance rate for many benchmark problems goes to nearly zero at higher rounds. The amount of help SIR provides in this is not well quantified; does using K=1024 distort the proposal distribution due to high variance in the importance weights? Does it affect the final posterior? This is an important nuance which is not sufficiently explored. (See questions.)\n\n## Clarity\n### Strengths\n- The main text is well written and easy to follow.\n- I am extremely familiar with this field of research. I feel that it has a satisfactory preamble and main paper description. I will give a positive score but look to other reviewers to see if it is clear for others who are less well acquainted. Namely, I believe the paper references concepts from the development of deep-learning based simulation-based inference which are clear to me but perhaps not to others.\n- I have not looked at the code presented in the paper, but hydra has been used for documenting the details of every run for reproducibility. That's very good!\n\n### Weaknesses\n- The proof of convergence in Section 6.2 is difficult to follow. It could benefit from fewer words and a more terse mathematical description.\n- Algorithm 2 uses symbols which are starred then removes the stars later. It also uses symbols which are not introduced. This makes the algorithm unclear. Perhaps a verbal description would be beneficial as a comment? (See questions.)\n- Algorithm 3 uses symbols which are not introduced. This makes the algorithm unclear. Perhaps a verbal description would be beneficial as a comment? (See questions.)\n- The statement about the region which is well calibrated in 6.12 is unclear to me. Is the implication that the estimated posterior is truncated based upon the most recent round, so that the support of q handles any issues regarding sampling parameters from the prior? If this is the case, it is a nuance which I suspect will pass most readers by and should be clarified. 
(see questions)\n- It is not made clear that training on data drawn from all rounds should produce an accurate posterior. (Section 6.11) Drawing data from all rounds means that the \"implicit prior,\" due to the use of a mixed proposal, can be modulated by a sum of R step functions with arbitrary support. Consider that in the limit of large steps this implies the modulating function can be extremely expressive. (see questions)\n\n## Significance\n### Strengths\n- Although I am not an expert in neuroscience, the authors' claims imply that the expanded relevance of NPE to these problems is significant.\n- Addressing leakage in APT / SNPE-C is quite practical.\n\n### Weaknesses\n- As discussed in the Originality section above, the novelty of TSNPE is marginal.\n- Unless it is shown that training on data from all rounds is acceptable, the proposed algorithm has a fundamental flaw. (It is addressed by training only on data that lies within the previous round's proposal, but then the performance on various tasks is unknown.)\n- Please address the bullet points raised in the \"weaknesses\" section above.\n- Would you please clarify Algorithm 2 such that I can check whether or not it is novel?\n- Would you please quantify the effects of using SIR as opposed to rejection sampling for drawing from the proposal? An analysis of the variance of importance weights would help us to understand whether SIR produces useful samples for training.\n- Why do you consider computing coverage at every round to be optional? Truncating with an overconfident posterior, as is empirically produced by even the ensemble of networks in Figure 6, means that the proposal truncates too aggressively for the hyperparameter setting.\n- I am not convinced that training TSNPE based on data aggregated from all rounds during truncation converges to the HDR of the ground truth posterior. Could you please clarify where in the paper this is proven or prove it here (and include it in the paper)? Specifically, I am concerned by how a collection of data from many rounds implies a prior with mass modulated by the details of truncation regions of previous rounds. It also implies that the estimated posterior outside of the HDR is untrustworthy.\n- In Section 3.3 the authors claim that having (expected) coverage implies that the HDR of q_phi(t | x_0) is a superset of p(t | x_0). I don't think this is true. Would the authors please clarify or prove it?\n- The simulation efficiency gained by truncation is marginal if the posterior is nearly as wide as the prior. This is mentioned in the result section, but should be put into the limitations section.\n- It is possible to miss modes with this method and inaccurately remove them from the analysis. This should be emphasized. Coverage alone does not stop this from happening, to my knowledge. (see questions / quality weaknesses.)\n- The limitations of computing coverage using the SBCC algorithms in high dimensions should be mentioned. Yes, drawing parameters is cheap with this method, but it may be that a significant amount of proposed parameters are below the contour line of interest. I expect this to suffer from the curse of dimensionality.
Given a generative model $\mathcal P(\theta, x)$ of the model parameters $\theta$ and data $x$, the goal is to solve the inverse problem $\mathcal P(\theta | x)$. 
Neural Posterior Estimation (NPE) uses simulations from $\mathcal P(\theta, x)$ to train an approximate posterior distribution $q_\phi (\theta | x)$ that is tractable and has learnable parameters $\phi$.\nIf one is interested in solving the inverse problem for a particular data realization $x_0$, Sequential Neural Posterior Estimation (SNPE) proposes an iterative sequence, starting at the prior, where the sampler for the model parameters is drawn from the previously learned $q_\phi$.\n\nRather than plugging in the previous proposal, this paper proposes a sequence of truncated prior distributions.\nWith Truncated Sequential Neural Posterior Estimation (TSNPE), the sampler for $\theta$ at each step is drawn from the prior distribution truncated at the highest-probability region (HPR) of the most recent approximate posterior.\nThis overcomes the drawback of SNPE that the maximizer of the loss function no longer converges to the true posterior.\nThe authors show that as long as the chosen HPR covers the support of the true posterior distribution, TSNPE provably converges to the true posterior.\nThe authors also introduce a metric called simulation-based coverage calculation (SBCC) that evaluates the coverage of their learned posterior approximation. This coverage is based on whether the quantiles of the learned log-density cover the log-density evaluated at the true parameter across repeated draws from the generative model.\n\nThe authors evaluate their procedure on six synthetic benchmarks and compare to the original NPE and APT, which is a specific variant of SNPE.\nThey show that their method mostly matches or outperforms APT and NPE, although APT seems to do better in two cases.\nThey then compare APT and their TSNPE on a simulation of pyloric networks in crabs.\nTheir TSNPE method does well in this example and is similar to previously known posteriors.\nThey also show that APT does poorly in this case.\nTheir final example is on a multicompartment model of a single neuron. Since a single proposal network exhibited poor coverage based on their SBCC metric, they use an ensemble of 10 proposals and find this leads to much better coverage.\n\n## Strengths\nThis paper builds on the established Sequential Neural Posterior Estimation (SNPE) framework, but uses a truncated prior distribution as a proposal. The truncation is achieved by rejecting samples outside of the highest-probability region (HPR) of the previously-estimated posterior. This method is simple, yet it makes intuitive sense because it prevents sampling parameters that are not possible under the prior and parameters unlikely under the estimated posterior.\nMoreover, it corrects the main problem with SNPE by ensuring that the minimizer of the TSNPE objective at each stage is the true posterior.\nThis property hinges on the HPR covering the support of the true posterior, which may not be guaranteed in practice, but the authors develop the SBCC metric to verify the validity of the learned posteriors.\nThis lends confidence to the proposed method.\n\nThe pyloric network and multicompartment experiments demonstrate that TSNPE can work in challenging scenarios that the competing method (APT) fails in. In the multicompartment experiment, it was nice to see that the authors' SBCC metric identified that a single neural network led to poor coverage and was corrected by an ensemble method.\n\n## Weaknesses\nThe main weakness of this work lies in the simple simulation experiments in section 4.1. 
While TSNPE appears to match the performance of other methods, it would be reassuring to re-create the failure of APT observed in the more complex neurological applications in a simple setting.\n+ Instead of an ensemble mixture to solve the poor coverage in the multicompartment experiment, would it work to form a defensive mixture by sampling from the untruncated prior with some probability? Even if this probability were small, it may help get around the issue of the HPR not covering the true posterior support.\n+ It has been suggested that the forward KL objective (used here) leads to an overdispersed posterior compared to the reverse KL typical of variational inference. Is this applicable to TSNPE?\nThe authors address the main limitation of their method, namely that the estimated HPR may leave out the true posterior support, and they describe their proposed SBCC metric for checking poor coverage.\n
The authors present a new simulation-based inference (SBI) technique that sequentially learns a neural surrogate for the posterior. The key idea is to use a truncated version of the prior, rather than the current posterior, as a proposal distribution to generate new samples. This removes the need for an impractical importance weight during training, leading to two practical advantages: it solves a posterior leakage problem that other state-of-the-art sequential neural posterior methods (APT) have, and it simplifies the use of calibration diagnostics. There are some downsides, too: the truncation biases the posterior and sampling from the proposal distribution requires some rejection sampling, which can be inefficient.\n\nThis new method, TSNPE, is demonstrated on a suite of standard benchmarks and finally on two real-life problems from neuroscience. On the benchmarks it performs on par with other neural posterior estimation methods, while on the neuroscience problems it enables plausible solutions for the posterior where other methods failed.\n\n## Strengths\n\n- Scaling SBI methods to more complex problems is an important (and to some extent unsolved) problem, with potential impact in many areas of science.\n- The proposed method is, overall, sensible (with some caveats, see below) and can help make SBI work in more complex problems.\n- The downsides of the presented method (in particular the bias from $\epsilon > 0$ and the inefficiency of rejection sampling) are acknowledged and discussed (though this could be further improved, see below).\n- The empirical evaluation is strong. I appreciate that the authors present both benchmark problems (for which true posterior samples are available) and more challenging real-world neuroscience problems. In the latter, the authors do a good job of discussing the plausibility of the learned posterior. While the presented method does not lead to a huge improvement on the benchmark tasks, it does appear effective in the neuroscience problems, and that is ultimately more important.\n- The paper is very well written and clearly structured, quite a pleasure to read.\n- The figures (both schematics and experimental results) are well designed and very clear.\n- Related work is discussed in detail and the relation of the presented method to other approaches is made quite clear.\n- The authors have released the code to run the experiments.\n\n\n## Weaknesses\n\n- While TSNPE is a new algorithm, it is a modification of existing algorithms like SNPE and APT. Similarly, the proposed calibration tool builds strongly on existing work. 
This paper is thus somewhat incremental rather than revolutionary.\n- There are a couple of aspects that could use a more in-depth discussion, see the Questions section below. In particular, I would like to understand better why rejection sampling in TSNPE can be more efficient than rejection sampling based on the APT posterior. Also, it would be great if the authors could explain a bit more why sampling from a truncated prior isn't too sample-inefficient compared to sampling from the posterior.\n\n## Bottom line\n\nI would like to thank the authors for a solid contribution to the toolbox of simulation-based inference and for a high-quality paper. Still, I think it can be made stronger by extending the discussion and addressing some questions (see below). The authors present the posterior leakage of APT as a major motivation behind their work. As they point out, this posterior leakage can be fixed in APT through rejection sampling, but that can be inefficient. However, the proposed algorithm (TSNPE) also involves rejection sampling (plain or in the form of SIR). I would appreciate it if the authors discussed in more detail why the rejection sampling in TSNPE is more efficient (has a lower rejection rate) than the rejection sampling in the APT posterior. Could SIR be applied equally well to the APT posterior?\n\nIs it fair to say that in some sense, TSNPE is a compromise between vanilla NPE (no annoying importance factors in training) and SNPE (optimal use of past information in the posterior)? I'm trying to better understand how much information from past rounds is lost by sampling proportional to the prior rather than proportional to the current posterior (within some truncation region). Can the authors discuss this some more?\n\nPhrasing the same question in a different way: Compared to standard SNPE, the proposed algorithm presents two changes. 1) The proposal is truncated and 2) within the truncated support, we sample proportional to the prior rather than the posterior. It would be interesting to disentangle these two factors and understand how they individually contribute to the sample efficiency. If I'm not mistaken, only doing 2 (and not 1) corresponds to vanilla, non-sequential NPE, is that correct? How would an algorithm that does 1 but not 2 perform, i.e. one that truncates the posterior and then samples proportional to the posterior within this truncation region? Would this for all practical purposes be identical to APT?\n\nSome minor comments:\n- The authors do a good job of listing the most closely related references. Perhaps SBI has grown to be too large a field by now, but there are still a number of SBI works not cited here.\n- In the first sentence of the abstract, the authors describe SBI as \"learning posterior distributions\". As the authors themselves point out later, in many SBI methods other statistical objects like the likelihood or the likelihood ratio are learned, and other methods again do not have any explicit learning at all (like old-fashioned ABC). Maybe it is a good idea to rephrase this sentence. \n- The word \"likelihood\" is a bit overloaded and used both for the likelihood $p(x|\theta)$ and often also for the density of some trained surrogate of the posterior $q(\theta|x)$. This is to some extent unavoidable, as \"maximum likelihood\" is such a well-known term, but in other instances I found it slightly confusing (for instance in line 128). 
Perhaps the authors could refer to the \"probability density\" of the posterior rather than the \"likelihood\" of the posterior?\n- In the experiments, how big is the error of SIR due to finite K?\n- In the benchmark experiments, does the problem of APT posterior leakage occur? Do you fix it with rejection sampling (or in some other way)?\n- In the experiments, do the methods that rely on rejection sampling or SIR have a substantial overhead in runtime from that, or is that negligible?\nThe authors have addressed the limitations of their methods and discussed them well. Some aspects could use a more in-depth discussion, see the Questions section above.\n\nThe potential societal impact is not discussed, but I do not see any obvious issues.
The manuscript presents a new sequential neural posterior estimation (SNPE) algorithm for simulation-based inference. Simulation-based inference refers to the task of inferring parameters of interest from a real observation, based on a simulator. Neural posterior estimation (NPE) refers to methods aiming to build a surrogate for the posterior (typically a normalizing flow) based on samples from the simulator. Sequential NPE refers to their sequential counterparts that alternate between training and sampling phases with the objective of sampling simulated observations close to the real observation of interest and hence typically sample parameters to use for simulation from the current posterior surrogate. However, current SNPE methods suffer from two issues limiting their applicability:\n* As training samples are sampled from the current posterior surrogate, they are non-amortized, meaning that the algorithms must be fully rerun for each observation. In contrast, amortized methods reuse the same model to perform inference on multiple observations. Non-amortized methods hence have the drawback of being inefficient to diagnose. \n* As parameters used for simulation are not sampled from the prior, parameters with zero prior probability could be sampled. This phenomenon is called leakage.\n\nTruncated SNPE (TSNPE) addresses both of those issues by using, as proposal, a truncated version of the prior in place of the current surrogate posterior. The prior is truncated by removing parameters with surrogate posterior density below a given threshold, hence focusing on parameters of high posterior density. Consequently:\n* Parameters sampled are always within the prior support, solving the leakage issue. \n* It makes the method locally amortized, as the built surrogate is valid for observations resulting from parameters sampled from a region of the prior instead of a single observation, and hence enables efficient diagnostics. The paper introduces the simulation-based coverage calibration (SBCC) diagnostic that leverages this property.\n\n### Major\n* (+) The paper is very well written and easy to follow.\n* (+) The method is sound.\n* (+) It addresses two important issues in SNPE (leakage and diagnostics).\n* (+) Experiments are convincing.\n\n### Minor\n\n* (-) The SBCC diagnostic is quite known in the context of NPE and NRE (for example, see \"Arbitrary Marginal Neural Ratio Estimation\nfor Simulation-based Inference\", Rozet et al. 2021) and is a straightforward extension of SBC. Therefore, in my opinion, it should not be framed as a core contribution of the paper. The contribution rather lies in the development of an SBI algorithm that allows the use of this diagnostic (which is great). None.\n\nI do not see any potential negative societal impact of this work.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3 ]
[ "4oGlaFmyboU", "-awbGEyQ2V", "-awbGEyQ2V", "eIQ2nzFXvr5", "Z6gAB2DOITs", "Ze_sDjh27Ov", "BrZbbyaYBfm", "YKzs3dvoBg0", "1QKIoyI8ube", "v0WHdo6uBvo", "1wqZ6A1pNB", "uncTSFyp0rQ", "wYyx75RDsiE", "Dcq9ffwJ2Eh", "nips_2022_QW98XBAqNRa", "WWwvxu7P2I", "SevXSBJbY9L", "5xmwBTeSYv", "1wqZ6A1pNB", "uncTSFyp0rQ", "COnwU0dwLQ", "nips_2022_QW98XBAqNRa", "nips_2022_QW98XBAqNRa", "nips_2022_QW98XBAqNRa", "nips_2022_QW98XBAqNRa", "nips_2022_QW98XBAqNRa" ]
nips_2022_M12autRxeeS
Extracting computational mechanisms from neural data using low-rank RNNs
An influential framework within systems neuroscience posits that neural computations can be understood in terms of low-dimensional dynamics in recurrent circuits. A number of methods have thus been developed to extract latent dynamical systems from neural recordings, but inferring models that are both predictive and interpretable remains a difficult challenge. Here we propose a new method called Low-rank Inference from Neural Trajectories (LINT), based on a class of low-rank recurrent neural networks (lrRNNs) for which a link between connectivity and dynamics has been previously demonstrated. By fitting such networks to trajectories of neural activity, LINT yields a mechanistic model of latent dynamics, as well as a set of axes for dimensionality reduction and verifiable predictions for inactivations of specific populations of neurons. Here, we first demonstrate the consistency of our method and apply it to two use cases: (i) we reverse-engineer "black-box" vanilla RNNs trained to perform cognitive tasks, and (ii) we infer latent dynamics and neural contributions from electrophysiological recordings of nonhuman primates performing a similar task.
Accept
Building on recent theoretical work on the dynamics of low-rank recurrent neural networks, the authors present a method called LINT for learning low-rank network models directly from data. As the reviewers point out, from a purely technical perspective, the idea is straightforward: simply optimize a low-rank parameterization of an RNN. Similar ideas have been considered under the heading of _tensorized_ neural networks [e.g. 1]. See also related works cited therein, which considers low-rank parameterizations of weight matrices [e.g. 2]. Though the technical innovation may be limited, the work makes up for it with connections to recent research in theoretical neuroscience and interesting experiments. The reviewers raise many important caveats and limitations (e.g. are these tasks really reflective of the complexity of "real world" tasks in ML and experimental neuroscience?). Overall, though, the reviewers and I think this paper offers valuable contributions. I encourage the authors to revise their manuscript in light of these thorough and constructive reviews. [1] Novikov, A., Podoprikhin, D., Osokin, A. and Vetrov, D.P., 2015. Tensorizing neural networks. Advances in neural information processing systems, 28. [2] Denil, M., Shakibi, B., Dinh, L., Ranzato, M.A. and De Freitas, N., 2013. Predicting parameters in deep learning. Advances in neural information processing systems, 26.
train
[ "sqLAhlsRUIB", "AyApxb3-bRlV", "0MjhEJj2zNsO", "s5C9-M2SqC_", "3-FV5SelSgT", "No-Ifq3lOp", "Gn-k4kcme2Y", "-tRUYzPPDtZ", "sgt0XRChYO", "rrlH_jJfgV_", "DgKRFPYA06u", "nSXAXj4HwXT", "wiJgKauMfkj" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their thoughtful responses and revision. I’m still in favor of this work, although I think my main point (3) has not really been addressed. That is, the K-back task is perhaps helpful in further confirming that the authors’ method works well. But it doesn’t give me more reason to believe the model really retrieves neural dynamics underlying task performance rather than simply the task structure itself. Perhaps this is too difficult to answer w/o further experiments on simultaneous recordings, i.e. w/o averaging across independently drawn cells.\nI still think it’s a valuable contribution. But this prevents me a bit from raising my score further.", " I have read the rebuttal and remain positive. Adding the K-back task with a wide range of K's would be a valuable contribution. The authors may want to consider (and perhaps comment) on the observation that the Cueva paper sees increasing dimensions in experiments with a fixed delay interval.", " Thank you for your response. I still find the contributions not high, but I agree directly modelling neural activities is an interesting direction. In light of this, I increased my score.", " Thank you for the rebuttal. The K-back task is a nice addition, although I would argue that for the values of K explored (2-6) it is still low-dimensional.", " 1. The reviewer is absolutely correct, we believe this is the first study to train low-rank RNNs to fit neural activity. See “Novelty” in the General Rebuttal.\n2. The objective of an SVD is to optimize the low-rank reconstruction error on the connectivity matrix J. This objective does not take into account the input and output vectors. We believe it is therefore simply not relevant to approximation of the network dynamics. Note that other works have tried this, for example (Krause et al. biorXiv 2022) or (Schuessler et al. NeurIPS 2019). We will clarify this point in the Discussion of the revised paper.\n3. We agree that the low-dimensionality of the tasks is a key concern, and we have focused on addressing it by studying an additional task. See \"low-dimensionality of tasks\" and \"limitations\" in the General Rebuttal. These new results will be included in the revised manuscript. Concerning the non-simultaneous character of the recordings used here, we simply tackle it by considering only trial-averaged data, which is a classical approach with this kind of data. Although we have not yet had the occasion to perform analyses on datasets of simultaneously recorded neurons, this is a direction that we wish to explore in future work, and in which single trials could potentially be fitted.\n4. This point is particularly relevant to this line of work. Indeed the reviewer is correct that a 5-unit network is sufficient to perform the CDM task, although it is not clear how such a network will map to brain-like computations, distributed over large networks. Our approach specifically provides a method for identifying a small network of latent variables (with potentially multiplicative interactions) that emerges as an effective description of a large vanilla RNN. The rather large number of units in the vanilla RNN allows us to perform a principled mean-field analysis.\n5. Low-rank RNNs in fact precisely exhibit mixed-selectivity (see Mastrogiuseppe & Ostojic, Neuron 2018). The original goal of this class of models was to precisely reconcile the observations of mixed-selectivity with observations of low-dimensional dynamics.\n6. 
This is a relevant suggestion, and we will include more single unit examples in the revised manuscript.\n7. Reproducibility of the low-rank structures is addressed through the simulations of Fig. 2 and SI Fig. 1. It is not guaranteed that the same population mechanism would consistently be retrieved across different seeds and/or hyperparameters in the original full-rank network. Our goal is to present the reverse-engineering analysis as a proof-of-concept of how our method can help probe deep mechanistic underpinnings of trained black-box networks (see also the DMS example in Sups). Note that the full-rank network analyzed in these sections was selected randomly, i.e. not cherry-picked.\n\nMinor points:\n- Indeed, universal approximation would involve the presence of additional hidden units. We reworked the sentence to avoid confusion.\n- Indeed, the implication is that for any dataset the optimal rank has to be searched for by trying different values.\n- This was a typo, now corrected. Thanks for spotting it.\n- We are not sure which matrices are referred to here, but we would be glad to include them in the main text if space permits.\n- See points 4 and 5 above.\n- We made the choice to match units of the model to recorded units on a 1-to-1 basis, without adding more hidden units, mainly because we aim to understand the computations that arise at the level of the recorded circuit. By providing hidden units to the model, we were concerned that it could \"hijack\" the computations by delegating them to the hidden units, failing to teach us what dynamics can arise in the area being studied.\n\n", " We thank the reviewer for their time and their positive comments about our work. We address their concerns in the point \"novelty\" of our general rebuttal, reproduced here for convenience: \nThis work builds on previous studies that fall into two classes: (i) theoretical analysis of low-rank RNNs trained or designed to perform specific tasks (Mastrogiuseppe & Ostojic 2018, Dubreuil et al 2022, Schuessler et al 2019 notably); (ii) data analyses of full-rank RNNs trained to fit recorded neural activity (Rajan et al. Neuron 2016, Perich et al biorXiv 2021). The key novelty of this work is that we combine these two approaches: we train low-rank RNNs to fit neural activity, and then use the low-rank theory to analyze and interpret the obtained networks. Methods for automatic discovery of interpretable models of neural activity are an important goal for neuroscience and ML research, and we believe our work provides a significant contribution to this literature.\n\nWe hope this response (and the assessment of other reviewers) alleviates the reviewer's concerns about the contributions of our work to the field. We would therefore like to humbly ask if they might consider raising their score above the acceptance threshold.\n\nMinor weaknesses: the purpose of introducing the dynamical framework, with variables $\kappa_r$ and $v_s$, is to demonstrate the necessary low-dimensionality of those frameworks, and how they align with the general line of work modeling neural computations through low-dimensional dynamical systems (Vyas et al. 2020). In general, $\kappa_r$ and $v_s$ parametrize the low-dimensional trajectories illustrated, for example in Fig. 3 and Fig. 5, which we will clarify in the corresponding captions. 
Appendix A.2 mostly justifies mathematically the concept of effective connectivity, which shows how multiple RNNs with different connectivities can lead to the same trajectories, and justifies our use of low-rank models to simplify this degeneracy.\n\nQuestions:\n- How is the orthogonality of ‘m’ vectors enforced? During back-propagation, the weights $m_i$ and $n_i$ are trained, without constraining the $m^{(r)}$ vectors to be mutually orthogonal (same for the $n^{(r)}$ vectors). At the end of training, we compute the connectivity matrix J from the obtained m and n vectors. We then perform an SVD on J, which leads to the final $m^{(r)}$ and $n^{(r)}$ vectors, where the m vectors are mutually orthogonal (and same for the n vectors).\n- Zero-centered parameters: the general method does not assume centered weights for all vectors (and for the DMS task in sup. material the parameters are not zero-centered). The interpretation of the CDM task in section 3.2 is partly based on the empirical observation of centered weights (see fig. 4a). The main results remain valid if the weights happened to not be centered during training (in particular all section 3.2.1, identification of populations, and gain modulation mechanisms).\n- We are not entirely sure we have understood the question, but we believe the reviewer suggests a 3-step protocol that would involve: neural data -> full-rank model -> low-rank approximation of the full-rank model. This protocol would add a step and could lead to more information loss between the original data and the final low-rank model. Nevertheless, we do test our method with full-rank instead of low-rank models, in fig. 2f, and in fig. 5a notably. \n", " We are very grateful to the reviewer for their very positive comments and constructive suggestions. \nTo address the comment and question concerning the dimensionality of the task, we have followed the reviewers’ suggestion and examined the K-back task. See General Rebuttal for details (point “Low-dimensionality of tasks”).\n\nWe hope this addresses the reviewer’s concern. We would be glad to add those results to the revised paper.\n", " We thank the reviewer for their time and effort in reviewing our manuscript, and their positive and relevant remarks. Their concerns are overall aligned with those of other reviewers; we refer to our general rebuttal where they are addressed, and answer some specific points here.\n\nResponses to weaknesses:\n1. See point \"novelty\" in the General Rebuttal.\n2. See point \"other architectures\" in the General Rebuttal.\n3. What happens for more complex tasks? This is a very interesting question, to which we devote new analyses in our current revision, on a task that is higher dimensional by design, the K-back task.\nSee \"low-dimensionality of tasks\" in the General Rebuttal for details. \n\nMinor suggestions: these are very relevant and we have now corrected most of them. We do not understand what technical issue has caused the pdf to be registered as images, and we are very sorry about it. We had to move the explanation of the effective connectivity correlation due to lack of space, as it is a specific point. This measure does not appear except in section 3.1.\n\nQuestions:\nWhy use a hard constraint on rank rather than nuclear norm?\nWe found empirically that it is easier to control the rank by enforcing it with a hard constraint than with a soft one. 
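To make the two options concrete, the following minimal sketch (PyTorch; not our training code — the network size, the rank, and the 1/N scaling are illustrative assumptions) contrasts the hard rank constraint with a nuclear-norm penalty, and includes the post-training SVD step described above that yields mutually orthogonal m (and n) vectors:

```python
import torch

N, R = 512, 2  # illustrative network size and imposed rank

# Hard constraint: parameterize the connectivity as J = m n^T / N,
# so its rank can never exceed R, whatever the gradient updates do.
m = torch.randn(N, R, requires_grad=True)
n = torch.randn(N, R, requires_grad=True)
J = (m @ n.T) / N

# Soft alternative: train a full matrix and penalize its nuclear norm
# (the sum of singular values, a convex relaxation of the rank).
J_full = torch.randn(N, N, requires_grad=True)
nuclear_penalty = torch.linalg.matrix_norm(J_full, ord="nuc")

# Post-hoc orthogonalization: an SVD of the (learned) J gives the final
# m^(r) and n^(r) vectors, whose columns are mutually orthogonal.
with torch.no_grad():
    U, S, Vh = torch.linalg.svd(J)
    m_orth = U[:, :R] * S[:R].sqrt()
    n_orth = Vh[:R, :].T * S[:R].sqrt()
```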
More systematic comparisons with nuclear-norm regularization, however, remain to be performed.\nWhat are the multiple lines in Fig 2? \nThe multiple curves correspond to random seeds of the original full-rank network, reproducing the results across a range of full-rank networks trained to do the same task. They show that different seeds can lead to solutions of different dimensionality (see, for example, the one curve that stands out for DMS). This is now explained in the caption.\n- Equivalent of Fig 2f for the DMS task? Producing this figure is numerically particularly demanding, but we can add it as a supplement in the final paper.\n", " We thank all the reviewers for the time and effort invested in evaluating our work, and their encouraging comments. Here we address the comments common to several reviews, and summarize the corresponding changes to the paper. Individual responses to each reviewer are provided separately.\n\nNovelty: This work builds on previous studies that fall into two classes: (i) theoretical analysis of low-rank RNNs trained or designed to perform specific tasks (Mastrogiuseppe & Ostojic 2018, Dubreuil et al 2022, Schuessler et al 2019 notably); (ii) data analyses of full-rank RNNs trained to fit recorded neural activity [Refs]. The key novelty of this work is that we combine these two approaches: we train low-rank RNNs to fit neural activity, and then use the low-rank theory to analyze and interpret the obtained networks. In the revised paper, this will be clarified in the introduction.\nMethods for automatic discovery of interpretable models of neural activity are an important goal for neuroscience and ML research, and we believe our work provides a significant contribution to this literature.\n\nLow-dimensionality of tasks: We focused on tasks used with animals in the neuroscience literature. These tasks are simple from an ML perspective, but challenging for animals to learn, and important for identifying mechanisms (for recent examples on the CDM task, see (Krause et al., biorXiv 2022), (Langdon & Engel, biorXiv 2022), (Flesch et al., Neuron 2022)).\nWe nevertheless agree with reviewers that it is important to consider higher-dimensional tasks. During the revision period, we have added an analysis of the K-back task, in which a network receives a sequence of P stimuli in {-1, +1} (P being a random integer), and has to output the K-before-last stimulus received. This task requires the network to implement a working memory of capacity K, and the rank of a network implementing it should scale at least as K. We trained full-rank networks on this task for increasing values of K, and observed that their trajectories, through the LINT method, could indeed be fitted by low-rank networks of rank at least K (and usually very close to K). These results show that accommodating increasingly long-term sequential or temporal effects requires an appropriate increase in dimensionality (as notably pointed out by reviewer 8DBW, citing (Cueva et al PNAS 2020)). Still, even confronted with higher-dimensional implementations, our method proves to be useful in quantifying precisely the dimensionality required by certain computations.\n We have uploaded a revised manuscript that includes a preliminary version of this analysis (see appendix F and Sup. Fig. 8 for details) and will add a more thorough version in the final revised manuscript.
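For concreteness, here is a minimal sketch of how trials of this K-back task could be generated (NumPy; the bound on the sequence length is an illustrative assumption, and this is not the exact generator used in our experiments):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def k_back_trial(K, P_max=20):
    """One K-back trial: a random +/-1 stimulus sequence of random
    length P >= K + 1, and its target, the K-before-last stimulus."""
    P = int(rng.integers(K + 1, P_max + 1))   # random sequence length
    stimuli = rng.choice([-1, 1], size=P)
    return stimuli, stimuli[-(K + 1)]          # target: K steps before last

stimuli, target = k_back_trial(K=3)
```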
\n\nLimitations of low-rank networks: The reviewers are right in pointing out that the limitations of our method are not sufficiently described. We wish to clarify them partially with the new results on the K-back task, and in the revised paper, we will rework the introduction and discussion to better clarify the limitations.\n\nApplicability to other architectures: The extent to which our method applies to other recurrent architectures like GRU and LSTM is a very interesting question that merits a study on its own. Recent analyses of dynamics and structure in those networks (see Maheswaranathan et al. ICML 2019, and NeurIPS 2019, or Schuessler et al. NeurIPS 2019 for example) suggest that low-rank models, even vanilla ones, could capture an important part of the variance of those trajectories. We will incorporate this point in the discussion, and aim to direct future research towards it.\n\nWe are also very sorry to discover that the pdf appeared as a series of images. This was unintentional, and caused by a technical issue. We have uploaded a proper pdf with searchable text now.\n\nTracked changes: removed wrap-ups of sections 3.1 and 3.3, added a paragraph l. 155-163, and another l. 296-300, added appendix F and SI Fig. 8, minor clarifications and typos.\n", " This paper studies the problem of modeling dynamical systems using low-rank RNNs. The paper motivates a number of domains in which low-rank RNNs have given insight into the computations underlying a particular task. The paper proposes directly fitting low-rank RNNs (meaning the connectivity matrix in the RNN is restricted to be low-rank). The paper demonstrates how this restriction allows one to learn the correct effective connectivity for a number of synthetic tasks, and also applies the method to recordings from macaque prefrontal cortex while the animals performed a context-dependent decision making task. This method reveals that a rank-one network is sufficient to capture the neural activity. Strengths:\n1. The paper is well written, and does a good job of motivating the problem and summarizing relevant work.\n2. The paper's focus on low-rank structure as an important inductive bias for neuroscience is not particularly new, but is nonetheless important. For example, Fig 2f. is a powerful reminder of the utility of the appropriate inductive biases for systems neuroscience, where we are (generally) only able to sample a tiny fraction of the relevant neurons participating in a given task.\n3. The experiments on synthetic tasks as well as the application to neural data are clear and high quality.\n\nWeaknesses:\n1. The main contribution of the paper is essentially advocating for a type of regularization (low-rank structure), one that has been discussed plenty in the literature (for example by Mastrogiuseppe and Ostojic, which is already cited by the paper). While important, the overall significance and novelty does not seem that high.\n2. It is not clear how one generalizes the method to RNNs beyond the ones described by equation (1). For example, what about RNNs with multiple weight matrices (such as gated architectures: GRUs or LSTMs)? These are commonly used in machine learning and neuroscience; I am curious if or how the authors would apply their method to those networks.\n3. The paper studies tasks which can readily be solved by a low-rank vanilla RNN. But the space of tasks that we are interested in understanding is much greater! What happens if you apply this method to a dynamical system that is solving a more complicated task? For example, one that does not admit a low-rank solution? 
It would be good to get a sense of what to expect from the method in these scenarios.\n\nMinor suggestions for improvement:\n- Should explain the \"effective connectivity\" in the main text. It is used throughout, and thus seems important enough to put in the main text (currently the explanation is in appendix A.4).\n- The entire PDF is low resolution (for example, text), which makes it hard to read.\n- This is a nitpick, but replace $J^{eff}$ with $J^{\\text{eff}}$. That will prevent the \"eff\" from being italicized (which should only be done for mathematical variables).\n\nOverall, I am not so worried about point (1) under weaknesses, but look forward to seeing if the authors can address (2) and (3) in the rebuttal.\n - Why use an explicit (hard) rank constraint, rather than an implicit (soft) constraint, for example by penalizing the nuclear norm of the connectivity matrix? My understanding is that the latter admits an easier optimization problem, as the nuclear norm is the convex relaxation of a hard rank constraint.\n- There are multiple gray and red curves in Fig 2. What do they correspond to? Different random seeds?\n- Why is there no equivalent of Fig 2f. for the DMS task? Yes", " This paper describes results from a method for estimating low rank RNNs from observed trajectories. The method itself is just gradient descent on the set of weights that specify a low-rank RNN to match observed trajectories. The paper demonstrates that the method can recover the effective connectivity of known low-rank RNNs, does a good job describing the behavior of general RNNs trained on a set of four tasks, and does a reasonable job recovering neural trajectories from NHPs performing one of those tasks. Strengths\n\nGiven the widespread interest in RNNs in machine learning and neuroscience, this method may find broad applications in both of those fields. The low-rank description is more readily interpretable.\n\nWeaknesses\n\nI think the paper seems to take as a given that low rank RNNs can provide a veridical description of brain dynamics and learn many tasks. This is not at all well established. The tasks chosen here are well-described by low-dimensional RNNs, but many tasks and natural networks appear not to have that property. For instance, Cueva et al. (2020, PNAS) estimate the dimensionality of neural populations during the delay period of a variety of experiments (see especially their figure 7). They find a number on the order of 5-10 for the dimensionality over a few seconds. Leaving aside technical questions about whether a particular number is meaningful, one can read their result as decelerating but growing without bound (e.g., dimensionality $\propto \log t$), or as saturating at some time $T$. The first interpretation argues that a low-rank description is not appropriate---or at least that rank grows to be much larger than 10. The other possibility---that dimensionality saturates at some time T---suggests it would be impossible for the network to distinguish times $> T$. But that seems inconsistent with the data (e.g., Lewis & Miall, 2009, Trans Royal Soc B). \n\nKind of a minor issue, but submitting a set of images instead of a searchable PDF made the job of reviewing this paper more difficult than it needed to be.\n\n It would be very informative to include a situation where the dynamics are in fact relatively high-dimensional. Note that a full-rank RNN trained on the tasks used here does not necessarily exhibit the growth in dimensionality. 
One possibility is to compare to an RNN trained to distinguish a large number of temporal intervals. \nThe paper should do a better job of pointing out the computational limitations of low-rank RNNs and their possible limitations in describing brain data and cognition.\n\n", " In this paper, the authors use low-rank RNNs for modelling neural data. The connectivity matrix is approximated using a low-rank representation, which is then inferred by minimising the loss between the predicted neural outputs and the target ones. In task-optimised RNNs, relevant parameters are trained based on task targets. The method is then applied to various cognitive tasks to validate and extract insights. \n Strengths: \nThe models used are rather simple, which gives them good interpretability. \nThe experiments considered are interesting and elucidate different aspects of the model.\n\nWeaknesses:\nNovelty. The novelty of the paper in terms of the model is very limited. The paper shows that low-rank RNNs can help analyse neural data, but the contributions of the work beyond these experimental evaluations and applications of the low-rank RNNs are unclear.\n\nMinor weaknesses:\nPresentation: From the main text, it is unclear how \\kappa and \\v enter the analysis.\nSimilarly, the role of section A.2 is rather unclear. Is it to justify the effective connectivity metric?\n - The assumption is that 'm' vectors are orthogonal to each other (and similarly for 'n'). How is this constraint enforced during training? \n\n- 3.2 assumes that m, n, I, and w are all zero-centred. Please expand on why this is a valid assumption.\n\n- One can first obtain a full-ranked connectivity matrix and then approximate it using the low-ranked method suggested here (similar to what is done for SVD in the paper) rather than learning a low-ranked approximation directly. How does this alternative compare to what is suggested in the paper? On a similar note, one can train a truncated SVD directly (rather than making the approximation post-training); how does this method's performance compare with that of the proposed method? N/A", " This paper introduces low-rank RNNs, parameterized by the left and right eigenvectors of the connectivity matrix. The authors first show on simulated full-rank vanilla RNNs trained to perform some cognitive tasks that low-rank RNNs can accurately extract the essential dynamical and task structure learned by the full-rank systems. They thereby offer a computationally interpretable low dimensional representation of task activity. The authors then test their approach on neural recordings from two monkeys performing a context-dependent decision making task. Again the trained low-rank RNNs seem to extract the behaviorally relevant low-dimensional dynamics, at the same time producing synaptic weight distributions reminiscent of those observed in the brain. Strengths:\nGenerally speaking, I think this is a powerful approach that may have important implications in neuroscience. Building a low-rank structure right away into the models used for inference may profoundly ease detection of computational mechanisms from data.\n\nWeaknesses:\nSee detailed questions below. My major point probably is that I don't think it's clear whether the model really extracts the neural dynamics and how an animal is truly solving the task, or whether it merely finds a parsimonious solution to the task. 
1) One may perhaps still ask what is really novel about this approach, given that low-rank RNN-type models have been employed in computational neuroscience for a number of years now to obtain insight into neural dynamics and task performance. I assume the authors are the first to actually train such a model directly on simulated and real neural data and to systematically explore its abilities in identifying the dynamical structure?\n\n2) Directly inferring a lrRNN seems to be more powerful than training a full-rank RNN and using SVD posthoc. But why actually? Can the authors offer some theoretical insight why this is more efficient? Does this simply have to do with the much lower number of parameters that need to be inferred on lrRNNs, easing the training process?\n\n3) The authors call the CDM and DMS tasks ‘complex’, but compared to many real world tasks and those in use elsewhere in the ML community I find them almost simplistic. Could the low-rank structure be an ‘artifact’ or result of the very simple nature of these tasks? For instance, say you were to construct a very high-dimensional space where dimensions are somewhat modulated by task features like inputs, would a lrRNN always infer this same structure cos it’s really a property of the task, not of the recorded neural activity?\n\nI think this is a relevant question since I was surprised that, despite the very simple nature of the tasks, hundreds of neurons were used for the full-rank RNNs trained on the tasks. More importantly, the neural recording data sets appear to consist likewise of hundreds of neurons which were however drawn *independently* from different sessions and trials? This appears to imply that the neurons are *not* truly dynamically connected, and do not have any correlations beyond that induced by the task structure. Thus, the inferred lrRNNs may simply reflect this task structure but not really the processing and (missing) dynamics that goes on in the neural population during a task.\n\n4) A related question: As I assume hundreds of RNN units are not really necessary for learning these tasks, couldn’t we just train a much lower-dimensional system to begin with and thereby obtain a much more parsimonious representation of dynamics? I would guess for these tasks 5 units may actually already do the job?\n\n5) Is the phenomenon of mixed selectivity in cortical networks at odds with the low-rank structures identified?\n\n6) I would like to see more examples of original and fitted neural firing rates and neural trajectories. For instance, in sect. 3.1 the claim is made that essential connectivity features and neural trajectories were well reproduced by the lrRNN, but Table 1 shows only a single summary statistic w/o confidence limits. Same goes for the real neural recordings, more unit examples that illustrate neural behavior on different trials of the task would be great.\n\n7) Since Fig. 3-5 appear to show just examples of a single training run, I would like to see more stats on how robust the training process is, i.e. how likely the same structures are to be identified again when starting from many different initial conditions for parameters?\n\nI think point #3 is really my main issue. In any case it would have been nice to see the application of this method to at least one other data set with a very different task. But given the third point, in my mind it’s essential to see an application on a higher-dimensional set of truly simultaneously recorded neurons. 
\n\nMinor points:\n\n- Line 129: Universal approximation properties in my mind don’t apply here: To ensure this, you would need to allow for an arbitrarily long basis expansion, commonly implying that J would need to be of full rank and, in addition, you may need many more latent dim. than observed variables.\n\n- My understanding is that for any real physiological data set one would need to re-train the lrRNN for a number of different R’s?\n\n- Line 146-147: says that at least rank of 9 is necessary, but Fig. 2 only shows up to rank 5.\n\n- Would be good to have connectivity matrices from the Suppl. in the main text. I think they add a lot to the understanding of the inner workings of the trained lrRNNs.\n\n- I’m somewhat puzzled why so many units are used for lrRNNs to begin with, and then kept after training? The results on the CDM, for instance, seem to imply one just needs 5 units?\n\n- Does the lrRNN trained on the monkey data have as many dimensions as units? Isn’t this a constraint that could also prevent detection of latent structure in the data, since there are no latent variables in the lrRNN that could account for unobserved processes?\n Partly, see questions above." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "3-FV5SelSgT", "DgKRFPYA06u", "No-Ifq3lOp", "-tRUYzPPDtZ", "wiJgKauMfkj", "nSXAXj4HwXT", "DgKRFPYA06u", "rrlH_jJfgV_", "nips_2022_M12autRxeeS", "nips_2022_M12autRxeeS", "nips_2022_M12autRxeeS", "nips_2022_M12autRxeeS", "nips_2022_M12autRxeeS" ]
nips_2022_nC8VC8gVGPo
Training Spiking Neural Networks with Local Tandem Learning
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient over their predecessors. However, there is a lack of an efficient and generalized training method for deep SNNs, especially for deployment on analog computing substrates. In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL). The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN. By decoupling the learning of network layers and leveraging highly informative supervisor signals, we demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity. Our experimental results have also shown that the SNNs thus trained can achieve comparable accuracies to their teacher ANNs on CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. Moreover, the proposed LTL rule is hardware friendly. It can be easily implemented on-chip to perform fast parameter calibration and provide robustness against the notorious device non-ideality issues. It, therefore, opens up a myriad of opportunities for training and deployment of SNN on ultra-low-power mixed-signal neuromorphic computing chips.
Accept
This paper proposes a novel method of training spiking neural networks (SNNs) by matching the intermediate feature representations of SNNs with pre-trained ANNs. The method is on-chip and local, allowing SNNs to be learned directly on neuromorphic hardware. All reviewers agreed that the problem that the paper target to solve is important, and the proposed method is novel. During the discussion period, the authors successfully addressed the concerns of the reviewers. Therefore, I recommend acceptance.
train
[ "5J7vjXGkLvN", "wsgFc3XF0v-", "bLZIt6wS2Ru", "JfhZW-mt1_zf", "tzsuhNDeCW", "qWFIQo3ZEDu", "xxk2pZmHEYH", "HKZfrmmUaDl", "XULyGrnpl7l", "mH-LgMtzEo", "EkpSATo17Hg", "p7E4NPz_vYz", "Vl8acFyGHWN", "c7GmH_kyLu", "4EZnLd7biE", "KFssSzpRGP" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " >As for the supplemented results, I have a question about the setting of the experiments, i.e. the dataset and the noise level.\n\nThank you very much for raising the question on the experiment setting. Due to time constraints, we only manage to run experiments on the MNIST dataset, which is much simpler than the CIFAR-10 dataset that had been investigated in the original paper. Given the simplicity of the dataset, the effect of noise is not as obvious as the previous one. Therefore, we purposely increase the level of noise to provide a more convincing baseline to validate the effectiveness of the proposed LTL algorithm and compare with the E2E in-the-loop training. We are still running the experiments on the CIFAR-10 dataset, and we will provide a more concrete analysis in the final camera-ready version.\n\n\n>A few more comments on the connection with ANN-to-SNN and possible underlying explanation for LTL.\n\nThank you for sharing your insight and providing these excellent references. We agree with you that the success of the LTL method is because of the underlying connections with the ANN-like closed-form transformations. In fact, our formulation is also grounded on the firing rate equivalence between ANN and SNN. For the two papers you have referred to, the equivalence is established at the spike-train level instead of for each time step, which seems to be a promising direction to explore and we will study them in our future work.\n", " I appreciate the authors' effort in running additional experiments. The additional experiments are very promising and my concerns regarding the experiments are mostly addressed. Moreover, the explanations offered by the authors regarding noise robustness and hardware implementation are convincing. Thus, I am raising my score.\n\nAs noted by other reviewers, testing on actual neuromorphic hardware and scaling to larger datasets like ImageNet would further strengthen the contributions, although it is understandable if it is out of scope for the current submission. Nevertheless, I believe the strengths of the paper outweigh its weaknesses.", " I thank the reviewers for the rebuttal. I am satisfied with the authors‘ reply so I maintain the same rating for acceptance.", " I thank the authors for their detailed reply. The response addresses the concerns about the concept of LTL learning on chips as well as the discussion with related work, and presents promising preliminary results to deal with the device mismatch problem in comparison to E2E fine-tuning. Some weaknesses, e.g. lack of large-scale results and limitation for dynamic inputs, are also discussed in the response and the reasons are acceptable. So I think the strengths outweigh the weaknesses and I raise my score.\n\nAs for the supplemented results, I have a question about the setting of the experiments, i.e. the dataset and the noise level. It seems that no results in the paper are around 99%, and the noise level in Figure 5 is from 0.05 to 0.4 while it is much larger in this supplemented results (from 0.5 to 1.5). Is it from a different dataset and with a different setting? Hope the authors could give a clearer description and perfect the comparison (e.g. in Figure 5) in the final version.\n\nA few more comments on the connection with ANN-to-SNN and possible underlying explanation for LTL. The proposed LTL method is indeed more generalizable to ANN-to-SNN as it is not limited to IF neurons, and it is promising for neuromorphic computing. 
But I think the reason for the success of the LTL method may still depend on some possible underlying connections with the ANN-like closed-form transformations. This is because the single-layer neural network does not have universal approximation abilities, and there is no guarantee that such layer-wise teacher-student knowledge transfer can be learned if the layer-wise transformation is arbitrary (traditional knowledge distillation in ANNs is not layer-wise but model-wise). Actually, several works have derived the closed-form transformation between LIF neurons in order to train SNNs [1, 2], and [2] shows that the approximation transformation between LIF neurons with weighted firing rates can follow the same formulation as IF neurons with firing rates. Therefore, I guess this layer-wise transformation similarity between IF/LIF neurons and ANNs encourages layer-wise knowledge transfer and leads to successful results. \n\n[1] Wu J, Chua Y, Zhang M, et al. A tandem learning rule for effective training and rapid inference of deep spiking neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021.\n\n[2] Meng Q, Xiao M, Yan S, et al. Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.", " I am very happy to see the authors have made great improvements in experiments by including on-chip learning implementation and substantial experimental comparison with other SOTA methods. This work will be of sufficient significance to advance the field of neuromorphic computing.", " > What is the advantage of LTL if ANN-to-SNN can achieve better results? \n\nThe ANN-to-SNN conversion methods establish a firing-rate approximation only for the non-leaky IF neurons, and the performance typically degrades for other neuron models with richer neuronal dynamics. This limits their applications to the existing neuromorphic computing chips, especially for those using analog spiking neurons. In contrast, our method is more generalizable and we have demonstrated competitive results for both IF and LIF neuron models. \n\n> Experiments do not include the large-scale ImageNet dataset.\n\nThank you for pointing this out and we agree with you that it would be better to include ImageNet results to facilitate comparison. However, the computational cost of conducting ImageNet experiments is beyond the reach of our lab. As informed by the author of ref [1], training a state-of-the-art SNN on the ImageNet dataset requires 112 GPU days (8 cards * 14 days) with a time window of only 6. Instead, we use the Tiny-ImageNet which can be considered as an effective surrogate for the full ImageNet dataset to test out the scalability of the proposed learning rule in more diversified, real-world scenarios. In order to provide a more comprehensive comparison, we have added a comparison to another **two** SOTA ANN-to-SNN conversion methods as summarised in the Table below. It is clear that our method can achieve better classification accuracies given the same time window. \n\n| Method | Architecture | ANN Acc. (%) | T | SNN Acc. (%) | $\\Delta$ Acc. 
(%) | \n| --------- | --------- | --------- | --------- | --------- | --------- | \n| Calibration [3] | VGG-16 | 60.95 | 16 | 44.39 | -16.56 | \n| Calibration [3] | VGG-16 | 60.95 | 32 | 53.96 | -6.99 | \n| QCFS [2] | VGG-16 | 57.82 | 16 | 44.53 | -13.29 | \n| QCFS [2] | VGG-16 | 57.82 | 32 | 53.54 | -4.28 |\n| Offline LTL | VGG-16 | 57.39 | 16 | 56.85 | -0.54 | \n| Offline LTL | VGG-16 | 57.39 | 32 | 57.73 | 0.34 |\n| Calibration [3] | ResNet-20 | 58.29 | 16 | 44.89 | -13.40 | \n| Calibration [3] | ResNet-20 | 58.29 | 32 | 52.79 | -5.50 |\n| QCFS [2] | ResNet-20 | 55.81 | 16 | 52.32 | -3.49 |\n| QCFS [2] | ResNet-20 | 55.81 | 32 | 55.06 | -0.75 |\n| Offline LTL | ResNet-20 | 56.68 | 16 | 56.24 | -0.44 | \n| Offline LTL | ResNet-20 | 56.68 | 32 | 57.42 | 0.74 |\n\n[1] Hu, Y., Wu, Y., Deng, L., & Li, G. (2021). Advancing Residual Learning towards Powerful Deep Spiking Neural Networks. arXiv preprint arXiv:2112.08954.\n\n[2] Bu, T.; Fang, W.; Ding, J.; Dai, P.; Yu, Z.; and Huang, T. 2022. Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks. In International Conference on Learning Representations.\n\n[3] Li, Y.; Deng, S.; Dong, X.; Gong, R.; and Gu, S. 2021. A free lunch from ANN: Towards efficient, accurate spiking neural networks calibration. In International Conference on Machine Learning, 6316–6325. PMLR.\n\n\n> Since the proposed method relies on ANNs, is it limited to static inputs? How is the flexibility compared with direct training methods?\n\nYou have rightly pointed out the limitation of the proposed method. As already discussed in the conclusion section, our method is not applicable to dynamic datasets, but we would like to mention that the proposed teacher-student learning approach is a generalized information transfer method. In future work, we are interested in exploring and applying the proposed teacher-student learning approach to recurrent neural network and transformer architectures that are designed to handle temporally rich signals. \n", " Thank you very much for the insightful and detailed feedback. We are encouraged that you find our method novel and our experiments comprehensive. We would like to address your specific concerns as follows:\n\n> This paper emphasizes the on-chip implementation throughout the paper, but no real on-chip implementation is demonstrated, and more importantly, it is not explained clearly how conceptually the learning from a pre-trained ANN (rather than direct SNN training) can be implemented on chips.\n\nThank you for bringing this up; we fully agree with you. To facilitate hardware implementation of the proposed LTL learning rule, we have revised Figure 1 and provided more details on the on-chip implementation. As shown in Figure 1, the on-chip training can be performed efficiently by simultaneously extracting the layerwise targets from ANNs running on the host computer for data batch i+1 and performing on-chip SNN training for data batch i. This is similar to the conventional ANN training on GPUs, where the data preprocessing of the next data batch is performed on the CPU while the current data batch is used for ANN training on the GPU. The only difference is that the input data is preprocessed by the pre-trained ANN, in our case, to extract the targets for intermediate layers. 
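As a purely software-side illustration of this target-extraction step (a sketch, not our implementation: the toy network, the choice of hooked layers, and the scaling by the time window T are all illustrative assumptions; the round-down mirrors the operation we use when computing target spike counts):

```python
import torch
import torch.nn as nn

# Toy stand-in for the pre-trained teacher ANN; in practice this would
# be the trained network, with one hooked layer per SNN layer.
ann = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
hooked_layers = [ann[1], ann[2]]

targets = []
hooks = [layer.register_forward_hook(
             lambda module, inputs, output: targets.append(output.detach()))
         for layer in hooked_layers]

with torch.no_grad():
    ann(torch.randn(32, 784))   # one host-side forward pass fills `targets`
for h in hooks:
    h.remove()

# Convert each activation into a layerwise spike-count target over a
# time window of T steps, by scaling and rounding down.
T = 16
spike_count_targets = [a.clamp(min=0).mul(T).floor() for a in targets]
```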
Given that the inference of the ANN can be performed in parallel on the host computer, the overall training time is bottlenecked by the neuromorphic chip, which operates in sequential mode, where only one sample is being processed at a time.\n\n> There is some related work to address the real device mismatch problem [1]. They combine the on-chip and off-chip computation to correct real device mismatch by surrogate gradient learning. What is the comparison of the proposed method to their method? \n\nThank you for the excellent reference. We would like to highlight that the proposed LTL learning rule is more hardware friendly compared to the hardware-in-the-loop training approach introduced in [1]. As explained in this paper, the hardware-in-the-loop training requires two-way information communication, including reading and communicating intermediate neuronal states from the NC chip to the host computer to perform off-chip training, and synchronizing the updated weights to the NC chips. Given the sequential nature of these two processes and the high implementation cost for reading and communicating neuron states (e.g., requiring costly analog-to-digital converters for analog spiking neurons), our method is expected to have much lower hardware and time complexity. \n\nMoreover, the proposed layerwise training approach has better noise robustness and scalability than this end-to-end training approach. Specifically, we compare the online LTL rule to this in-the-loop training rule in addressing the **Device mismatch** introduced in Section 3.5. In particular, we study their robustness to different levels of noise and scalability to different network depths. As the results summarised in the Table below show, the LTL rule is highly robust to different levels of noise and can scale up to deeper network structures. In contrast, the performance degrades for the in-the-loop learning method. This can be explained by the fact that the gradients estimated at each layer tend to be noisy, and the errors accumulate across layers during training, whereas our layerwise training approach can effectively overcome this problem. Due to time constraints, we are unable to provide analysis for other noise types studied in Section 3.5, but we will include them in the final version.\n\n| Model | SNN Pre-trained (software) | Noise level | Non fine-tuned | LTL fine-tuned | E2E fine-tuned | \n| --------- | --------- | --------- | --------- | --------- | --------- | \n| VGG9 | 99.45% | std_p = 0.5 | 99.00% | 99.42% | 99.02% | \n| | | std_p = 1.0 | 95.99% | 99.36% | 97.31% | \n| | | std_p = 1.5 | 70.51% | 99.34% | 78.40% | \n| VGG11 | 99.55% | std_p = 0.5 | 98.29% | 99.69% | 99.40% | \n| | | std_p = 1.0 | 87.63% | 99.60% | 86.52% | \n| | | std_p = 1.5 | 39.21% | 99.52% | 10.42% | \n\nIn summary, we believe the proposed LTL learning rule is both effective and efficient for addressing the device non-idealities of mixed-signal NC chips. Given the limited access to functioning mixed-signal NC chips, we wish to broadcast the proposed LTL learning rule to the broader neuromorphic computing community, so as to allow interested hardware groups to implement and test it out on mixed-signal NC chips.\n\n[1] Cramer B, Billaudelle S, Kanya S, et al. Surrogate gradients for analog neuromorphic computing. Proceedings of the National Academy of Sciences, 2022, 119(4): e2109194119.", " > In section 3.4 the authors study the asymptotic time and memory complexity of LTL. 
It would be valuable to empirically validate these asymptotic trends. \n\nThank you for your suggestion. To provide a more concrete analysis of the memory and time complexity, we have measured the actual memory consumption and training time using the VGG11 architecture and the CIFAR-10 dataset. We present the average results over 10 training epochs in the Table below. In general, the GPU memory usage scales up nearly linearly with the time window size (i.e., T=16) for the STBP and Offline LTL methods, which follows our theoretical analysis. Compared to the STBP rule, the offline LTL rule requires slightly more memory space for the calculation of the layer-wise loss functions. As for the per-epoch training time, although both the offline and online LTL rules are significantly faster than the STBP rule, the speed-up ratios scale more poorly than the theoretical ones. We would like to acknowledge that we adopt primitive, unoptimized PyTorch GPU kernels in our implementation. Achieving the desired theoretical speed-up requires customized GPU kernels that can perform the training in parallel across both time and layers, and we leave this as future work. \n\n| Method | GPU Memory Usage (MB) | Training Time per Epoch (s) | \n| --------- | --------- | --------- | \n| STBP | 7072 | 489.80 | \n| Offline LTL | 7472 | 389.32 | \n| Online LTL | 718 | 103.30 | \n\n> How does the noise robustness of LTL compare with other ANN-to-SNN conversion methods?\n\nBoth the ANN-to-SNN and LTL methods adopt a firing rate-based feature representation. Given the same time window size, they are expected to achieve comparable noise robustness. However, if a longer time window is adopted for the ANN-to-SNN conversion method, as is normally the case, it is expected to suffer more from input noise (e.g., thermal noise and neuron silencing noise). Due to time constraints, we are unable to provide a concrete experimental analysis here, but we will include it in the final version. Nevertheless, we would like to highlight that the main purpose of the noise robustness experiments is to demonstrate that our LTL learning rule can perform on-chip, noise-aware learning, such that the accuracy drop during system deployment can be effectively addressed. \n\n> It would also be good to note any practical difficulties that might arise when implementing LTL on neuromorphic hardware.\n\nThank you for your suggestion. To facilitate hardware implementation of the proposed LTL learning rule, we have revised Figure 1 and provided more details on the on-chip implementation. As shown in Figure 1, the on-chip training can be performed efficiently by simultaneously extracting the layerwise targets from ANNs, running on the host computer, for data batch i+1 and performing on-chip SNN training for data batch i. This is similar to the conventional ANN training on GPUs, where the data preprocessing of the next data batch is performed on the CPU while the current data batch is used for ANN training on the GPU. The only difference is that the input data is preprocessed by the pre-trained ANN, in our case, to extract the targets for intermediate layers. Given that the inference of the ANN can be performed in parallel on the host computer, the overall training time is bottlenecked by the neuromorphic chip, which operates in a sequential mode, where only one sample is being processed at a time. 
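Purely as a schematic of this data flow (not an implementation: `ann_extract_targets` and `chip_train_layerwise` are hypothetical placeholders for the host-side ANN inference and the on-chip LTL update), the pipelined scheme of Figure 1 amounts to the following loop, written sequentially here although the two calls would overlap in time on real hardware:

```python
def run_pipelined_training(batches, ann_extract_targets, chip_train_layerwise):
    """Host prepares targets for batch i+1 while the chip trains on batch i."""
    pending = None  # (batch, layerwise_targets) waiting for the chip
    for batch in batches:
        prepared = (batch, ann_extract_targets(batch))  # host side, batch i+1
        if pending is not None:
            chip_train_layerwise(*pending)              # chip side, batch i
        pending = prepared
    if pending is not None:
        chip_train_layerwise(*pending)                  # flush the last batch
```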
\n\nWe would like to highlight that the proposed LTL learning rule is more hardware friendly compared to the state-of-the-art in-the-loop training approach [1] that requires two-way information communication, including reading and communicating intermediate neuronal states from the NC chip to the host computer to perform off-chip training and synchronizing the updated weights to the NC chips. Given the sequential nature of these two processes and the high implementation cost for reading and communicating neuron states (e.g., requiring to implement costly analog-to-digital converters for analog spiking neurons), our proposed method is expected to have much lower hardware and time complexity. Moreover, as discussed earlier, the proposed layerwise learning approach demonstrates better noise robustness and scalability. In summary, we believe the proposed LTL learning rule is both effective and efficient for addressing the device non-idealities of mixed-signal NC chips. Given the limited access to the functioning mixed-signal NC chips, we wish to broadcast this information to the broader neuromorphic computing community, so as to allow interested hardware groups to test out the proposed LTL learning rule and unleash the power of mixed-signal NC hardware.\n\n[1] Cramer B, Billaudelle S, Kanya S, et al. Surrogate gradients for analog neuromorphic computing. Proceedings of the National Academy of Sciences, 2022, 119(4): e2109194119.", " Thank you very much for your insightful and detailed review comments. We are encouraged that you find our paper well-written and that the problem tackled in our paper will have a significant impact to the neuromorphic computing community. We would like to address your specific concerns as follows: \n\n> Given that one of the main advantages of LTL is its efficiency and robustness, the experiments in section 3.2 - 3.5 could be significantly expanded to highlight this. In particular, the authors may want to consider additional SNN training baselines on top of STBP. \n\nWe agree with you on this point. Following your suggestion, in Section 3.2, we performed additional experiments on inference time energy efficiency with a recently introduced ANN-to-SNN conversion method QCFS [1] that works for the extremely short inference time window. As summarized in the Table below, the proposed LTL rule consumes fewer SynOps than this network conversion method probably due to a larger number of neurons remaining silent in our network. This can be explained by the round-down operation that has been taken when calculating the target spike count in our method. \n\n| Model | Method | T=4 | T= 8 | T=16 | T= 32 | \n| --------- | --------- | --------- | --------- | --------- | --------- | \n| VGG11 | QCFS | 0.25 | 0.50 | 0.99 | 1.98 |\n| VGG11 | LTL | 0.21 | 0.43 | 0.87 | 1.77 |\n| VGG16 | QCFS | 0.45 | 0.92 | 1.84 | 3.67 |\n| VGG16 | LTL | 0.27 | 0.57 | 1.17 | 2.39 |\n\nIn addition, to strengthen our analysis of the network convergence speed, we added the comparison to a recently introduced direct learning rule TET [2] in Section 3.3. As shown in Figure 4 of the main text, our proposed LTL rule achieved much faster network convergence to both the TET and STBP learning rules on the CIFAR-10 dataset. \n\n> The experiments in section 3.5 are interesting, but unfortunately difficult to contextualize without a baseline method to compare with.\n\nThank you for pointing this out. 
To improve on this, we have compared our online LTL learning rule to a recent hardware in-the-loop surrogate gradient learning method [3]. This method reads the actual membrane potentials and output spikes from the neuromorphic chip and sends them to the host computer that runs the surrogate gradient learning method to update the network parameters in an end-to-end fashion. The updated network parameters will be synchronized to the chip after every training iteration. In this way, this costly, off-chip learning method can address the hardware non-idealities discussed in Section 3.5. \n\nHere, we compare our method to this in-the-loop training method in addressing the **Device mismatch** introduced in Section 3.5. In particular, we study their robustness to different levels of noise and scalability to different network depths. As the results summarised in the Table below, the LTL rule is highly robust to different levels of noise and can scale up to deeper network structures. In contrast, the performance degrades for the in-the-loop learning method. This can be explained by the fact that the gradients estimated at each layer tend to be noisy, and the errors accumulated across layers during training. Whereas, our layerwise training approach can effectively overcome this problem. \n\n| Model | SNN pre-trained | Noise level | Non-trained | LTL trained | E2E trained | \n| ---------| --------- | --------- | --------- | --------- | --------- | \n| VGG9 | 99.45% | std_p = 0.5 | 99.00% | 99.42% | 99.02% | \n| | | std_p = 1.0 | 95.99% | 99.36% | 97.31% | \n| | | std_p = 1.5 | 70.51% | 99.34% | 78.40% | \n| VGG11 | 99.55% | std_p = 0.5 | 98.29% | 99.69% | 99.40% | \n| | | std_p = 1.0 | 87.63% | 99.60% | 86.52% | \n| | | std_p = 1.5 | 39.21% | 99.52% | 10.42% | \n\n[1] Bu, T.; Fang, W.; Ding, J.; Dai, P.; Yu, Z.; and Huang, T. 2022. Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks. In International Conference on Learning Representations.\n\n[2] Deng, S.; Li, Y.; Zhang, S.; and Gu, S. 2022. Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. arXiv preprint arXiv:2202.11946.\n\n[3] Cramer B, Billaudelle S, Kanya S, et al. Surrogate gradients for analog neuromorphic computing. Proceedings of the National Academy of Sciences, 2022, 119(4): e2109194119.\n\n", " Thank you very much for your thoughtful and constructive comments. We are delighted that you find our work to be valuable and the experiments are sufficient. We would like to address your specific concerns as follows:\n\n> Background knowledge of the Teacher-Student (T-S) learning approach is not introduced and some introductions about this aspect can be added.\n\nThank you for bringing this up. We will bridge this information gap by including the following paragraph into the Introduction section. \n\n*As a generalized knowledge transfer approach, teacher-student (T-S) learning leverages the expert teacher model to guide the training of the student model. Hinton et al. [1] first proposed the idea, known as knowledge distillation, to transfer the knowledge from a model ensemble or a large, regularized model to a small, distilled model so as to achieve effective model compression. Romero et al. [2] further proposed FitNets, which enhanced the knowledge distillation framework by transferring the fine-grained, intermediate feature maps of the teacher to the student model. Later, Huang et al. 
[3] proposed enforcing the student network to mimic the feature distribution of the teacher network. The T-S learning approach was originally introduced for model compression [1,2], and subsequent work extended it to other task domains, including domain adaptation [4] and ensemble learning [5]. For a more complete review of T-S learning, we refer the readers to [6]. In this work, we apply this approach to heterogeneous networks (i.e., ANN and SNN) for the first time, and we show that information can be transferred effectively and efficiently from the teacher ANN to the student SNN with our proposed LTL learning rule.*\n\n[1] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. \"Distilling the knowledge in a neural network.\" arXiv preprint arXiv:1503.02531 2.7 (2015). \\\n[2] Romero, Adriana, et al. \"Fitnets: Hints for thin deep nets.\" arXiv preprint arXiv:1412.6550 (2014). \\\n[3] Huang, Zehao, and Naiyan Wang. \"Like what you like: Knowledge distill via neuron selectivity transfer.\" arXiv preprint arXiv:1707.01219 (2017). \\\n[4] Lu, Jie, et al. \"Transfer learning using computational intelligence: A survey.\" Knowledge-Based Systems 80 (2015): 14-23. \\\n[5] Kang, Jaeyong, and Jeonghwan Gwak. \"Ensemble learning of lightweight deep learning models using knowledge distillation for image classification.\" Mathematics 8.10 (2020): 1652. \\\n[6] Gou, Jianping, et al. \"Knowledge distillation: A survey.\" International Journal of Computer Vision 129.6 (2021): 1789-1819. \n\n> The comparison of accuracy between the Local Tandem Learning (LTL) method and training the SNN with only the surrogate gradient method can be added to verify that this method learns the knowledge in the ANN. \n\nThank you for the suggestion. We have included a more recent direct training method, TET [1], for a more concrete comparison. The experimental results can be found in the response to Reviewer XCfF provided above. In summary, our results are comparable to the TET method on the CIFAR-10 and CIFAR-100 datasets, while requiring a slightly longer inference time window due to the underlying rate-based representation. These results align with our previous comparison to the direct training method Diet-SNN. \n\n[1] Deng, S., Li, Y., Zhang, S., & Gu, S. (2022). Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. ICLR22.\n\n> More architectures such as ResNet can be used to verify the high performance of this algorithm rather than just the VGG network architecture in order to better demonstrate the advantages of this approach in different network structures.\n\nThank you for the useful suggestion. As summarised in the table below, we have performed additional experiments on the ResNet architecture to evaluate the generalizability of our work. It is clear that the LTL learning rule can achieve superior classification accuracies on the ResNet architecture. Due to time constraints, we only managed to obtain the results for the offline setting, and we will include the online results in the camera-ready version.\n\n| Dataset | Method | Model | ANN Acc. (%) | SNN (IF) Acc. (%) | SNN (LIF) Acc. 
(%) | T |\n| --------- | --------- | --------- | --------- | --------- | --------- | --------- |\n| CIFAR-10 | LTL (offline) | ResNet-20 | 95.36 | 94.82 | 94.76 | 16 |\n| CIFAR-10 | LTL (offline) | ResNet-20 | 95.36 | 95.28 | 95.25 | 32 | \n| CIFAR-100 | LTL (offline) | ResNet-20 | 76.36 | 75.22 | 75.62 | 16 |\n| CIFAR-100 | LTL (offline) | ResNet-20 | 76.36 | 76.08 | 76.12 | 32 |\n| Tiny-ImageNet | LTL (offline) | ResNet-20 | 56.68 | 56.28 | 56.24 | 16 |\n| Tiny-ImageNet | LTL (offline) | ResNet-20 | 56.68 | 56.48 | 57.42 | 32 |", " Thank you very much for your thoughtful and constructive comments. We are encouraged that you find our paper novel, significant, and of sufficient interest to the neuromorphic computing community. We would like to address your specific concerns as follows:\n\n> For experimental results in Table 1, the comparisons mainly focus on the ANN-to-SNN conversion methods, while the authors only compared with one direct training method. I suggest the authors include other more recent direct training algorithms to show the superiority. \n\nThank you for pointing this out. Following your suggestion, we have included a more recent direct training method, TET [1], in the comparison. To ensure a fair comparison, we reproduced the results using their published code while only modifying the network structures to be the same as ours. As summarised in the Table below, our results are comparable to the TET method on the CIFAR-10 and CIFAR-100 datasets, while requiring a slightly longer inference time window due to the underlying rate-based representation. These results align with our previous comparison to the direct training method Diet-SNN. \n\n| Dataset | Method | Model | Acc. (%) | Time Window |\n| --------- | --------- | --------- | --------- | --------- |\n| CIFAR-10 | TET | VGG11 | 92.98 | 6 |\n| CIFAR-10 | LTL (offline) | VGG11 | 93.20 | 16 |\n| CIFAR-10 | TET | VGG16 | 93.49 | 6 |\n| CIFAR-10 | LTL (offline) | VGG16 | 93.23 | 16 |\n| CIFAR-100 | TET | VGG16 | 72.45 | 6 |\n| CIFAR-100 | LTL (offline) | VGG16 | 74.19 | 16 |\n\n[1] Deng, S., Li, Y., Zhang, S., & Gu, S. (2022). Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. ICLR22.\n\n> For online LTL learning, the first few time-steps are generally unstable, with inaccurate firing rate estimation, hence resulting in noisy training. I am wondering how this affects the learning performance. \n\nWe agree with you that the first few time-steps are generally unstable due to relatively inaccurate firing rate approximation. However, this issue can be easily overcome by treating the first few steps as warm-up steps and only performing parameter updates in the later time steps. To shed light on this, we performed an ablation study on the CIFAR-10 dataset with VGG16 by varying the number of warm-up time steps **Twarm**. It is clear that increasing the warm-up period leads to better gradient approximation, as evidenced by the improved test accuracy. In practice, we set **Twarm** to 14 for all our experiments under the online setting, which also reduces the training cost. \n\n| Twarm (online) | SNN (LIF) | SNN (IF) |\n| --------- | --------- | --------- | \n| 2 | 91.07 | 90.38 |\n| 4 | 92.27 | 92.20 |\n| 6 | 92.54 | 92.37 |\n| 8 | 92.71 | 92.26 |\n| 10 | 92.73 | 92.45 |\n| 12 | 92.74 | 92.15 |\n| 14 | 93.00 | 92.60 |\n\n> In the hardware-related noise experiments, the authors use Gaussian noise to simulate the parameter mismatch and thermal noise. 
However, in the real hardware implementation, these noises may not follow a Gaussian distribution. I am curious why the authors use Gaussian noise in this experiment. \n\nAs mentioned in Section 3.5, we use the noise models introduced by Büchel et al. [2]. According to their paper, these noise models are constructed using real data measured on the DYNAP-SE, which is a mixed-signal neuromorphic processor. Therefore, they can faithfully reflect the actual hardware noises that one can expect in a real scenario. \n\n[2] Julian Büchel, Dmitrii Zendrikov, Sergio Solinas, Giacomo Indiveri, and Dylan R Muir. Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors. Scientific Reports, 11(1):1–12, 2021.\n\n> Please summarize the key differences from previous ANN-to-SNN work [1]\n\nThank you for pointing out this excellent reference. Our proposed LTL learning method differs from this one in three aspects. First, their method needs to establish a mapping relationship for each synapse so as to achieve weight sharing between the ANN and SNN. In contrast, our method only establishes the equivalence at the neuron level, rather than at each synapse; hence it has a higher level of freedom and is more generalizable to different underlying neuron models. Second, their method uses ANN activations to approximate the spike count of the SNN and performs E2E backpropagation on the ANN. In contrast, our method eliminates the E2E backpropagation by using the layer-wise supervision signal to guide the learning of the SNN. Third, their method learns from the spike-train-level representation and hence ignores the temporal dynamics in the SNN. Our method adopts BPTT for each layer to address the temporal credit assignment issue, hence considering accurate temporal dynamics during training.\n\n> Please fix the typo of one of the works in Table 1: OCFS\n\nThank you very much for pointing out this typo. We have corrected it accordingly in the manuscript.", "We thank reviewers for their thoughtful and constructive comments. We are delighted that they found our method to be novel (**R1,R2,R3**) and useful (**R1, R3, R4**), and that our presentation is clear (**R1, R2, R3, R4**). We are pleased that they recognize this paper addresses an important and under-explored research problem of neuromorphic algorithm-hardware codevelopment (**R1, R2, R3**) and is of sufficient interest and impact to the neuromorphic computing community (**R1, R3**). We are glad they found our evaluation to be extensive and to cover many perspectives (**R2, R4**), including classification accuracy, the energy efficiency of inference, the convergence speed of learning, the computational complexity of learning, and robustness to hardware-related noises. One primary concern was insufficient discussion and experimental comparison for the on-chip learning setting. We agree on this and we have fully addressed this concern in the rebuttal. In the following, we address the specific questions, and we will incorporate all the feedback into the final version. ", " This paper proposes a novel method of training spiking neural networks (SNNs) by matching the intermediate feature representations of SNNs with pretrained ANNs. The method is on-chip and local, allowing SNNs to be learned directly on neuromorphic hardware. Experiments on image classification benchmarks demonstrate that the proposed LTL rule performs comparably to other ANN-to-SNN conversion methods. 
Moreover, LTL is able to train SNNs with high accuracy even under small time windows, which is important to limit the number of synaptic operations of SNNs. LTL-trained SNNs also require a small number of training iterations relative to STBP. Theoretically, LTL has lower or comparable memory and time complexity to baselines. Finally, LTL is robust to a variety of noises. **Originality**\nThe proposed method appears novel. To my knowledge, this method is the first on-chip ANN-to-SNN conversion method that uses layerwise feature-based transfer of information from an ANN to an SNN. \n\n**Quality**\nThe method is not theoretically motivated. However, this is not strictly necessary if there are good empirical results.\n\nExperiments are extensive. The authors demonstrate that LTL achieves comparable performance to many baseline methods (see Table 1). However, given that one of the main advantages of LTL is its efficiency and robustness, the experiments in section 3.2 - 3.5 could be significantly expanded to highlight this. In particular, the authors may want to consider additional SNN training baselines on top of STBP. Moreover, in section 3.4 the authors study the asymptotic time and memory complexity of LTL. It would be valuable to empirically validate these asymptotic trends. Finally, the experiments in section 3.5 are interesting, but unfortunately difficult to contextualize without a baseline method to compare with.\n\nIf the authors can adequately address the concerns on the experiments, then I think the overall quality of the paper can be dramatically increased.\n\n**Clarity**\nThe paper is generally well written. The mathematical notation is clear and figures are well illustrated.\n\n**Significance**\nThe paper is potentially quite significant. It appears to be a more efficient and robust method of training SNNs on-chip than previous methods, while maintaining high accuracy. However, further empirical experiments are needed to better justify these claims. Provided that the experiments are improved, the paper would be an important contribution to the field of SNN training, and may potentially be used on actual neuromorphic chips. How does the number of synaptic operations of LTL compare with other ANN-to-SNN conversion methods?\n\nDoes the asymptotic analysis presented in section 3.4 match empirical observations?\n\nHow does the noise robustness of LTL compare with other ANN-to-SNN conversion methods? The authors touch on some limitations of the work in the conclusion: they do not study architectures for sequential data such as RNNs and transformers. It would also be good to note any practical difficulties that might arise when implementing LTL on neuromorphic hardware.", " This paper proposes a Local Tandem Learning (LTL) method to train spiking neural networks by distilling knowledge from a pre-trained ANN in a layer-wise way. Two versions of LTL, offline learning and online learning, are introduced and the on-chip implementation is discussed. Experiments on CIFAR-10, CIFAR-100, and Tiny ImageNet show competitive performance and robustness to hardware-related noises. Strengths:\n1. 
Experiments from many perspectives are conducted, including classification accuracy, synaptic operations, convergence of learning, and robustness to hardware-related noises.\n\nWeakness:\n1. This paper emphasizes the on-chip implementation throughout the paper, but no real on-chip implementation is demonstrated, and more importantly, it is not explained clearly how conceptually the learning from a pre-trained ANN (rather than direct SNN training) can be implemented on chips. Figure 1(a) shows that the inference of both SNN and ANN is required in order to calculate the loss for each layer. But how can the inference of ANNs be implemented on neuromorphic chips without conversion to SNNs? Or is the inference of ANNs realized on common hardware and the signal is passed to chips? If so, is it only partly on-chip learning, and what is the advantage over off-chip learning since it still requires much off-chip computation (the inference of ANNs)? If the motivation is to address the real device mismatch problem, it is better to show some preliminary real on-chip results, otherwise the simulation of noises can also be leveraged by off-chip methods for finetuning. Or some comparison with finetuning by other off-chip methods in this simulation scenario is ok. There is some related work to address the real device mismatch problem [1]. They combine the on-chip and off-chip computation to correct real device mismatch by surrogate gradient learning. What is the comparison of the proposed method to their method? There should be some discussion and clarification to the above questions.\n\n2. The state-of-the-art ANN-to-SNN conversion method outperforms the results (accuracy and latency) in this paper [3]. Since LTL also requires knowledge from pre-trained ANNs and involves more complex training procedure than conversion methods (conversion methods only need to train ANNs, LTL should first train ANNs and then train SNNs with knowledge from the pretrained ANNs), what is the advantage of LTL if ANN-to-SNN can achieve better results?\n\n3. Experiments do not include the large-scale ImageNet dataset. Typical ANN-to-SNN conversion methods include results on large-scale datasets [2,3] and recently even direct training methods [4,5,6,7] have demonstrated competitive performance on ImageNet. Since this paper also transfer knowledge from ANNs, it would be better to include ImageNet results to show the scalability of the method.\n\n4. Is the proposed method applicable to dynamic DVS datasets, such as CIFAR10-DVS? SNNs can be more powerful to process DVS inputs and have demonstrated strong results [4,5,6,7]. Since the proposed method relies on ANNs, is it limited to static inputs? How is the flexibility compared with direct training methods?\n\n[1] Cramer B, Billaudelle S, Kanya S, et al. Surrogate gradients for analog neuromorphic computing. Proceedings of the National Academy of Sciences, 2022, 119(4): e2109194119.\n\n[2] Li Y, Deng S, Dong X, et al. A free lunch from ANN: Towards efficient, accurate spiking neural networks calibration. International Conference on Machine Learning, 2021.\n\n[3] Bu T, Fang W, Ding J, et al. Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks. International Conference on Learning Representations, 2022.\n\n[4] Zheng H, Wu Y, Deng L, et al. Going deeper with directly-trained larger spiking neural networks. Proceedings of the AAAI Conference on Artificial Intelligence, 2021.\n\n[5] Li Y, Guo Y, Zhang S, et al. 
Differentiable spike: Rethinking gradient-descent for training spiking neural networks. Advances in Neural Information Processing Systems, 2021.\n\n[6] Meng Q, Xiao M, Yan S, et al. Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.\n\n[7] Deng S, Li Y, Zhang S, et al. Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. International Conference on Learning Representations, 2022.\n See Weakness above. There is no potential negative societal impact.", " Summary:\nThis paper proposes a spiking neural network (SNN) learning rule, namely Local Tandem Learning (LTL). It combines the teacher-student learning framework and a novel layer-wise local learning method for ANN-to-SNN conversions. Experiments demonstrate that near-lossless knowledge transfer can be achieved on the Cifar-10, Cifar-100 and Tiny-ImageNet datasets with fast convergence and low computational complexity. Notably, the authors also show that the LTL rule is hardware friendly. It can address the hardware-related noises through on-chip learning, which remains a grand challenge for deploying SNNs on ultra-low-power mixed-signal neuromorphic computing chips.\n\nOverall, this paper is well-written and easy to understand. In terms of technical quality, the proposed local tandem learning rule is novel, significant, and well-motivated. It provides an appreciable advance in the field of neuromorphic computing. \n\n\n\n Pros:\nThe topic of this paper is of sufficient interest to the neuromorphic computing community. As clearly summarized in the paper, the LTL learning rule has several attractive features:\n1.\tThe proposed Teacher-Student learning framework is a generalized knowledge transfer approach. It can be applied to different neuron models, which is out of the reach of the existing ANN-to-SNN conversion methods.\n2.\tThe proposed local learning approach is a valuable exploration beyond the mainstream e2e SNN training algorithms. By decoupling the learning process for each layer, the proposed method reduces the required spatial-temporal credit assignment to only the temporal domain, which greatly improves the network convergence speed and can also alleviate the compounding gradient approximation errors of the surrogate gradient method.\n3.\tThis paper also addresses an under-explored direction of algorithm-hardware co-development. This paper shows the proposed LTL rule can be easily implemented on-chip using only locally available information. \n\nCons:\n1.\tFor experimental results in Table 1, the comparisons mainly focus on the ANN-to-SNN conversion methods, while the authors only compared with one direct training method. I suggest the authors include other more recent direct training algorithms to show the superiority. \n2.\tFor online LTL learning, the first few time-steps are generally unstable, with inaccurate firing rate estimation, hence resulting in noisy training. I am wondering how this affects the learning performance.\n3.\tIn the hardware-related noise experiments, the authors use Gaussian noise to simulate the parameter mismatch and thermal noise. However, in the real hardware implementation, these noises may not follow a Gaussian distribution. I am curious why the authors use Gaussian noise in this experiment.\n 1. Please summarize the key differences from previous ANN-to-SNN work [1]\n2. Please fix the typo of one of the works in Table 1: OCFS \n\nRef. 
\nWu J, Chua Y, Zhang M, et al. A tandem learning rule for effective training and rapid inference of deep spiking neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021. N/A", " This paper proposes a generalized learning rule called Local Tandem Learning (LTL) in which the SNN mimics the feature representation of a pre-trained ANN through local loss functions. The intermediate feature representation of a pre-trained ANN is transferred to the SNN through layer-wise local loss functions. This method has not only high accuracy but also fast convergence. It has two versions, offline and online, and both versions have high performance. In the experiments, the authors verify the high performance of this method on the CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets. Besides, this method is hardware friendly, can be easily implemented on-chip, and is robust to hardware-related noises. Strengths:\n(1)\tThe method proposes a local loss function that trains the SNN by learning the intermediate feature representations of the middle layers of the pre-trained ANN. It combines the advantages of ANN-to-SNN conversion and gradient-based training methods. This work is very valuable.\n(2)\tThe authors have done sufficient experiments. This method reduces the epochs required for training to convergence. It can achieve rapid network convergence within five training epochs on the CIFAR-10 dataset.\n(3)\tThe accuracy gap between the SNN trained by this method and the teacher ANN is almost zero. Compared with other SOTA learning algorithms, the proposed method can also achieve comparable performance.\nWeaknesses:\n(1)\tBackground knowledge of the Teacher-Student (T-S) learning approach is not introduced and some introduction to this aspect can be added.\n (1)\tThe comparison of accuracy between the Local Tandem Learning (LTL) method and training the SNN with only the surrogate gradient method can be added to verify that this method learns the knowledge in the ANN.\n(2)\tMore architectures such as ResNet can be used to verify the high performance of this algorithm rather than just the VGG network architecture in order to better demonstrate the advantages of this approach in different network structures.\n The authors put forward a generalized learning rule, termed Local Tandem Learning (LTL), in which the SNN mimics the feature representation of a pre-trained ANN through local loss functions. It uses the intermediate feature representations of the pre-trained ANN to supervise the training of SNNs. This method addresses the problem that SNNs are difficult to train because of their discrete spikes. It improves the accuracy and reduces the computational complexity of the SNN. The research of this article is very useful and valuable." ]
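The layer-wise teacher-student transfer that this record describes — each SNN layer trained to match the corresponding activation of a pre-trained ANN teacher through a purely local loss, with no end-to-end backpropagation — can be made concrete with a small sketch. This is a hypothetical reconstruction for illustration only, not the authors' released code: the loss form (mean-squared error between a firing-rate estimate and the teacher activation), the spiking-module interface, and all names (`local_tandem_step`, `snn_layers`, `ann_acts`) are assumptions.

```python
import torch.nn.functional as F

def local_tandem_step(snn_layers, ann_acts, x_spikes, optimizers):
    """One hypothetical layer-wise update in the spirit of LTL.

    snn_layers : list of spiking modules; layer k maps spikes -> spikes,
                 with tensors shaped (T, batch, features)
    ann_acts   : teacher ANN activations per layer (the local targets)
    x_spikes   : input spike tensor over T time steps
    optimizers : one optimizer per layer, so every update stays local
    """
    h, losses = x_spikes, []
    for layer, target, opt in zip(snn_layers, ann_acts, optimizers):
        h = h.detach()            # cut the graph: no cross-layer gradients
        out = layer(h)            # spike outputs over the time window
        rate = out.mean(dim=0)    # firing-rate estimate (spike count / T)
        loss = F.mse_loss(rate, target.detach())
        opt.zero_grad()
        loss.backward()           # surrogate-gradient BPTT inside this layer only
        opt.step()
        losses.append(loss.item())
        h = out                   # feed spikes forward to the next layer
    return losses
```

Because each loss touches exactly one layer, gradient-approximation errors cannot compound across the stack — the property the rebuttals above credit for the rule's robustness and hardware friendliness.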
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "JfhZW-mt1_zf", "HKZfrmmUaDl", "4EZnLd7biE", "c7GmH_kyLu", "mH-LgMtzEo", "xxk2pZmHEYH", "c7GmH_kyLu", "XULyGrnpl7l", "Vl8acFyGHWN", "KFssSzpRGP", "4EZnLd7biE", "nips_2022_nC8VC8gVGPo", "nips_2022_nC8VC8gVGPo", "nips_2022_nC8VC8gVGPo", "nips_2022_nC8VC8gVGPo", "nips_2022_nC8VC8gVGPo" ]
nips_2022_OxHn1Yz_Kl3
Causal Identification under Markov equivalence: Calculus, Algorithm, and Completeness
One common task in many data sciences applications is to answer questions about the effect of new interventions, like: `what would happen to $Y$ if we make $X$ equal to $x$ while observing covariates $Z=z$?'. Formally, this is known as conditional effect identification, where the goal is to determine whether a post-interventional distribution is computable from the combination of an observational distribution and assumptions about the underlying domain represented by a causal diagram. A plethora of methods was developed for solving this problem, including the celebrated do-calculus [Pearl, 1995]. In practice, these results are not always applicable since they require a fully specified causal diagram as input, which is usually not available. In this paper, we assume as the input of the task a less informative structure known as a partial ancestral graph (PAG), which represents a Markov equivalence class of causal diagrams, learnable from observational data. We make the following contributions under this relaxed setting. First, we introduce a new causal calculus, which subsumes the current state-of-the-art, PAG-calculus. Second, we develop an algorithm for conditional effect identification given a PAG and prove it to be both sound and complete. In words, failure of the algorithm to identify a certain effect implies that this effect is not identifiable by any method. Third, we prove the proposed calculus to be complete for the same task.
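For readers less familiar with the background this abstract assumes: the classical do-calculus over a fully specified causal diagram $G$ consists of three inference rules (Pearl, 1995), and the paper's contribution is an analogous calculus whose conditions can be checked on a PAG instead of on $G$. The rendering below is the standard formulation for a known diagram, not the paper's PAG rules:

```latex
% Pearl's do-calculus for a fully specified diagram G (standard form).
% G_{\overline{X}}: edges into X removed;  G_{\underline{Z}}: edges out of Z removed.
\begin{align*}
\textbf{R1: } & P(y \mid do(x), z, w) = P(y \mid do(x), w)
  &&\text{if } (Y \perp\!\!\!\perp Z \mid X, W) \text{ in } G_{\overline{X}},\\
\textbf{R2: } & P(y \mid do(x), do(z), w) = P(y \mid do(x), z, w)
  &&\text{if } (Y \perp\!\!\!\perp Z \mid X, W) \text{ in } G_{\overline{X}\,\underline{Z}},\\
\textbf{R3: } & P(y \mid do(x), do(z), w) = P(y \mid do(x), w)
  &&\text{if } (Y \perp\!\!\!\perp Z \mid X, W) \text{ in } G_{\overline{X}\,\overline{Z(W)}},
\end{align*}
% where Z(W) is the set of Z-nodes that are not ancestors of any W-node
% in G_{\overline{X}}.
```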
Accept
The reviewers are all in agreement that the paper constitutes a fundamental advance in the theory of causal inference. The authors responded to the reviewers' remaining questions in a detailed way, and there is no further issue with the paper being accepted.
val
[ "VGrrc-U_qYh", "glUFdtaLhU", "sNb62uRXji", "gzod7hDtZF", "zGtfYR-_UrMW", "2SvzT2NkHhr6", "lyCJMrwC2NT", "EM-k9AndGyoP", "XGgbmyd9eKVg", "rCfjAIGKucN", "X_gxr0T_n9U", "PBQrtl8m59h", "7Ijg-5tAFU" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors of the paper for their detailed reply to my questions!", " The authors answer most of my questions/concerns and I would like to increase my score.", " Thank you for your assessment and time. Below, we address the raised issues.\n\n1. “the natural question is the contribution of this work. It is very important to clearly mention what is the main difference between the proposed method and the other work and what is the significant of this work compared to the others … It will improve the readability to add more details of what has been already done (like Algorithm 1) and what is newly proposed in the paper.”\n\nThank you for raising this issue. We share your concern about distinguishing the contributions of this work clearly from the known results in the literature. In the introduction of the manuscript, we briefly discuss the related work and its shortcomings (lines 55-65), which motivates the subsequent list of contributions (lines 69-75). We state those contributions below and contrast them with the previous work.\n\n- Introduction of a new calculus for PAGs: We propose a new calculus in Section 3.2 that subsumes that of (Zhang, 2007), and resolves its shortcomings, as discussed in Section 3.1. One subtle, yet crucial difference between the two calculi is the sufficiency of blocking the definite status paths only (Def. 3) in the new calculus in contrast to blocking all the paths in the old calculus (Prop. 1 and Thm. 1). This was posed as an open question in (Zhang, 2008a, Footnote 15) and an affirmative answer to this question is key in establishing the new calculus in Thm. 1.\n\n- Development of the first complete algorithm for conditional effect identification (Section 4.2; Algorithm 2): This algorithm employs the new calculus (Thm. 2) and Algorithm 1, a simplified version of IDP due to (Jaber et al., 2019a). Note that the original version of IDP (Jaber et al., 2019a) can be used interchangeably with Algorithm 1 in the formulation of Algorithm 2 and it still yields a complete algorithm for conditional identification. The advantage of the version in Algorithm 1 is that it outputs simpler expressions when the marginal effect is identifiable (lines 251--258). Having said that, it is important to note that we do not claim the version of IDP presented in Algorithm 1 as part of our main contributions in the introduction (lines 69-75), and hence do not intend to take any credit away from the work in (Jaber et al., 2019a).\n\n- Completeness of the proposed calculus for the task of causal effect identification (marginal and conditional): This result is obtained by reducing CIDP (Algorithm 2) to a sequence of applications of the calculus rules in Thm. 2. Hence, the completeness of CIDP implies the completeness of the new calculus for the task of identification (Thm. 4).", " 2. “It is important to provide high-level intuition about the difference between the algorithm for identifying conditional causal effects proposed in “Identification of Conditional Causal Effects under Markov Equivalence” by A. Jaber and the algorithm proposed in this work.”\n\nThe algorithm proposed in (Jaber et al., 2019b) takes a more algebraic approach that generalizes the notion of c-component in causal diagrams to pc-component in PAGs. It then exploits the functional dependencies that can be read from the PAG to rewrite and decompose the query in an effort to identify a set of sub-queries that are necessary for the identification of the input conditional effect. 
This approach builds on the methodology first developed in (Tian, 2004), which identifies conditional effects given a causal diagram. Most importantly, the algorithm in (Jaber et al., 2019b) was not proven to be complete, and to our knowledge, it is still unknown whether it is complete.\n\nBy contrast, our algorithm follows a more symbolic approach by using the new calculus, namely rule 2, to transition from the original query to a more amenable conditional query, where the identification of the marginal effect is necessary for the identification of the conditional. You are right to note that our approach to conditional identification resembles, in part, that of (Shpitser and Pearl, 2006), which is why we acknowledge the work in lines 274-275. However, only mimicking their treatment in PAGs falls short of identifying some conditional effects, which we illustrate explicitly in Observation 3 (lines 284--292). In general, addressing the problem of identification in PAGs obviously builds on the previous work and understanding on identification in causal diagrams; however, it comes with its own distinctive challenges such as the one discussed in Observation 3, which arise due to the structural uncertainties in PAGs.\n\n3. “Papers on identifiability commonly have an assumption that the underlying DAG is semi-Markovian. I would like to ask whether it is also needed here and whether this has any effect (limitation) on the set of PAGs which are considered in this work? Will the proofs still hold if the underlying DAG of the corresponding PAG was not semi-Markovian?”\n\nThis is an important point that we have not made explicit in the paper, so thank you for pointing it out. In fact, the reduction of CIDP to a sequence of applications of the calculus implies that the algorithm is sound and complete even when the underlying DAG is non-Markovian and not just for the special case of semi-Markovian diagrams. This result holds because both the original calculus (Zhang, 2007) and the one in Thm. 2 consider non-Markovian models. We will definitely add a note to make this point clear.", " Thank you for the favorable assessment of our work and all the feedback provided. We will fix the typos and notations accordingly. Below we address the other raised issues:\n\n1. “What is the main difference between the algorithm proposed in by Jaber et al. 2019a and your algorithm? If the difference is insignificant, I believe the algorithm of Jaber et al. 2019a deserves more credit in the discussion…”\n\nThank you for raising this issue. We have no intention to take any credit away from the work of Jaber et al. (2019a), where the first complete algorithm for marginal effect identification was presented. Notice that we don't claim Algorithm 1 (the presented version of IDP) as part of the contributions, which are listed at the end of the introduction on page 2. Moreover, we introduce the corresponding section on marginal effect identification by saying “Sec. 4.1 introduces a version of the IDP algorithm [Jaber et al., 2019a] to identify marginal causal effects. 
The attractiveness of this version is that it yields simpler expressions whenever the effect is identifiable while preserving the same expressive power, i.e., completeness for marginal identification.” In that sense, Algorithm 1 is a mere simplification of the algorithm proposed in (Jaber et al., 2019a), which abstracts away some details and is hopefully easier to understand.\n\nOn the other hand, we do claim CIDP (Algorithm 2) as the second main contribution, in addition to the proposed calculus. CIDP transforms a given conditional query to a specific, and more amenable conditional query using rule 2 of the proposed calculus before calling IDP as a subroutine to identify the marginal effect if possible. However, the use of rule 2 of the calculus to obtain this reduction is quite subtle and non-trivial, as discussed in Section 4.2 (based on the three observations therein). Last but not least, it is important to note that the completeness of Algorithm 2 is the reason why we are able to establish the completeness of the calculus rules for the task of causal effect identification. So, while the atomic completeness of the calculus (Thm. 2) is a standalone consequence of the calculus itself, the completeness of the calculus for the task of causal identification (Thm. 4) is derived from the completeness of the CIDP algorithm.\n\n2. “Are there any runtime or space complexity guarantees for your algorithm? Is it polynomial time?”\n\nSimilar to the ID and CID algorithms for DAGs (Tian & Pearl, 2002; Tian, 2004), the IDP and CIDP algorithms run in polynomial time in the number of nodes in the graph. Operations for finding possible ancestors and descendants as well as finding buckets and pc-components of a node (lines 1 and 6 of IDP and lines 1, 2, and 6 of CIDP) are performed by graph traversal (e.g., depth-first search). Each of these operations is linear in the number of nodes in the graph. Specifically, each search takes time O(n + m) where n is the number of nodes and m is the number of edges. Testing definite m-separability in PAGs can also be done in time O(n + m) by a procedure based on the Bayes-Ball algorithm (Shachter, 1998). A similar procedure has been used by (Perkovic et al., 2018) for deciding m-separability when testing whether a set is admissible for adjustment in PAGs. The complexity is the same for deciding m-separability in MAGs, which was shown to be O(n + m) by (Van der Zander et al., 2019). Since applications of do-calculus in PAGs rely on testing m-separation, lines 4 and 8 of CIDP run in O(n + m). Finally, the partition process in line 10 of IDP is clearly polynomial.", " 3. “Are there any open problems left related to conditional effect identification (that authors consider interesting)? … Is anything known about the approximate version of the problem: how many samples are sufficient to identify the causal effect?”\n\nWe address questions Q.3 and Q.4 together as they are related. It helps to make the distinction between the task of causal effect identification and that of causal effect estimation.\n\nThis work is concerned with the first task (causal identification), which asks whether a target conditional effect is uniquely computable from P(V), the observational distribution, and given a PAG learnable from P(V). The objective of CIDP in Algorithm 2 is to decide whether the effect is identifiable and provide an expression for it when the answer is yes, while being agnostic as to whether P(V) can be accurately estimated from the available samples. 
An interesting direction for future work is to investigate the setting where the causal effect is not identifiable given the PAG, according to Definition 4, and study whether the causal effect can be bounded (Balke and Pearl, 1997).\n\nAs for the second task, the estimation of the conditional causal effect using the identification formula provided by Algorithm 2 poses several challenges under finite data. The number of samples sufficient to identify a given effect would depend on the size of the expression, among other factors, and naive methods for estimation exacerbate this problem. Recent work such as (Jung et al., 2021) proposes a double machine learning estimator for marginal effects that are identifiable given a PAG. An interesting direction of work is to generalize this approach to conditional causal effects identifiable by CIDP.\n\n4. “I would kindly suggest renaming Proposition 1 (by Zhang 2007) into a Theorem, as calling it a proposition appears to be a bit judgemental.”\n\nThank you for the suggestion. The choice to present this result as a proposition was not intended to diminish the groundbreaking work of (Zhang, 2007), but rather to distinguish previous work from our main contributions. We can add a clarification that propositions are from other papers while the label ‘theorem’ is reserved for the main contributions claimed in our paper. We are certainly open to reconsidering the issue in case there is an agreement among the reviewers about this convention.\n\n(Jung et al., 2021) Yonghan Jung, Jin Tian, and Elias Bareinboim. \"Estimating identifiable causal effects on Markov equivalence class through double machine learning.\" International Conference on Machine Learning. PMLR, 2021.\n\n(van der Zander et al., 2019) Benito van der Zander, Maciej Liśkiewicz, and Johannes Textor. \"Separators and adjustment sets in causal graphs: Complete criteria and an algorithmic framework.\" Artificial Intelligence 270 (2019): 1-40.\n\n(Shachter, 1998) Ross D. Shachter. \"Bayes-Ball: The rational pastime.\" In Proceedings of the 14th Annual Conference on Uncertainty in Artificial Intelligence, 1998.\n\n(Balke and Pearl, 1997) Alexander Balke and Judea Pearl. \"Bounds on treatment effects from studies with imperfect compliance.\" Journal of the American Statistical Association 92, no. 439 (1997): 1171-1176.", " Thank you for the very favorable assessment and all the feedback provided. We are seriously considering your suggestion of a journal submission to provide a unified and thorough treatment of the identification problem in PAGs. Below, we address the issues you raised.\n\n1. \"The do-calculus is famously complete for effect identification across all three levels of the causal hierarchy (Shpitser & Pearl, 2008). How, if at all, could the methods proposed here be leveraged for counterfactual inference in PAGs?\"\n\nThank you for the good and non-trivial question. Perhaps there are different ways of thinking about this, and we would like to share two that we consider complementary. \n\nFirst, the do-calculus is specific to queries in layer 2, hence the name of the calculus. This means that it can express counterfactuals of the form $P(Y_x | Z_x)$, in which the subscripts of all terms are the same; also, it doesn’t allow for nesting. In other words, quantities such as $P(Y_x | Z_{x’})$, where $x \neq x’$, are not allowed in the do-language and in layer 2. For a more technical discussion on the semantics of this and layer 2 queries, Def. 5/Eq. 1.9 in (Bareinboim et al., 2022) could be enlightening. 
Having conflicting subscripts (worlds) is a distinct feature of layer 3. Having said that, it’s not the case that the do-calculus solves any counterfactual query in layer 3, but just do-queries related to layers 1 and 2. The current most general result in terms of the do-calculus we believe is (Corollary 2, Lee et al., 2019), which may also contain a good survey.\n\nSecond, the work you mentioned – (Shpitser and Pearl, 2008) – provides methods to perform counterfactual reasoning in many settings but not in all, as discussed in (Pearl, 2001; Avin et al., 2005). Examples of more recent work relaxing some of the conditions of this work and prior literature include (Shpitser and Sherman, 2018; Zhang and Bareinboim, 2018), and a more recent, claimed complete method for arbitrary nested counterfactual identification in (Correa et al., 2021). It’s out of the scope of this paper to do a full survey of the literature on layer-3 identification, so we apologize beforehand if we missed some references or nuance. To be clear, we make no claim about counterfactual identification and don’t feel this result follows immediately from our or these other works in the literature. One thing that is interesting in these works, as we understand them, is that they do not use calculus per se, and follow a more algorithmic approach, generalizing Tian’s ID/C-component algorithm. On the other, more logical side, (Halpern, 2000) has his own axiomatization, and even though it is more focused on interventional quantities, it could be used to think about counterfactuals.\n\nOne distinctive challenge with counterfactual queries in the context of PAGs is how to generalize the notion of counterfactual factorization and graphs so as to read off relevant properties from them. We do feel the results in our paper can help provide some insights on this challenge, but it’s incomplete and a current, ongoing, and challenging project. We will add some remarks on this, thank you for raising the issue.\n\n2. “There is some discussion in the manuscript of an R implementation. Mentioning this without providing any empirical results or supplemental code is a bit frustrating, as it’s unclear how reviewers are meant to evaluate the performance or usability of the CIDP algorithm. I would either include this material as part of the submission or cut the reference altogether.”\n\nAn empirical evaluation of CIDP is provided in the current supplementary material, please see Appx. G. Unfortunately, we had to move it to the appendix due to space limitations and to ensure a thorough and clear presentation of the theoretical claims. We plan to utilize the extra page in the final version, if the paper is accepted, to present the empirical evaluation. We are currently polishing the code before releasing it to the public. \n\n3. “There has been considerable work in recent years on CATE estimation. Could methods like inverse propensity weights or double machine learning be used to estimate conditional causal effects in PAGs, at least under certain conditions? Again, perhaps beyond scope to go too deep into this, but a natural follow up that will likely occur to many readers.”\n\nThis is a good suggestion indeed. Recent work utilizes frameworks such as double machine learning to estimate marginal causal effects given a PAG (Jung et al., 2021), following the work of (Chernozhukov et al., 2018). A natural future research direction is to use similar techniques to estimate conditional causal effects as well.", " 4. 
“I am not convinced that the brief reference to an independence oracle constitutes a thorough discussion of the method’s limitations.”\n\nAssuming the presence of an oracle for conditional independence encapsulates the challenge of dealing with finite data and of testing for conditional independence thereof. Another challenge lies in the computational complexity of learning the PAG in the first place (Colombo et al., 2012), and estimating the expression when the effect is identifiable (Jung et al., 2021). We will elaborate on these issues in the manuscript. Thanks for the excellent point.\n\n(Bareinboim et al., 2022) Elias Bareinboim, Juan D. Correa, Duligur Ibeling, and Thomas Icard. “On pearl’s hierarchy and the foundations of causal inference.” In Probabilistic and Causal Inference: The Works of Judea Pearl, pp. 507-556. 2022.\n\n(Jung et al., 2021) Yonghan Jung, Jin Tian, and Elias Bareinboim. “Estimating identifiable causal effects on Markov equivalence class through double machine learning.” International Conference on Machine Learning. PMLR, 2021.\n\n(Correa et al. 2021) Juan D. Correa, Sanghack Lee, and Elias Bareinboim. \"Nested counterfactual identification from arbitrary surrogate experiments.\" Advances in Neural Information Processing Systems 34 (2021): 6856-6867.\n\n(Lee et al., 2019) Sanghack Lee, Juan D. Correa, and Elias Bareinboim. \"General identifiability with arbitrary surrogate experiments.\" In Uncertainty in artificial intelligence, pp. 389-398. PMLR, 2020.\n\n(Shpitser and Sherman, 2018) Ilya Shpitser and Eli Sherman. \"Identification of personalized effects associated with causal pathways.\" In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2018.\n\n(Zhang and Bareinboim, 2018) Junzhe Zhang and Elias Bareinboim. “Fairness in decision-making—the causal explanation formula.” In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1. 2018.\n\n(Chernozhukov et al., 2018) Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, James Robins. “Double/debiased machine learning for treatment and structural parameters.” The Econometrics Journal, 21(1), 2018.\n\n(Colombo et al., 2012) Diego Colombo, Marloes H. Maathuis, Markus Kalisch, and Thomas S. Richardson. \"Learning high-dimensional directed acyclic graphs with latent and selection variables.\" The Annals of Statistics (2012): 294-321.\n\n(Shpitser and Pearl, 2008) Ilya Shpitser and Judea Pearl. “Complete identification methods for the causal hierarchy.” Journal of Machine Learning Research 9 (2008): 1941-1979.\n\n(Avin et al., 2005) Chen Avin, Ilya Shpitser, and Judea Pearl. \"Identifiability of path-specific effects.\" In Proceedings of the 19th international joint conference on Artificial intelligence, pp. 357-363. 2005.\n\n(Pearl, 2001) Judea Pearl. \"Direct and indirect effects.\" In Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, 2001.\n\n(Halpern, 2000) Joseph Y. Halpern. \"Axiomatizing causal reasoning.\" Journal of Artificial Intelligence Research 12 (2000): 317-337.", " Thank you for the very favorable assessment and the feedback provided. Below, we address the questions you raised.\n\n1. 
“It is not perfectly clear to me whether or not these results can be also applied to counterfactual queries (I think so but this point might be made explicit).”\n\nEven when the specific causal diagram is available (instead of the equivalence class as in our case), counterfactual inferences require distinct machinery beyond the scope of the calculus rules and ID algorithm we discussed. One distinctive challenge with counterfactual queries in the context of PAGs is how to generalize the notion of counterfactual graphs from PAGs so as to read off relevant properties from them. The results in this paper can provide insights on how to solve this task, which is an ongoing project. We will add some remarks on this important question.\n\n2. “As I understand, PAG identifiability corresponds to identifiability on all the CGs compatible with the PAG and the fact that the query would give the same result on each CG. What I don't understand is whether non-identifiability in PAGs means that there is at least a non-identifiable CG compatible with the PAG or it might be the case that all the CGs are identifiable but lead to different values.”\n\nThat’s a good question, thank you. According to Def. 4, non-identifiability given PAGs, in principle, can originate from one of the two conditions you mention: either the effect is not identifiable in some causal diagram in the equivalence class of the PAG, or the effect is identifiable in all the diagrams but it is not unique. Our completeness result (Thm. 4) concludes that for every non-identifiable effect given a PAG, there exists at least one causal diagram in the equivalence class of the PAG where the effect is not identifiable. This is stated in the contributions (lines 72--74) and the discussion before Thm. 4 (lines 317--323). We will make this point more prominent in the manuscript.", " This paper studies the problem of conditional causal effect identification, which asks to recover P(y|do(x), z) from the observational distribution. There is a sequence of works that propose an algorithm for this problem under the assumption that the underlying causal graph is known. This paper relaxes this assumption by assuming that only the Markov equivalence class of the underlying causal graph is known. More specifically, the paper assumes that the so-called partial ancestral graph (PAG) is known. The paper proposes a new calculus, which is a modified version of Zhang's [2007] Calculus. For the new calculus, the authors prove that it is complete and atomic complete. Moreover, they propose an algorithm for the identification of conditional causal effects given a PAG and show that the algorithm is complete, that is, if some causal effect cannot be identified by this algorithm then it is not identifiable in some causal graph in the equivalence class of a given PAG. *Significance:* The problem of identifiability of interventional causal effects is an important and long-studied problem. This paper improves over the state-of-the-art result by relaxing the assumption that the ground truth causal graph is known. Instead, the paper assumes that only the partial ancestral graph is known. This is a significant relaxation of the assumption as the true underlying causal graph is frequently not known, while a PAG can be recovered from conditional independencies of variables. Zhang [2007] proposed a calculus to solve the problem considered in this paper which relies on the notion ``possibly m-connected paths\". 
However, Zhang's calculus is not complete, i.e., it is possible to construct a PAG in which the effect is identifiable but is not deducible from Zhang's calculus. \n\nThe key observation of this paper is that if one relies on the notion \"definite m-connected paths\" instead of \"possibly m-connected paths\", then a minor modification of Zhang's calculus becomes a complete calculus. Moreover, the paper proposes an algorithm that uses the rules of the newly introduced calculus to recover causal effects and shows that this algorithm is complete.\n\n*Clarity:* Despite the fact that the paper is reasonably well-written, it is extremely technical, and as a result quite hard to read. I was not able to verify the correctness of the proofs in the appendix within a reasonable time. \n\n*Originality:* The paper relies mostly on ideas that are well-known in the field. The proposed algorithm appears to be a minor modification of the prior algorithm by Jaber et al. 2019a. However, the proposed calculus and its completeness seem to be novel. 1) What is the main difference between the algorithm proposed in by Jaber et al. 2019a and your algorithm? If the difference is insignificant, I believe the algorithm of Jaber et al. 2019a deserves more credit in the discussion, in particular, it should be acknowledged that your algorithm is a minor modification. Alternatively, if the difference is significant, I would kindly suggest including a more detailed discussion of the distinction between these algorithms. \n\n2) Are there any runtime or space complexity guarantees for your algorithm? Is it polynomial time?\n\n3) Is anything known about the approximate version of the problem: how many samples are sufficient to identify the causal effect?\n\n4) Are there any open problems left related to conditional effect identification (that authors consider interesting)? \n\n4) I would kindly suggest renaming Proposition 1 (by Zhang 2007) into a Theorem, as calling it a proposition appears to be a bit judgemental. \n\n5) typo in line 213: X, Y, Y\n\n6) I believe the notation in line 265 is poorly chosen: y serves the role of both a fixed variable and a summation index (which makes no mathematical sense). I kindly suggest renaming the summation index. The same notation problem occurs in other places. I believe the authors should comment on the runtime and sample complexity guarantees of the proposed algorithm. If the problem of establishing such guarantees is not resolved, I kindly suggest that the authors acknowledge this. Alternatively, if such guarantees (lower or upper bounds) follow from the prior work it will be useful if they are included in the paper. ", " This work studies the question of identification of conditional causal effects given a PAG. The authors propose a sound and complete algorithm for the identification of the causal effects of the form P(y|do(x), z). \n It studies an important and quite interesting problem in causality which is causal effect identification in PAGs. \n \nThe authors propose a new set of do-calculus rules for PAGs and prove their soundness and completeness in terms of identifiability of conditional causal effects. They also propose a sound and complete algorithm for the identification of causal effects P(y|do(x), z) given a PAG. \n\n\nThere are statements that require more references. For example, the IDP algorithm is similar (except slight changes in one line) to the algorithm in “Causal Identification under Markov Equivalence: Completeness Results” by A. Jaber. This needs to be clarified in the main text. 
\n\nResults resemble the ones presented by Shpitser for identification of the conditional causal effects in DAGs. Moreover, the problem of identification of causal effects in PAGs has been addressed before. Therefore, the natural question is the contribution of this work. It is very important to clearly mention what is the main difference between the proposed method and the other work and what is the significance of this work compared to the others, e.g., proofs are different and more involved, assumptions have been relaxed, ...\n\n\n It is important to provide high-level intuition about the difference between the algorithm for identifying conditional causal effects proposed in “Identification of Conditional Causal Effects under Markov Equivalence” by A. Jaber and the algorithm proposed in this work.\n\n\nIt will improve the readability to add more details of what has been already done (like Algorithm 1) and what is newly proposed in the paper.\n\nPapers on identifiability commonly have an assumption that the underlying DAG is semi-Markovian. \nI would like to ask whether it is also needed here and whether this has any effect (limitation) on the set of PAGs which are considered in this work? Will the proofs still hold if the underlying DAG of the corresponding PAG was not semi-Markovian?\n Please see above comments.", " This is a paper about the identification of causal queries in structural causal models. While most of the existing works assume the availability of the causal graph, here only the PAG (partial ancestral graph) structure (corresponding to a collection of CGs) is assumed to be available. The authors derive an analogue of the do-calculus for such a setup. This is an extension of Zhang's calculus, but here completeness guarantees are provided. Algorithms for the identification of causal effects are provided. Strengths\n\n- Although representing an evolution of existing results (Zhang's results and calculus for DAGs and Jaber's algorithms and completeness result), the results in this paper are clearly novel and the authors' final claim about their paper \"clos[ing] the problem of effect identification under Markov equivalence\" is hard to question. This makes the contribution highly significant.\n\n- The paper is very technical, but the results and the proofs seem to be correct (I only partially checked the long proofs in the supplementary material). The derivations and the notation are quite accessible to people working with SCMs. I also appreciate the insights provided by the different examples reported in the paper.\n\nWeaknesses\n\n- While the statements of the theorems and the examples are generally very clear, the presentation of the algorithms can probably be improved.\n\n\n - It is not perfectly clear to me whether or not these results can be also applied to counterfactual queries (I think so but this point might be made explicit).\n- The considered approach allows one to address identifiability when the causal graph is not known and only the PAG is available. As I understand, PAG identifiability corresponds to identifiability on all the CGs compatible with the PAG and the fact that the query would give the same result on each CG. What I don't understand is whether non-identifiability in PAGs means that there is at least a non-identifiable CG compatible with the PAG or it might be the case that all the CGs are identifiable but lead to different values. A clarification on these points would be helpful, at least from my point of view. 
I don't see critical issues in terms of societal impact.", " The manuscript provides a sound and complete calculus and algorithm for identifying conditional causal effects of the form $p(\bf{y} | do(\bf{x}), \bf{z})$ in partial ancestral graphs (PAGs), i.e. a Markov equivalence class of maximal ancestral graphs (MAGs). This generalizes prior work on causal effect identification in different kinds of graphs and equivalence classes thereof. Completeness results for conditional effect identification in PAGs are notably lacking, despite strong progress in this area by Zhang (2007) and Jaber et al. (2019a, 2019b). Providing such a result – not just for an abstract set of rules but in an efficient algorithm – constitutes a major contribution. Since the FCI algorithm outputs a PAG, CIDP could plausibly be incorporated into a causal discovery pipeline for effect estimation over a learned Markov equivalence class. This could be of great use to practitioners in a variety of fields. The manuscript is well-researched and well-written, quite clear and easy to follow despite considerable technicalities. I commend the authors for their strong and original work.\n\nIf anything, I wonder if a conference submission is really the right venue for this, given how many follow-up questions I have! I fully appreciate that space constraints prevent the authors from tackling all the problems that arise in causal reasoning with PAGs, but I strongly encourage them to consider a journal-length follow-up article, along the lines of Zhang (2007) or Shpitser & Pearl (2008). Short of that, some revisions to the present manuscript at least addressing these concerns would be welcome: \n\n-The do-calculus is famously complete for effect identification across all three levels of the causal hierarchy (Shpitser & Pearl, 2008). How, if at all, could the methods proposed here be leveraged for counterfactual inference in PAGs? While a complete answer may lie beyond the scope of this manuscript, some discussion of the problem would be interesting, if only as a direction for future work. \n\n-There is some discussion in the manuscript of an R implementation. Mentioning this without providing any empirical results or supplemental code is a bit frustrating, as it's unclear how reviewers are meant to evaluate the performance or usability of the CIDP algorithm. I would either include this material as part of the submission or cut the reference altogether. \n\n-There has been considerable work in recent years on CATE estimation. Could methods like inverse propensity weights or double machine learning be used to estimate conditional causal effects in PAGs, at least under certain conditions? Again, perhaps beyond scope to go too deep into this, but a natural follow-up that will likely occur to many readers.\n See above. I am not convinced that the brief reference to an independence oracle constitutes a thorough discussion of the method's limitations. Some elaboration would be welcome here, as well as further comments on if/how the method can be used for counterfactual effect identification, computational efficiency, etc." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 9, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "2SvzT2NkHhr6", "X_gxr0T_n9U", "X_gxr0T_n9U", "X_gxr0T_n9U", "rCfjAIGKucN", "rCfjAIGKucN", "7Ijg-5tAFU", "7Ijg-5tAFU", "PBQrtl8m59h", "nips_2022_OxHn1Yz_Kl3", "nips_2022_OxHn1Yz_Kl3", "nips_2022_OxHn1Yz_Kl3", "nips_2022_OxHn1Yz_Kl3" ]
nips_2022_JO9o3DgV9l2
Shield Decentralization for Safe Multi-Agent Reinforcement Learning
Learning safe solutions is an important but challenging problem in multi-agent reinforcement learning (MARL). Shielded reinforcement learning is one approach for preventing agents from choosing unsafe actions. Current shielded reinforcement learning methods for MARL make strong assumptions about communication and full observability. In this work, we extend the formalization of the shielded reinforcement learning problem to a decentralized multi-agent setting. We then present an algorithm for decomposition of a centralized shield, allowing shields to be used in such decentralized, communication-free environments. Our results show that agents equipped with decentralized shields perform comparably to agents with centralized shields in several tasks, allowing shielding to be used in environments with decentralized training and execution for the first time.
Accept
It is agreed among reviewers that the paper should be accepted. We hope the authors will address the reviewers' comments in the final version as promised.
train
[ "471G9AUSV9K", "ijJIMGfaxXj", "Xcx8pALqUQt", "6x6XlpGL7LF", "3ZYcsN-V6tR", "2E4a1OHsq66", "LeB00eP_BzL", "a4OINaTdHuj", "Zpkg0WblbZj", "3wlADu2UURC" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comments. We will add a note about the complexity of multi-agent centralized shield synthesis to the “Limitations” section.", " I appreciate the comment regarding the decentralized shield permissiveness. While I do think it is an important aspect, providing an informal intuition is very helpful.\n\nMy other concern was regarding providing centralized shields. I agree that creating a shield is easy and there exists many tools do so, my concern was the computational intractability of constructing a centralized shield for a multi-agent system. The work in this paper assumes that there exists such a centralized shield which is in practice is usually not computationally feasible and hence I think this assumption is a very strong one.\n\nIn saying that however, I do believe the rest of the theoretical contributions of this paper are valid and interesting. I will bring my score up to a 5. ", " Thank you for your review and recommendations; we will implement these in the final version of the paper.\n\nFor observability: in some environments, agents may be able to independently reconstruct the safety-relevant information from cameras, sensors, etc. In environments where agents are completely blind and cannot communicate, it may be impossible to enforce safety, unless exact paths are planned in advance. The spectrum of environments between these extremes is quite interesting however, and we are interested in exploring safety in communication-free environments with local or imperfect observability in the future.", " Thank you for your reply. \n1. Your explanation of the figure was extremely useful in better understanding the algorithm and it seems an easy revision to make to the paper. I am willing to update my review. I would suggest the following for the final version of the paper: \n- add labels to each of the column (algorithm 1, 2 and 3)\n- add labels to the grids (agent1 action vs agent 2 action)\nIt was not obvious that colors were labeled. If you could add a paragraph equivalent to your answer that would be even better.\n3. Ok, it would be good to make it explicit. I understand that theoretically it does not imply any communication when executing the policy. This is a bit outside of the scope of the paper but practically, it seems that if we want a fully observable state e.g. the robots know each other's position, communication would always be needed?\n4. Ok please add to the main paper. ", " Thank you for your review.\n\n1. On the question of the figures, Figure 1 was our attempt to provide a visual representation of the whole sequence of algorithms; the leftmost column as a representation of the output of Algorithm 1, the central column for Algorithm 2, and rightmost for Algorithm 3. In all three columns, the circles represent states in the shield, while arrows represent transitions to other states. Colors on the arrows represent labels which are observed from the environment, while the grids of green and red boxes represent actions which are allowed or disallowed. The shield which is depicted in the figures is not meant to enforce safety for any specific task; rather, it is a simplified shield used to demonstrate key parts of the sequence of algorithms.\n\n The leftmost column of Figure 1 represents that from the initial state, after observing the label “brown” from the environment, there are two possible states which the centralized shield may transition to. 
The colored boxes indicate the safe actions which are discovered by Algorithm 1: Agent 1 may take actions 1 or 2, while Agent 2 may take any action. \n\n The central column represents that Algorithm 2 has projected the set of safe actions onto two transient-state individual shields, one shield per agent. The small black circles represent transient states. Using Agent 1 (top) as the example, the initial state is a transient state where either label brown or blue may be observed. After observing brown, the shield would transition to the upper state. From here, the previous step determined that Agent 1 may only take actions 1 or 2. No matter which of these allowed actions is taken, the next state is the transient state on the top-middle. After observing a label from the environment again, the shield would transition to a state following that.\n\n The right column represents that Algorithm 3 has combined transient states with the states which follow, in order to obtain a DFA which matches the shield interface. From the initial state, if the agent observes the brown label and takes allowed actions 1 or 2, it transitions to the upper state.\n\n Figure 2 visualizes each step of Algorithm 1 in more detail. (a) represents the input to the algorithm. Note that this is a different centralized shield than the input represented in Figure 1. In (b)-(f), the 3x3 grid of colored squares represents the joint actions which are allowed by the centralized shield for a given state/label pair. The 3x1 rectangles on the top and left represent the individual actions which the decentralized shield will allow. Green represents an action which is allowed by the shield, red represents an action which is not allowed, and yellow indicates that a given individual action is under consideration during that step of the algorithm. The list of squares with arrows to circles represents $\mathcal{R}(s, l)$, the map indicating which labels of possible next shield states correspond to which shield states; this is necessary to detect ambiguous actions (indicated by the same color pointing to two different states, such as in (e)).\n\n The revised version of the paper will have an extra page which we can use to clarify the figure, its caption, and the text in the main paper. Additionally, we plan to update Figures 1 and 2 to use the same example shield as each other.\n\n2. On the question of synthesizing a decentralized shield directly, rather than computing a centralized shield first: We do not yet have an algorithm to do so, but it is an interesting, if difficult, area to explore. A general algorithm may run into issues such as the undecidability of decentralized reactive synthesis, so the decomposition algorithm may still be more efficient than directly searching for a decentralized shield. Such approaches and the resulting analysis are left for future work. \n\n3. On the question of the observations necessary to use a shield: Our work is based on the centralized shielding literature, which typically assumes an MMDP formalization of the environment; the state is fully observable. However, our method continues to work in some partially observable cases; for instance, in tasks such as Experiment 2, where all agents receive enough information to ensure safety, but not necessarily enough information to solve the task without memory. While our work is the first to eliminate the communication requirement for shielding, we leave the problem of shield synthesis with more constrained observations to future work. \n\n4. 
On the question of Algorithm 1 versus Algorithms 2 and 3: the process for constructing a decentralized shield is the full sequence of Algorithms 1, 2, and 3. Algorithm 1 is the most important/novel step of the process, while Algorithms 2 and 3 transform the outputs of Algorithm 1 into DFAs which satisfy the standard shield interface. We will clarify these points in the revised paper.\n", " Thank you for your review. \n\nPlease note that in this reply, we refer to Algorithm line numbers (starting from 1 on the line containing “Input”) which we mistakenly did not enable for the original submission; this will be fixed in the revised version.\n\n1. On the question of permissiveness: the comment on line 325 refers to the general case, in which we currently do not know if the decentralized shield is necessarily maximally permissive. Future work can analyze this question further, and potentially improve the decentralized shield. The claim on line 172 refers to the specific case of shields which are Cartesian and unambiguous. We will make this clear in the revised version. We provide intuition below about why this type of decentralized shield is maximally permissive, and will add a more formal version of this argument to the revised paper:\n\n Given a shield state and label, Algorithm 1 determines the set of individual actions the agents can take. $\mathcal{C}(s, l)$ represents the set of joint actions allowed by the centralized shield at this point. If the shield is Cartesian, at the `if` check on line 15, $A$ will always either be a subset of $\mathcal{C}(s, l)$ or fully disjoint from it; if the shield is unambiguous as well, there will be no joint actions which are allowed by the centralized shield but then subsequently are not added to the decentralized shield on line 16. Since the decentralized shield allows every joint action which is present in the centralized shield, it is necessarily maximally permissive.\n\n2. On the question of unique decomposition: the results are not necessarily unique. In particular, our algorithm allows for two arbitrary choices. First, for each state-label pair, the guaranteed-safe action $a$ is chosen arbitrarily from the set of safe actions allowed by the centralized shield (line 9 of the algorithm). The no-op action, when safe, appears to be the most reasonable choice for this, although a full exploration of how to choose the starting point for the algorithm is left for future work.\n\n Second, the order of agent priority may be chosen arbitrarily, and is given as the iteration order on line 12. Our current implementation randomizes the agent priority at each time step using a deterministic PRNG, as hinted at by the comment on line 12; we found this to be a good trade-off between complexity and coverage of the joint action space.\n\n3. On the question of quantifying a bound for how conservative a shield is: we did not extensively explore this idea, although the agent-priority randomization mentioned above is an attempt to allow as many behaviors as possible. It is an interesting question, and the development of additional metrics to quantify permissiveness is future work.\n\n4. 
On the question of algorithmic complexity: as calculated in our response to Reviewer ZmUF, the time complexity of decentralizing a centralized shield is $O(S * L^2 * D * A * A^D)$, where $S$ is the number of states in the centralized shield, $L$ is the size of the label space, $D$ is the number of agents, $A$ is the number of actions per agent, and $A^D$ is the size of the joint action space.\n\n In our experiments, the size of the centralized shield tended to be linear in the size of the state space of the MMDP; due to the specific implementation of the synthesis tool we used, there was an additional scaling factor of the number of joint actions available. It is worth noting that the MMDP's state space usually scales exponentially with the number of agents. \n\n However, the exact scaling factor for the shield is domain-specific. For example, if we were to add complexity to the reward structure of our gridworld domain, but maintain the same collision-avoidance safety specification, we could likely use the same shield without further modification.\n\n5. On the question of contract decomposition literature: we are generally familiar with compositional methods in verification, and with contract-based design, and we will add references to these methods in our revised paper. We are not specifically aware of the contract decomposition literature that the Reviewer is referring to, and we would be grateful if the Reviewer could point us to specific papers in that area. Thank you.\n", " Thank you for your review. \n\nThe key problem which our paper addresses is stated on line 162 (Problem 3), although by necessity, some detail was left out of the paper due to the length limitation. We plan to use part of the extra page allowed in the camera-ready version of the paper to enhance the clarity of the problem statement and algorithm.\n\nPlease note that in this reply, we refer to Algorithm line numbers (starting from 1 on the line containing “Input”) which we mistakenly did not enable for the original submission; this will be fixed in the revised version.\n\n1. On the question of scaling and algorithmic complexity: our paper provides a solution to a key limitation in current state-of-the-art methods (such as ElSayed-Aly et al.)—the need for communication with a centralized shield during execution. Our approach does assume a centralized shield is given, but generating a centralized shield is a common task, and there are several tools to do so. As a result, our method is widely applicable, and future work can focus on removing or scaling the centralized shield synthesis problem. We note that, in our approach, the centralized shield is only required during the decentralized shield generation process, and not during the decentralized shield execution.\n\n Here is an informal proof that the time complexity of Algorithm 1 is linear in the size of the shield; we will add a version of the following to the appendix of the revised paper:\n\n ```\n S = number of shield states\n A = number of actions per agent\n L = number of labels\n D = number of agents\n A^D = size of the joint action space\n \n Line 7: Loop, iterate O(S * L) times\n Line 8: O(A^D)\n Line 9: O(A^D)\n Line 10: O(D)\n Line 11: O(1)\n Lines 12-20: Loop, iterate D times:\n Lines 13-19: Loop, iterate A times:\n Line 14: O(A^D)\n Line 15: O(L * A^D) # Note: this has a small coefficient if `UnambiguousActions` is implemented efficiently\n Line 16: O(1)\n Line 17: O(A^D)\n ```\n\n The time complexity is $O(S * L^2 * D * A * A^D)$. 
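To make the nesting explicit, the same count can also be read off a schematic Python rendering of the loop structure (illustrative only; the helper predicate and all names below are our placeholders, not the paper's actual implementation):

```python
from itertools import product

def decompose_state_label(n_agents, actions, safe_joint, is_unambiguous):
    # Sketch of Algorithm 1's work for a single (state, label) pair.
    # `safe_joint` stands for C(s, l), the set of joint actions (tuples)
    # allowed by the centralized shield; `is_unambiguous` is a placeholder
    # for the ambiguity check, whose cost is O(L * A^D).
    seed = next(iter(safe_joint))                     # a guaranteed-safe joint action, O(A^D)
    per_agent = [{seed[i]} for i in range(n_agents)]  # initial singleton action sets, O(D)
    for i in range(n_agents):                         # D iterations
        for a in actions:                             # A iterations
            grown = [s | ({a} if j == i else set()) for j, s in enumerate(per_agent)]
            candidate = set(product(*grown))          # joint actions now formable, O(A^D)
            if candidate <= safe_joint and is_unambiguous(candidate):
                per_agent[i].add(a)                   # keep the individual action, O(1)
    return per_agent
```

Running this for all $O(S * L)$ state-label pairs and multiplying by the $O(L * A^D)$ cost of the dominant check inside the two loops recovers the bound above. 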
Note that $L$, $D$, and $A$ are typically small constants, independent of the shield size.\n\n On a laptop with an M1 Max processor, the shield for experiment 1 took 273 seconds to synthesize, and the shield for experiment 2 took 326 seconds. The vast majority of this time was spent in centralized shield generation; our decentralization step took only a few seconds. We did not attempt to optimize the centralized shield synthesis, and such optimization may (significantly) reduce the runtime. We will add these numbers to the appendix.\n\n2. On the question of permissiveness: for the general case, we do not know if the decentralized shield is necessarily maximally permissive. A full analysis of this question is left for future work. For the specific case of shields which are Cartesian and unambiguous, we provide intuition below about why this type of decentralized shield is maximally permissive, and will add a more formal version of this argument to the revised paper:\n\n Given a shield state and label, Algorithm 1 determines the set of individual actions the agents can take. $\mathcal{C}(s, l)$ represents the set of joint actions allowed by the centralized shield at this point. If the shield is Cartesian, at the `if` check on line 15, $A$ will always either be a subset of $\mathcal{C}(s, l)$ or fully disjoint from it; if the shield is unambiguous as well, there will be no joint actions which are allowed by the centralized shield but then subsequently are not added to the decentralized shield on line 16. Since the decentralized shield allows every joint action which is present in the centralized shield, it is necessarily maximally permissive.\n", " The authors present a method to decompose a centralized shield in a multi-agent system into individual shields for each agent that can enforce a safety specification with no communication between agents. The authors' proposed approach takes a centralized shield and outputs a maximally permissive decentralized shield. The authors then show, using experiments on gridworlds, that their decentralized shields can perform comparably to a centralized shield and significantly outperform no shield at all. This paper, while written clearly, needs significantly more detail and precision. The paper also needs better structure, as it is not easy to find out what exactly the problem statement is and what the inputs to the problem and the outputs of the solution are. Clearly stating the problem statement in the paper would be helpful. \n\nThe focus of this paper is on the no-communication aspect of the multi-agent shielding problem. However, one of the main issues with multi-agent shielding is that centralized synthesis often scales very poorly. This aspect is not addressed much by the authors, which can be problematic, as their solution requires the input of a centralized shield, which is often not computationally feasible to procure. \n\nFurthermore, the paper would benefit greatly from complexity results. One of the key contributions is the decomposition of the centralized shields, but there is no intuition on the scalability of the process. Additionally, it could also help to report synthesis times in the experiments. Can the authors provide any theoretical complexity results on their decomposition process? It is simply stated in Section 7 that it takes linear time in the number of state-label pairs of the centralized shield. A proof, or at least intuition for this statement, will be helpful. 
\n\n\n\nIs there a proof that the resulting decentralized shield is indeed maximally permissive? Adequately addressed. ", " This paper considers the problem of reinforcement learning in the scenario in which some states are unsafe, and an agent cannot be allowed to visit them, not even as part of exploration. An example of such a scenario would be a robot behaving in a physical environment, in which it should not collide with pedestrians, who might be hurt by such a collision. Due to this safety-critical nature of the problem, it is not sufficient to assign a negative reward to those states and expect the agent to learn to avoid those states from experience.\n\nExisting work has considered the task of learning "shields", which are a state-dependent subset of actions that ensure that the agent(s) will never visit the unsafe states. It is possible to generalize the notion of shields to the multi-agent scenario by requiring that the joint actions taken by all of the agents satisfy the constraints of the (centralized) shield, but this has the drawback that it requires the agents to communicate and coordinate their actions, as well as for the shield to have enough information about all of the agent behaviors to know which state restrictions it should impose next.\n\nThis work considers the problem of automatically decomposing a centralized shield into a set of decentralized shields, one for each agent. The shields are iteratively constructed by adding actions for each agent that do not result in a violation of the centralized shield. The authors demonstrate their approach on a collection of RL case studies. \n\n- The work is certainly relevant. Distributed safe reinforcement learning is an important topic with a broad array of useful applications.\n\n- The case studies demonstrate feasibility of the approach.\n\n- The soundness theorem seems to be correct.\n\n- The metric on permissiveness is given in terms of average actions per state, but this may be the wrong metric. In some cases, a few actions in a specific state may disallow a large portion of the behavior-space, resulting in excessively conservative behavior that is not accurately captured by the metric considered. For finite MMDPs, it should be possible to quantify the percentage of the behavior-space that is lost, and I think this would make for a better metric.\n\n- I have a few open questions, some of which may be straightforward for the authors to address in the camera-ready version. \n\n- If the centralized shield is Cartesian and unambiguous, the authors claim in line 172 that their algorithm will produce maximally permissive shields. This statement is given casually, and does not even receive an informal argument to justify it. This statement further seems to be contradicted on line 325. Does this procedure produce maximally permissive decentralized shields, even in the restricted case of Cartesian and unambiguous shields?\n\n- For a given centralized shield, is the decomposition unique? This is not clear to me, it is not stated in the paper, and if a decomposition is not unique, it may be valuable to be able to search among decompositions to balance application-specific trade-offs.\n\n- Is it possible to derive a result on a bound for how conservative a distributed shield will be? It seems that for the case of finite MMDPs, it should be possible to quantify how many behaviors are allowed by the centralized shield and disallowed by the decomposition. 
It would be useful for practitioners to be able to quantify how much of a penalty they are paying in a specific application.\n\n- The authors mention informally that the complexity of computing a shield decomposition is linear in the size of the centralized shield--but how does this relate back to the complexity of the original MMDP? I expect that the complexity of computing a centralized shield is known, and I think the paper would be improved by expressing the complexity of a distributed shield in terms of the size of the MMDP.\n\n- The authors seem to be familiar with the formal methods literature. Existing work in that literature considers the problem of decomposing safety properties into contracts for components in a large architecture--how does their work relate to the contract decomposition literature? I think the limitations were suitably addressed.", " In this paper, the authors address the problem of safe multi-agent decision making in a decentralized, communication-free environment. They build upon previous approaches generating a centralized safety shield in multi-agent problems to propose an algorithm that generates a decentralized safety shield given the centralized one. \nThe algorithm consists of a search procedure: it first identifies one safe action for every agent at every state. Then it progressively extends that set for each agent. To extend the set, it checks that every new action to be added still satisfies the safety constraints and is unambiguous. Some proofs are provided in the appendix. \n\nThe authors empirically analyze the decentralized shields generated by their algorithm on two discrete environments. In both cases the shield is generated to prevent collisions between two agents in a discrete grid world domain. \n The grade has been updated after clarification by the authors on the figure illustrating how the proposed algorithm works. \n\nThe problem being addressed is relevant and important, as multi-agent decision making and safe decision making can be applied to many societal applications. Such a concept of decentralized shielding and the approach to generate the decentralized shield are, to the best of my knowledge, novel.\n\nOverall the paper is really clear and well-written; the introduction, related work, and problem setting are very nice to read. The related work section is quite clear, and the authors clearly highlight how their method builds upon existing work in a non-trivial way by proposing an algorithm decentralizing an existing centralized shield. They do a very good job explaining a naïve approach to build such a shield and how it can be improved. Some intuitive explanation is always provided to accompany the formalism, and it is greatly appreciated. \n\nThe experiments are satisfactory; however, some discussion is missing on how the method would handle more complex safety specifications and scale to more agents. It would have been very nice to introduce another problem that is not related to collision avoidance. \n\n The clarity of Figure 1 and Figure 2 is insufficient though. I read the caption multiple times and I still don't get it. See questions. It almost seems like one paragraph is missing from the paper. Maybe the authors could remove a figure and add Theorem 1, which is referred to many times. Luckily, Algorithm 1 is quite OK to follow. \n \nAn aspect that was unclear to me is how the joint state is handled; it seems the authors assume that the joint state is fully observable by all the agents, and there is one mention that it could be extended to individual states. 
It seems like such a task is highly non-trivial. The fact that the agents observe the joint state makes me question the initial claim of a communication-free environment. \n\n 1. What is the example problem in Figures 1 and 2? What are the red and green squares, and what are the circles? \n2. For future work, would it be more efficient to directly search for a decentralized shield rather than computing a centralized shield first and then decentralizing it?\n3. Could you clarify whether or not each agent needs to observe the joint state?\n4. What is the benefit of Algorithms 2 and 3 over Algorithm 1? This was not obvious from reading the paper.\n The authors are upfront and clearly explain the limitations of their method and its impact in Section 7. They mention possible extensions for future work. The most important one is to address the scalability issue. The authors are also very clear about the experimental challenges in the design of the environment. " ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "ijJIMGfaxXj", "LeB00eP_BzL", "6x6XlpGL7LF", "3ZYcsN-V6tR", "3wlADu2UURC", "Zpkg0WblbZj", "a4OINaTdHuj", "nips_2022_JO9o3DgV9l2", "nips_2022_JO9o3DgV9l2", "nips_2022_JO9o3DgV9l2" ]
nips_2022_ONB4RdP2GX
Hardness in Markov Decision Processes: Theory and Practice
Meticulously analysing the empirical strengths and weaknesses of reinforcement learning methods in hard (challenging) environments is essential to inspire innovations and assess progress in the field. In tabular reinforcement learning, there is no well-established standard selection of environments to conduct such analysis, which is partially due to the lack of a widespread understanding of the rich theory of hardness of environments. The goal of this paper is to unlock the practical usefulness of this theory through four main contributions. First, we present a systematic survey of the theory of hardness, which also identifies promising research directions. Second, we introduce $\texttt{Colosseum}$, a pioneering package that enables empirical hardness analysis and implements a principled benchmark composed of environments that are diverse with respect to different measures of hardness. Third, we present an empirical analysis that provides new insights into computable measures. Finally, we benchmark five tabular agents in our newly proposed benchmark. While advancing the theoretical understanding of hardness in non-tabular reinforcement learning remains essential, our contributions in the tabular setting are intended as solid steps towards a principled non-tabular benchmark. Accordingly, we benchmark four agents in non-tabular versions of $\texttt{Colosseum}$ environments, obtaining results that demonstrate the generality of tabular hardness measures.
Accept
The reviewers' opinions are quite consistent towards a weak accept. I'm not confident about the big title "Hardness in Markov Decision Processes: Theory and Practice". This paper is more like a survey + benchmark review than a research article. Neither the theory part nor the practice part is novel enough for a research article. It's a bit thin as a survey paper. I personally tend to weak reject, but I respect the reviewers' weak accept.
val
[ "cmS02ENDIxn", "h6ezFO7qNSt", "3f7lrVwuzls", "pQrtK_3z1ji", "bRkWdE1DUd4y", "vCE0Eip_C8", "HKSTkoETOJh", "NMHyXl-iWRe", "UTKPqmu8FaF", "T40pUTIWc8f", "nqhfSznSD-D", "8JCuksFBSX", "JK4gvN3kcei", "x08qws44Q48", "xV6IVVwFYfq", "d-jWtRydvxc", "F5aqWd-lNnh", "xhVf2q_wc8", "OvjOS8S_1A9", "UtpuGDcSVHT", "fVyzK749QP-" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the follow-up response that has addressed all of my remaining concerns. I think including these examples and discussions in the main text or the appendix would benefit future readers for a more clear understanding of the proposed complete hardness measure.", " Thanks for considering our response! We hope to dispel your remaining concerns.\n\nThe following examples show how hardness measures that rely solely on visitation complexity or estimation complexity may fail to meet the current definition of a complete measure of hardness. Because visitation complexity and estimation complexity are not formally defined, these examples refer to *reasonable* hardness measures, which attempt to capture intuitive aspects of visitation complexity or estimation complexity.\n- Consider an MDP $\\text{M} = (\\mathcal S, \\mathcal A, P, P_0, R)$ and let $\\text{M}' = (\\mathcal S, \\mathcal A, P', P_0, R')$ be identical to $\\text{M}$ except for having an increased probability (prand) $r$ of executing a random action instead of the action selected by the agent and having reward $R'(s,a) = 0$ for every state $s$ and action $a$. Because every policy is optimal for $\\text{M}'$, $\\psi(\\text{M}, A^*) \\geq \\psi(\\text{M}', A^*) = 0$. However, because the agent has less control over transitions in $\\text{M}'$, a *reasonable* measure of hardness $\\theta_{\\text{MC}}$ that focuses solely on visitation complexity may be independent of the reward kernel (which is the case for all Markov chain-based measures discussed in the paper), and therefore assign $\\theta_{\\text{MC}}(\\text{M}) \\leq \\theta_{\\text{MC}} (\\text{M}')$. In non-degenerate MDPs, the previous inequalities would also be strict.\n- Consider an MDP $\\text{M} = (\\mathcal S, \\{0,1\\}, P, P_0, R)$ where $\\mathbb E(R(s, 0)) < \\mathbb E(R(s, 1))$ for every state $s$ and let $\\text{M}' = (\\mathcal S, \\{0,1\\}, P, P_0, R')$ be identical to $\\text{M}$ except for having larger gaps in the rewards means and significantly larger reward variances. Concretely, let $R'(s,0) = R(s, 0) - \\mathcal{N}(\\epsilon, \\sigma^2)$ and $R'(s,1) = R(s, 1) + \\mathcal{N}(\\epsilon, \\sigma^2)$ for every state $s$, where $\\epsilon$ is a small constant and $\\sigma^2$ is a large constant.\nAn agent requires a significantly higher number of samples to find an optimal policy in the presence of larger reward variances, which in turn implies $\\psi(\\text{M}, A^*) \\leq \\psi(\\text{M}', A^*)$. However, because larger gaps typically make the estimation of the optimal value function easier, a *reasonable* measure of hardness $\\theta_{\\text{VB}}$ that focuses solely on estimation complexity may be independent of the reward variances (which is the case for all the value-based measures discussed in the paper), and therefore assign $\\theta_{\\text{VB}}(\\text{M}) \\geq \\theta_{\\text{VB}} (\\text{M}')$.\n\nAlthough we have argued that it is possible to create examples where a reasonable hardness measure that relies solely on visitation complexity or estimation complexity may fail to meet the current definition of a complete measure of hardness, we would not argue that a measure that considers both estimation complexity and visitation complexity is necessarily complete. Such an argument would require formalizing the concepts of visitation complexity and estimation complexity, which may be an interesting (but challenging) problem for future work. 
The argument would also have to consider each reinforcement learning setting independently, since a measure of hardness is complete with respect to a specific criterion lower bound. Finally, the argument would be made even more difficult by the fact that the definition of a complete measure of hardness also applies to non-tabular reinforcement learning, which also has known criterion lower bounds in some specific settings [1, 2, 3].\n\nWe will try to summarize these insights and include them in the paper to help readers understand the implications of the definition of a complete measure of hardness.\n\n**References**\n\n[1] Dongruo Zhou, Jiafan He, and Quanquan Gu. "Provably efficient reinforcement learning for discounted MDPs with feature mapping." In International Conference on Machine Learning. PMLR, 2021.\n\n[2] Yuanhao Wang, Ruosong Wang, and Sham Kakade. "An exponential lower bound for linearly realizable MDP with constant suboptimality gap." Advances in Neural Information Processing Systems 34 (2021).\n\n[3] Andrea Zanette. "Exponential lower bounds for batch reinforcement learning: Batch RL can be exponentially harder than online RL." In International Conference on Machine Learning. PMLR, 2021.", " Thank you for your response which has addressed most of my concerns. The only remaining part that I found not entirely clear is that while Definition 2.1 is indeed a desideratum that a "good" hardness measure should achieve, its connection with existing hardness measures remains indirect. As you have mentioned in Section 2.3, Definition 2.1 is proposed to "address the issues in the theory of hardness", including the issue of not simultaneously capturing state visitation complexity and estimation complexity. I would then expect that one or more of the following holds:\n\n+ Since many existing hardness measures either focus on state visitation complexity or estimation complexity alone, we should be able to show that they do _not_ meet the condition in Definition 2.1 (maybe by constructing some toy MDPs).\n\n+ More generally, we can probably also show that if a measure does not simultaneously consider the two complexities, it _cannot_ meet the condition in Definition 2.1 (or equivalently, if a measure does meet the condition in Definition 2.1, it must simultaneously consider the two complexities).\n\nNote that I am not asking for a rigorous proof for any of the above arguments, since the visitation complexity and estimation complexity themselves may be hard to define or quantify without a specific RL setting. An intuitive explanation should suffice to show that there indeed exists a connection between "a measure that satisfies Definition 2.1" and "a measure that simultaneously considers state visitation complexity and estimation complexity".", " Thank you for your responses. These are all very fair points and do address a lot of my reservations.\n\nI am leaning towards increasing my score, but will wait to see what comes of the discussions with the other reviewers.", " It is less important but still worth mentioning that tabular reinforcement learning algorithms have had an impact in areas where prior knowledge is readily available and can be used to formulate a precise model of the problem. Although we do not consider ourselves experts in applications of tabular reinforcement learning, we can mention the work of Jalalimanesh et al. 
[8], which employs an agent-based model to simulate vascular tumour growth based on biological evidence with the objective of optimizing dose calculation in radiotherapy, and the work of Govindaiah and Petty [9], which relies on Sarsa(0) to produce increasingly efficient plans for material handling in manufacturing facilities.\n\nPart (2/2)\n\n----------------------\n\n[1] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves et al. "Human-level control through deep reinforcement learning." Nature 518, no. 7540 (2015): 529-533.\n[2] Christopher Watkins. "Learning from delayed rewards." King's College (United Kingdom), 1989.\n[3] Ian Osband, Daniel Russo, and Benjamin Van Roy. "(More) efficient reinforcement learning via posterior sampling." Advances in Neural Information Processing Systems 26 (2013).\n[4] Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. "Deep exploration via bootstrapped DQN." Advances in Neural Information Processing Systems 29 (2016).\n[5] Andrea Zanette, David Brandfonbrener, Emma Brunskill, Matteo Pirotta, and Alessandro Lazaric. "Frequentist regret bounds for randomized least-squares value iteration." In International Conference on Artificial Intelligence and Statistics, pp. 1954-1964. PMLR, 2020.\n[6] Ian Osband, Benjamin Van Roy, and Zheng Wen. "Generalization and exploration via randomized value functions." In International Conference on Machine Learning, pp. 2377-2386. PMLR, 2016.\n[7] David Janz, Jiri Hron, Przemysław Mazur, Katja Hofmann, José Miguel Hernández-Lobato, and Sebastian Tschiatschek. "Successor uncertainties: exploration and uncertainty in temporal difference learning." Advances in Neural Information Processing Systems 32 (2019).\n[8] Ammar Jalalimanesh, Hamidreza Shahabi Haghighi, Abbas Ahmadi, and Madjid Soltani. "Simulation-based optimization of radiotherapy: Agent-based modeling and reinforcement learning." Mathematics and Computers in Simulation 133 (2017): 235-248.\n[9] Swetha Govindaiah and Mikel D. Petty. "Applying reinforcement learning to plan manufacturing material handling." Discover Artificial Intelligence 1, no. 1 (2021): 1-33.", " Thanks for considering our response!\n\nWe are quite confident in our claim that success in hard tabular environments is likely a necessary (but not sufficient) condition for success in [general] hard non-tabular environments. For example, consider a large grid-world with obstacles, objects, and sparse rewards, which can be implemented as a MiniGrid environment in Colosseum. Such a grid-world can be very challenging for tabular reinforcement learning. For non-tabular reinforcement learning, if the state representation enables generalization (such as through encoding the grid-world as an image), the task should become easier than the task of the tabular agent, which cannot rely on generalization. For instance, upon observing that an action always takes the agent to the left in many states, a non-tabular agent may predict the outcome of an action in a state that it has never visited. If the state representation does not enable generalization, such as when the state employs a one-hot encoding, the advantage of the non-tabular agent is removed. However, this kind of representation can test whether a non-tabular agent is able to deal with environments where generalization is difficult. 
Generalization is naturally difficult in many settings, so a generally successful non-tabular reinforcement learning algorithm would have to explore as efficiently as possible even in that situation. Therefore, in both cases, tabular environments can be used to test the limits of non-tabular algorithms to a large extent. However, as we mentioned, only a theory of hardness in non-tabular reinforcement learning would enable completely principled and thorough testing, which is why we believe it is so important. This is why we claim that success in tabular reinforcement learning is not a sufficient condition for success in non-tabular reinforcement learning. In case that was not clear, we never intended to claim that a non-tabular algorithm that solves a specific difficult task necessarily needs to be successful in every hard tabular environment. The claim only refers to non-tabular agents that intend to be successful across a wide range of environments.\n\nRegarding the importance of developing tabular reinforcement learning algorithms (with or without theoretical guarantees), DQN [1] is arguably the most influential work in the entire deep reinforcement learning literature. In the paper, although Q-learning with function approximation had been explored in many previous works, the authors explicitly acknowledge the Q-learning algorithm, which was developed by Watkins [2] for tabular reinforcement learning. It may be tempting to think that insights from tabular reinforcement learning are no longer relevant, so we would like to mention an even more recent development. Posterior sampling for reinforcement learning, whose theoretical properties in the tabular setting have been studied by Osband et al. [3], promises principled and scalable exploration, and has subsequently led to several extensions to the non-tabular setting, such as Bootstrapped DQN [4], randomized least-squares value iteration [5,6], successor uncertainties [7], and several others.\n\nThe goal of Colosseum is to provide the community with a benchmarking tool able to nurture principled reinforcement learning algorithms that can later be scaled up to the non-tabular setting.\n\nPart (1/2)", " Thank you for your detailed response! You have done a good job at addressing many of my concerns and I am inclined to increase my score.\n\nHowever, I am still a little unsure about the point of significance; but this may be in part because I do not follow the literature for algorithmic developments that focus solely on tabular environments.\n\nTo be a bit more concrete, two points in your rebuttal that I'd love a little more detail on are the following:\n1. `we argue that success in hard tabular environments is likely a necessary (but not sufficient) condition for success in hard non-tabular environments`\n * it would be great if you could provide examples (ideally from the literature) that demonstrate this point. It seems relatively intuitive, but I can also imagine cases where it is not the case (e.g. tabular algorithms can't handle continuous state spaces, but function approximation can).\n1. 
`we note that an empirical benchmark may be important even in developing tabular reinforcement learning algorithms with theoretical guarantees`\n * in making this point, it would be useful if you could provide references to existing works that focus exclusively on tabular RL algorithms for people like me who are not as familiar with this literature.\n\nOne final point which would be great to have, but not as critical as the ones I listed above:\n\nUltimately, I feel, the goal of our community's research is to provide algorithms that can be of practical use in the world. So far, there is plenty of evidence that shows that deep networks with RL can provide remarkable real-world success; it would be great if you could provide examples where purely tabular RL algorithms have had real-world (or at least non-toy) impact.", " We have updated the Colosseum GitHub repository, which can be anonymously accessed through Anonymous GitHub using [this link](https://anonymous.4open.science/r/Colosseum-B725/README.md), with the following improvements.\n#### *Clarity*\n- We have refactored part of the code to improve clarity and ease the implementation of novel reinforcement learning agents and environments.\n- We have reached 100% documentation coverage.\n#### *Efficiency*\n- We have improved the computational efficiency of the dynamic programming algorithms by optimizing the sparse matrix product operations, and of the Markov chain-related computations by implementing sparse matrix-based algorithms to compute stationary distributions.\n#### *New features*\n- We have added automatic computation of state representations for environments implemented in Colosseum. Note that this means that these representations will already be available for every new environment that is implemented in Colosseum in the future. We make note of the available representations below.\n- We have implemented an automatic hyperparameter selection procedure when running the benchmark to ensure a fair comparison between reinforcement learning agents with respect to hyperparameter optimization.\n#### *Tutorials*\n- We have added a tutorial on the non-tabular representations available for the Colosseum environments.\n- We have added a tutorial on how to implement new reinforcement learning agents in Colosseum.\n\nThe available non-tabular state representations are:\n- A feature vector with the same dimensionality as the state space that contains a one-hot encoding.\n- A feature vector $\phi(s)$ that enables a linear representation of the optimal value function. In other words, there is a $\theta$ such that $V^*(s) = \theta^T\phi(s)$.\n- A feature vector $\phi(s)$ that enables a linear representation of the value function of the randomly acting policy. In other words, there is a $\theta$ such that $V^\pi(s) = \theta^T\phi(s)$, where $\pi$ is the randomly acting policy.\n- A feature vector that contains uniquely identifying information about the state (for instance, coordinates for the DeepSea environment).\n- A feature matrix that encodes a visual representation of the environment.\n- A 3D matrix that one-hot encodes a visual representation of the environment.\n\nWe will update the supplementary material with the new version of the code when we finish updating the paper according to your feedback.", " Although an analogy between visitation complexity/estimation complexity and the exploration/exploitation dilemma is indeed attractive, we believe that it is potentially misleading. 
For instance, consider a Markov decision process where it is possible to transition directly from any state to any other state given an appropriate action. According to our Markov-chain based measures of visitation complexity, this environment would be considered easy (note that its diameter is one). However, the exploration/exploitation dilemma still exists. This is because, at every time step, the agent must choose between visiting states for which it has good (value) estimates or less visited states. The more states, the more difficult the trade-off becomes.\n\nIndeed, it is not obvious how to extend the hardness measures presented in the paper to the non-tabular setting, and completely novel measures may be required. However, we believe that our effort in surveying the literature for potential hardness measures and introducing desiderata for tabular measures may help and inspire these future developments. We will make these contributions clearer in the conclusion.\n\nNote that our definition of a complete measure of hardness connects it directly with a specific performance criterion (such as the sample complexity or the expected cumulative regret). Whenever a complete measure of hardness establishes a \"difficulty ordering\" between environments in a certain class, every near-optimal agent for that class (according to the same performance criterion) would by definition \"agree\" with that ordering. Note that a trivial complete measure of hardness ranks every environment equally, which guarantees that a complete measure of hardness exists. However, the existence of a complete measure of hardness is not in itself a guarantee that it is \"sufficiently discerning\" for all purposes.\n\nWe hope that these clarifications and the proposed changes will allow you to consider increasing the rating given to our submission. We aim to provide an updated version of our work that incorporates reviewer feedback as soon as possible.\n\nPart (2/2)\n\n-----\n\nReferences:\n\n[1] Dylan J. Foster, Sham M. Kakade, Jian Qian, and Alexander Rakhlin. \"The statistical complexity of interactive decision making.\" arXiv preprint arXiv:2112.13487 (2021).", " Thank you very much for the time dedicated to evaluate our work. We are glad that you found that our paper presents an interesting viewpoint in a well-organized manner. We are also glad that you found the Colosseum benchmark (and associated tools) to be flexible and of high quality and that you are happy with the thoroughness of the empirical evaluation.\n\nIn order to address your concern about the benchmark and hardness measures only being applicable to tabular reinforcement learning, we want to emphasize that the principled selection of environments for the Colosseum benchmark can play a fundamental role in the development and analysis of new tabular and non-tabular reinforcement learning algorithms. Although the text has mostly focused on using Colosseum for studying tabular reinforcement learning algorithms, an up-to-date version of the package already allows experimentation using non-tabular representations of the existing environments. Concretely, it is possible to use vector-based state representations that allow Colosseum to benchmark reinforcement learning algorithms that employ both linear and non-linear function approximation. 
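As a generic illustration of what such a representation involves (this sketch is ours and does not reflect Colosseum's actual API), a tabular environment can be exposed to a function-approximation agent through a simple wrapper:

```python
import numpy as np

class OneHotWrapper:
    # Generic sketch of a tabular-to-vector wrapper; purely illustrative
    # and not a reflection of Colosseum's actual interface.
    def __init__(self, env, n_states):
        self.env = env
        self.n_states = n_states

    def _encode(self, s):
        x = np.zeros(self.n_states, dtype=np.float32)
        x[s] = 1.0
        return x

    def reset(self):
        return self._encode(self.env.reset())

    def step(self, action):
        s, reward, done = self.env.step(action)
        return self._encode(s), reward, done
```
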
Currently available vector-based state(-action) representations include one-hot encodings, positional representations (for grid-like environments), and a representation that guarantees the linearity of the optimal action-value function [2].\n\nIn the early stages of developing a new reinforcement learning algorithm, it is often very useful to test ideas in simple environments, which allows fast iteration. Having a thorough and efficient benchmark equipped with useful analysis tools allows researchers to focus most of their time on the algorithms themselves. Although there is no well-developed theory of hardness for non-tabular reinforcement learning, we argue that success in hard tabular environments is likely a necessary (but not sufficient) condition for success in hard non-tabular environments. The fact that we do not include strictly non-tabular environments stems from our goal of providing a principled benchmark, which is currently not possible for non-tabular reinforcement learning due to hardness being a relatively recent (and active) focus of research in that setting [1]. Similarly, we do not include results of a comparison between state-of-the-art non-tabular reinforcement learning algorithms in the Colosseum benchmark because we believe such comparison would be premature and likely unfair (by not considering all important aspects of hardness in the non-tabular setting). Extending the benchmark to environments that require function approximation in a principled way is one of the main goals of our future work.\n\nFinally, we note that an empirical benchmark may be important even in developing tabular reinforcement learning algorithms with theoretical guarantees. Some choices in those algorithms may have to be made without guidance from the theory, in which case they may be informed by practice. Ultimately, this is important because principled algorithms often inspire scalable counterparts.\n\nIn hindsight, we believe that the text does not emphasize the potential of Colosseum for benchmarking non-tabular reinforcement learning algorithms enough, so we plan on changing the text substantially to reflect this. We will do so by highlighting this potential in the abstract and in the introduction, by explaining the corresponding functionalities in the section that introduces Colosseum, and by recalling this potential in the conclusion. Importantly, we will also add a Jupyter notebook tutorial that shows how to use non-tabular reinforcement learning algorithms with vector-based representations of Colosseum environments to the supplementary material.\n\nPart (1/2)", " Here are the answers to the remaining questions, organized by occurrence in the text:\n\nLine 72: The optimization horizon is the total number of time steps for interaction between the agent and the environment. It may be infinite in some cases, but some algorithms depend on a known horizon.\n\nLines 99-100: For the sake of simplicity, we consider the standard formulation of a Markov decision process where an optimal policy is independent of the time due to the Markov assumption (which guarantees that any information useful to predict next states and rewards would be encoded in the current state). For time-dependent transition and reward kernels, if the current time is not somehow encoded in the current state, the optimal policy is indeed time-dependent.\n\nLine 131: The value of 0.25 used in Equation (1) is a convention in the Markov chain literature. 
We will clarify this in the text.\n\nLine 186: We sometimes drop the corresponding subscript when the episodic or discounted setting is clear from the context. However, there is no reason for dropping the subscript in this case, so we will include it on the right side of the equation as suggested.\n\nLines 206-208: When the sub-optimality gap is small for every state and action, taking a non-optimal action necessarily has a small impact on the expected value obtained by an agent. In the most extreme case, when the sub-optimality gap is zero for every state and action, every policy is optimal. We will try to clarify our claim that identifying near-optimal policies (and thus approximating the optimal value function) is particularly easy when every sub-optimality gap is small.\n\nLines 374-375: Prior to our work, very little empirical work was done on evaluating these algorithms, which is why we believe the results presented in Table 1 are noteworthy.\n\nFinally, please let us know if you believe a discussion of potential negative societal impact is necessary.\n\nWe hope that these clarifications and the proposed changes will allow you to consider increasing the rating given to our submission. We aim to provide an updated version of our work that incorporates reviewer feedback as soon as possible.\n\nPart (3/3)\n\n-----\n\nReferences:\n\n[1] Dylan J. Foster, Sham M. Kakade, Jian Qian, and Alexander Rakhlin. \"The statistical complexity of interactive decision making.\" arXiv preprint arXiv:2112.13487 (2021).", " In the early stages of developing a new reinforcement learning algorithm, it is often very useful to test ideas in simple environments, which allows fast iteration. Having a thorough and efficient benchmark equipped with useful analysis tools allows researchers to focus most of their time on the algorithms themselves. Although there is no well-developed theory of hardness for non-tabular reinforcement learning, we argue that success in hard tabular environments is likely a necessary (but not sufficient) condition for success in hard non-tabular environments. The fact that we do not include strictly non-tabular environments stems from our goal of providing a principled benchmark, which is currently not possible for non-tabular reinforcement learning due to hardness being a relatively recent (and active) focus of research in that setting [1]. Similarly, we do not include results of a comparison between state-of-the-art non-tabular reinforcement learning algorithms in the Colosseum benchmark because we believe such comparison would be premature and likely unfair (by not considering all important aspects of hardness in the non-tabular setting). Extending the benchmark to environments that require function approximation in a principled way is one of the main goals of our future work.\n\nFinally, we note that an empirical benchmark may be important even in developing tabular reinforcement learning algorithms with theoretical guarantees. Some choices in those algorithms may have to be made without guidance from the theory, in which case they may be informed by practice. Ultimately, this is important because principled algorithms often inspire scalable counterparts.\n\nIn hindsight, we believe that the text does not emphasize the potential of Colosseum for benchmarking non-tabular reinforcement learning algorithms enough, so we plan on changing the text substantially to reflect this. 
We will do so by highlighting this potential in the abstract and in the introduction, by explaining the corresponding functionalities in the section that introduces Colosseum, and by recalling this potential in the conclusion. Importantly, we will also add a Jupyter notebook tutorial that shows how to use non-tabular reinforcement learning algorithms with vector-based representations of Colosseum environments to the supplementary material.\n\nWe believe that the development of complete hardness measures is important for two main reasons. Empirically, it would allow the creation of principled benchmarks for different performance criteria. Theoretically, it could elucidate what makes a problem hard for a specific performance criterion. However, the lack of (non-trivial) complete measures of hardness does not allow us to relate them to existing performance bounds. We will make this clearer in the conclusion.\n\nPart (2/3)", " Thank you very much for the time dedicated to evaluating our work. We are glad that you found both the paper and the code well-written (and well-documented).\n\nWe would like to clarify that Section 2 is not limited to reviewing existing literature. Instead, we propose a unifying perspective on what so far have been disparate notions of hardness and provide a qualitative comparison that considers their strengths and weaknesses. Note that most quantities that we consider hardness measures were originally proposed in the context of providing theoretical guarantees for reinforcement learning algorithms. We have re-interpreted these quantities as hardness measures. Our definition of a complete measure of hardness further aims to connect measures of hardness to performance criteria that are actually relevant in reinforcement learning, which hopefully will ground and guide future work on hardness.\n\nAlthough we understand that calling Colosseum \"pioneering\" and the benchmark \"the most exhaustive\" may sound like inflation of claims, Colosseum is indeed the first package to attempt to systematically connect theoretical hardness measures to empirical reinforcement learning. It is also the first package to implement many of the functionalities described in Section 3.1. The Colosseum benchmark also contains the largest (not to mention principled) selection of tabular reinforcement learning environments. However, we agree that calling the analysis tools \"invaluable\" is not appropriate, and so we will change that in the text.\n\nWe agree that some of the plots and tables can take a long time to interpret, even though we spent a significant amount of time trying to convey the (large amount of complex) information as simply as possible. Our relatively long descriptive analysis tries to guide the reader through the most important findings. We believe that highlighting the summary of the descriptive analysis for each hardness measure on page 7 (lines 285-307) is a great suggestion. We will incorporate this in the text by moving each summary before the corresponding descriptive analysis. Hopefully, this will help readers focus on the most important results.\n\nRegarding Figure 2, the points are indeed spread somewhat uniformly on the plane, which shows how the continuous ergodic MDPs in the benchmark are diverse with respect to hardness measures. The claim that there is a generally positive relationship between both the measures and the average cumulative regret is related to the color of each point. 
The claim predicts that the farther along the main diagonal a point is, the darker it should be. In other words, the more difficult the MDP (according to the hardness measures), the larger the (average cumulative) regret. This pattern is most clear in Figure 2a, but can also be seen in the other figures. We hope this eliminates your concern.\n\nIn order to address your concerns about significance, we would like to emphasize that Colosseum is not intended mainly as a benchmark for empirically validating hardness measures or providing empirical guidance for theoretical developments, although it could certainly be used for those purposes.\n\nMost importantly, we believe that the principled selection of environments for the Colosseum benchmark can play a fundamental role in the development and analysis of new tabular and non-tabular reinforcement learning algorithms. Although the text has mostly focused on using Colosseum for studying tabular reinforcement learning algorithms, an up-to-date version of the package already allows experimentation using non-tabular representations of the existing environments. Concretely, it is possible to use vector-based state representations that allow Colosseum to benchmark reinforcement learning algorithms that employ both linear and non-linear function approximation. Currently available vector-based state(-action) representations include one-hot encodings, positional representations (for grid-like environments), and a representation that guarantees the linearity of the optimal action-value function [2].\n\nPart (1/3)", " We agree that our explanation of why a regret minimization agent would generally behave differently from a PAC-RL agent is somewhat unclear. The distinction in behavior encouraged by the different performance criteria is best explained by the fact that the regret minimization agent must be cautious not to incur large regret during learning (for example, by not revisiting lowly rewarding states that are not followed by highly rewarding states), whereas a PAC-RL agent has the flexibility to incur a large regret as long as it ends up with a near-optimal policy.\n\nRegarding \"the criterion lower bound of class M up to logarithmic factors\", for each environment class M, it is possible to obtain a lower bound on the cumulative regret or the sample complexity. The lower bound quantifies the minimum amount of cumulative regret or sample complexity that a generic agent will incur when attempting to solve an environment from the class M. When a known upper bound on the cumulative regret or the sample complexity of a specific agent matches this lower bound of an environment class up to a logarithmic factor, we say that the agent is near-optimal. For example, Jaksch et al. (2010) prove a lower bound of $\Omega(\sqrt{DSAT})$ for the class of continuous communicating MDPs, which implies that the criterion lower bound of the class of communicating MDPs up to logarithmic factors is $\sqrt{DSAT}$. We will clarify this in the paper.\n\nRegarding \"the probability p_lazy that an MDP stays in the same state instead of executing the action selected by an agent\", we emphasize that the agent does not have a \"no-op\" action. Instead, the transition kernel of an original MDP is transformed by the introduction of the probability p_lazy. 
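(In symbols — our rendering of the transformation just described, not a quotation from the paper — the transformed kernel is

$$ P_{\mathrm{lazy}}(s' \mid s, a) \;=\; p_{\mathrm{lazy}}\,\mathbb{1}[s' = s] \;+\; \big(1 - p_{\mathrm{lazy}}\big)\,P(s' \mid s, a). $$

)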
No matter the action chosen by the agent, there is a probability p_lazy that the original transition kernel will be ignored and that the transformed MDP will remain in its current state.\n\nThe number of seeds for the experiments was determined during preliminary experiments where we observed the variability of the results.\n\nIn practice, it is very computationally expensive to compute the sample efficiency criterion without making approximations that make it hard to interpret and compare, which is why it is currently not measured by Colosseum. However, we are currently developing an approximation to implement in Colosseum.\n\nBecause other reviewers were concerned that Colosseum could be limited to studying tabular reinforcement learning algorithms, we would like to note that an up-to-date version of the package already allows experimentation using non-tabular representations of the existing environments. Concretely, it is possible to use vector-based state representations that allow Colosseum to benchmark reinforcement learning algorithms that employ both linear and non-linear function approximation. Currently available vector-based state(-action) representations include one-hot encodings, positional representations (for grid-like environments), and a representation that guarantees the linearity of the optimal action-value function [3]. In hindsight, we believe that the text does not emphasize the potential of Colosseum for benchmarking non-tabular reinforcement learning algorithms enough, so we plan on changing the text substantially to reflect this. We will do so by highlighting this potential in the abstract and in the introduction, by explaining the corresponding functionalities in the section that introduces Colosseum, and by recalling this potential in the conclusion. Importantly, we will also add a Jupyter notebook tutorial that shows how to use non-tabular reinforcement learning algorithms with vector-based representations of Colosseum environments to the supplementary material.\n\nWe hope that these clarifications and the proposed changes will allow you to consider increasing the rating given to our submission. We aim to provide an updated version of our work that incorporates reviewer feedback as soon as possible.\n\nPart (2/2)\n\n--------\n\nReferences:\n\n[1] Dylan J. Foster, Sham M. Kakade, Jian Qian, and Alexander Rakhlin. \"The statistical complexity of interactive decision making.\" arXiv preprint arXiv:2112.13487 (2021).\n\n[2] Thomas Jaksch, Ronald Ortner, and Peter Auer. \"Near-optimal regret bounds for reinforcement learning.\" Advances in Neural Information Processing Systems 21 (2008).\n\n[3] Ian Osband, Benjamin Van Roy, and Zheng Wen. \"Generalization and exploration via randomized value functions.\" In International Conference on Machine Learning, pp. 2377-2386. PMLR, 2016.", " Thank you very much for the time dedicated to evaluating our work. We are glad that you found our work well motivated and of potentially great significance to the community.\n\nIn case this is not clear, we would like to emphasize that Section 2 is not limited to reviewing existing literature. Note that most quantities that we consider hardness measures were originally proposed in the context of providing theoretical guarantees for reinforcement learning algorithms. We have re-interpreted these quantities as hardness measures.\n\nIndeed, we had to make difficult choices to meet the page requirements, and a journal would allow us to include much more detail in the main paper. 
However, we believe a conference submission is likely to attract more feedback from the community, which is very important for the widespread adoption of the Colosseum benchmark. Although some explanations in the main paper are indeed brief, we hope the appendices present every detail needed to understand and replicate our work. We will consider extending the work on Colosseum and submitting it to a journal in the future.\n\nWe also agree that the motivation for choosing the different environment families belongs in the main paper instead of the appendix, and so we will move it to the text. During our selection, we tried to balance between traditional environment families (such as RiverSwim, Taxi, and FrozenLake) and unconventional environment families (such as MiniGrid, which is not widely used in tabular reinforcement learning). The Deep Sea family was included since it was recently proposed and developed as an example of a hard exploration problem. The SimpleGrid acts as a simplified version of the MiniGrid-Empty environment.\n\nOur claim that \"ergodic settings seem generally slightly easier than the communicating settings\" is supported by the last line of Table 1 (the average normalized cumulative regret across ergodic environments is lower than the same quantity for the communicating environments). The claim that \"diameter also increases almost linearly with the number of states. When p_rand is relatively small, an approximately linear relationship can still be observed\" does not refer to Fig. 1a, which corresponds to scenario 1, but to Fig. 1d, which corresponds to scenario 4, so the relatively small value of p_rand is 0.1. We will clarify both claims in the paper.\n\nThe measures of hardness analysed in Fig. 1 are not originally on a comparable scale. In order to present a meaningful comparison, they have to be normalized. As a consequence, plots can only be used to compare trends (growth rates). Noting this is quite important to interpret the figures, which is why the word \"solely\" is italicized. We will update the paper to explain this more thoroughly before we present the figures. In the case of the sum of the reciprocals of the sub-optimality gaps, although the values of such measures are close in the plots to the cumulative regret of the tuned near-optimal agent, the trends are not similar. Trends are similar in Fig. 1a and dissimilar in all the other scenarios. In Fig. 1b, although the normalized values of the sum of the reciprocals of the sub-optimality gaps may appear close to the cumulative regret of the tuned near-optimal agent, the growth rate of the diameter more closely reflects the trend of the cumulative regret of the tuned near-optimal agent. In Figures 1c and 1d, the sum of the reciprocals of the sub-optimality gaps presents identical trends even though setting the value of p_rand to $0.1$ significantly impacts the challenge posed by the MDPs.\n\nRegarding the fact that \"for Q-learning and PSRL (Figs. 2b and 2c), the diameter seems to have a generally smaller influence on the average cumulative regret\", note that it is possible to observe values of the cumulative regret higher than $0.8$ for values of the diameter in the interval $[20, 60]$. This is not the case for UCRL, for which high values of regret only appear after a diameter value of $60$. 
We will improve the clarity of this statement, which does not make the comparison with UCRL explicit.\n\nAs requested, we will add details about the computational complexity of the (efficiently computable) hardness measures.\n\nPart (1/2)", " Thank you very much for the time dedicated to evaluating our work. We are glad that you found that our perspective is novel and worthwhile. We are also glad that you found our paper well constructed around a unifying insight and significant for future work.\n\nIn case it was not clear, we would like to emphasize that Section 2 is not limited to reviewing existing literature. Note that most quantities that we consider hardness measures were originally proposed in the context of providing theoretical guarantees for reinforcement learning algorithms. We have re-interpreted these quantities as hardness measures.\n\nRegarding the criteria used to select environments from the literature, we tried to balance between traditional environment families (such as RiverSwim, Taxi, and FrozenLake) and unconventional environment families (such as MiniGrid, which is not widely used in tabular reinforcement learning). The Deep Sea family was included since it was recently proposed and developed as an example of a hard exploration problem. The SimpleGrid acts as a simplified version of the MiniGrid-Empty environment. We will move the motivation for choosing the different environment families from the appendix to the main text to make this clearer.\n\nWe believe that constructing novel MDPs with desired relationships between hardness measures is made straightforward by Colosseum. In fact, this process can be automated to some extent by employing a strategy similar to the one we used to select the MDPs that compose the benchmark. 
We will make note of this in the text.\n\nWe also want to emphasize that we expect Colosseum to have an impact beyond the reinforcement learning theory community. The principled selection of environments for the Colosseum benchmark can play a fundamental role in the development and analysis of new tabular and non-tabular reinforcement learning algorithms. Although the text has mostly focused on using Colosseum for studying tabular reinforcement learning algorithms, an up-to-date version of the package already allows experimentation using non-tabular representations of the existing environments. Concretely, it is possible to use vector-based state representations that allow Colosseum to benchmark reinforcement learning algorithms that employ both linear and non-linear function approximation. Currently available vector-based state(-action) representations include one-hot encodings, positional representations (for grid-like environments), and a representation that guarantees the linearity of the optimal action-value function [2].\n\nIn the early stages of developing a new reinforcement learning algorithm, it is often very useful to test ideas in simple environments, which allows fast iteration. Having a thorough and efficient benchmark equipped with useful analysis tools allows researchers to focus most of their time on the algorithms themselves. Although there is no well-developed theory of hardness for non-tabular reinforcement learning, we argue that success in hard tabular environments is likely a necessary (but not sufficient) condition for success in hard non-tabular environments. The fact that we do not include strictly non-tabular environments stems from our goal of providing a principled benchmark, which is currently not possible for non-tabular reinforcement learning due to hardness being a relatively recent (and active) focus of research in that setting [1]. Similarly, we do not include results of a comparison between state-of-the-art non-tabular reinforcement learning algorithms in the Colosseum benchmark because we believe such comparison would be premature and likely unfair (by not considering all important aspects of hardness in the non-tabular setting). Extending the benchmark to environments that require function approximation in a principled way is one of the main goals of our future work.\n\n(Part 1/2)", " This work provides a unifying review of hardness measures for MDPs that have appeared in previous RL theory bounds in tabular settings. The authors have also developed a benchmark with easily estimable values for these hardness measures to be used for empirical investigation of RL theory. The performance of four standard algorithms is measured in the environments. *Originality:*\nThe work provides a unifying perspective on what has thus far been seen as disparate notions of hardness, with qualitative comparison of the strengths and weaknesses of each. I think that, while no new notion of hardness is investigated here, which would make this a very strong paper, the perspective offered is novel and worthwhile. The development of a standard tabular benchmark for RL theory is again a work of synthesis from previous papers. While valuable for RL theory practitioners, there is less novel insight here, as these environments are well known.\n\n*Quality:*\nThe paper is clearly well-thought through and well constructed, with unifying insight provided. 
I think if the environments weren't chosen from the literature, but instead chosen such that the different aspects of hardness described in the paper were easier to control independently with respect to the different policy and environment parameters described in the paper, the benchmark would lead to even more meaningful insight.\n\n*Clarity:*\nThe paper is easy to follow, and plots and charts are easy to read. The visualisations of the environments give good intuition as to their structure, and the experiments give good characterisation of their hardness.\n\n*Significance:*\nThe paper is significant, and well-poised to enable future work in unified hardness and empirical evaluation of RL theory results.\n How were environments selected from the literature? Was any attempt made to construct novel MDPs with particular relationships between hardness measures? What limited this process if so? As suggested above, the insight here is mainly unifying, with few specific novel contributions, either in terms of environment or in terms of analysis. New measures of hardness, or previously unseen environments with useful properties, would make the paper very strong.", " The paper introduces Colosseum, a Python package that allows empirical investigation of MDP hardness along estimation complexity and visitation complexity for tabular reinforcement learning. It surveys various existing hardness measures and argues why many of these do not capture the above-mentioned complexities properly. The ones which come closest to capturing these are Environmental value norm for estimation complexity and Diameter for visitation complexity, as I understand from the paper. It also implements agents and provides a benchmark for what the authors claim are the four most widely studied tabular reinforcement learning settings - which I believe are: (a) Episodic ergodic. (b) Episodic communicating. (c) Continuous ergodic. (d) Continuous communicating. They also perform experiments examining the hardness measures under various changes to the MDPs (Fig. 1) as well as with the agents in the mentioned settings (Tab. 1 and Fig. 2).\n I believe they motivate the approach well and the benchmark is more comprehensive than existing benchmarks for tabular RL. The quality of the work also seems high. They provide code and collect it in a single package, which should be of great significance to the community, esp. but not just the tabular RL community. I assume the code quality is also good based on a quick walk through the Jupyter notebook: Colosseum_main_tutorial.ipynb. The analysis plots also look good. However, with respect to clarity, I feel like the paper would be much more suited to the journal format. I feel like many of the questions I ask below are probably clarified in the Appendix (which I only glanced through); however, I feel like some of these rather belong in the main paper.\n\nThe motivations for the different MDP families were not described in the main paper.\n\nThere were some statements such as:\n>The ergodic settings seem generally slightly easier than the communicating settings.\n\nBut the evidence was not explicitly pointed out. I felt like this in many places, and it feels this was done to save space.\n\nBecause of such space-saving measures, I feel the paper is better suited to a journal.\n\n\nIn various places the clarity can be improved, e.g.:\n\n>The diameter also increases almost linearly with the number of states. 
When p_rand is relatively small, an approximately linear relationship can still be observed.\n\nThis was a little confusing because the sub-figures being referenced are not mentioned, esp. for the 2nd sentence: I can't see the linear relationship when p is \"small\" in Fig. 1a because it's not zoomed in enough. What values of p were meant by \"small\"?\n\n\n>\"Sum of the reciprocals of the sub-optimality gaps\": This measure of hardness is not particularly apt at capturing estimation complexity, since it focuses solely on optimal policy identification. It also underestimates the increase in hardness induced by an increase in visitation complexity.\n\nI understand the arguments. However, in Fig. 1a-1d, tbh, \"Sum of the reciprocals of the sub-optimality gaps\" actually seems the closest to the cumulative regret of the tuned near-optimal agent. Could the authors please add some reasoning as to why this measure ends up being seemingly the closest to the cumulative regret of the tuned near-optimal agent even though it is not suited either for visitation complexity or estimation complexity.\n\n\n>For Q-learning and PSRL (Figs. 2b and 2c), the diameter seems to have a generally smaller influence on the average cumulative regret\n\nI can't quite see this in the figures.\n\n>(efficiently computable) hardness measures\n\nAdding a table with the computational complexity of calculating the measures would be highly appreciated.\n >For instance, consider an MDP where a set of lowly rewarding states stands between an agent and a set of highly rewarding states. A regret minimization agent is heavily discouraged from visiting the lowly rewarding states, in contrast to a PAC-RL agent. Therefore, the state visitation complexity should differ across settings.\n\nI do not quite think this is the best argument here because if, let's say, the only way to the high-rewarding states is through the lowly rewarding states, then regret would still be minimised by going through them and so the agent would not be discouraged.\n\n\n>The fact that current measures disregard this distinction is concerning, since they should quantify the difficulty of a specific optimization task.\n\nWhat is \"this distinction\" here?\n\nWhat is \"the criterion lower bound of class M up to logarithmic factors\"?\n\nIn line no. 274: \"the probability p_lazy that an MDP stays in the same state instead of executing the action selected by an agent\"\n\nHow can the MDP remain in the same state if an action was taken? Is there a \"no-op\" action as well in the action space? I saw only 3 actions in the action space and no no-op.\n\n\"which solely allows comparing trends\"\n\nWhy is solely italicised here?\n\nHow did the authors choose twelve and five random seeds for the experiments?\n\nThe paper mentions both cumulative regret and sample efficiency for exploration as possible performance criteria/metrics but then only experiments with cumulative regret agents are shown. Is there a reason sample efficiency for exploration was not considered for the experiments?\n Yes.
# Originality\nIt seems to me that section 2 (Hardness in Theory) is a review of existing literature, so the original contributions of this paper would be limited to the `Colosseum` package and the empirical investigations provided. For these, the work is original as far as I can tell, although some of the claims seem to be a bit inflated (e.g. \"a pioneering Python package\", \"the most exhaustive in tabular reinforcement learning\", \"invaluable analysis tools\", etc.).\n\n# Quality\nThe authors have done a reasonably thorough survey of hardness literature and evaluated these measures using the various environments in their package. There are a few issues regarding clarity and correctness that I include in the questions below.\nThe code for `Colosseum` seems to be well-written and well-documented, which I consider to be a core part of this paper's contribution.\n\n# Clarity\nThe paper is very well written and motivated reasonably well. Some of the plots and tables are hard to digest; specifically, it's often not clear _what_ is being said with them. There's a lot going on in Figure 1 (and even more in Table 1), and even though the main takeaways do seem to be discussed in the text, they're mostly lost in the paragraphs on page 7 (it seems that the last sentence is the main takeaway for each hardness measure). I would suggest highlighting these in a more streamlined manner (to draw the reader's attention directly to the takeaways) and leaving the descriptive text until after.\nTable 1 is a bit overwhelming; it's not clear what we're supposed to be looking for. I would also suggest rewriting this section so there are clear takeaways and insights for the readers; currently it reads just as a verbal description of the (many) numbers in the table.\n\nAlthough it is claimed that in Figure 2 \"there is generally a positive relationship between both of these hardness measures and the average cumulative regret\", the points seem almost uniformly spread on the plane (i.e. I don't see a clear relationship at all).\n\nThere are a few other issues I mention in the questions below.\n\n# Significance\nThis, to me, is the weakest point of the paper. Although I appreciate the authors' effort to produce a nice package for benchmarking theoretical results, it's not clear how significant this will be. Empirical evaluations on toy environments (for theoretical results) are typically meant to highlight characteristics or subtleties of the theory introduced, but are not the end goal in themselves. In particular, whether the empirical results suggest sub-linear or linear growth, say, does not in any way change the theoretical results. Thus, it is not clear what the added value would be to have a \"theory benchmark\".\nSomething that I think could make this package more impactful is to try to go beyond tabular. One suggestion would be to look at bsuite (which the authors do cite), as they include both tabular and continuous environments. In particular, it would be interesting to evaluate both as they may allow one to empirically investigate how the hardness measures vary when moving from tabular to larger systems. I acknowledge that in non-tabular systems it may not be possible to compute all of them in closed form, but there may be approximations; alternatively, continuous variants of the tabular systems considered could provide a nice middle ground (e.g. 
by \"smoothing out\" each tabular state).\nAnother aspect that could increase the significance of this work is to evaluate non-tabular methods, for instance with linear function approximators. A lot of RL theory does exist for linear approximators (and tabular, of course), so it would be interesting to evaluate how the dynamics of the empirical evaluations change (or not) when moving from tabular to non-tabular methods.\n\nAlong these lines, in line 366 it says \"The development of such measures is theoretically and empirically important.\". It would be nice to provide some concrete examples, such as a theoretical bound dependent on one of these hardness measures, or something like that. Otherwise it's not clear why these hardness measures are important. 1. Is all of section 2 just a review of existing literature? It seems like this is the case, but just wanted to confirm.\n2. In line 72 it says \"Given an optimization horizon $T$\". What is this exactly? Is it related to episodic horizon?\n3. In lines 99-100 it says there is an assumption of the existence of an optimal policy $\\pi^*$ that \"obtains maximum value for every state\", but this policy will necessarily be dependent on time as well as state, no?\n4. Why do you use the value of $0.25$ in equation (1)? This is not justified in the text.\n5. Below line 186, $\\gamma$ is a subscript of $C$ on the left-hand side, but it does not appear in the right hand side. Should it? Or should it be removed from $C$?\n6. In lines 206-208 it says \"approximating the value function ... is particularly easy when every sub-optimality gap is small.\" This is not necessarily the case in RL: sub-optimality gap is with respect to the optimal policy, but in RL we don't start from the optimal policy.\n7. In lines 374-375 you say your empirical investigations \"revealed previously undocumented weaknesses of these agents and further validated our choices of environments.\". What does this say about existing empirical work evaluating these algorithms? Some limitations are provided (mostly related to future work). It would be nice to have some discussion regarding the significance/impact (or lack thereof) of this work, in line with some of the comments I made above regarding significance.\n\nNo discussion of potential negative societal impact was provided.", " This paper reviews and categorizes some hardness measures on tabular MDPs, and proposes a new tabular RL benchmark named \"Colosseum\" that enables exact computation of these measures in empirical evaluations. Extensive experiments are conducted to assess the performance of existing tabular RL methods on the proposed benchmark that spans diverse environments focusing on different hardness measures. ### Strengths\n1. The paper suggests an interesting viewpoint of connecting the theory and practice of tabular RL by taking into account the hardness measures that are typically used only in theoretical analysis when evaluating and comparing tabular RL algorithms.\n2. The \"Colosseum\" benchmark is of high quality, flexible, and with sufficient docs and tutorials.\n3. The empirical evaluations are thorough.\n4. The paper is well organized and presented. \n\n### Weaknesses\n1. The benchmark and the hardness measures only apply to tabular RL. In contrast, most of the current empirical work in RL is devoted to the non-tabular setting, and there is also an increasing number of theoretical work that explores non-tabular RL.\n2. 
The theoretical claims made by this paper are somewhat vague (see the following for details). 1. The authors wrote in the paper \"we believe our contributions are important milestones towards the future development of theoretical and empirical measures of hardness for non-tabular reinforcement learning\". However, it is not clear how the measures presented in the paper can be extended to non-tabular settings (while still being computationally feasible). Adding more discussion on this point would be helpful.\n2. The paper surveys existing hardness measures by dividing them into two categories, namely \"Markov chain-based\" which focuses on the state visitation complexity, and \"value-based\" which focuses on the estimation complexity of value functions. It seems like this taxonomy is closely related to the well-known \"exploitation-exploration\" dilemma, with state visitation complexity capturing the difficulty in exploration, and estimation complexity capturing the difficulty in exploitation. The authors are encouraged to discuss this relation in more detail.\n3. The message conveyed by Definition 2.1 is somewhat vague: it is not clear to me how this \"complete measure\" connects to the intuition or other theoretical arguments that capture the \"hardness\" of learning a near-optimal policy on an MDP, e.g., the PAC learnability. In other words, is there any conceptual/theoretical evidence implying that this hardness measure is _sufficient_ in the sense of some criteria? Also, while I can understand that existing measures may be insufficient in measuring all aspects of hardness since they either focus on the state visitation complexity or the estimation complexity, can we use Definition 2.1 to certify this? No concern.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "h6ezFO7qNSt", "3f7lrVwuzls", "UTKPqmu8FaF", "bRkWdE1DUd4y", "vCE0Eip_C8", "HKSTkoETOJh", "JK4gvN3kcei", "nips_2022_ONB4RdP2GX", "T40pUTIWc8f", "fVyzK749QP-", "8JCuksFBSX", "JK4gvN3kcei", "UtpuGDcSVHT", "xV6IVVwFYfq", "OvjOS8S_1A9", "F5aqWd-lNnh", "xhVf2q_wc8", "nips_2022_ONB4RdP2GX", "nips_2022_ONB4RdP2GX", "nips_2022_ONB4RdP2GX", "nips_2022_ONB4RdP2GX" ]
nips_2022_eXggxYNbQi
On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity
Most complex machine learning and modelling techniques are prone to over-fitting and may subsequently generalise poorly to future data. Artificial neural networks are no different in this regard and, despite having a level of implicit regularisation when trained with gradient descent, often require the aid of explicit regularisers. We introduce a new framework, Model Gradient Similarity (MGS), that (1) serves as a metric of regularisation, which can be used to monitor neural network training, (2) adds insight into how explicit regularisers, while derived from widely different principles, operate via the same mechanism underneath by increasing MGS, and (3) provides the basis for a new regularisation scheme which exhibits excellent performance, especially in challenging settings such as high levels of label noise or limited sample sizes.
Accept
This paper is controversial among the reviewers. On the positive side, reviewers like the novelty of the concept, the derivations, and the clear presentation. The negative review wonders why the proposed method performs much better than dropout, notes its similarity to Lipschitz constraints, and asks whether proper early stopping was used. The authors addressed some of the concerns, though the reviewer was not convinced. In the AC's opinion, the objections are sufficiently addressed and do not constitute clear grounds for rejection. One reviewer wanted to know more about the computational costs of using this technique and asked for more discussion of the limitations. The paper proposes a novel and interesting approach to regularization and seems to be a good contribution to the community.
train
[ "hVVf-yBooFq", "pVlU5ol-3Zu", "FBjkUhBsMr", "QEcEeM8Q9aB", "xbVvYpgeskn", "aArb2Mx8Ck", "fMsBE2S-b6x" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and fixing equation (2).", " Thank you for detailed comments and thoughtful review! Below we answer all questions/concerns that you had:\n\n### Originality\nWe have added small discussion on the use of influence functions by Koh and Percy.\n\n### Quality\n> ...the fact that one thing (regularization) can be achieved in multiple ways, doesn’t immediately implies that it is not well understood...\n\nThis is a very good point, and we have now removed that statement.\n\n### Clarity\n> MGS was never defined nor tied to any metric or formula...\n\nWe refer to MGS as a \"framework\" or concept due to it having many use cases, e.g. monitoring metric, explicit regulariser, etc. Therefore, we found it more intuitive to call it a framework as that is in line with the goal of this paper which is to present MGS as a new way of understanding regularisation and a vehicle to propose new methods. The order of sections 2 and 3 is perhaps a little unconventional, but was chosen deliberately so that the emphasis of the paper doesn't solely become the two explicit MGS regularisers we investigated here.\n\n> Plots are small and most axes are not labeled.\n\nWe felt that the plots would become even more cramped with explicit labels. To make it clearer though, we have added some extra explanations to all captions on exactly what is being plotted against what. The plots being small is a valid concern however making them bigger would require moving them onto separate lines which in turn makes it difficult to compare the different plots side-by-side, which is important to see the correlation between test performance and MGS metrics.\n\n> Introducing MGS regularizers in section 4 while already presenting results based on those in section 3 is rather confusing...\n\nAgreed that this can be slightly confusing (see also above). However, we found no efficient way of first presenting the observations about the MGS metrics, then the MGS regularisers, and finally comparing them back against the initial observations without duplicating plots.\n\n> For the abstract, please use `\\begin{abstract}...\\end{abstract}`\n\nThank you for noticing. We have now fixed this!\n\n> L143: does FCN stand for fully-connected network?\n\nThis has been fixed and the complete definition (fully connected network) has been added before the first use of the FCN abbreviation.\n\n### Significance\n> Experiments using AlexNet (Figure 3): how representative is AlexNet w.r.t. state-of-the-art DL...\n\nIn general we don't expect the architecture to influence the main findings about MGS. The core behind MGS in equations 1, 2, and 3 does not specify any particular type of model. For completeness, however, we have added an extra experiment using a more complex network with residual connections (ResNet20) trained on a harder dataset (Fashion MNIST) which shows that the current findings about MGS still hold even in more complicated scenarios (new figure 4). Also another minor note, specifically for the dropout results in the testbenches, batchnorm was used between convolutional layers for LeNet-5.\n\n> For a better overview of the viability of MGS metrics, it is important to report the computational cost...\n\nThe two MGS regularisers proposed in this paper fall into the same class as other \"double back-propagation\" methods in terms of memory and computational costs. Therefore we did not feel that listing explicit computational costs to be of high interest. 
Generally, this class of methods is not used as-is; instead, optimised versions of them are derived. We expect this to also be the case for MGS-based regularisers, and look forward to designing performant ones that operate on the same principle of directly increasing MGS in future work.\n\n### Questions\n> 1 & 2: regarding hyperparameters\n\nIn all experiments, including the test-benches, each method goes through the same hyper-parameter tuning procedure, which entails using grid-search with cross-validation. To ensure that this is clear to the reader, we have expanded upon this discussion in the paper.\n\n> 3: How dependent are these results on the batch size?\n\nIn section 5.3 we show that MGS is the least affected when changing batch size (see figure 5). \n\n> 4: What is considered a corruption in this case and what is the motivation behind adding motion blur?\n\nWe have added some new text to make it clear that it is only the training set that is corrupted and why that is done. The reason for doing this is to properly test each regularisation method, as the unregularised network will most likely learn features for the corrupted data which are not necessarily suited for predicting on the clean data. Regularisation should help the network to generalise and therefore learn features from the corrupted data that still also work well on the clean dataset.\n\n> 5: Could MGS be used as an early stopping criterion?\n\nA good observation, and definitely something we think could be done; however, we have left it for future work.\n\n### Limitations\n\nAll concerns regarding limitations have been answered above.\n\n", " Thank you for your questions. We have taken your objections into consideration and added clarifying text in places to ensure that readers do not miss that e.g. multi-class cases and early stopping are indeed discussed and investigated in the paper. However, with respect to your first question, the link between MGS and GD is already discussed at length in section 2.\n\n> The title of the submission doesn't seem to reflect the content of the submission.\n\nAs written in sections 2.2 and 2.3, MGS has an additional contribution to GD itself. If the trace (diagonal elements) is large, then the averaging effect caused by the pairwise elements (off-diagonal elements) will be negligible and the model will learn each data point independently. Penalising the trace diminishes the diagonal, and therefore increases the strength of the averaging effect from the pairwise elements. This, in turn, encourages the network to find relevant groupings in the data, based on gradient similarity, that still minimise the empirical loss. On the other hand, penalising the determinant instead causes the pairwise elements to grow, which directly increases the averaging effect as well. Thus both the trace and determinant contribute to increasing gradient similarity, but in slightly different ways.\n\n> I am not sure how the gradient is used when there are multiple outputs, for example, multi-class classification problems.\n\nWe state on L202-203 (close of section 4) that details on how $K_\theta$ is calculated with regard to mini-batches, **vector-outputs**, and large data-sets are included in the supplementary material. 
In general we use the same methods found in the corresponding Neural Tangent literature [1] (and software libraries), where the gradients for multiple outputs are concatenated prior to calculating the inner product.\n\n> The proposed regulariser is similar to the Lipschitz constraint that has been used in Wasserstein GANs, and other regularisation schemes.\n\nThe gradients are calculated with regard to the model parameters and not the input data (as already identified in your summary). This is not the same as the gradient penalty, used for example in WGANs to implement a Lipschitz constraint, which calculates the gradient w.r.t. input data [2]. Nor do we use MGS to explicitly improve data distribution (perturbations in input data) robustness, though robustness to data corruption is implicitly achieved by the grouping effect of the MGS regularisers.\n\n> In all experiments, I wonder if any early stopping is applied, and if not, I think early stopping could alleviate demonstrated overfitting issues.\n\nEarly stopping is investigated and results are provided in the supplementary materials. There, we already include plots that show the evolution of the test accuracy during training for a selection of the test-bench runs. It is clearly visible that early stopping would not achieve the best performance, and in many instances is significantly worse than what regularisation with MGS achieves. In the main manuscript experiments we don't use early stopping in order to illustrate that MGS is the only method that effectively combats over-fitting while still achieving good performance. These results are shown in plots in figures 2-4, A2, and A3. Note also that maximum test accuracies (minimum loss) during training are provided in Tables 1 and 2.\n\n> It is a little wild to me that weight regularisation and dropout performed very poorly, but I think realistically, early stopping is well-adopted on top of various regularisation schemes, and it could reduce the performance gap between the proposed approach and all other methods.\n\nFirstly, we disagree that dropout and weight decay perform very poorly. In our testbench results, dropout is usually next best and weight decay works quite well at low to moderate noise levels. As mentioned above, we did investigate early stopping, and max accuracies over training are reported in Tables 1 and 2. Early stopping did marginally improve performance for both dropout and weight decay in several cases. However, in Tables 1 and 2 we note in particular that MGS is the only method for which final and max accuracies essentially coincide, and in most cases at a higher level than the max accuracy of the other regularisers.\n\n1. Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. Advances in Neural Information Processing Systems, volume 32, pages 8572–8583.\n\n2. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, pages 5769–5779.", " Thank you for the thorough and well-structured review! Below we answer the questions/concerns you had:\n> I would love to see a mathematical analysis of how MGS is linked to regularisation schemes such as weight decay.\n\nThis is exactly what we would like to do in future work. 
To be able to link existing regularisation schemes to MGS analytically would help in deriving more efficient methods for example.\n\n> Equation (2) contains an error.\n\nThanks for spotting this, we have fixed it in the rebuttal revision.\n", " In this paper, the authors proposed a new concept called \"Model Gradient Similarity\" (MGS), which can be used to analyze the gradient descent algorithm and the effect of explicit regularizations, and can be used to design new regularization schemes. The main contributions of this paper are as follows:\n\n(1) The authors defined MGS as the dot product between two model-parameter gradient vectors from two different training inputs -- this can be viewed as a kernel. The authors did an analysis based on first-order Taylor decomposition, and derived a first-order approximation of how the model behavior would change after one update of gradient descent. In essence, the (approximate) amount of change is (the negative of) the learning rate, multiplied by the matrix vector product between the MGS kernel matrix and the loss-model gradient vector.\n\n(2) The authors further developed two metrics to summarize the MGS, by using the trace and the determinant of the MGS kernel matrix computed from training data. These can be used to approximate variance and covariance of model gradients on training data, and provide summary statistics of MGS -- the smaller these metrics, the larger the MGS is.\n\n(3) The authors tracked the MGS metrics during training for several regularization schemes, and discovered that all of them to some extent function by trying to inhibit the growth of MGS metrics (i.e., encouraging higher MGS among training samples).\n\n(4) Finally, the authors designed explicit regularization schemes, one based on the trace and the other the determinant of the MGS kernel matrix on training data and showed that (1) both worked well on a toy example, and (2) the trace regularization can be applied to real-world datasets to improve both accuracy and robustness -- for computational reasons only trace is applied to real datasets. Strengths:\n\n+ The first-order Taylor decomposition analysis on gradient descent to show the effect of MGS on model outputs is very neat. Given that the learning rate is usually small in practice, the change in parameters after one update is generally small in norm, so the first-order analysis generally holds. Great work!\n\n+ The summary metrics to quantify MGS are theoretically grounded and sound.\n\n+ The observation that all explicit regularization schemes inhibit MGS metrics (trace and determinant) growth is also insightful, and sheds light on why regularization works.\n\n+ The proposed MGS trace regularization is simple and effective.\n\nWeaknesses:\n\nThere are not many weaknesses. If possible, I would love to see a mathematical analysis of how MGS is linked to regularization schemes such as weight decay -- but this can be future work.\n\nThere is one small typo that needs to be fixed:\n\nEquation (2) contains an error after the \"approximately equal\" sign. It should be f_\\theta(x'), rather than f_\\theta(x). I do not have questions -- the paper is very well written and everything is explained clearly. 
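For concreteness, the first-order step that this review summarizes — with the flagged typo corrected, i.e. $f_\theta(x')$ on the right-hand side — presumably reads

$$ f_{\theta_{t+1}}(x') \;\approx\; f_{\theta_t}(x') \;-\; \eta \sum_{i=1}^{n} K_{\theta_t}(x', x_i)\, \nabla_{\!f}\mathcal{L}\big(f_{\theta_t}(x_i), y_i\big), \qquad K_\theta(x, x') = \big\langle \nabla_\theta f_\theta(x),\, \nabla_\theta f_\theta(x') \big\rangle, $$

where $\eta$ is the learning rate; the exact symbols are assumed notation rather than the paper's own, but the structure — the MGS kernel matrix applied to the loss–model gradient vector — is exactly what the review describes. Likewise, a minimal PyTorch sketch of the kernel, written from the descriptions in this review and in the authors' responses (per-output gradients concatenated before the inner product, as in the neural-tangent literature); this is an illustration, not the authors' released code, and it favours clarity over memory efficiency:

```python
import torch

def mgs_kernel(model, xs):
    """K[i, j] = <g_i, g_j>, where g_i concatenates the parameter-gradients
    of every output coordinate of f_theta(x_i). tr(K) and det(K) are then
    the two summary metrics discussed above."""
    params = [p for p in model.parameters() if p.requires_grad]
    feats = []
    for x in xs:                               # one sample at a time
        out = model(x.unsqueeze(0)).flatten()  # all output coordinates
        per_out = []
        for k in range(out.numel()):
            grads = torch.autograd.grad(out[k], params, retain_graph=True)
            per_out.append(torch.cat([g.flatten() for g in grads]))
        feats.append(torch.cat(per_out))
    jac = torch.stack(feats)   # shape (n, n_outputs * n_params)
    return jac @ jac.T         # the (n, n) MGS kernel matrix

# e.g. K = mgs_kernel(model, x_batch); trace_metric = K.diagonal().sum()
```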
The authors have adequately addressed the limitations of their work, namely, the computationally intensive nature of gradient-based regularization schemes, and in particular of the proposed MGS determinant regularization.", " The submission proposed to use the trace of the kernel matrix, which is constructed by taking the inner product of a matrix that has gradients of function outputs w.r.t. parameters and the transpose of the matrix itself, to regularise the training of a neural network. Through experiments, the submission showed that their proposed gradient-based regulariser provided better generalisation than other existing regularisation approaches. 1. The title of the submission doesn't seem to reflect the content of the submission. \n\nThe analysis and the proposed regulariser eventually don't convey much information on the similarity of gradients computed on a pair of data samples, since it is purely based on the l2 norm of gradient vectors on individual data samples. Even though the determinant of a matrix utilises the pair-wise similarity, as the submission mentioned as well, the difference between the effect of the trace and that of the determinant is subtle.\n\n2. I am not sure how the gradient is used when there are multiple outputs, for example, multi-class classification problems.\n\nFor classification problems with more than two output classes, the gradient of the function w.r.t. the parameters on each data sample is a matrix. It doesn't seem clear to me how the proposed approach uses the matrix. \n\n3. The proposed regulariser is similar to the Lipschitz constraint that has been used in Wasserstein GANs, and other regularisation schemes. \n\nThe idea of the Lipschitz constraint is to make sure that a small change in the input data sample only leads to a tiny change in the output, and the proposed approach in this paper, which computes the trace of the kernel matrix, regularises / penalises the norm of a gradient vector, which essentially regularises the Lipschitz constant of the model locally.\n\nA relatively thorough theoretical comparison can be found here: https://proceedings.neurips.cc/paper/2020/file/8929c70f8d710e412d38da624b21c3c8-Paper.pdf\n\n4. In all experiments, I wonder if any early stopping is applied, and if not, I think early stopping could alleviate demonstrated overfitting issues. \n\nIt is a little wild to me that weight regularisation and dropout performed very poorly, but I think realistically, early stopping is well-adopted on top of various regularisation schemes, and it could reduce the performance gap between the proposed approach and all other methods. See above. See above.", " This paper presents a novel view on regularization for neural networks. The core idea focuses on the similarity of model gradients when evaluating data points. The paper proposes a kernel that conveys the influence that one training sample has over the other. Two global metrics, the trace and the determinant, are used as proxy metrics to track the model generalization. They are also used as regularizers by including them in the optimization objective. Experimental results show that tracking and optimizing these metrics leads to models that retain a high generalization, even in the presence of noisy annotations for classification and regression problems. - *Originality:* \n - Most of the elements in the paper, in isolation, are not new: the topic of regularization in ANNs has been studied for the past few decades. 
Tracking or including gradients (or other statistics) of data-batches to add constraints to the optimization process has also been done in the past. The idea of measuring the influence between samples during learning has also been addressed before. (In this regard, I was missing a discussion of the work on influence functions from Koh and Percy [1].) However, the originality of this work comes from the unique way it combines these ideas. In particular, the use of the determinant and the trace as global metrics.\n\n- *Quality*:\n - In general, the main claims of the paper are well supported. A weak point can be found e.g., in the motivation (L39-40), where the fact that one thing (regularization) can be achieved in multiple ways doesn’t immediately imply that it is not well understood. One could argue that the more understood a problem gets, the more ways one could find to make it work.\n\n- *Clarity*:\n - MGS was never defined nor tied to any metric or formula. (Saying that it is a “framework” is still vague, and needs a more concrete definition.) Something like “any measure of similarity that expresses the influence that one sample has on the model” would make MGS more tangible. Alternatively, MGS could be represented by the kernel $K_{\\theta}$, while $\\text{tr}$ and $\\text{det}$ are the proposed metrics to track MGS.\n - Plots are small and most axes are not labeled.\n - Introducing MGS regularizers in section 5, while already presenting results based on those in section 4, is rather confusing. I strongly suggest restructuring the development of the paper so that it follows a more sequential timeline.\n - For the abstract, please use `\\begin{abstract}...\\end{abstract}`.\n - L143: does FCN stand for fully-connected network? Make sure it is explained somewhere to prevent confusion with “fully *convolutional* networks.”\n\n- *Significance:*\n - Experiments using AlexNet (Figure 3): how representative is AlexNet w.r.t. state-of-the-art DL? Can architectural components such as residual connections, batch-normalization or initialization methods (He initialization vs. truncated gaussians) that are not part of AlexNet affect the outcome of these experiments?\n - For a better overview of the viability of MGS metrics, it is important to report the computational cost (memory and FLOPs) of using MGS compared to dropout, weight-decay, etc.\n\n## References\n\n1. Koh, Pang Wei, and Percy Liang. \"Understanding black-box predictions via influence functions.\" *International Conference on Machine Learning*. PMLR, 2017. 1. Figure 2: regularization methods are dependent on hyperparameters. Dropout depends on the probability of dropping out, weight decay is controlled by a small coefficient $\\alpha_{\\theta}$, etc. Are these results robust to the choice of these hyperparameters?\n2. In general: the different regularizations depend on hyperparameters that are specific to each method. Those that are included in the loss (loss grad and weight decay) are modulated by a hyperparameter $\\lambda$, similar to the way $\\alpha$ is defined for $\\widehat{\\mathcal{L}}(f(X), Y)$ (equations between lines 175-177). Dropout depends on the probability of an activation being dropped (usually $p=0.5$). What protocol was followed to guarantee that those hyperparameters were optimal? This is particularly relevant for section 6.3 where the other hyperparameters were analyzed.\n3. How dependent are these results on the batch size?\n4. 
Corrupted MNIST: Section 6.1 mentions the use of motion blur, but Table 1 only talks about flipped labels. What is considered a corruption in this case and what is the motivation behind adding motion blur?\n5. Could MGS (i.e., $\text{tr}~K_{\theta}$) be used as an early stopping criterion? There is little discussion of the limitations of this work, even though there are a few points that could be addressed:\n\n- Even though the method is well supported by theoretical findings, a discussion on the effects of hyperparameters is lacking. In particular, the effects of the batch size or assumptions about the data distribution, e.g., that the test set exhibits no covariate shift.\n- From the empirical results, the model architecture lacks representative (and relevant) components that are often found in modern ANN architectures (see second point on “Significance”).\n- A discussion of the experimental hyperparameters (how they were found and how they affect the results) is also missing in the main paper.\n- Finally, an account of the computational costs of MGS metrics and the other baselines should be presented." ]
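The gradient-kernel trace regularizer debated in the reviews above can be made concrete with a short sketch. The PyTorch snippet below is an illustrative reconstruction, not the authors' implementation: it assumes a scalar-output model (side-stepping the multi-output question raised in the second review), and the toy architecture, data, and weight `lam` are hypothetical.

```python
import torch
import torch.nn as nn

# Toy scalar-output regressor; the architecture is an arbitrary stand-in.
model = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))
params = [p for p in model.parameters() if p.requires_grad]

def mgs_trace(x):
    """tr(K) = sum_i ||grad_theta f(x_i)||^2, where row i of G holds the
    gradient of the i-th output w.r.t. theta and K = G G^T."""
    total = x.new_zeros(())
    for xi in x:  # explicit per-sample gradients; slow but easy to read
        out = model(xi.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(out, params, create_graph=True)
        total = total + sum((g ** 2).sum() for g in grads)
    return total

x, y = torch.randn(8, 4), torch.randn(8, 1)
lam = 1e-3  # hypothetical regularization strength
loss = nn.functional.mse_loss(model(x), y) + lam * mgs_trace(x)
loss.backward()  # gradients now include the trace penalty
```

Because only the diagonal of $K_{\theta}$ enters the trace, the penalty reduces to a sum of squared per-sample gradient norms, which is exactly the point the second review makes about its closeness to local Lipschitz regularization.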
[ -1, -1, -1, -1, 7, 3, 6 ]
[ -1, -1, -1, -1, 3, 4, 3 ]
[ "QEcEeM8Q9aB", "fMsBE2S-b6x", "aArb2Mx8Ck", "xbVvYpgeskn", "nips_2022_eXggxYNbQi", "nips_2022_eXggxYNbQi", "nips_2022_eXggxYNbQi" ]
nips_2022_U_hOegGGglw
A Closer Look at Prototype Classifier for Few-shot Image Classification
The prototypical network is a prototype classifier based on meta-learning and is widely used for few-shot learning because it classifies unseen examples by constructing class-specific prototypes without adjusting hyper-parameters during meta-testing. Interestingly, recent research has attracted a lot of attention by showing that training a new linear classifier, which does not use a meta-learning algorithm, performs comparably to the prototypical network. However, the training of a new linear classifier requires the retraining of the classifier every time a new class appears. In this paper, we analyze how a prototype classifier can work equally well without training a new linear classifier or meta-learning. We experimentally find that directly using the feature vectors, which are extracted using standard pre-trained models, to construct a prototype classifier in meta-testing does not perform as well as the prototypical network or training new linear classifiers on the feature vectors of pre-trained models. Thus, we derive a novel generalization bound for a prototype classifier and show that the transformation of a feature vector can improve the performance of prototype classifiers. We experimentally investigate several normalization methods for minimizing the derived bound and find that the same performance can be obtained by using L2 normalization and minimizing the ratio of the within-class variance to the between-class variance without training a new classifier or meta-learning.
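As a concrete reference point for the abstract above, here is a minimal numpy sketch of a prototype classifier with optional L2 normalization of the feature vectors. The episode sizes, feature dimension, and random features are placeholders, and the snippet illustrates the general technique rather than the paper's exact pipeline.

```python
import numpy as np

def l2_normalize(z, eps=1e-12):
    # Project feature vectors onto the unit sphere.
    return z / (np.linalg.norm(z, axis=-1, keepdims=True) + eps)

def prototype_predict(support, support_labels, query, normalize=True):
    """Nearest-prototype classification: one mean vector per class and
    Euclidean distance at test time; no retraining, no meta-learning."""
    if normalize:
        support, query = l2_normalize(support), l2_normalize(query)
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(0) for c in classes])
    dists = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(1)]

# Toy 5-way 5-shot episode with random 64-d "features".
rng = np.random.default_rng(0)
support = rng.normal(size=(25, 64))
labels = np.repeat(np.arange(5), 5)
query = rng.normal(size=(10, 64))
print(prototype_predict(support, labels, query))
```

Normalizing before averaging is one design choice; the rebuttals below also discuss applying the L2 norm after the averaging step when a cosine-distance classifier is used.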
Accept
It has been shown that linear classifier heads on top of pre-trained models can outperform meta-learning approaches. However, this is less adaptable than prototypical classifier heads and requires retraining with each new set of classes. This paper theoretically investigates the generalization of prototypical classifiers and uses this to explore several feature transformations to improve their performance. While there were concerns about the novelty over and above Cao et al., and some minor clarity issues, the reviewers were all generally in agreement that this is a useful contribution to the community and a good starting point for improving prototype classifiers from a theoretical perspective.
val
[ "1ZhCaa_aWgA", "nnURz19tLJo", "C0s_TDp4kcF", "sQVropbHm_3", "ODBQdyBpel", "394lrsLJNO2", "KM7CD0chfA0", "eCnzsDeAIXo", "oPCaJvKd31P" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the reviewers for providing such a thorough feedback. I have increased my score to weak accept.", " ### Q5. Can t-SNE feature visualization be shown to see how feature transformation affects discriminability?\n\nWe have updated the paper and put the visualization in Appendix A.8.\n\nHowever, the projection into a lower dimensional space for visualization does not accurately represent the relationship of a higher dimension. Furthermore, the distribution that we can observe qualitatively by visualization is one in which the interclass variance is large and the intraclass variance is small.\nTherefore, we quantitatively evaluated the ratio of the within-class variance to the between-class variance ($\\frac{\\mathrm{Tr}(\\Sigma_\\tau )}{\\mathrm{Tr}(\\Sigma)} $), which is related to our theoretical analysis. We show the results in the following table.\n\n| | miniImageNet | tieredImagenet | CIFAR-FS | FC100 |\n| --- | --- | --- | --- | --- |\n| Baseline w/o transformation | 4.50 | 3.91 | 4.99 | 10.01 |\n| Baseline w l2-norm | 4.94 | 4.31 | 5.18 | 11.36 |\n| Baseline w EST | 2.23 | 1.87 | 2.12 | 2.37 |\n| Baseline w EST + L2-norm | 2.44 | 2.11 | 2.16 | 2.69 |\n| Baseline w LDA | 2.15 | 3.37 | 1.99 | 1.64 |\n| Baseline w LDA + l2-norm | 2.41 | 3.76 | 2.09 | 1.88 |\n\nThe Table shows that The L2-normalization slightly changes the ratio while EST and LDA can reduce the ratio. The combination of EST and L2-norm or LDA and L2-norm can reduce the ratio while decreasing the variance of the norm of feature vectors to 0. This implies that the combination of EST and L2-norm or LDA and L2-norm can reduce the terms expressed in equation (6) and (8).\n", " We would like to thank reviewer bREF for providing valuable feedback and raising interesting questions which we answer below.\n\n### Q1. It might be good for the authors to elaborate point-wise differences between the bounds of the paper and the Cao’s.\nThe conclusion and insights of our bound is also different from Cao’s bound as follows:\n\n- While Cao's work cannot explain why the L2-normalization of a feature vector can improve the performance of a prototype classifier, our work theoretically shows that the transformation can improve the performance.\n- We relax the assumption of Cao’s work; specifically, the bound does not require that the features be distributed in Gaussian distribution, and each covariance matrix does not have to be the same among classes. Therefore, our theoretical results can be applied to the features extracted by a model trained with cross-entropy loss with a linear classifier.\n- While Cao’s bound is depend only on the ratio of the within-class variance to the between-class variance, we theoretically show that the bound consists of three terms: (1) the variance of the norm of feature vectors, (2) the difference in the distribution shape constructed from each class embedding, and (3) the ratio of the within-class variance to the between-class variance. We experimentally show that reducing the term (1) and (3) respectively can improve the performance of a prototype classifier.\n\n\n### C1. Comparison studies have only been carried out against a few shot learning methods.\nAs we stated in Abstraction and Introduction, we focus on few-shot learning problems. Few-shot learning requires the model to quickly adapt to new classes. While the linear-evaluation method requires retraining a linear classifier every time a new class appears, a prototype classifier does not require such retraining. 
If a prototype classifier can achieve comparable performance to linear-evaluation methods or meta-learning-based methods, it can be used as a practically useful first step in tackling few-shot learning problems.\n\n### Q2. More complex feature transformation methods should also have been explored, such as exponential, non-linear kernels, power transforms, etc., to see whether they can also produce better generalization.\n\nWe did not use complex data transformation methods for the following two reasons. The first is that we used data transformation methods based on the generalization bounds derived in the paper, so we did not try complex transformations that are not explicitly associated with the generalization bounds. The second is that we wanted to avoid additional hyperparameters as much as possible and to keep the simplicity of the prototype classifier, so we did not use complex data transformation methods.\n\n\n### Q3. In Figure 1, don’t the shapes of the clusters vary because Euclidean distance is used for ProtoNets while dot-product is used for linear layers? Is it because of the loss function?\n\nAs you have pointed out, the difference occurs because the dot-product is used for linear layers. We will revise “cross-entropy loss” to “dot-product with cross-entropy loss” accordingly.\n\n### Q4. Can this feature transformation approach improve non-prototypical few-shot learning methods?\n\nWe did not perform this experiment in the main paper because we investigated how an easy-to-use prototype classifier could achieve the same accuracy as a linear classifier or meta-learning-based methods. We conducted additional experiments to see whether the transformations improve the performance of Baseline on miniImagenet. The table below shows the performance of Baseline with and without the transformation methods.\n\n| | 1-shot | 5-shot |\n|---|---|---|\n|baseline | 54.54$\pm$0.80 | 76.50$\pm$0.62 |\n|baseline w L2-norm | 53.86$\pm$0.80 | 76.47$\pm$0.65 |\n|baseline w EST | 52.32$\pm$0.83 | 75.56$\pm$0.60 |\n|baseline w LDA | 52.46$\pm$0.80 | 75.62$\pm$0.66 |\n|baseline w EST + L2-norm | 54.76$\pm$0.73 | 75.99$\pm$0.68 |\n|baseline w LDA + L2-norm | 54.35$\pm$0.72 | 76.93$\pm$0.69 |\n\n\nAs shown in the table, the performance does not change much across the transformation methods. This can be explained as follows. The models are trained with a linear classifier on the features without data transformation, and the settings are the same during testing and training. Therefore, the accuracy was sufficiently high and the performance did not change much when data transformation methods were applied. Furthermore, since our generalization bound is based on a prototype classifier, not linear classifiers, these results were different from prototype classifiers' results.", " ### Q2. Does the gap between the performance of a linear classifier and a prototype classifier decrease when cosine distance-based prototype classifiers are used instead?\n\nYes. The following table shows the performance of ProtoNet, a prototypical network trained with cosine similarity, a prototypical network trained with inner product, and a prototype classifier on the features of Baseline++.
\n\n||1-shot miniImagenet|5-shot miniImagenet|1-shot tieredImagenet|5-shot tieredImagenet|\n|---|---|---|---|---|\n|ProtoNet|54.42 $\pm$ 0.86 | 73.56 $\pm$ 0.68 | 56.96 $\pm$ 0.98 | 78.38 $\pm$ 0.71 |\n|ProtoNet w cosine similarity|53.98 $\pm$ 0.83 | 72.38 $\pm$ 0.66 | 53.17 $\pm$ 0.91 | 73.61 $\pm$ 0.77 | \n|ProtoNet w inner product|53.87 $\pm$ 0.83 | 71.86 $\pm$ 0.69 | 49.10 $\pm$ 1.00 | 70.11 $\pm$ 0.84 | \n|Prototype classifier on Baseline++|46.36 $\pm$ 0.58 | 73.97 $\pm$ 0.62 | 50.60 $\pm$ 0.87 | 78.10 $\pm$ 0.67 |\n\nThe performance gap between a prototype classifier based on the features of Baseline and a prototype classifier with cosine distance or inner product is smaller than the gap between a prototype classifier based on the features of Baseline and a prototype classifier with Euclidean distance. Note that we L2-normalized only the features of ProtoNet with cosine similarity since the model assumes the norm of the feature vectors to be constant.\n\n### Q3. How do hyperspherical prototype networks relate to the analysis in the paper?\n\nHyperspherical prototype networks differ from our analysis in the following respect. Since hyperspherical prototype networks prepare prototypes on the hypersphere during training, the approach is similar to Baseline++ described in Chen et al. However, we focus on the features of the pre-trained model, not the behavior during training.\n\n### Q4. How do the multiclass results compare to the binary results? \n\nSince, in our experiments and theory, the two-class case is a subset of the multiclass case, we did not find any qualitatively different takeaways. \n\n### Q5. More discussion about the limitations of the analysis would be useful: e.g. choice of distance and how this could possibly be addressed.\n\nIf we can use the assumptions in the answer to Q1, we can apply our analysis as is for cosine similarity. For other classes of distances, such as the Bregman divergence, if the distance is expressed in terms of the variance and expected value of the feature vectors, it is possible to extend this derivation, paying attention to the calculation of the inner product in the Bregman divergence.", " We would like to thank reviewer up5k for providing valuable feedback and raising interesting questions, which we answer below.\n\n### C1. The discussion of LDA and EST is rather light relative to the important role they play in the experiments.\n\nAs shown in Figure 2 (right), LDA and EST can improve the performance of prototype classifiers when the ratio of the within-class variance to the between-class variance of the features is large. However, LDA and EST negligibly improve the performance on datasets with lower within-class variance (miniImagenet, tieredImagenet, and CIFAR-FS). This is because the transformation matrix of LDA and EST is estimated on 5-shot samples of the test set or the features of the training set. Therefore, the estimation of the transformation matrix is unstable and thus the performance improvement is relatively small. \n\n### C2. A discussion of how the multiclass bound relates to the binary bound presented in the main paper is not clear enough.\n\nWe extend the conclusion of the bound on binary classification to the multiclass case by Fréchet’s inequality, as shown in Appendix A.5. We will add this at the end of Section 3.3.\n\n### C3. 
The important terms are explained in Section 3.3, but it is not very intuitive how future work should go about trying to better control them.\n\nFrom the results in the Remark in Section 3.3 and our experiment, we can observe that reducing the following two terms is important: 1. the variance of the norm of the feature vectors, and 2. the quadratic expression of the ratio of the within-class variance to the between-class variance. Therefore, for example, a transformation that simultaneously reduces the variance of the norm and the ratio of the within-class variance to the between-class variance would be effective. However, using a complex data transformation method would hurt the simplicity of the prototype classifier (which is also the focus of this paper), so we did not try it in our experiments, and instead conducted the experiments using a combination of simple data transformation methods.\n\n### C4. Figure 2 is somewhat confusing and it was not immediately clear how the figure relates to the experimental results.\n\nFigure 2 (left) compares the size of each term of our generalization bound shown in Section 3.3, and shows that the term for the difference in the shape of the class distribution does not have much impact, and thus we should focus on reducing the variance of the norm of the feature vectors and the ratio of the within-class variance to the between-class variance. The right figure shows that FC100 has a larger ratio of the within-class variance to the between-class variance than the other datasets. Therefore, LDA and EST improve the performance of a prototype classifier on FC100 and only have a small effect on the other datasets. The results also show that L2-normalization by itself has little effect on the ratio of the within-class variance to the between-class variance.\n\n### Q1. The analysis is limited to Euclidean distance, but coupling this with a discussion of e.g. cosine distance would be useful. To what extent can the current analysis be extended to cosine distance? \n\nThe L2-norm is applied after the average operation at the test phase, and this is expressed as \n\n $$\overline{\phi}(S_c) = \frac{\frac{1}{K} \sum_{x\in S_c} \phi(x)}{\left\| \frac{1}{K} \sum_{x\in S_c} \phi(x) \right\|}.$$\n\nThe calculation of the expectation of $\overline{\phi}(S_c)$ will be more complicated, and thus our analysis cannot be extended to cosine distance straightforwardly. In a simple way, assuming \n\n$$ \left\| \frac{1}{K} \sum_{x\in S_c} \phi(x) \right\| = 1, $$ \n\nour analysis can also be applied to cosine distance.\n\n", " We would like to thank reviewer 7Qcy for providing valuable feedback and raising interesting questions, which we answer below.\n\n### C1. Novelty of the generalization bound is limited as it derives the bound by modifying (Cao et al., 2020)\n\nAs described in Section A.5, our derivation is different from Cao et al.’s study in the following points. \n1. We re-derived Lemma 3 because the term for the difference between the traces of the class covariance matrices is erased in the lemma. This term cannot be omitted in our derivation since we do not assume the class covariance matrix to be the same among classes. \n2. We re-derived the bound on the variance of the squared Euclidean distance of two vectors, i.e., Lemma 4. The derivation of Cao et al. uses the property of quadratic forms of normally distributed random variables and the fact that the sum of normally distributed random variables also follows a Gaussian distribution. 
The calculation of the variance of the squared L2-norm without relying on the properties of specific distributions is not straightforward. We divide the variance of the squared Euclidean distance of two vectors into the variance of the norm of the feature vectors and the variance of the inner product of vectors. Then we apply the Cauchy–Schwarz inequality to the inner product.\n\nThe conclusions and insights of our bound are also different from Cao’s bound as follows:\n- While Cao's work cannot explain why the L2-normalization of a feature vector can improve the performance of a prototype classifier, our work theoretically shows that the transformation can improve the performance.\n- We relax the assumption of Cao’s work; specifically, the bound does not require that the features follow a Gaussian distribution, and each covariance matrix does not have to be the same among classes. Therefore, our theoretical results can be applied to the features extracted by a model trained with cross-entropy loss with a linear classifier.\n- While Cao’s bound depends only on the ratio of the within-class variance to the between-class variance, we theoretically show that the bound consists of three terms: (1) the variance of the norm of feature vectors, (2) the difference in the distribution shape constructed from each class embedding, and (3) the ratio of the within-class variance to the between-class variance. We experimentally show that reducing terms (1) and (3), respectively, can improve the performance of a prototype classifier.\n\n\n### Q1. Why is there a negligible performance boost compared to Baseline++ in Table 1 and Table 4 for 5-shot settings? It could be that Baseline++ was not properly tuned or trained. \n\nBaseline++ is trained with L2-normalized features while Baseline is not, and thus the performance improvement of `Baseline++ w/o L2-norm -> Baseline++ w L2-norm` is larger than that of `Baseline w/o L2-norm -> Baseline w L2-norm`.\n\n### C2. The performance gap is negligible compared to the other methods proposed in current studies. \n\nThe purpose of this paper is to relax the unrealistic assumptions of existing theories and make the prototype classifier more practical and easier to use. Therefore, we aim to show that a prototype classifier can perform comparably with other methods in current studies given a proper transformation.\n\n### Q2. Why is there not a single transformation method which performs better than any other methods across all settings?\n\nThe reason is that finding a SOTA method in every setting is not the purpose of this paper. As we have mentioned in the response to C2, the purpose of this paper is to relax the unrealistic assumptions of existing theories and make the prototype classifier more practical and easier to use. Also, our theory shows that different transformations are needed depending on the nature of the dataset. This is in line with the so-called no-free-lunch theorem.", " Recently, it has been shown that a linear classifier on top of a pre-trained model, without meta-learning, performs comparably to prototypical networks for few-shot learning. However, these approaches require re-training every time a new class arrives. This paper focuses on the prototype classifier, which doesn't require re-training or any meta-learning approach. It finds that the variance of the norm of feature vectors affects the performance of prototype classifiers. It then derives a generalization bound that does not depend on assumptions on the class-conditional distributions. 
Moreover, the paper theoretically shows that this bound decreases as the variance in the norm of the feature vectors decreases, and it empirically investigates different transformation approaches to lower this variance. Finally, the authors analyze the effectiveness of the feature transformations on multiple datasets that include mini-ImageNet, tiered-ImageNet, CIFAR-FS, CUB, and FC-100. ### Strengths:\n\n* The paper investigates an interesting problem for few-shot learning by proposing to use prototype classifiers, which generally don't require re-training or a meta-learning algorithm.\n\n* It theoretically analyzes the prototype classifier in terms of the variance of the norm of the feature vectors.\n\n* It investigates several transformation approaches to minimize this variance, which include L2 normalization, EST, LDA, EST+L2-norm, and LDA+L2-norm.\n\n* Experiments on several datasets corresponding to two different tasks, i.e., i) standard object recognition and ii) cross-domain adaptation.\n\n### Weakness:\n\n* Novelty of the generalization bound is limited as it derives the bound by modifying (Cao et al., 2020)\n\n* 1-shot performance looks good, but for the 5-shot experiments, there is a negligible performance boost compared to Baseline++ for miniImageNet, tieredImageNet, and CIFAR-FS in Table 1. Could the authors provide any explanation or give remarks about it? It is written in the manuscript that all the methods are reimplemented. It could be that Baseline++ was not properly tuned or trained. Even in Table 2, the performance gap is negligible compared to the other methods. Moreover, the same phenomenon can be observed for the cross-domain experiments in Table 4 of the supplementary material.\n\n* The authors have proposed several feature transformation methods, but not a single one stands out. Can the authors please explain this behavior? Please see my questions in the weakness section. Yes, the authors adequately addressed the limitations and potential negative societal impact of their work.", " This paper analyzes which factors lead to better performance of a prototype classifier directly applied on top of a learned representation. This stands in contrast to recent work that has shown that training a linear classifier on top is effective but computationally expensive and requires more hyperparameter tuning. Contributions include a new generalization bound on the performance of a prototype classifier that uses fewer assumptions than previous work and sheds more light on the critical factors, including variance of the norm, within-class variance, and between-class variance. Experiments are conducted on several benchmark datasets and show that with appropriate feature transformation and L2 normalization, a prototype classifier can achieve similar performance to a learned linear classifier.\n Strengths\n+ The aim is well-motivated. Eliminating the need for retraining a linear classifier and instead simply applying a prototype classifier would make it easier to adapt pre-trained representations to the few-shot setting.\n+ The generalization bound removes simplifying assumptions of previous work and in doing so paints a more nuanced picture of the important underlying factors.\n+ Experiments are extensive and consist of four datasets and multiple variations of the prototype classifier, including two feature transformation methods both with and without L2 normalization.\n\nWeaknesses\n- The clarity could be improved in some places. 
One is that the discussion of LDA and EST is rather light relative to the important role they play in the experiments. Another is a discussion of how the multiclass bound relates to the binary bound presented in the main paper.\n- The discussion of the results could be improved. The important terms are explained at the end of Section 3.3 but it is not very intuitive how future work should go about trying to better control them. I also found Figure 2 somewhat confusing and it was not immediately clear to me how the figure relates to the experimental results.\n- The analysis is limited to Euclidean distance, which is a valuable first step, but coupling this with a discussion of e.g. cosine distance would be useful. This is relevant since standard classifiers use inner products to compute logits, which is more closely related to cosine distance than Euclidean.\n * As mentioned above, the analysis here is done for a prototype classifier based on Euclidean distance, but cosine distance has also been shown to be effective if distances can be scaled appropriately (Oreshkin et al. 2018). See also ll. 47-49, where the gap in performance is hypothesized to be due to the difference in loss function. Does the gap decrease when cosine distance-based prototype classifiers are used instead?\n* The generalization bound presented in Section 3.3 suggests that performance would be improved by controlling the variance of the feature norm. In previous work, hyperspherical prototype networks have been proposed (Mettes et al. 2019), where the norm of feature vectors is enforced to be 1. How does such an approach relate to the analysis in the paper?\n* How do the multiclass results compare to the binary results? Are there any qualitatively different takeaways? More discussion about the limitations of the analysis would be useful: e.g. choice of distance and how this could possibly be addressed.", " The authors derive a more relaxed generalization bound for prototypical networks. Using the generalization bound, the authors propose the use of feature transformation to improve the generalization of classifiers. The authors experiment with different normalization techniques on multiple few-shot learning datasets and find that L2-based normalization can produce similar recognition performance to the one based on linear discriminant regularization.\n — Strengths —\n\nThe authors derive a generalization bound that requires fewer assumptions about the data distribution family.\n\nExperimental results on multiple datasets are consistently supported by their developed theorem.\n\nThe authors can produce a reasonable level of recognition performance without any fine-tuning and therefore can be useful for devices where backpropagation is not supported.\n\n— Weaknesses —\n\nThe novelty of the proposed bound is incremental compared to the generalisation bound of Cao et al. It might be good for the authors to elaborate on the point-wise differences between the bounds.\n\nComparison studies have only been carried out against a few few-shot learning methods.\n\nMore complex feature transformation methods should also have been explored, such as exponential, non-linear kernels, power transforms, etc., to see whether they can also produce better generalization.\n In Figure 1, don’t the shapes of the clusters vary because Euclidean distance is used for ProtoNets while dot-product is used for linear layers? Is it because of the loss function? 
The authors can elaborate on this further.\n\nCan t-SNE feature visualization be shown to see how feature transformation affects discriminability?\n\nCan this feature transformation approach improve non-prototypical few-shot learning methods?\n\n--------- POST REBUTTAL ----------\n\nAfter rebuttal I have increased from borderline accept to weak accept.\n No societal impacts \n" ]
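The rebuttals above repeatedly quantify feature quality through the ratio of within-class to between-class variance. The numpy sketch below computes one common version of that ratio from scatter-matrix traces; the paper's exact normalization convention may differ, and the toy features and labels are placeholders.

```python
import numpy as np

def variance_ratio(z, y):
    """Tr(S_within) / Tr(S_between) from class-wise scatter matrices;
    smaller values indicate more separable features."""
    classes, counts = np.unique(y, return_counts=True)
    mu = z.mean(axis=0)
    sw, sb = 0.0, 0.0
    for c, n in zip(classes, counts):
        zc = z[y == c]
        mc = zc.mean(axis=0)
        sw += ((zc - mc) ** 2).sum()      # trace of within-class scatter
        sb += n * ((mc - mu) ** 2).sum()  # trace of between-class scatter
    return sw / sb

rng = np.random.default_rng(0)
z = rng.normal(size=(100, 64))
y = rng.integers(0, 5, size=100)
print(variance_ratio(z, y))                    # raw features
zn = z / np.linalg.norm(z, axis=1, keepdims=True)
print(variance_ratio(zn, y))                   # after L2 normalization
```

Comparing the two printed values mirrors the before/after-transformation comparisons reported in the rebuttal tables, though with random features the change from L2 normalization is expected to be small.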
[ -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "C0s_TDp4kcF", "oPCaJvKd31P", "oPCaJvKd31P", "eCnzsDeAIXo", "eCnzsDeAIXo", "KM7CD0chfA0", "nips_2022_U_hOegGGglw", "nips_2022_U_hOegGGglw", "nips_2022_U_hOegGGglw" ]
nips_2022__2-r5UurHp
ALIFE: Adaptive Logit Regularizer and Feature Replay for Incremental Semantic Segmentation
We address the problem of incremental semantic segmentation (ISS), recognizing novel object/stuff categories continually without forgetting previous ones that have been learned. The catastrophic forgetting problem is particularly severe in ISS, since pixel-level ground-truth labels are available only for the novel categories at training time. To address the problem, regularization-based methods exploit probability calibration techniques to learn semantic information from unlabeled pixels. While such techniques are effective, there is still a lack of theoretical understanding of them. Replay-based methods propose to memorize a small set of images for previous categories. They achieve state-of-the-art performance at the cost of a large memory footprint. We propose in this paper a novel ISS method, dubbed ALIFE, that provides a better compromise between accuracy and efficiency. To this end, we first present an in-depth analysis of the calibration techniques to better understand their effects on ISS. Based on this, we then introduce an adaptive logit regularizer (ALI) that enables our model to better learn new categories, while retaining knowledge for previous ones. We also present a feature replay scheme that memorizes features, instead of images directly, in order to reduce memory requirements significantly. Since the feature extractor is changed continually, memorized features should also be updated at every incremental stage. To handle this, we introduce category-specific rotation matrices updating the features for each category separately. We demonstrate the effectiveness of our approach with extensive experiments on standard ISS benchmarks, and show that our method achieves a better trade-off in terms of accuracy and efficiency.
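The category-specific rotation update described in this abstract can be sketched in a few lines. The snippet below is a hypothetical PyTorch illustration, not the authors' code: it parameterizes an orthogonal matrix via the Cayley transform and fits it with a plain cosine objective, whereas the paper's actual objective combines fidelity and regularization terms that are not reproduced here; all tensor shapes and names are assumptions.

```python
import torch

def cayley(A):
    """Orthogonal matrix from a free parameter via the Cayley transform:
    R = (I - S)(I + S)^{-1} with S = A - A^T skew-symmetric."""
    S = A - A.T
    I = torch.eye(A.shape[0], dtype=A.dtype)
    return (I - S) @ torch.linalg.inv(I + S)

def align_rotation(old_feats, target, steps=200, lr=0.1):
    """Fit one rotation so memorized features of a class track the drifted
    feature space; the cosine loss is a stand-in objective."""
    d = old_feats.shape[1]
    A = torch.zeros(d, d, requires_grad=True)
    opt = torch.optim.Adam([A], lr=lr)
    for _ in range(steps):
        rotated = old_feats @ cayley(A).T
        loss = 1 - torch.nn.functional.cosine_similarity(
            rotated.mean(0, keepdim=True), target.unsqueeze(0)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return cayley(A).detach()

# One rotation per previous category (toy 16-d features).
torch.manual_seed(0)
mem = torch.randn(20, 16)    # memorized features of one class
proto_now = torch.randn(16)  # class prototype under the new extractor
R = align_rotation(mem, proto_now)
print(torch.allclose(R @ R.T, torch.eye(16), atol=1e-4))  # R stays orthogonal
```

The Cayley parameterization keeps the learned matrix orthogonal throughout optimization, which matches the premise that memorized features drift roughly isometrically between stages.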
Accept
This work deals with incremental semantic segmentation. The authors propose a three-step incremental learning approach. They provide an in-depth analysis of the probability calibration methods widely used for ISS, and introduce an interesting proposal for incrementally adapting the memorized features using global alignment by rotations. They show strong results on standard benchmarks for incremental segmentation, including an ablation study. The rebuttal provides valuable insight, and the questions raised by the reviewers have been convincingly answered by the authors. On the whole, the reviewers converged positively; the novelty and interest of the proposal stand out clearly. The authors are encouraged to consider all comments for their final version.
train
[ "SgraBMEBFv5", "NHzjR12yKsU", "afrd-VkfSV6", "UmnO06xx1RM", "nvgNZC837qM", "BRDL97E8T_Y", "OUsvAO3OHI6", "9kOv11eMGnL", "Ewp_vXCATs8", "eSvaRlYFdAu", "RrYHO04HNPu", "qKcX1-FH6pO", "jqF4J6VXohk", "Dcf21AkZoK8", "ipm2baEcZnO", "7fqdGm1-8fQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for providing detailed answers to my questions and supplementary experiments. Here are few comments on your replies.\n\n[Incremental work] Please don't be offended by this statement. By incremental I only meant that your work, which is done seriously, builds on existing concepts (improvement of distillation based learning loss and memory replay). I agree that this is a subjective statement, and that it depends on what one expects from contributions published in high ranked conferences such as Neurips.\n\n[Extremely challenging scenarios] I wouldn't call scenarios with 11 stages as \"extremely challenging\" but a standard setting of incremental learning. It seems that your approach is efficient for a small number of stages, but degrades more than several other methods when the number of stages increases (this was already noticeable in Table 4 for Pascal-VOC, and with a smaller amplitude for ADE20k on your new experiments). The reason for this behavior should have been analyzed better. You conjecture that \"the proposed ALI along with KD prevents the feature extractor from being updated significantly, since both ALI and KD encourage a current model to imitate a previous one\": I am not convinced this is the case using your rotation based memory alignment, and this is why I suggested to try to give some insight of the learned features dynamics while learning. \n", " Thank you very much for the clarifications and for providing the additional experimental results regarding hyperparameters. I have no further questions and with the clarifications I do not have any major concerns anymore.", " The differences between ADE20K and PASCAL are indeed interesting, and I find the hypothesis put forward compelling. I encourage you to include some of this analysis in the main paper as it might point to interesting new directions on harder continual learning problems. The motivation for using hIoU (borrowed from ZSL) is also clear now.\n\n**To sum up**: The author rebuttal has addressed all questions and potential weaknesses that I identified in my original review, and I remain very positive about the quality and potential impact of this work. ", " Thank you for the clarifications and the new experiments. I missed the analysis in Figure C, although I find this new table to be much more convincing. Again, the new ablations on ADE20K show very marginal differences when compared to PASCAL (see my response to the next comment).", " We sincerely appreciate the reviewer for thoughtful and constructive comments that help us to improve the manuscript. Below we address your questions one-by-one.\n\n**[Detailed comparison with CKD + CCE]** \n- We will correct the typo “CE+ALI” into “CE+ALI+KD” in Fig. 1(a). We would like to note that the detailed comparison is also available in Fig. C of the supplementary material. This figure shows that “CE+ALI+KD” (denoted as “CE+ALI” due to the same typo) encourages classifier weights to be more balanced. To further compare our approach with variants of MiB, we provide the following results on 100-50(5) and 16-5(1) of ADE20K and PASCAL VOC, respectively. 
Numbers for MiB, which exploits CCE and CKD, are copied from the original paper.\n||100-50(5)|||||16-5(1)||||\n|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| \n|Methods|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU||mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|CE+KD|32.43|2.55|22.54|4.73||61.96|9.09|49.37|15.85|\n|CE+CKD|38.67|11.67|29.73|17.93||72.11|9.44|57.19|16.69|\n|CCE|10.19|4.57|8.33|6.31||63.25|38.53|57.37|47.89|\n|MiB(CCE+CKD)|38.21|11.12|29.24|17.23||75.50|49.40|69.00|59.72|\n|Ours-S1|**40.87**|**21.18**|**34.35**|**27.90**||**76.87**|**53.07**|**71.21**|**62.79**|\n- We can clearly see that training incremental semantic segmentation (ISS) models with CE+ALI+KD (denoted by Ours-S1) shows better results in terms of all metrics.\n\n**[Ablations]**\n- Following the suggestion, we conduct an ablation analysis on the objective function in Eq. (15). In particular, we show results on 100-50(5) and 16-5(1) of ADE20K and PASCAL VOC, respectively. Note that the results are obtained for *a single run* due to the limited time during the rebuttal.\n|||100-50(5)|||||16-5(1)||||\n|:-:|:-:|:-:|:-:|:-:|:-:|-|:-:|:-:|:-:|:-:|\n|$L_{FID}$|$L_{REG}$|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU||mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|||40.92|21.03|34.33|27.78||76.61|53.10|71.02|62.72|\n||o|41.04|21.23|34.48|27.98||76.75|54.17|71.38|63.51|\n|o||40.80|21.15|34.28|27.86||76.53|53.78|71.12|63.17|\n|o|o|**41.12**|**21.54**|**34.63**|**28.27**||**78.02**|**55.40**|**72.63**|**64.79**|\n- We can see from the first two rows that using the regularization term alone shows improvements in terms of all metrics. While the fidelity term in the third row produces better hIoU scores than the vanilla model (i.e., Ours-S1) in the first row, it shows slightly worse results in terms of mIoU$_{base}$ and mIoU. This could be due to the fact that maximizing the cosine similarity alone does not guarantee that the rotated prototypes are compatible with the current classifier (See L260-262 on page 7). The last row shows that both terms are complementary to each other. In our response [Q1], we describe the reason why the gains from memorizing features are lower on ADE20K than on PASCAL VOC.", " **[Q1]** - Why are the gains of memorizing images and features on ADE20K so insignificant?\n- It is obvious that ADE20K is more challenging than PASCAL VOC. In particular, ADE20K contains 35 stuff categories, while PASCAL VOC has a single background category. Since 1) the background category always belongs to base categories, and 2) most unlabeled pixels on PASCAL VOC belong to the background category during incremental stages, the unlabeled pixels are less likely to contain future categories even in the overlapped setting. By contrast, on ADE20K, the single background category is split into multiple stuff categories that could possibly belong to future categories. SSUL-M memorizes previously seen images, which contain unlabeled regions, together with ground-truth labels. Since pixels of stuff categories occupy about 60% of all the pixels on ADE20K (See Sec. 4 in [39]), the unlabeled regions could increase when the stuff categories belong to future ones. This might be problematic in that the number of labeled pixels decreases accordingly, lessening the effectiveness of the memorized images. In our case, the feature alignment scheme computes the correlation score between $f^{t-1}(\mathbf{p})$ and $m_c(s)$ as in Eq. (12). 
If the label of position $\mathbf{p}$ belongs to future categories, it might result in erroneous correlations, reducing the quality of feature alignment.\n\n**[Q2]** - Are the mIoU scores over all categories, denoted by mIoU in Tables 3 and 4, commonly used in the literature?\n- Thanks for your kind reminder. While all previous methods [2,3,9,25,28] report an mIoU score over all categories (denoted by mIoU in Tables 3 and 4), we have observed that the arithmetic mean over all categories does not reflect IoU scores of new categories well. Thus, we have proposed to report the harmonic mean, denoted by hIoU in Tables 3 and 4, as well as the mIoU score, which is commonly used in the literature. The motivation behind using the hIoU score is largely borrowed from the evaluation protocol of zero-shot learning methods (e.g., [A]). We will clarify this.\n\n- [A] Zero-Shot Learning - The Good, the Bad and the Ugly, CVPR 2017.\n\n**[Steps]**\n- Thanks for the valuable suggestion. We will revise Secs. 3.2 and 3.3 following your suggestion.\n\n**[Potential risks]**\n- Thanks for the constructive suggestion. We will add a discussion on the potential issues in the limitation part.", " We sincerely appreciate the reviewer’s time and effort to evaluate our work. Below we address your questions one-by-one.\n\n**[Incremental work]**\n- We respectfully disagree that our work is incremental. We would like to note that incremental semantic segmentation (ISS) methods [2,9,30] typically rely on distillation techniques to preserve the discriminative power for previous categories. In particular, they apply a probability calibration method to knowledge distillation as in Eq. (6). We have provided an in-depth analysis of the probability calibration method, and have pointed out that the calibration method has disadvantages in training ISS models. To address this, we have introduced a novel loss function, dubbed ALI, that enables the models to better learn new concepts, while preserving old ones. We do believe that our analysis together with ALI provides useful insights for other researchers who develop ISS methods. Also, although replay-based ISS methods [3,26] adopt a classical replay scheme, we argue that they have practical issues such as large memory requirements and data privacy concerns. Our feature replay scheme is the first attempt to address these issues, achieving a better trade-off between accuracy and memory efficiency on standard benchmarks. \n\n**[Results on extremely challenging scenarios]**\n- Following the suggestion, we further provide results on 11-10(10) and 100-50(10) of PASCAL VOC and ADE20K, respectively. Our results are obtained for *a single run* due to the limited time during the rebuttal, while numbers for other methods are copied from SSUL. \n||100-50(10)||||\n|-|:-:|:-:|:-:|:-:|\n|Methods|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|PLOP|39.11|7.81|28.75|13.02|\n|SSUL|39.94|17.40|32.48|24.24|\n|Ours-S1|**40.01**|**17.62**|**32.60**|**24.47**|\n- We can see that Ours-S1 still outperforms SSUL in terms of all metrics. However, the gains over SSUL are not as drastic as in other cases on ADE20K, where Ours-S1 even outperforms SSUL-M (See Table 3). 
This implies that the 100-50(10) scenario is still quite challenging for existing methods, including ours.\n||11-10(10)||||\n|-|:-:|:-:|:-:|:-:|\n|Methods|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|MiB|12.25|13.09|12.65|12.66|\n|PLOP|**44.03**|15.51|30.45|22.94|\n|Ours-S1|37.49|**27.47**|**32.72**|**31.68**|\n|Ours-S2|43.75|34.28|39.24|38.43|\n|SSUL|71.31|45.98|59.25|55.91|\n|SSUL-M|**74.02**|53.23|**64.12**|**61.93**|\n|RECALL|65.00|**53.70**|60.70|58.81|\n- We can see that Ours-S1 outperforms MiB and PLOP in terms of mIoU and hIoU. In particular, Ours-S1 shows a remarkable hIoU gain (8.74%) over PLOP. The performance gains from Ours-S2 over Ours-S1 demonstrate the effectiveness of memorizing features once again. We can also see that SSUL and RECALL still outperform other methods, including ours. However, as mentioned in L327-331 on page 9, we would like to note that our approach provides a better trade-off between efficiency and accuracy and is able to handle stuff categories. For example, RECALL is not applicable on ADE20K and SSUL-M shows worse results than Ours-S1 on ADE20K (See Table 3).", " **[Q1-a]** - Numbers for other methods in Tables 3 and 4\n- In order to measure standard deviations over different seeds, we have implemented MiB and PLOP by using the official code provided by the authors. The resultant values in Tables 3 and 4 are marked by $\dagger$. Numbers for other methods in Tables 3 and 4 are copied from SSUL. We will clarify this.\n\n**[Q1-b]** - Are the backbones identical?\n- Following common practice, we have adopted DeepLab-V3 with ResNet101 pre-trained for ImageNet classification. In particular, SDR and SSUL use ResNet101 provided by PyTorch, while MiB and PLOP exploit a variant of ResNet101 using in-place ABN [A]. Following SDR and SSUL, we have adopted ResNet101 provided by PyTorch. We will clarify this.\n\n- [A] In-Place Activated BatchNorm for Memory-Optimized Training of DNNs, CVPR 2018.\n\n**[Q2]** - How can it be justified that a group of rotations is a good set of transformations?\n- We agree in part with your point that features could change not only isometrically but also non-uniformly during incremental stages. This is because a feature extractor is updated continually. However, we conjecture that the proposed ALI along with KD prevents the feature extractor from being updated significantly, since both ALI and KD encourage a current model to imitate a previous one. This could allow a global distribution of features for each category to be updated in a simpler (e.g., isometric) way. To verify this, we have shown in Table E of the supplementary material that FAN [19] in image classification shows inferior performance. Note that FAN updates memorized features *non-uniformly*, since it consists of multi-layer perceptrons with ReLU activations. As FAN does not differentiate categories, the superior performance of ours might come from the category-specific alignment. To further verify this, we exploit multiple FANs to update features for each category separately as in ours. Note that using multiple FANs for individual categories is computationally expensive compared to ours. The results on 16-5(1) of PASCAL VOC are as follows:\n|Methods|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|-|:-:|:-:|:-:|:-:|\n|FAN|76.79|52.26|70.95|62.19|\n|Multiple FANs|77.83|52.98|71.91|63.04|\n|Ours-S2|78.04|55.34|72.63|64.76|\n- We can see that using multiple FANs shows better results than FAN, suggesting that the category-specific alignment works favorably. 
We can also see that Ours-S2 still achieves the best performance, indicating that the isometric alignment is simple yet effective.\n\n- Although there is still a concern that the global distribution of features would also change non-uniformly as the number of incremental stages increases, our experiments have shown that mIoU and hIoU gains from memorizing features are rather remarkable on 16-5(5) of PASCAL VOC (See Table 4). Moreover, we have observed that Ours-S2 also achieves remarkable IoU gains even on the 11-10(10) case (Please see our response [Results on extremely challenging scenarios]). These empirical studies suggest that our approach to using rotation matrices ensures a good set of transformations for feature alignment. Nonetheless, we believe that a feature alignment scheme should update features globally as well as non-uniformly, as you suggest. We leave it for future work and will add a discussion on this issue.", " We sincerely appreciate the reviewer for the favorable consideration of our work. Below we address your questions one-by-one.\n\n**[More memory requirements]**\n- We have mentioned in the limitation part of the supplementary material (L174-180 on page 8) that our approach to memorizing features requires more memory than non-replay-based methods [2,9,30]. However, we would like to emphasize that our method without memorizing features already outperforms the works of [2,9,30] on standard benchmarks by a large margin. Furthermore, since our approach achieves a better trade-off between accuracy and memory efficiency, we believe that it could be an effective and efficient alternative to the works of [3,26] that memorize raw images.\n\n**[Q1]** - Is it possible to increase the memory to achieve the best performance?\n- A straightforward way to achieve the best performance in incremental semantic segmentation (ISS) is to replay previously seen images along with ground-truth labels during incremental stages. However, it suffers from large memory requirements as well as data privacy issues. We would like to note that our feature replay scheme focuses on addressing these problems. Nonetheless, following the suggestion, we implement our method to exploit an experience replay as in SSUL-M that memorizes 100 previously seen images along with ground-truth labels, except that we do not use an off-the-shelf saliency detector. The results on 16-5(5) of PASCAL VOC are as follows:\n||16-5(5)||||\n|-|:-:|:-:|:-:|:-:|\n|Methods|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|Ours-S1-50|75.20|50.77|69.39|60.62|\n|Ours-S1-100|75.66|51.35|69.87|61.17|\n|SSUL-M|78.36|49.01|71.37|60.30|\n|RECALL|67.80|50.90|64.80|58.15|\n- Ours-S1-50 and -100 indicate that the number of memorized images is 50 and 100, respectively. To be specific, since memorized images can be directly used during Step 1, we have skipped Step 2 (i.e., fine-tuning classifiers). We can see that Ours-S1-50 already outperforms RECALL in terms of mIoU and hIoU. Note that RECALL uses 500 images for each previous category. We can also see that Ours-S1-50 and -100 show better hIoU scores than SSUL-M, although SSUL-M outperforms ours in terms of mIoU at the cost of using the off-the-shelf saliency detector.\n\n**[Q2]** - Is the classifier learnable in L137?\n- We have trained both a feature extractor and a classifier during Step 1 (See L140-142 on page 4). For memorizing features (L220-225 on page 6), we have fixed the feature extractor and the classifier. 
We have also fixed both the feature extractor and the classifier for training rotation matrices (See L263 on page 7). We will clarify this.\n\n**[Q3]** - A few questions in L147 on page 4\n- Thanks for your kind reminder. We will fix the typo $R_{new}$ into $R_{new}^{t}$. Note that $R_{new}^{t}$ indicates labeled regions as defined in Eq. (1). Please note also that training images in ISS are partially labeled to reduce the annotation cost. Specifically, only pixels whose labels belong to current categories are marked, while the remaining ones are unlabeled. In the overlapped setting, the unlabeled regions could contain either previous or future categories (See L283-284 on page 8).\n\n**[Q4]** - Are the rotation matrices fixed in L264 on page 7?\n- To fine-tune the classifier, we have fixed the feature extractor and the rotation matrices. We will clarify this.", " We sincerely appreciate the reviewer for insightful suggestions and constructive comments that help us to improve the manuscript. Below we address your questions one-by-one.\n\n**[CKD]**\n- The problem of CKD lies in how it transfers the knowledge from $p_{bg}^{t-1}$ into $p_{bg}^{t}$. In particular, CKD makes $p_{ckd}^{t}$ imitate $p_{bg}^{t-1}$, instead of directly distilling from $p_{bg}^{t-1}$ to $p_{bg}^{t}$. This is because CKD assumes that new categories of the current stage $t$ are marked as the background at the previous stage $t-1$. Thus, the gradients in Eq. (8) should be removed, while the desired gradient should enable transferring the knowledge from $p_{bg}^{t-1}$ into $p_{bg}^{t}$ directly. We achieve this goal as in Table 2. To be specific, the gradient in the second row $p_c^{t} - p_c^{t-1}$ is now applied for all previous categories including the background one, enabling distilling from $p_{bg}^{t-1}$ to $p_{bg}^{t}$ directly. We will clarify this.\n\n**[ALI]**\n- The vanilla CE term requires ground-truth labels for training. Thus, it is available only for labeled regions, since unlabeled ones literally have no annotations. Note that the CE term is essential for learning new categories, while ALI acts as a regularizer that prevents incremental semantic segmentation (ISS) models from overfitting to the new categories.\n\n- We would like to note that ALI always reduces logit values of new categories (See the first row of Table 2). Thus, if we apply ALI for all regions, the logit values of new categories decrease even for features within labeled regions. This hinders learning new categories severely, since labeled regions contain new categories only. Thus, we have applied ALI for unlabeled regions.\n\n- While distillation techniques [2,9,30] are typically applied for all regions, we would like to note that ALI already enables better transfer of the knowledge for previous categories on unlabeled regions than the vanilla KD. Thus, additionally applying the vanilla KD for the unlabeled regions could be redundant. On the other hand, the KD term can be complementary to ALI for labeled regions. Specifically, using the KD term for labeled regions enforces a current model to produce probabilities similar to a previous one, even though features within those regions do not belong to previous categories. This allows us to further prevent the current model from being changed significantly, preserving the discriminative power for previous categories. 
We will clarify this.\n\n- We have shown in Fig. 1(a) that Ours-S1 outperforms variants of CCE and CKD, and have also shown in Figure C of the supplementary material that Ours-S1 allows ISS models to obtain more balanced classifiers than CCE and CKD. In our response [Detailed comparison with CKD + CCE] to Reviewer PwVw, we provide more comparisons of Ours-S1 with CCE and CKD. Please refer to this. \n\n**[Experiment setting]**\n- Following the suggestion, we further present experimental results on the 10-1 setting (i.e., 11-10(10)). \n|Methods|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|-|:-:|:-:|:-:|:-:|\n|MiB|12.25|13.09|12.65|12.66|\n|PLOP|**44.03**|15.51|30.45|22.94|\n|Ours-S1|37.49|**27.47**|**32.72**|**31.68**|\n|Ours-S2|43.75|34.28|39.24|38.43|\n|SSUL|71.31|45.98|59.25|55.91|\n|SSUL-M|**74.02**|53.23|**64.12**|**61.93**|\n|RECALL|65.00|**53.70**|60.70|58.81|\n- We can see that Ours-S1 outperforms MiB and PLOP in terms of mIoU and hIoU. In particular, Ours-S1 shows a remarkable hIoU gain (8.74%) over PLOP. The performance gains from Ours-S2 over Ours-S1 demonstrate the effectiveness of memorizing features once again. We can also see that SSUL and RECALL still outperform other methods, including ours. However, as mentioned in L327-331 on page 9, we would like to note that our approach provides a better trade-off between accuracy and efficiency and is able to handle stuff categories. For example, RECALL is not applicable on ADE20K and SSUL-M shows worse results than Ours-S1 on ADE20K (See Table 3). In our response [Results on extremely challenging scenarios] to Reviewer eeB2, we also provide results of Ours-S1 on 100-50(10) of ADE20K. Please refer to this.", " **[Hyperparameter sensitivity]**\n- [On ADE20K] For Step 1, we have performed a grid search on the cross-validation set to find values of $\lambda_{ALI}$ and $\lambda_{KD}$. To be specific, the search has been performed only once in the first incremental stage (i.e., $t=1$), since the hyperparameter search on ADE20K is computationally expensive. In the subsequent stages, we have used the same values of hyperparameters as shown in Table B of the supplementary material. For Step 2, we have performed a grid search for every incremental stage. This is because we fine-tune a classifier *only for a single epoch*, which is computationally acceptable. We will clarify this.\n\n- [On PASCAL VOC] For Step 1, we have additionally searched the number of training epochs along with two hyperparameters (i.e., $\lambda_{ALI}$ and $\lambda_{KD}$) for every incremental stage. To be specific, we have performed a grid search with 48 combinations of three hyperparameters, that is, {1,2,3} x {1,3,5,10} x {1,2,3,4} for $\lambda_{ALI}$ and $\lambda_{KD}$, and the number of training epochs, respectively. To traverse these combinations, we require 120 epochs (i.e., 120=12 + 12x2 + 12x3 + 12x4) in total. Although this sounds computationally expensive, we would like to note that the search cost of our approach is actually lower than that of MiB, which tunes a single hyperparameter in the first incremental stage only. To find a value of $\lambda$ in Eq. (1) of the MiB paper, the authors define two hyperparameters A and B, where $\lambda = A \times 10^B$. They perform a grid search with 35 combinations of two hyperparameters, i.e., {1,2,3,4,5} x {-3,-2,-1,0,1,2,3} for A and B, respectively. Since MiB trains a model for 30 epochs, the search process requires 1,050 epochs in total. 
This requires 1.75 times more training epochs than our search cost even on 16-5(5) of PASCAL VOC. Note that we require 600 epochs in this case. For Step 2, we have performed a grid search to find the values of $\lambda_{ALI}$ and $\lambda_{KD}$. As aforementioned, the search cost for fine-tuning a classifier is relatively mild.\n\n- We also would like to note that the number of hyperparameters set by the grid search in our approach is comparable with that of other methods [9,30]. For example, PLOP has at least three hyperparameters (i.e., $S$ in Eq. (4), and two separate values of $\lambda$ in Eq. (9) for intermediate features and logits) [9], while SDR contains at least four hyperparameters (i.e., $\lambda_{pm}$, $\lambda_{cl}$, and $\lambda_{sp}$ in Eq. (1) and $\lambda_{kd}$ in Eq. (2)) [30].\n\n**[Hyperparameter ablation studies]**\n- Following your suggestion, we analyze the robustness of our approach to different values of hyperparameters.\n|Ours-S1|100-50(1)|||||100-50(5)||||\n|:-:|:-:|:-:|:-:|:-:|-|:-:|:-:|:-:|:-:|\n|($\lambda_{ALI},\lambda_{KD}$)|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU||mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|(1,1)|41.95|24.87|36.29|31.22||40.29|20.83|33.85|27.46|\n|(2,1)|42.16|23.60|36.01|30.25||40.87|21.18|34.35|27.90|\n|(3,1)|42.29|23.04|35.92|29.82||40.62|21.10|34.15|27.77|\n- From this table, we can see that our approach produces decent performance for different values of $\lambda_{ALI}$ on ADE20K.\n\n|Ours-S1|16-5(1)||||\n|:-:|:-:|:-:|:-:|:-:|\n|($\lambda_{ALI},\lambda_{KD}$)|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|(1,1)|76.36|48.87|69.81|59.60|\n|(1,3)|77.05|49.23|70.43|60.08|\n|(1,5)|76.94|49.12|70.32|59.96|\n|(2,1)|77.53|51.73|71.39|62.06|\n|(2,3)|76.87|53.07|71.21|62.79|\n|(2,5)|76.41|51.21|71.06|61.60|\n|(3,1)|77.35|51.84|71.28|62.08|\n|(3,3)|77.86|52.46|71.81|62.68|\n|(3,5)|77.48|51.09|71.20|61.58|\n- Here all models are trained for 4 epochs. We can see from this table that our approach still shows robustness to different values of $\lambda_{ALI}$ and $\lambda_{KD}$ on PASCAL VOC.\n\n|Ours-S2|16-5(1)||||\n|:-:|:-:|:-:|:-:|:-:|\n|($\lambda_{ALI},\lambda_{MEM}$)|mIoU$_{base}$|mIoU$_{new}$|mIoU|hIoU|\n|(1,1)|77.42|53.46|71.72|63.25|\n|(1,10)|78.04|55.34|72.63|64.76|\n|(3,1)|77.69|54.01|72.05|63.72|\n|(3,10)|78.00|54.25|72.34|63.99|\n- This table shows that our approach to memorizing features is robust to changes in hyperparameters. We will add a discussion about this issue.", " **[Q1]** - Eq. (9)\n- Since our goal is to find an objective function whose gradients are equivalent to the ones in Table 2, we have manually integrated the gradients to obtain such an objective (i.e., Eq. (9)). We will clarify this.\n\n**[Q2]** - Seed\n- We have generated a random seed for individual runs, and have reported the mean and standard deviation over three runs. We will clarify this.\n\n**[Q3]** - Did you determine all hyperparameters by cross-validation?\n- Thanks for your kind reminder. We have performed a grid search on the cross-validation set to find all hyperparameters, except $\tau$ and $\alpha$ in Eqs. (13) and (20), respectively. To be specific, the value of $\tau$ is set to 10 in order to ensure that correlation scores are sharp enough. For the focal loss, we have used the default hyperparameter without tuning it (i.e., $\alpha=2$). We will clarify this.\n\n**[Minor comments]**\n- Thanks for the constructive comments. 
We will follow all your suggestions to improve the manuscript.\n\n- [L44-45 on page 2] We will revise the claims about the lack of theoretical support.\n \n- [L146 on page 4] We will fix the typo $R_{new}$ to $R_{new}^{t}$.\n\n- [L162-166 on page 4] We will mention in Table 1 that the optimization process is carried out by stochastic gradient descent, i.e., SGD.\n\n- [The dimensions of features] Our approach memorizes $D$-dimensional convolutional features. We will mention the dimensions of features in L223-225 and Eq. (12) explicitly.\n\n- [Eq. (12) on page 6] We will differentiate the notation of the item index from that of the correlation score.\n\n- [Tables 3 and 4] We will divide existing methods into two groups: one is non-replay-based methods and the other is replay-based ones.", " This paper describes an approach to Incremental Semantic Segmentation (ISS) of images. The authors provide a detailed and critical analysis of the Calibrated Cross Entropy (CCE) and Calibrated Knowledge Distillation (CKD) losses from the MiB approach [2], and from this analysis derive a new calibrated loss (called ALI) that combines the stability benefits of CCE with the forward transfer benefits of CKD. The authors additionally propose a three-stage incremental learning approach that first fits to a new task using ALI and (uncalibrated) CE and KD losses, then fits a rotation matrix to align memorized features with the new feature space, and finally fine-tunes the new task classifier on the current task samples and the newly rotated and memorized features from previous tasks. An extensive experimental evaluation is given on PASCAL VOC and ADE20K.\n # Strengths\n\n+ **Clarity**: The paper is generally very well-written and the main contributions are clear and understandable. The paper was a pleasure to read and I am convinced it adds something interesting to the discussion on incremental learning. \n+ **Motivation and technical development**: Similarly, the main technical motivations and developments are very clearly described, and the gradient analyses of the CCE and CKD (as well as the proposed ALI) losses are quite compelling. The proposed approach is described in detail and I feel like the results presented would be easy to reproduce from the technical descriptions in the paper.\n+ **Experimental evaluation**: The experimental evaluation is also quite convincing -- although see *Weaknesses* and questions below for some issues. The proposed approach demonstrates clear advantages over the state-of-the-art, both in terms of accuracy and memory efficiency. \n\n# Weaknesses\n\n+ **Steps**: Though the paper is well-written, there are quite a few moving parts and I feel that the generic use of \"Step 1\" and \"Step 2\" does some disservice to putting all of the pieces together into a coherent, global picture of the proposed approach. The approach seems more clearly articulated in **three** stages (names improvised): New Task Acquisition (currently Stage 1), Drift Compensation (via rotation), and maybe Current Task Fine-tuning.\n \n+ **Detailed comparison with CKD + CCE**: This seems like a very important comparison as one of the main novelty claims is improvement over MiB which proposed these calibrated losses. The only direct comparison with these seems to be in Fig. 1, and thus gets kind of forgotten when arriving at the experimental evaluation. Moreover, the terminology is somewhat unclear: the proposed approach, I think, should be CE+KD+ALI (eq. 
10).\n\n+ **Ablations**: The feature alignment rotation is not ablated; moreover, it would be interesting to see the ablations on ADE20K in an attempt to understand why replay adds so little.\n + **Improvement of S2 over S1**: on ADE20K the improvement of adding replay (and rotation and fine-tuning) is very marginal compared to that seen on PASCAL. This is also evident in the poor improvement (or even deterioration as noted in the paper) of SSUL-M with respect to SSUL. Do you have an explanation for this? Why would replay perform so poorly in some cases compared to others?\n\n+ **mIoU scores over all**: I found the discussion of the drawbacks of the mIoU over *all* categories a bit hard to parse. Are the columns in Tables 3-4 the mIoU over all categories, as commonly reported in the literature? This should maybe be made more clear to facilitate comparison with existing work.\n The authors include a thoughtful discussion of technical limitations in the Supplementary Material. I think something could and should be said about potential risks of incremental learning as well -- for example, a discussion of the dangers of biases becoming *baked in* to incrementally-learned representations, and the difficulty of eliminating them. ", " The paper addresses the question of incrementally adding new categories in a semantic segmentation problem. This is a variant of the more standard class incremental learning problem where data are annotated with the new categories only in each incremental session and may contain unannotated regions from seen or unseen categories.\n\nThe proposed approach makes two contributions: definition of a new term in the loss for regularization that takes into consideration the various types of data, and alignment of the memorized features from previous sessions to the new features using a category-specific rotation estimated from a criterion combining discriminating capacity and similarity with previous features. The approach is evaluated on two standard benchmarks in this field (PASCAL VOC and ADE20K).\n Strengths\n\n- Good and thorough discussion of loss terms in order to handle the various types of data and annotation state. \n- Nice proposition to incrementally adapt memorized features using global alignment by rotations.\nThe ablation study, experiments and result analysis justify the approach on standard benchmarks.\n\nWeaknesses\n- Incremental work. Evolution of classical concepts of incremental learning: knowledge distillation and data replay.\n- The maximal number of incremental stages is limited to 5: other papers (SSUL [3]) use more stages (up to 11). Since the number of incremental steps is a critical parameter of incremental learning, in general, it would be fair to assess the approach on similar grounds.\n - It was not clear where the performances of competing approaches (SSUL, PLOP, etc.) come from. Did you re-implement the methods or take the numbers from the papers? Are the backbones identical?\n\n- How can it be justified that the group of rotations, which are global isometries, is a good set of transformations ensuring feature alignment? How do the features behave when the number of incremental stages is large (I suspect that new features not only isometrically shift but may also be rearranged in a non-uniform way)? 
Maybe a T-SNE plot would give some idea about what is happening during sessions.\n Not applicable", " The catastrophic forgetting problem is particularly severe in ISS, since pixel-level ground-truth labels are available only for the novel categories at training time.\nTo address the problem, regularization-based methods exploit probability calibration techniques to learn semantic information from unlabeled pixels. While such techniques are effective, there is still a lack of theoretical support. Replay-based methods propose to memorize a small set of images for previous categories. They achieve state-of-the-art performance at the cost of a large memory footprint. The authors propose in this paper a novel ISS method, dubbed ALIFE, that provides a better compromise between accuracy and efficiency. To this end, it first shows an in-depth analysis of the calibration techniques to better understand their effects on ISS. Based on this, it then introduces an adaptive logit regularizer (ALI) that enables the model to better learn new categories, while retaining knowledge for previous ones. It also presents a feature replay scheme that memorizes features, instead of images directly, in order to reduce memory requirements significantly. Since a feature extractor is changed continually, memorized features should also be updated at every incremental stage. To handle this, it introduces category-specific rotation matrices that update the features for each category separately. The paper demonstrates the effectiveness of the approach with extensive experiments on standard ISS benchmarks. Strengths of this paper are as follows:\n1. The paper shows an in-depth analysis of probability calibration methods widely used for ISS, and introduces ALI that enables the model to better learn new categories, while maintaining the knowledge for previous categories.\n2. The paper presents a novel replay strategy using category-specific rotation matrices, which helps to alleviate catastrophic forgetting for previous categories with much lower memory requirements than replaying raw images.\n3. Extensive experiments demonstrate the effectiveness of the approach to using ALI and replaying features with rotation matrices. It sets a new state of the art on standard ISS benchmarks.\n\nWeaknesses of this paper are as follows:\n1. The proposed method requires more memory than other similar methods [1, 4, 9] that do not adopt experience replay.\n2. The proposed method could not achieve the best performance on the benchmark in Table 4, though the authors claim that the proposed method needs far less memory. But is it possible to increase the memory to achieve the best performance?\n3. In line 137, the classifier weight is learnable, right? Or is it fixed?\n4. In line 147, what do the labeled regions R_new mean? Are some regions of the image labelled and other regions not? Why are there labelled and unlabelled regions? Is it because the labelled regions indicate previous categories?\n5. In line 264, in the fine-tuning step, are the rotation matrices fixed or still learnable?\n\n Questions are put in Strengths and Weaknesses. Yes", " The paper proposes a method for class-incremental learning in semantic segmentation. First, the gradients induced by commonly used loss functions are analyzed theoretically, and several shortcomings are pointed out. As a way to mitigate these problems, a new knowledge distillation loss dubbed adaptive logit regularizer (ALI) and a feature replay strategy are proposed. 
Experimental evaluation shows the superiority of ALI over previous (non-replay-based) approaches in several experimental settings and competitive performance compared to other replay-based approaches, while having a much lower memory footprint. Strengths:\n\n1. Strong motivation of the method in the introduction and through the theoretical analysis of previously used loss functions. \n\n2. Well-written related work section, which is informative in putting the paper into the larger context of incremental semantic segmentation. The novelty of the method is clearly stated and discussed w.r.t. other works.\n\n3. Section 3.2 provides an overall very clear and informative mathematical description and analysis of de-facto standard loss functions in incremental semantic segmentation.\n\n4. The experimental results are strong compared to previous methods and appear to be SOTA for non-replay-based methods.\n\nWeaknesses:\n\n1. CKD analysis: In the analysis of the CKD loss, it is not always clear to me what the desired gradient behavior is vs. the observed behavior. Specifically, it would help if, on page 5, ll. 180-196, this were clearly stated. Afterwards, it is also not completely clear to me how ALI solves these issues. This could be added on page 5, ll. 197-208 with reference to the desired behavior in the previously mentioned text part.\n\n2. Loss application of ALI: It is not quite clear to me why you use the vanilla CE and KD losses in Eq. 10 and also why you apply them specifically in these regions. I think this should be motivated in Section 3.2 and supported by an ablation study later on. In particular, an ablation showing the experimental comparison with the commonly used CKD and CCE losses would help to better understand the benefits of ALI. Otherwise, other factors such as better-chosen hyperparameters could also be the reason for the observed improvements through ALI.\n\n3. Hyperparameter sensitivity. The method seems to contain a lot of hyperparameters, which even have to be determined for every incremental stage separately (cf. Table B in the appendix). I think this might limit the usability of the method, as it is computationally very expensive to run a grid search for each new incremental step. Maybe the authors could elaborate on this point?\n\n4. Experimental setting: The approaches SSUL and RECALL evaluate on Pascal-VOC on the 10-1 setting and show particularly strong results in this setting. I think it would be better to evaluate this replay-based approach also in the most challenging settings introduced by other replay-based methods in incremental semantic segmentation.\n \n5. Hyperparameter ablation studies: The influence of the different hyperparameters of the method is not shown. Table B of the appendix shows that different hyperparameters are chosen for each setting, so it would be good to know the influence of these hyperparameters on the final performance.\n\nOverall, I really like the approach taken in this work. I also feel the contributions contain a lot of value to the incremental and continual learning community. I would like to raise my rating if my concerns are addressed satisfactorily in the rebuttal.\n\nMinor comments and suggestions:\n\n6. Page 2, ll. 44-45: I think the CCE and CKD losses are indeed theoretically motivated. Rather than saying there is a lack of theoretical support, I would rephrase to say that the conducted theoretical analysis points out shortcomings which are addressed in this work.\n\n7. Page 4, ll. 
146: There might be a superscript t missing in the symbol R_new.\n \n8. Page 4, ll. 162-166: Although it might appear trivial, it would be helpful to remind the reader that optimization is carried out in the negative direction of the gradients shown in Table 1.\n \n9. It would help a lot to state the dimensions of the features used for replay, mentioned on page 6, ll. 223-225 and in Eq. 12.\n \n10. Eq. 12: The correlation score and the item index s have the same symbol, which is a bit confusing. Also, is the dot operator a scalar product between two feature vectors? Maybe this could be clarified.\n\n11. I think the bold-face highlighting in Tables 3 and 4 is a bit misleading. I would propose to compare replay-based methods and non-replay-based methods. In this sense, you would still have SOTA results for non-replay-based methods and competitive results among replay-based methods. But it seems unfair to compare Ours-S2 with, e.g., MiB or PLOP.\n 1. Eq. 9: I am not quite sure how this equation is derived from Table 2. Could the authors elaborate?\n2. Have you varied the random seed between the three experiments that you average over?\n3. Did you determine all method hyperparameters by cross-validation, in particular during the grid search mentioned in Appendix B?\n Limitations have been discussed to some degree in the supplementary. However, I feel the hyperparameter sensitivity might be an additional limitation." ]
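The search-budget arithmetic in the ALIFE author response above (120 epochs for the proposed grid versus 1,050 for MiB's) can be checked with a short sketch. This is a minimal illustration under our own naming, not the authors' tooling; the `hiou` helper additionally assumes that the reported hIoU is the harmonic mean of base and new mIoU, an assumption that matches the table values (e.g., MiB on 10-1: 12.25 and 13.09 give 12.66).

```python
from itertools import product

# Grid from the response: lambda_ALI in {1,2,3}, lambda_KD in {1,3,5,10},
# and the number of training epochs in {1,2,3,4}. Each configuration costs
# as many epochs as it trains for.
grid = product([1, 2, 3], [1, 3, 5, 10], [1, 2, 3, 4])
total_epochs = sum(epochs for _, _, epochs in grid)
print(total_epochs)  # 120 = 12*1 + 12*2 + 12*3 + 12*4

# MiB's search: 35 (A, B) combinations, each trained for 30 epochs; on the
# 16-5(5) setting the proposed search runs 5 times (600 epochs in total).
print(35 * 30)              # 1050
print(35 * 30 / (5 * 120))  # 1.75, the ratio quoted in the response

def hiou(miou_base, miou_new):
    # Harmonic mean of base and new mIoU (assumed definition of hIoU).
    return 2 * miou_base * miou_new / (miou_base + miou_new)

print(round(hiou(12.25, 13.09), 2))  # 12.66, matching MiB's row in the table
```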
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "9kOv11eMGnL", "qKcX1-FH6pO", "BRDL97E8T_Y", "nvgNZC837qM", "jqF4J6VXohk", "jqF4J6VXohk", "Dcf21AkZoK8", "Dcf21AkZoK8", "ipm2baEcZnO", "7fqdGm1-8fQ", "7fqdGm1-8fQ", "7fqdGm1-8fQ", "nips_2022__2-r5UurHp", "nips_2022__2-r5UurHp", "nips_2022__2-r5UurHp", "nips_2022__2-r5UurHp" ]
nips_2022_RQ385yD9dqR
When are Local Queries Useful for Robust Learning?
Distributional assumptions have been shown to be necessary for the robust learnability of concept classes when considering the exact-in-the-ball robust risk and access to random examples by Gourdeau et al. (2019). In this paper, we study learning models where the learner is given more power through the use of local queries, and give the first distribution-free algorithms that perform robust empirical risk minimization (ERM) for this notion of robustness. The first learning model we consider uses local membership queries (LMQ), where the learner can query the label of points near the training sample. We show that, under the uniform distribution, LMQs do not increase the robustness threshold of conjunctions and any superclass, e.g., decision lists and halfspaces. Faced with this negative result, we introduce the local equivalence query (LEQ) oracle, which returns whether the hypothesis and target concept agree in the perturbation region around a point in the training sample, as well as a counterexample if it exists. We show a separation result: on one hand, if the query radius $\lambda$ is strictly smaller than the adversary's perturbation budget $\rho$, then distribution-free robust learning is impossible for a wide variety of concept classes; on the other hand, the setting $\lambda=\rho$ allows us to develop robust ERM algorithms. We then bound the query complexity of these algorithms based on online learning guarantees and further improve these bounds for the special case of conjunctions. We finish by giving robust learning algorithms for halfspaces with margins on both $\{0,1\}^n$ and $\mathbb{R}^n$.
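To make the query models in the abstract concrete, here is a brute-force sketch on the boolean hypercube with Hamming-ball perturbations. It is our illustrative rendering of the query semantics only, not the paper's algorithms; the `robust_erm` loop at the end assumes a hypothetical online learner exposing `predict`/`update` methods, in the spirit of the online-learning-based query bounds the paper derives.

```python
from itertools import combinations

def hamming_ball(x, radius):
    """Yield every point within Hamming distance `radius` of x (a tuple of bits)."""
    for r in range(radius + 1):
        for positions in combinations(range(len(x)), r):
            z = list(x)
            for i in positions:
                z[i] ^= 1
            yield tuple(z)

def lmq(c, z):
    # Local membership query: the target's label at a point z near a sample point.
    return c(z)

def leq(h, c, x, rho):
    # Local equivalence query with radius rho: do h and the target c agree on
    # the whole ball around x? Returns (True, None) or (False, counterexample).
    for z in hamming_ball(x, rho):
        if h(z) != c(z):
            return False, z
    return True, None

def robust_erm(learner, sample, c, rho):
    # Counterexample-driven loop: query LEQ at each training point and feed any
    # counterexample to a mistake-bounded online learner until no point fails,
    # so the loop makes at most (mistake bound) many updates.
    done = False
    while not done:
        done = True
        for x in sample:
            agrees, z = leq(learner.predict, c, x, rho)
            if not agrees:
                learner.update(z, c(z))
                done = False
    return learner.predict
```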
Accept
Solid contribution to NTK theory
train
[ "L4Sjj_fmxvK", "L3LqbiRFPa", "iJSgUjpjZKl", "nVDv8EwlY6W", "CCyaAjMVS-Y", "DOe0k-pu4wq", "9E-LIXoXfUT", "gmuRqD-RbE2", "ADwOdiJwsb", "CIXvv6V2wlJ", "OMON6HmkTei" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We reply to the remaining concerns below. Please let us know if you have any questions. \n\n**For the LEQ**, if one believes that the exact-in-the-ball notion of robust risk is worth investigating, then our lower bound for the LMQ model and our impossibility result for $\\lambda<\\rho$-LEQ give ample justification for looking at the $\\rho$-LEQ to delimitate when distribution-free robust learning is possible in this setting. Using the perfect adversary oracle (PAO) of Montasser et al (2021) could give false counterexamples, for example a point $z$ where $h(z)\\neq c(x)$, but where $h(z)=c(z)$ (i.e. the hypothesis is correct on $z$ w.r.t. the ground truth). The LEQ setting is of course idealized, and it would be worth looking at imperfect oracles in future work. We would also like to note that NeurIPS has a longstanding learning theory branch, which routinely features papers that don’t have immediate applications, but provide substantial insights to various learning problems.\n\n**The EQ+MQ model:** this learning model has been vastly studied since the seminal work of Angluin (1987), cited by over 2600, as outlined in the related work. More recently in the machine learning community, EQs (and MQs) were used for recurrent neural networks in [WGY18] and [WGY19]. [O+20] later followed up on their work by showing how regression-based weighted finite automata extraction could be used to simulate EQs, both theoretically and experimentally. [SDC19] also used the MQ+EQ model to verify binarized neural networks, where the EQs were simulated by a SAT solver. [CM19] also uses MQ+EQ for interpretable models based on linear temporal logic. We are happy to include these references in the related work section.\n \n**[Dia20]**: since we are using different notions of robustness, the hypotheses returned by both algorithms may differ (recall that our algorithm is given counterexamples that the one from [Dia20] will not have access to). Guarantees derived for the constant-in-the-ball notion of robustness will imply that the hypothesis returned has a certain *stability* (perhaps at the cost of accuracy in the agnostic case), because we are trying to limit the probability of a label change in the perturbation region. On the other hand, guarantees derived for the exact-in-the-ball notion of robustness usually give stronger accuracy, as we want to be correct wrt the ground truth in the perturbation region. Apologies for the ambiguous wording.\n \n**Active learning**: in active learning, an algorithm can interact with a (human) user or information source (which can be represented as an oracle). This is usually done by querying the label of a data point. By having a human user in the loop, we can ensure that the counterexamples given to the learner are indeed counterexamples w.r.t. the exact-in-the-ball robust risk (which is necessary for the guarantees we have proven to hold). Here the human is required as we cannot assume that the label is constant in the perturbation region, contrary to existing methods to find adversarial examples in the literature. We will include a short discussion on active learning and how it could relate to the LEQ in practice. \n \nThank you for pointing out your concerns and engaging with us during the discussion period! It has been beneficial to clarify and justify our work. We are looking forward to your response.\n\n**References**\n \n[WGY18] Gail Weiss, Yoav Goldberg, and Eran Yahav. 
*Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples*, ICML 2018.\n\n[WGY19] Gail Weiss, Yoav Goldberg, and Eran Yahav. *Learning Deterministic Weighted Automata with Queries and Counterexamples*, NeurIPS 2019.\n\n[O+20] Takamasa Okudono, Masaki Waga, Taro Sekiyama, and Ichiro Hasuo. *Weighted Automata Extraction from Recurrent Neural Networks via Regression on State Spaces*, AAAI 2020. \n\n[SDC19] Andy Shih, Adnan Darwiche, and Arthur Choi. *Verifying binarized neural networks by Angluin-style learning*, International Conference on Theory and Applications of Satisfiability Testing 2019.\n\n[CM19] Alberto Camacho, Sheila A. McIlraith. *Learning Interpretable Models Expressed in Linear Temporal Logic*, International Conference on Automated Planning and Scheduling (ICAPS 2019).\n", " Thank you for taking the time to read our rebuttal, and we hope that our response below addresses your remaining concerns. We have split our response into two parts due to character constraints. \n \n**The constant-in-the-ball** results cannot, in general, be applied to the exact-in-the-ball setting. \n- In terms of techniques, the guarantees obtained by Montasser et al (2021) (we are referring to this version for theorem numbers http://proceedings.mlr.press/v134/montasser21a/montasser21a.pdf) are closest in essence to our work. These guarantees, which notably give a query upper bound as a function of the VC and dual VC dimensions of a concept class (Theorem 2), cannot be applied to the exact-in-the-ball notion of robustness. This is because their argument (which is based on earlier results, Montasser et al (2019)) relies on inflating the training sample $S$ as follows: for $(x,y)\\in S$ and perturbation function $\\mathcal{U}$, add all the points in $\\mathcal{U}(x)$ to $S$ *with label $y$* (this is not what we want for our notion of robustness: we would want the ground truth). Of course, this set can now be infinite. The authors discretize the set to a new set of cardinality $\\tilde{O}(VC(H)VC^*(H))$, where $VC^*(H)$ is the dual VC dimension. Their argument is based on assuming that the ground truth is constant in the perturbation region, as they are looking at the dual set of function $\\mathcal{G}$, defined as $g(x,y) (h) = 1[h(x) \\neq y] $ (see p.7 of the following version: http://proceedings.mlr.press/v99/montasser19a/montasser19a.pdf). This is not possible to do with the exact-in-the-ball notion of robustness. This reasoning is based on the constant-in-the-ball realizable setting, and the agnostic setting is based on a reduction to the realizable setting. As previously mentioned, the constant-in-the-ball agnostic setting could be the exact-in-the-ball realizable, so this reduction does not work. Of course, when the notions of robustness coincide (there is a sufficiently large margin, for example), we could use their reasoning, but it is not true in general, for instance when there is considerable probability mass near the boundary. Finally, for many finite cases, our bounds are significantly smaller than those of Theorem 5 in Montasser 2021 ($VC(H)Lit(C)$ vs $2^{VC(H)^2{VC^*}^2(H)}Lit(C)$), simply due to the nature of the problem studied. Both their and our work use online learning guarantees to derive the $Lit(C)$ part of the upper bound. \n- Another important part of the gap is that, for the same learning problem, the two notions of robustness can imply different solutions (robust risk minimizers), and thus have different implications. 
For a more applied setting, if we consider the example we gave to reviewer n77F, when we have medical data, changes to a small number of observations may change the diagnosis; here, we would _not_ want the classifier returned by a learning algorithm to be constant-in-the-ball. For a more theoretical example, if we consider the class of parities (functions that are expressed as $f(x)=\sum_{i\in S}x_i \mod 2$, for some subset $S\subseteq[n]$), then all points are on the decision boundary. Under the uniform distribution, the exact-in-the-ball risk minimizer is the target $f$, but for the constant-in-the-ball, it is the constant function 0 (or 1). This is also an example of a problem that is realizable for the exact-in-the-ball robust risk and agnostic for the constant-in-the-ball one. \n\nPlease let us know if it would be useful to include a discussion summarizing the points above. ", " Thank you for the reply. I guess my concerns were not addressed.\n\n- On the LEQ: as far as I can tell, EQ is not broadly studied in the literature. So it is still unclear to me when the LEQ will be useful. (I know this is a theory paper, but NeurIPS is a machine learning conference rather than a pure math venue; people care about when and how one can apply the theory.)\n\n\n- \"While this may be harder to implement in practice, it is in line with active learning approaches.\" Can you explain more about the connection to active learning?\n\n- [Dia20]: \"While both papers use the same algorithm, the guarantees are obtained in a different way, and have a different meaning.\" Can you explain more about \"different meaning\"?\n\n- constant-in-the-ball vs exact-in-the-ball: this is the main claim of the paper, but to be very honest, even after reading the rebuttal and the submitted manuscript, it is unclear to me why there is a significant gap between the two settings and why prior works on one cannot be adapted/generalized to the other.\n", " I decided to increase the score and decrease the confidence. It might reflect my opinion better.", " Thank you for your response. I think that we \"agree\" on the fact that my review is somewhat accurate. While I might have sounded a bit too critical, I mainly tried to write the review so that no confusion can occur. Overall I quite liked the paper and the reason for only borderline accept is my lack of expertise in this topic; I wanted to not outweigh the other (expert) reviews.", " Dear reviewer, \n\nWe thank you very much for your comments and questions, which are addressed below.\n\n**Local equivalence queries**: \n\n- Regarding applications, if one considers the exact-in-the-ball notion of robustness, the existence of adversarial examples (and thus an entity that can find them) can be seen as a justification for an LEQ oracle. Indeed, if an algorithm can find a point in the perturbation region where the ground truth and hypothesis disagree (with a human in the loop to confirm this), then this can be used as a counterexample to the learning algorithm. While this may be harder to implement in practice, it is in line with active learning approaches. We will include a discussion on this in the revised version.\n- Regardless of practicalities, oracles such as the LEQ have been studied in theory and have resulted in valuable insights. For example, the EQ model (where the queries are made on the whole input space) has been studied in learning theory and automata theory. Our model is a restricted version of this. 
\n- Since our work wishes to answer, among other questions, when robust learning is possible, our negative results for LMQ and $\\lambda$-LEQ where $\\lambda<\\rho$ implicitly justify the use of the $\\rho$-LEQ oracle.\n\n**Diakonikolas et al 2020**: \n\nThis paper considers the constant-in-the-ball notion of robustness (last paragraph on p.1). While both papers use the same algorithm, the guarantees are obtained in a different way, and have a different meaning (as the robust risk minimizer won’t necessarily be the same for the same learning problem with two different notions of robustness). [DKM20] also study the agnostic setting, while we study the realizable one (though both use a different notion of risk, so a problem that could be agnostic w.r.t. their notion of risk could be realizable for us). We will make sure to add a reference and a brief discussion.\n\n**Robust ERM**:\n\nWhile showing that RERM results in robust learners is not fundamentally different from similar results in the constant-in-the-ball (or even standard PAC) setting, the main difficulty is either designing RERM algorithms for exact-in-the ball risk (or showing that existing algorithms already are). For example, the notion of Robust VC dimension is also different in the two settings. Theorems 10 and 11 establish the existence of robust learning algorithms. We emphasize that designing RERM algorithms for this notion of risk (even without computational constraints) is hard/impossible without access to some oracle like the LEQ one. For the constant-in-the-ball notion of robustness, once we have a sufficiently large sample to guarantee robust generalization, the issue with computing the robust risk minimizer is mainly a _computational_ one, as we wish to know when the hypothesis changes label in the perturbation region. Montasser et al. (2021) introduced the Perfect Adversary Oracle for the cases where this computation is inefficient or impossible. On the other hand, the LEQ tackles both the _information-theoretic_ and computational limitations of learning with only access to a random sample.\nHighlight of the core techniques: page limit allowing, we would like to include some proofs from the appendix (or sketches/more thorough explanations). We would like to point towards the response to reviewer n77F regarding the originality/technicality of the proofs.\n\nThank you for your time, and we are looking forward to your response.\n", " Dear reviewer, \n\nWe thank you very much for your comments. Regarding your questions: \n\n$\\log(1/\\epsilon)$ term: You are absolutely right, the $\\log(1/\\epsilon)$ term was omitted by mistake in the statement of Lemma 5. It is not necessary for Lemma 3 however. Thank you for pointing this out.\n\nStatements in Section 3.2: We agree and we will rephrase the statements in line with your comments.\n\n[BJC21] reference: thank you for pointing this out to us, we will make sure to include it in the revision. 
Lines 329-338 include a short discussion on when our results differ from when considering the constant-in-the-ball loss, which covers the [BJC21] paper.\n\nFinally, thank you for pointing out the typos!\n", " Dear reviewer, \n\nWe thank you very much for your comments and questions, which are addressed below.\n\n**Originality**: \n\nFor the proofs that use standard methods, we believe that the conceptual contribution (i.e., robust ERM guarantees in the distribution-free setting) is important, as it contrasts with the natural setting of having only random examples, where this is impossible. Moreover, Theorem 7 uses online learning guarantees but also a bound on the size of the Littlestone tree, giving an upper bound with a dependence on the adversarial budget $\\rho$, which doesn’t appear in related work for the constant-in-the-ball notion of robustness (see lines 151-153 in the related work section). \n\nThere are more technical proofs in the paper, which had to be relegated to the appendix because of space constraints, particularly Theorems 11 and 12. Theorem 11 transforms the robust risk function into a first-order logic formula and uses the quantifier-elimination procedure of Renegar (1992) to bound the robust VC dimension of halfspaces with margin. Theorem 12 provides an LMQ lower bound (which interestingly also holds for full membership queries, where queries can be made over the whole input space). We would be happy to include either or both of the proofs in the main body of the paper, page limit permitting, and at the very least include a summary of the technical contributions in the main body of the paper. \n\n**Cons**: \n\nOur results in the rest of Section 3 derive bounds using an LEQ oracle with $\\lambda = \\rho$. In order to justify that, we do feel the need to include Theorem 2 (although the proof is very simple) for conceptual completeness – it is stated as a contribution for that reason.\n\nLemmas 3 & 5: We mention in the submission that the arguments are standard. However, the versions of these for the exact-in-the-ball setting don’t appear elsewhere and need to be included for completeness. Regarding Theorem 9 – it is not true that it is a direct corollary of Lemma 3; the analysis of consistency is different and is better than can be directly derived from the more general result in Lemma 8.\n\nRegarding discussion on Robust VC dimension: this does appear implicitly in Theorem 11, where the RVC of linear thresholds is shown to be $O(n^3)$. We agree that more discussion would be useful, and space permitting we will add this. For other classes we considered in the paper, this discussion was not needed, as they are finite and Lemma 3 already applies.\n\n**Questions**:\n\nThe distribution-free setting refers to settings where learning guarantees hold regardless of the distribution that generated the data (as long as the examples are sampled iid). This is in fact the setting for PAC Learning (Definition 14 in the appendix illustrates this). Related work has shown that the distribution-free setting is impossible for robust learning when having access to only random samples. We show that the impossibility result carries over (though to more restricted concept classes) even when giving a lot more power to the learner. 
\n\nExact-in-the-ball robustness: robust (exact-in-the-ball) learning is not the same as standard learning, because the notion of robustness (l.165) contains an existential quantifier: we want to be correct in the perturbation region (which may contain points that are less likely to be seen in a random sample), not just on the random sample. It is a much stronger requirement than that of standard learning. \n\nThe exact-in-the-ball notion of robustness is much less studied and understood than the constant-in-the-ball one, which is one of the reasons why we focus on it. We also highlight that, when considering problems other than image classification with small perturbations, there are many examples where the notion that we are using is more justified. This usually happens when there is considerable probability mass near the decision boundary (and thus, a label change in the hypothesis could represent a ground truth label change!). For example, when we have medical data, changes to a small number of observations may change the diagnosis; here, we would _not_ want the classifier to be constant-in-the-ball.\n\nHamming distance: the Hamming distance is the only meaningful notion of distance in the boolean hypercube, but, as you mentioned, other input spaces like a graph or $\mathbb{R}^n$ have various meaningful notions of distance. We will remove the footnote. \n\nWe hope we have answered your concerns and look forward to reading your response. \n", " The paper considers (adversarial) robust learning in the PAC learning framework. In the \"standard\" notion of adversarial robustness, the hypothesis $h$ is considered to be robust at point $x$ with radius $\epsilon$ if the target concept is $c$ and it holds that $||x'-x|| \leq \epsilon \implies h(x')= c(x)$. Here, in contrast, the hypothesis needs to be correct in the epsilon ball to be considered robust. That is, it needs to satisfy $||x'-x|| \leq \epsilon \implies h(x')= c(x')$. To achieve this, the learner is allowed to use \"Local membership queries\" in at most $\lambda$ distance from the observed point. The paper then shows that when $\lambda < \epsilon$, there is a distribution not learnable by any learner (thm 2). Then they provide several robust learnability results (thms 7,9,10,11) on classes such as linear threshold functions and conjunctions. \n\n# originality, clarity, quality, significance\n- Originality - The proof techniques seem to be standard (often the proofs are trivial modifications of analogous proofs in the standard setting) but the considered problems are original, so I think the originality is ok. However, if there is some original technique that I have missed, I suggest the authors emphasize it.\n- Clarity - Clearly written. Only that lots of abbreviations are used, which slows down the reading, but their presence is understandable.\n- Quality - I haven't found a problem.\n- Significance - I don't consider this work to be \"too\" significant, especially because the results are often unsurprising and easy to derive, but I think it has its place.\n\n\n# Pros\n- Easy to read\n- Extensive amount of previous work in the appendix, although something is superfluous.\n- Quite a lot of results.\n\n# Cons\n- The results are sometimes too simple, such as thm 2, which is claimed as one of the main contributions but essentially says that when we cannot observe c(x) during training, then we don't know it. 
For lemma 3, I don't see a difference between the proof and a proof of the lemma in the standard (non-robust) setting (and then thm 9 is a corollary of lemma 3). Similarly, lemma 5 seems to only replace VC dimension with robust VC dimension, but there is almost no discussion on RVC (e.g., what is the RVC of standard function classes) and so on.\n - Please define the distribution-free setting. Does it mean that there are no assumptions on the distribution? The term suggests that there is no underlying distribution, but then the notion of random sample would be meaningless. Also I think it would be good if all definitions were in the appendix. Now the majority is there, but consistency is in the main text. \n\n- Why is the correct-in-the-ball notion of robustness interesting? Specifically, in this setting, we would like to be correct at every point (which is the same goal as we usually have, as opposed to the constant-in-the-ball risk), thus, the learning is basically the same as in the standard setting. I think there should be at least a brief discussion in the paper about this.\n\n- Feel free to point out any potential errors in the review.\n\n## misc\n- footnote 4 - why is Hamming distance the only meaningful distance here? While I agree that is a reasonable distance, I think there are many others and the space can have a special structure which can be of interest. E.g., if there is an underlying graph structure (other than a hypercube) which would induce the distances so that the \"adversarial neighborhood\" may contain e.g., neighboring nodes.\n\n\n\n -", " The authors study adversarially robust learning in the \"exact-in-a-ball\" model when the learner has access to additional types of local queries. The `exact-in-a-ball' robustness model replaces the standard PAC classification error with\n\n$$\Pr_{x \sim D}[\exists z \in B(x,r): h(z) \neq c(z)]$$\n\nThis differs from the widely studied \"constant-in-a-ball\" model that instead uses $h(z) \neq c(x)$. Since learning even basic concepts in this model is typically impossible, the authors examine two types of enriched query algorithms: local membership queries and local equivalence queries. The former allows the learner to query any point in a small ball around any training example, and the latter allows the learner to ask whether a given hypothesis h is robust on a small ball around the point.\n\nThe authors show that while local membership queries do not help robust learning even in basic cases such as conjunctions over the uniform distribution, local equivalence queries can be used to robustly learn several non-trivial hypothesis classes such as conjunctions and halfspaces with margin, even in the distribution-free setting. In fact, the authors more generally show it is possible to robustly learn any finite class or class with finite robust VC dimension that has an online learner in the mistake bound model. Their algorithm is computationally efficient as long as the online learner is as well, which leads to statistically and computationally efficient robust learners for conjunctions and halfspaces with margin.\n The `exact-in-a-ball' model of adversarial robustness is a challenging learning model, and this work gives the first efficient robust learners for important classes such as halfspaces with margin. While most of the general techniques in the work are not particularly novel (e.g. Robust VC, use of online learning), the eventual application and analysis showing halfspaces actually satisfy these conditions (namely Robust VC) is nice. 
Overall, the results are a bit niche, but may be of interest to the adversarial learning sub-community.\n\nOn the other hand, the paper can be a bit confusing at places, and also seems to have some small errors. Most of the sample complexities for instance seem to be missing a $\log(1/\varepsilon)$ term that should appear from applying Sauer-Shelah. The statements of results in Section 3.2 are also a bit confusing and stated in a non-standard fashion (perhaps a more typical presentation would be to say any RERM on m samples is a robust learner or something of this sort). Finally, while the authors largely do a good job placing the result in prior literature, it should probably also be mentioned that online learning (namely perceptron) has also been used in other work on robust learning, e.g. in [BJC21] “Sample Complexity of Robust Linear Classification on Separated Data.” A few typos: attaks, dimensino, ispoerimetric N/A", " The paper studies adversarial robustness of learning basic concepts when the learner is allowed to make local queries. It shows that\n\n- Using local membership queries of Awasthi et al (2013) does not help much in reducing the robust risk. This is in stark contrast to results established under the standard PAC model.\n\n- It then shows that with access to a new query type, the equivalence query, the robust risk can be reduced.\n\nIn terms of merit, this paper gives the first robust ERM algorithm when robust risk is defined as being exact-in-the-ball.\n\nIn terms of techniques, this paper leverages many useful prior results and seems to combine them nicely. Strengths:\n\n+ The paper studies a timely topic and gives full proofs of its main results.\n+ It is interesting to consider various oracles to tackle the essentially hard problem of learning with adversarial examples. In particular, the separation result for local membership queries appears interesting.\n\nWeaknesses:\n\n- It turns out that the main positive result is the improvement from using equivalence queries. However, I feel such a query type was not well motivated, and seems too strong to find practical applications.\n\n- There are prior works studying the problem through the lens of online learning, which do not rely on the strong oracles, see e.g.\n\nThe Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise, I. Diakonikolas, D. Kane, P. Manurangsi, NeurIPS 2020\n\nThe above paper already sets out PAC learnability of large-margin halfspaces via the L_p-norm Perceptron. I would like to see a close comparison to Theorem 11 in the submission.\n\n- While the paper presents a handful of theoretical results, a highlight of the core techniques is missing. Thus, it is unclear to me whether the developed techniques are novel or the established results are just a simple sandwich of prior works.\n\n- It turns out that the main claim is a robust ERM algorithm when adversarial examples reside exact-in-the-ball, while prior works have built upon the constant-in-the-ball regime. I wonder what is the key challenge in the current setting and why existing results cannot be easily adapted. Please address the weakness part above. Yes." ]
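The parity example from the authors' rebuttal above, where the exact-in-the-ball risk minimizer is the target itself while the constant-in-the-ball one is a constant function, can be verified exhaustively on a small hypercube. The sketch below is ours and only illustrates that claim, assuming Hamming-ball perturbations of radius 1 and the uniform distribution; it is not code from the paper.

```python
from itertools import combinations, product

def hamming_ball(x, radius):
    for r in range(radius + 1):
        for positions in combinations(range(len(x)), r):
            z = list(x)
            for i in positions:
                z[i] ^= 1
            yield tuple(z)

def exact_in_ball_risk(h, c, n, rho):
    # Pr_x[ exists z in B(x, rho) with h(z) != c(z) ] under the uniform distribution.
    points = list(product((0, 1), repeat=n))
    errors = sum(any(h(z) != c(z) for z in hamming_ball(x, rho)) for x in points)
    return errors / len(points)

def constant_in_ball_risk(h, c, n, rho):
    # Pr_x[ exists z in B(x, rho) with h(z) != c(x) ]: h must be constant and
    # correct on the ball, the notion the rebuttal contrasts with.
    points = list(product((0, 1), repeat=n))
    errors = sum(any(h(z) != c(x) for z in hamming_ball(x, rho)) for x in points)
    return errors / len(points)

n, rho = 5, 1
parity = lambda x: sum(x) % 2  # target concept: the full parity function
zero = lambda x: 0             # a constant hypothesis

print(exact_in_ball_risk(parity, parity, n, rho))     # 0.0: the target is optimal
print(constant_in_ball_risk(parity, parity, n, rho))  # 1.0: one bit flip changes the label
print(constant_in_ball_risk(zero, parity, n, rho))    # 0.5: the constant function does better
```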
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 3 ]
[ "iJSgUjpjZKl", "iJSgUjpjZKl", "DOe0k-pu4wq", "CCyaAjMVS-Y", "gmuRqD-RbE2", "OMON6HmkTei", "CIXvv6V2wlJ", "ADwOdiJwsb", "nips_2022_RQ385yD9dqR", "nips_2022_RQ385yD9dqR", "nips_2022_RQ385yD9dqR" ]
nips_2022_vGQiU5sqUe3
Contrastive Learning as Goal-Conditioned Reinforcement Learning
In reinforcement learning (RL), it is easier to solve a task if given a good representation. While deep RL should automatically acquire such good representations, prior work often finds that learning representations in an end-to-end fashion is unstable and instead equips RL algorithms with additional representation learning parts (e.g., auxiliary losses, data augmentation). How can we design RL algorithms that directly acquire good representations? In this paper, instead of adding representation learning parts to an existing RL algorithm, we show (contrastive) representation learning methods are already RL algorithms in their own right. To do this, we build upon prior work and apply contrastive representation learning to action-labeled trajectories, in such a way that the (inner product of) learned representations exactly corresponds to a goal-conditioned value function. We use this idea to reinterpret a prior RL method as performing contrastive learning, and then use the idea to propose a much simpler method that achieves similar performance. Across a range of goal-conditioned RL tasks, we demonstrate that contrastive RL methods achieve higher success rates than prior non-contrastive methods. We also show that contrastive RL outperforms prior methods on image-based tasks, without using data augmentation or auxiliary objectives.
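A compact way to see the abstract's central construction, a critic that is the inner product of a state-action encoding and a goal encoding trained contrastively, is the NumPy sketch below. This is our rendering of one standard InfoNCE variant, not the paper's exact objective or code, and the function name, argument names, and shapes are illustrative.

```python
import numpy as np

def contrastive_critic_loss(phi_sa, psi_g):
    """InfoNCE-style loss for a goal-conditioned critic f(s, a, g) = phi(s, a) . psi(g).

    phi_sa: (B, d) encodings of (state, action) pairs.
    psi_g:  (B, d) encodings of goals, where psi_g[i] is a future state drawn
            from the same trajectory as pair i (the positive); the other B - 1
            goals in the batch serve as negatives.
    """
    logits = phi_sa @ psi_g.T                            # (B, B) critic values
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()                     # positives sit on the diagonal
```

The policy can then be trained to maximize the critic value at sampled goals, i.e., phi(s, pi(s, g)) . psi(g); the author responses below note that sampling random goals for this actor loss worked better online, while future goals helped in their offline experiments.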
Accept
How can we design RL algorithms that directly acquire good representations? This paper gives an answer: contrastive representation learning can be cast as goal-conditioned RL using the inner product of learned representations. The technical novelty of this paper is sound, with thorough theoretical motivation of the proposed method and solid experiments. The presentation of this paper is also satisfactory. All the reviewers provided positive feedback on this paper. I also enjoyed reading this paper.
train
[ "qrd0-LPD2QS", "R7omj7GTqGg", "bium8946y4w", "C0eDvQWAep", "wMgFP9Uv3U", "jm-7hNeVKma", "hfYW1BHSJ4Q", "8DXt9JkU3l", "CpibpnJH8Dl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their detailed response and for running the additional experiments. I find the additional experiment and video with the hand-tracking camera interesting, thank you for running that.", " I appreciate the authors taking the time to respond to my comments and making appropriate changes. I agree with the detailed responses and recommend an easy \"Accept\" for this paper. ", " Thanks for the detailed and thoughtful response. My concerns have been addressed by the rebuttal and revision. I think it is a strong submission.", " We thank the reviewer for their detailed and thorough review and for the suggestions for improvement. To improve the paper, we have run additional experiments to answer the reviewer questions, studying how contrastive RL copes with partial observability and whether the learned policy generalizes zero-shot. \n\n> Can the algorithm handle first-person views and/or partial observability\n\nWe ran an additional experiment to study how contrastive RL copes with moving cameras and partial observability, finding that it can solve such tasks, though performance is worse than with a fixed camera. To study this question, we modified the sawyer push task so that the camera tracks the hand at a fixed distance, as if it were rigidly mounted to the arm. At the start of the episode, the scene is occluded by the wall at the edge of the table, so the agent cannot see the location of the puck. Nonetheless, contrastive RL (NCE) successfully handles this partial observability, achieving a success rate of around 35%. Please see the new videos at the bottom of the project website (https://contrastive-rl.github.io/) and new Appendix F.6 and Figure 11 for details.\n\n> generalizes zero-shot to changes in the color of the background, the setup of the hand, or to new types/colors of objects.\n\nWe ran an additional experiment to study the robustness of the policy learned by contrastive RL (NCE), finding that it is robust to object color and setup of the hand but is not robust to the color of the table background. The new experiment, which we describe in new Appendix F.7, shows that changing the object color from black to red has little effect, as does moving the initial hand position towards the camera. However, changing the table color from white to yellow decreases the success rate from 78% to 0%. \n\n> In Figure11 in the appendix, the color seems to be wrong way round\n\nThanks for catching this! We have fixed this.\n\n> typos\n\nThanks for catching these! We have fixed them.\n\nWe hope that these responses have addressed the reviewer's questions. **Does the reviewer have additional questions or concerns?**\n", " We thank the reviewer for their very thorough and detailed review and welcome the suggestions for improvement. To improve the paper, we have run two additional experiments, studying how contrastive RL can be applied to the offline RL setting and whether the learned representations can be transferred to solve new tasks.\n\n\n> why not compare with other contrastive learning approaches in RL mentioned in the early sections?\n\nWe have already compared our algorithm to CURL (Fig 4, \"x\" markers), a method that performs contrastive learning to extract representations and then uses those representations for RL tasks. 
While CURL was designed for maximizing a single task reward, we adapted it to the goal-conditioned RL setting by combining it with the strongest baseline from our experiments (TD3+HER).\n\n> Can this approach tackle common challenges in goal-conditioned RL, especially for offline datasets? \n\nAs suggested by the reviewer, we applied contrastive RL to a benchmark suite of offline datasets. The results, described in new Appendix F.2 and Appendix Table 3, show that contrastive RL is competitive with (and sometimes better than) IQL, a recent paper that claims state-of-the-art results. Compared with recent prior methods (RvS-G, Decision Transformer) that do not use TD learning (IQL uses TD, but contrastive RL does not), contrastive RL improves the results by a wide margin.\n\n> Line 107: π(τ∣St) -> π(τ∣Sg);\n\nFixed.\n\n> Eq. 4: consider using another color; the current v+ is hard to recognize when printing out the paper to read.\n\nWe have switched this to a darker shade of green.\n\n> In this work, the goal is the state/observation given by the task. Can this requirement be looser?\n\nYes, the goal could be any metadata or label provided to the observation. For example, image observations could be labeled with natural language. Then, by applying the same contrastive RL algorithm as in this paper, we could learn a policy where the goal is natural language (\"the bedroom is clean\"). We haven't tried this yet, but the success of contrastive learning on prior multimodal applications (e.g., CLIP [1], Dalle-2 [2]) bodes well for this potential application.\n\n> Can the learned representation be generalized to a set of diverse tasks with only a smaller number of interaction data?\n\nWe ran an additional experiment to study this question, finding that the representations can sometimes be transferred to accelerate solving a new task. We applied contrastive RL (NCE) to three image-based tasks for 1M environment steps, and then used those representations for solving new tasks. The results, shown in the new Appendix Figure 10, indicate that representations from \"sawyer bin\" can help in solving the \"sawyer push\" task and that representations from \"fetch push\" can help in solving the \"sawyer bin\" task; other combinations work no better than randomly initializing the representations.\n\nWe hope that these responses have addressed the reviewer's questions. **Does the reviewer have additional questions or concerns?**\n\n[1] Radford, Alec, et al. \"Learning transferable visual models from natural language supervision.\" International Conference on Machine Learning. PMLR, 2021.\n\n[2] Ramesh, Aditya, et al. \"Hierarchical text-conditional image generation with clip latents.\" arXiv preprint arXiv:2204.06125 (2022).\n", " We thank the reviewer for their thorough and detailed review and welcome suggestions for improvement. We have revised the paper to incorporate the reviewer's feedback. We have also added a new offline RL experiment: contrastive RL is competitive or better than prior offline RL methods on an offline goal-conditioned benchmark suite. Below, we will answer the reviewer's questions. \n\n> It is unclear why random goals are sampled to train the actor loss. Can the authors shed more intuition on this decision?\n\nWhile in theory the distribution of goals for the actor loss shouldn't affect the optimal policy, in practice we found that using random goals in the actor loss works much better than using future goals. 
We have revised the paper to include an ablation experiment showing this (Appendix Figure 9). Our intuition is that the future goals are too easy to reach and that random goals are required to train the policy to reach more challenging goals. One important exception is for the new offline RL experiments (see Appendix F.2), where we found that sampling future goals for the actor loss improved the performance.\n\n> Can the authors also comment if there’s some minimum goal dimension, after which contrastive RL starts performing better than the model-based baseline?\n\nWe ran an additional experiment to answer this question, varying the goal dimension on the state-based \"sawyer push\" task. These results, now included in Appendix Figure 8, show that contrastive RL starts to perform better than the model-based baseline when the goal dimension is 4 or larger.\n\n> In fetch push (Fig 5(b)), contrastive RL (NCE + C-learning) gets a huge boost over the baseline because of the addition of C-learning pipeline.\n\nWe agree with this interpretation, but note that it can also be interpreted as saying that taking the prior method (C-learning) and combining it with ours (contrastive RL) makes the prior method much better.\n\nRegardless of whether this is viewed as X+Y or Y+X, it remains an open question why this combination would work much better, given that both C-learning and contrastive RL are performing some sort of contrastive learning (see Sec. 4.6). One hypothesis is that contrastive RL helps very early on during training, when the TD updates of C-learning end up \"backing up\" bad predictions; then, once the classifier has stabilized, the TD updates improve the sample efficiency.\n\n> Typos\n\nWe've fixed these.\n\n> For the fetch and sawyer tasks, what is the dimension of goals (state-based)?\n\nThe state and goal dimensions for these tasks are:\n* fetch_reach: dim(state) = dim(goal) = 10\n* fetch_push: dim(state) = dim(goal) = 25\n* sawyer_push: dim(state) = dim(goal) = 7\n* sawyer_bin: dim(state) = dim(goal) = 7\n\n> I am assuming that for image-based, it is 64x64x3, right?\n\nYes, both the observation and the goal images are 64x64x3.\n\n> Could the authors change the color palette for the plots in Figure 5?\n\nWe have updated the figure to use different colors (not different shades of blue).\n\nWe hope that these responses have addressed the reviewer's questions. **Does the reviewer have additional questions or concerns?**\n\n", " The authors have proposed a well-motivated contrastive learning method for solving goal-conditioned RL tasks, thus removing the need for any auxiliary losses or data augmentations for representation learning. The authors then show how this contrastive objective leads to a critic implicitly learning the Q-value function, and empirically demonstrate the effectiveness of their simple approach on many tasks. # Strengths\n\n1. The proposed method simplifies the representation learning + planning in RL problem, by using the inner product of learnt representations as a correlate for the Q-value function.\n2. This is especially useful for the pixel-based environments, for which the authors have shown performance gains in many continuous-control tasks.\n3. I also appreciate that the authors mentioned certain failed experiments in Appendix H, which help in better understanding of the proposed method.\n4. The paper presents theoretical motivation for their method and show how this contrastive objective relates to a critic in deep RL methods. \n5. 
Thorough comparison with several related works.\n\n# Weaknesses\n\n1. It is unclear why *random goals* are sampled to train the actor loss. Can the authors shed more intuition on this decision?\n2. Can the authors also comment if there’s some minimum goal dimension, after which contrastive RL starts performing better than the model-based baseline?\n3. In Figure 5, while contrastive RL and its variants surely perform better in simpler state-based tasks, looking at the plots for image-based observation tasks (Fig 5(b)), it seems that C-learning alone does quite well, especially in the challenging task of Sawyer bin. Also, given that in fetch push (Fig 5(b)), contrastive RL (NCE + C-learning) gets a huge boost over the other baselines, I think that’s also because of the addition of the C-learning pipeline only. 1. typos:\n - line 141: “The ~~objective~~ expected reward objective”\n - line 188: “critic is parametrized as an *inner* product” - but in Algorithm 1, the comment says that you performed an outer product.\n - line 268: “One” → “On”\n - line 879: “~~has~~ uses”\n2. For the fetch and sawyer tasks, what is the dimension of goals (state-based)? I am assuming that for image-based, it is 64x64x3, right?\n3. Could the authors change the color palette for the plots in Figure 5? Right now, it’s extremely difficult to judge which line corresponds to which baseline, especially on print. Yes, the authors have addressed the major limitations of their method.
There is no need to rerun experiments during the rebuttal phase.\n\n#### **1. About experiments**\n\nIt seems that, in some cases, improvements compared with C-learning are minor (Fig. 5). Can the authors give some explanation? Meanwhile, why not compare with other contrastive learning approaches in RL mentioned in the earlier sections?\n\n#### **2. About other common challenges in goal-conditioned RL**\n\nCan this approach tackle common challenges in goal-conditioned RL, especially for offline datasets? E.g., the generalization ability to different distributions.\n\n#### **3. Minor typos**\n\n-> Line 107: $\\pi\\left(\\tau \\mid S_{t}\\right)$ -> $\\pi\\left(\\tau \\mid S_{g}\\right)$;\n\n-> Eq. 4: consider using another color; the current $v^{+}$ is hard to recognize when printing the paper out to read. \n Please note that below are only for questions and potential discussions. There is no need to rerun experiments during the rebuttal phase.\n\n**1. About the required goal**\n\nIn this work, the goal is the state/observation given by the task. Can this requirement be looser?\n\n**2. About the generalization abilities**\n\nCan the learned representation be generalized to a set of diverse tasks with only a small amount of interaction data?\n\n The authors mentioned the limitations in the paper, mainly about how to generalize the work to other RL settings. I think this would be a potential direction for future work. I raise a few questions, which are more like vague points instead of limitations. I think this is a decent paper and I would vote for acceptance. ", " The paper presents a novel view of how contrastive learning can be used by itself, and without any additional RL training on top, to learn goal-conditioned policies from pre-collected data. This is achieved by using the similarity learned during contrastive learning as a Q-function, which is then used to improve a policy in the policy gradient step. It is also shown, via a proof, that the contrastive objective estimates the Q-function, and some additional convergence guarantees are provided. The authors motivate their approach well and compare it extensively with related work on learning goal-conditioned policies, as well as works that use an additional representation learning objective on top of an RL algorithm. The paper shows a good comparison between prior work and the proposed method on a range of goal-conditioned RL tasks that have been used in these prior works. The overall conclusion is that a much simpler method, like the one presented, can perform similarly well, and in some cases even better, than more complicated baselines. ### Strengths\n1. The paper's presentation and clarity are excellent and of high quality. The authors have performed an extensive evaluation of their method and compared it against two sets of baselines, one based on representation learning methods and one based on learning goal-conditioned policies. The authors describe their method, motivation and results very clearly and do a very good job at relating it to prior work.\n2. Their method is grounded in theory, though I cannot claim I could follow 100% of their proofs and arguments as I am not an expert on the theoretical side.\n3. Simplicity of the method, while simultaneously performing similarly well to other methods, and in some cases outperforming strong and more complicated baselines.\n4. 
The authors present an original perspective on how contrastive learning can be used to learn goal-conditioned policies directly.\n\nI did not find any noteworthy weakness in the paper other than some typos and some more questions about the limits of this method that I have included in the next section with Questions.\n One question I would be interested in is how well the method would perform in partially observable, large environments where the agent has a first-person view. Goal conditioning from pixels seems to be much easier in situations with a fixed camera covering the majority of the possible states, but it seems that the difficulty of the problem quickly grows when we move to first-person views in large environments with partial observability.\nI’m also wondering how well the trained method generalizes zero-shot to changes in the color of the background, the setup of the hand (i.e. having the base of the sawyer arm on the left side of the image) or to new types/colors of objects. \n\nIn Figure 11 in the appendix, the colour seems to be the wrong way round in the plot of the maze, as the positions in the centre have the colour corresponding to the highest \"steps to centre\". The colour for the tSNE plots seems to be correct though.\n\nTwo typos:\nLine 154: “The negative” -> For the negative\nLine 215: “interation” -> iteration The limitations have not been added to the author’s submission checklist yet, but are addressed in the actual paper.
[ -1, -1, -1, -1, -1, -1, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "C0eDvQWAep", "jm-7hNeVKma", "wMgFP9Uv3U", "CpibpnJH8Dl", "8DXt9JkU3l", "hfYW1BHSJ4Q", "nips_2022_vGQiU5sqUe3", "nips_2022_vGQiU5sqUe3", "nips_2022_vGQiU5sqUe3" ]
nips_2022_fSfcEYQP_qc
A Neural Corpus Indexer for Document Retrieval
Current state-of-the-art document retrieval solutions mainly follow an index-retrieve paradigm, where the index is hard to optimize directly for the final retrieval target. In this paper, we aim to show that an end-to-end deep neural network unifying training and indexing stages can significantly improve the recall performance of traditional methods. To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query. To optimize the recall performance of NCI, we invent a prefix-aware weight-adaptive decoder architecture, and leverage tailored techniques including query generation, semantic document identifiers, and consistency-based regularization. Empirical studies demonstrate the superiority of NCI on two commonly used academic benchmarks, achieving +17.6% and +16.8% relative enhancement for Recall@1 on the NQ320k dataset and R-Precision on the TriviaQA dataset, respectively, compared to the best baseline method.
Accept
This paper proposes a new framework for neural IR: given a query, directly predict a document ID. The document IDs are obtained by hierarchical clustering of documents beforehand. This is a novel formulation of the problem, and is very distinct from current two-stage methods that have a high-recall sparse retrieval stage followed by a high-precision neural reranker, or from approximate nearest neighbor methods that encode both documents and queries as vectors. The paper is fairly well-written, the authors have addressed reviewers' concerns with honest, detailed feedback, and they have made their code available to facilitate experimentation. The results are particularly strong compared to more traditional BM25-based models and competing neural approaches like DSI. I anticipate there will be much follow-on work and eventually a paradigm shift in neural IR.
train
[ "46EG1rMC16", "3v8uKUbaVDG", "_niAOyJll-A", "Ie9pl5Lvt6H", "9iHltuKlxAL", "q8ObgHbsAds", "ZFdyQa4Tae2", "Yz8jrsogSL", "5T633xe-7i3", "xSA8ciX29e1" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the answers. Most of my questions are answered and I support for the acceptance of the paper.", " Dear reviewers, \n\nThank you for taking time in reading our paper and providing valuable comments. We briefly address common questions here. \n \n1. Regarding the concern that query generation dominates final performance, we compare the performance of NCI and DSI/SEAL in a fair setting without query generation. The table below shows that NCI still outperforms the other two generative retrieval SOTAs significantly. Also, we provide ablations on the NCI version without query generation. The result shows that the PAWA decoder contributes a large improvement based on the raw transformer decoder (from 48.98 to 53.63 on Recall@1). It demonstrates that the PAWA decoder design is critically important in the absence of query generation. Semantic identifiers and consistency-based regularization are also beneficial. \n\n | | Recall@1 | Recall@10 |\n |-----------------------|----------|-----------|\n | DSI (T5-Base) | 27.40 | 56.60 |\n | SEAL (BART-Base) | 26.55 | 53.61 |\n | **NCI (T5-Base w/o QG)** | **53.63** | **67.84** |\n | - w/o PAWA decoder | 48.98 | -- |\n | - w/o Semantic id | 50.78 | -- |\n | - w/o Consistency reg | 51.57 | -- |\n\n\n2. Another contribution of this work is reproducibility. DSI does not provide open-source code, so we only refer to the numbers reported in its paper. We have released the code, data and checkpoints of NCI at an anonymous repository https://github.com/anonymousML36061/NCI. We believe the release will facilitate researchers to work on further comparisons and improvements of deep generative models for semantic text retrieval.\n\n3. For easy reference of reviewers and ACs, all changes are highlighted in blue in the revision.", " We respond to your major comments for a discussion.\n\n1.Some experiments that try different definitions of document \n(e.g., random) would shed light on the assumption of having a strong document clustering prior. \n* In the ablation experiments (table 2), we have the “*w/o semantic id*” setting, which does not use semantic IDs generated by pre-clustered priors. Instead, we assign a unique ID randomly to each document. As shown in Table 2, Recall@1 drops from 88.72 to 87.22 for the NCI model trained by augmented queries. Also, it drops from 53.63 to 50.78 in the setting without query generation (see the result table in our general response). Therefore, NCI already achieves good performance using random identifiers, while the pre-clustering prior brings additional benefits to the model. We appreciate your advice and will analyze other definitions of document IDs for more insights in the next step. \n\n2.Can you provide some timing analysis for the baselines (e.g. in footnote), so Table 4 has some reference of comparison? \n* Thank you for the suggestion. The latency of NCI is on par with DSI and SEAL with the same model size and beam size as all of them conduct beam search based on transformer decoders. We also test the latency of BM25 for reference. We leverage an open-source implementation of BM25 [1] and test it on Intel(R) Xeon(R) CPU E5-2690 v4@2.60GHz. The latency is less than 100ms per query. NCI/DSI/SEAL can also achieve near real-time serving on GPU, but it hardly to be served on CPU. We have other proposals to improve the efficiency of NCI, which is discussed in the section of future work. \n\n3.The index-update aspect may be one of the most important near-term challenges. \n* We concur with you. 
We have an initial solution: take each leaf node in the tree as a virtual semantic cluster and assign a specific cluster to each new document. Then, in the inference stage, the new document is retrieved along with the generated cluster ID. We are working in this direction and plan to address this problem in the next version. \n\n[1] https://github.com/castorini/anserini", " Thank you for reading the paper carefully and providing helpful feedback. \n \nWe clarify that NCI achieves a significant improvement over DSI and SEAL even without query generation (53.63 for NCI vs. 27.40/26.55 for DSI/SEAL on Recall@1). This improvement is due to the novel techniques we propose in this paper, including the prefix-aware weight-adaptive (PAWA) decoder and consistency-based regularization.\n\nThe ablation results without query generation are discussed in the general response section. Although query generation is important to the final performance, other techniques are also indispensable for the large improvement. In particular, replacing the PAWA decoder with a vanilla transformer decoder leads to a large performance drop (from 53.63 to 48.98 on Recall@1) in a fair comparison setting without query augmentation. The reason is that a vanilla transformer only utilizes a fixed weight for next node prediction without considering its specific prefix in the tree. The PAWA decoder makes the routing weights in the transformer adaptive to prefixes, and thus better leverages the semantic information incorporated in the hierarchical tree structure to improve the effectiveness of semantic document retrieval. ", " We address your questions as follows. \n\n1.Can we use the same k-means procedure to obtain an identifier for a new document? How does the model perform when there are new documents if we just assign an identifier to a new document using the same k-means algorithm? (Combining question 1 and the questions in the Limitations part.) \n* In this paper, we focus on the problem where all documents are given at the outset. Supporting new documents requires augmentations to the proposed framework. For example, one approach is to assume that each leaf node in the tree is a semantic cluster, and each new document is added to one of the semantic clusters. In the retrieval stage, all documents under the same semantic cluster are retrieved if the corresponding cluster ID is generated by NCI. This approach is effective if a small fraction of documents are added to the corpus and there is no obvious topic drift. We are working in this direction and plan to report more empirical studies in the next version. \n \n2.Have you compared other query generation models / techniques so that we can understand what contributes to the success of this method? \n* We leverage DocT5Query [1], which is a commonly used query augmentation method for document retrieval. As shown in Table 1, NCI outperforms BM25 + DocT5Query by a large margin (88.72 vs. 58.39 on Recall@1), which shows that the NCI model is far superior to BM25 based on the same query augmentation technique. Regarding the comparison without DocT5Query, the result is shown in the general response section. This verifies that NCI is the current state of the art among all generative retrieval models even in the absence of query generation. Moreover, the ablation study shows that other proposed techniques, including the PAWA decoder, consistency-based regularization and semantic ID, also make indispensable contributions. \n \n[1] Rodrigo Nogueira, Jimmy Lin, and AI Epistemic. 
From doc2query to doctttttquery. Online preprint, 2019, https://github.com/castorini/docTTTTTquery.", " First, we address the three points you mentioned in the weaknesses part. \n\n1. The goal of query generation is to generate training queries that share the same distribution as the test queries and are semantically similar to the source documents. The generated queries do not necessarily overlap with the test queries in words. We calculated the overlap ratio, and only 12.45% of the test queries appear in the augmented training queries. \n2. Our NCI model falls into the generative retrieval category as it is based on a sequence-to-sequence architecture and can be optimized end-to-end. The main difference between NCI and ANN algorithms using different embeddings is that the index structure of ANN is not differentiable while the entire NCI architecture is differentiable. \n3. We follow the experimental setting of DSI, which evaluates on the NQ320k dataset. Experiments on the full set of NQ or other datasets require training larger models with more resources and time. We will report the results of more datasets in our next version. \n\nWe answer the questions in the following. \n\n1.How sensitive is the model’s performance with respect to the quality of the generated queries? \n* The quality of generated queries does matter a lot for the model’s performance. For instance, upgrading the query augmentation method from Doc2Query [1] to DocT5Query [2] improves the MRR@10 dev score on MSMARCO from 21.8 to 27.7 using the BM25 backbone. However, we'd like to highlight that without DocT5Query, NCI still achieves noticeable improvements over state-of-the-art generative retrieval models. This shows that the novel architecture designs (e.g., the PAWA decoder) contribute significantly to the superior performance of NCI (see the result table in the general response section). In this paper, we leverage DocT5Query as it is the most common query augmentation method used for document retrieval. We appreciate the suggestion to analyze the sensitivity of model performance with respect to the quality of generated queries. We will follow up on this in our future work. \n\n2.I don't think it is correct to say that \"DSI hardly captures the hierarchical semantics of document identifiers\" in L42. \n \n* Yes, DSI also builds semantic identifiers through hierarchical K-means, but it uses a vanilla transformer decoder, which does not fully leverage the structural information. In NCI, we propose a prefix-aware weight-adaptive (PAWA) decoder and a consistency-based regularization loss to further guide the decoder to incorporate document semantics more effectively. In the PAWA decoder, a prefix-aware adaptive module is applied to the classification weights for token prediction; thus, the tree-based decoder better leverages the semantic prior incorporated in the hierarchical tree structure. With the consistency-based regularization loss, semantically similar queries will result in the same routing paths in the tree. Thank you for raising this question. We have polished the corresponding part in the revised version. \n \n3.How are the documents clustered for semantic identifiers? Could you please give more description of Section 3.1? \n \n* We describe the full algorithm of document clustering in Algorithm 1 (Appendix B.2). The appendix PDF can be found in the zip file of the supplementary material. In the revised version, we have added more details in Section 3.1 to make the content more self-contained. 
Specifically, each document is associated with one leaf node with a deterministic routing path $l=\\{r_0, r_1, ..., r_m\\}$ from the root, where $r_i \\in [0, k)$ represents the internal cluster index for level $i$, and $r_m \\in [0, c)$ is the leaf node. The semantic identifier for a document is concatenated by the node indices along the path from root to its corresponding leaf node. For documents with similar semantics, the prefixes of their corresponding identifiers are likely to be the same.\n \n4.What is the difference between p1 and p in L223?\n \n* Thanks for pointing this out. This is a typo. It should be p1 and p2 for two independent query dropouts. Thus, we can add a consistency-based regularization loss in-between, which ensures that two augmented queries generate similar document identifiers.  We have polished Section 3.4 in the revised version. \n\n\n[1] Nogueira, Rodrigo, et al. \"Document expansion by query prediction.\" arXiv preprint arXiv:1904.08375 (2019). \n[2] Rodrigo Nogueira, Jimmy Lin, and AI Epistemic. From doc2query to doctttttquery. Online preprint, 2019", " While the traditional document retrieval problem is formulated as mapping both the document and the query to the same vector space and then performing nearest neighbor search to find the closest document fore each query, the paper instead proposes to formulate the task as a document identifier generation problem, where the input is the query and the output is the target document's id. This is not entirely novel with respect to DSI (Tay et al., 2022) but given that DSI was available on the archive only three months before NeurIPS deadline, one might be able to consider this paper to be a concurrent work. The proposed model by this paper, Neural Corpus Indexer, also introduces a few techniques: (1) query generation that augments the query-to-doc data for training (equivalent to document indexing), (2) prefix-aware weight-adaptive decoder that gives different symbols to the same tokens in different positions of the ids, (3) semantic document id so that similar documents share similar ids, and (4) regularization technique to prevent overfitting. The results on Natural Questions 320k (a subset of NQ) are very promising; the proposed model not only outperforms other generative retrieval models such as DSI and SEAL by a big margin but also strongly outperforms competitive baselines such as ANCE and BM25. Strengths:\n- The results are very strong. The baselines that the paper compares against, such as ANCE and BM25, are very strong ones and hard to beat. It is remarkable that a generative model can beat those baselines.\n- Query generation helps a lot and this is a very useful information for people working in this field. In fact, as shown in Table 2, among other techniques introduced by the paper, QG is by far the biggest factor for the superior performance of NCI.\n\nWeaknesses:\n- If query generation is the dominant factor, then it seems to me that what the model is really doing is \"query indexing\" rather than \"document indexing\". This means the performance of the model will highly depend on generating a comprehensive range of queries which are likely to overlap with the test queries. \n- Constructing the semantic hierarchy of the document ids can be considered as constructing the inverted index in the canonical NNS retrieval setup. Then what the decoder is doing can be considered as locating the correct cluster in each level of the hierarchy. 
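To check my understanding of this hierarchy, here is my rough sketch of the identifier construction (toy code under my own naming, not the authors' Algorithm 1 verbatim):

```python
import numpy as np
from sklearn.cluster import KMeans

def semantic_ids(doc_embs, k=30, c=30, prefix=()):
    # Recursively k-means-cluster the document embeddings; a document's
    # identifier is the concatenated path of cluster indices root -> leaf.
    if len(doc_embs) <= c:  # few enough documents: this is a leaf level
        return {i: prefix + (i,) for i in range(len(doc_embs))}
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(doc_embs)
    ids = {}
    for r in range(k):
        members = np.where(labels == r)[0]
        sub = semantic_ids(doc_embs[members], k, c, prefix + (r,))
        for local, path in sub.items():
            ids[members[local]] = path  # lift sub-array indices to this level
    return ids
```

Under this reading, documents that land in the same clusters at the top levels share identifier prefixes, which is exactly what lets the decoder exploit the tree structure.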
Then I am wondering if it is fair to say this model is really \"generative retrieval\" or whether it is more like using different embeddings for the nearest neighbor search in each layer of the hierarchy. \n- The paper only experiments on Natural Questions and only a subset of it. While NQ is a very representative dataset in this task, the paper would have looked much better if it had been tested on different datasets and/or the full dataset of NQ. It is not clear why such results are not shown. - With regard to the weakness mentioned above, I am wondering how sensitive the model's performance is with respect to the quality of the generated queries.\n- I don't think it is correct to say that \"DSI hardly captures the hierarchical semantics of document identifiers\" in L42. DSI indeed proposes a similar clustering mechanism to what this paper proposes.\n- How are the documents clustered for semantic identifiers? Could you please give more description of Section 3.1?\n- What is the difference between p1 and p in L223? I think Section 3.4 is unclear. Yes, the paper mentions crucial limitations such as its lack of scalability compared to canonical NNS retrieval systems, with which I agree.", " The paper proposed a sequence-to-sequence model that generates the relevant document given an input query, as a retrieval method. The proposed method shows a huge improvement on the NQ320k dataset (Natural Questions). Most of the improvement seems to come from using generated queries (based on the documents) paired with the documents as augmented training data. Originality:\nThe proposed data augmentation technique is interesting and demonstrates good performance. The other techniques are interesting and provide much smaller but still good gains.\n\nQuality:\nThe authors addressed their limitations and present the work in context. The experiments are well designed.\n\nClarity:\nThe paper is clearly written.\n\nSignificance:\nVery good gains on NQ320k over the previous best method.\n 1. Can we use the same k-means procedure to obtain an identifier for a new document? I guess new documents will need identifiers of the same form to be meaningful to the model?\n2. It seems that the query generation contributes most to the improvement. Have you compared other query generation models / techniques so that we can understand what contributes to the success of this method? The authors acknowledge a set of limitations of the model, including the scale of the index set, inference speed and updating the index. I'm most interested in the third one. The model can only work on a fixed set of documents (to retrieve from), or there is a cost to update the model when the index set changes. This is a limitation of the line of work that generates document identifiers, not specific to this work. However, I'm also curious, given that the document identifier is generated using a k-means algorithm, how the model performs when there are new documents if we just assign an identifier to a new document using the same k-means algorithm.", " The paper focuses on a promising direction to learn retrieval via generation, where the model will generate the relevant document IDs for given queries. It resembles the previous Differentiable Search Index work with several minor improvements.\n Strengths:\nThey propose several techniques to improve the model, such as a prefix-aware weight-adaptive decoder, query generation, and consistency-based regularization. 
They show excellent improvement in recall@1 and recall@10 on the NQ320k dataset, largely outperforming previous baselines.\n\nWeaknesses:\n\nNCI is quite similar to DSI - e.g., the encoder-decoder architecture, indexing training and the hierarchical clustering algorithm to generate semantic doc IDs. It introduces several small improvements such as using prefixLM, consistency-based regularization, query generation, etc. The overall contribution seems limited. \n\nAnother weakness is that many of the gains come from using the additional query generation data. Interestingly, all other improvements seem to have very limited effect on the performance. This also raises a question about how much the model relies on the quality of the query generation model and how it can adapt to new domains where the QGEN model can’t generate good questions for the target documents. \n NA NA", " \nThis paper proposes a new framework for neural IR: given a query, directly predict a document ID. The document IDs are obtained by hierarchical clustering of documents beforehand. This is a novel formulation of the problem, and is very distinct from current two-stage methods that have a high-recall sparse retrieval stage, followed by a high-precision neural reranker, or approximate nearest neighbor methods that encode both documents and queries as vectors. \nStrength:\n- The formulation is unique and intriguing. I really like this idea! Despite some limitations, the results are very promising and I imagine much follow-up work may come from this paper. \n\nWeakness:\n- I think the method rests on one large assumption, that documents can be pre-clustered into some fixed set of IDs. All queries are being trained with the same fixed set of document IDs. This is a somewhat unconventional way to think about IR, as one usually does not assume document cluster structure, and ad hoc queries may flexibly retrieve any subset. I think some experiments that try different definitions of document IDs (e.g. random) would shed light on the assumption of having a strong document clustering prior. - Can you provide some timing analysis for the baselines (e.g. in footnote), so Table 4 has some reference of comparison? - The limitation section is reasonable. I think the index-update aspect may be one of the most important near-term challenges. " ]
[ -1, -1, -1, -1, -1, -1, 7, 7, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 5, 3 ]
[ "q8ObgHbsAds", "nips_2022_fSfcEYQP_qc", "xSA8ciX29e1", "5T633xe-7i3", "Yz8jrsogSL", "ZFdyQa4Tae2", "nips_2022_fSfcEYQP_qc", "nips_2022_fSfcEYQP_qc", "nips_2022_fSfcEYQP_qc", "nips_2022_fSfcEYQP_qc" ]
nips_2022_USoYIT4IQz
Invertible Monotone Operators for Normalizing Flows
Normalizing flows model probability distributions by learning invertible transformations that transfer a simple distribution into complex distributions. Since the architecture of ResNet-based normalizing flows is more flexible than that of coupling-based models, ResNet-based normalizing flows have been widely studied in recent years. Despite their architectural flexibility, it is well-known that the current ResNet-based models suffer from constrained Lipschitz constants. In this paper, we propose the monotone formulation to overcome the issue of the Lipschitz constants using monotone operators and provide an in-depth theoretical analysis. Furthermore, we construct an activation function called Concatenated Pila (CPila) to improve gradient flow. The resulting model, Monotone Flows, exhibits an excellent performance on multiple density estimation benchmarks (MNIST, CIFAR-10, ImageNet32, ImageNet64). Code is available at https://github.com/mlvlab/MonotoneFlows.
Accept
The paper proposes a new type of ResNet-based Normalizing Flows that, unlike previous versions of these flows, does not require the Lipschitz constant of each layer to be less than 1. The authors use monotone operators, which they show to be strictly more expressive, and propose a new activation function called Concatenated Pila (CPila). The method is evaluated on toy datasets as well as standard image datasets, outperforming the baseline i-DenseNet model.\n\nStrengths:\n1 - Well written and clear paper.\n2 - Originality of the monotone formulation.\n3 - Improvements are small, but consistent across multiple settings.\n4 - Theoretical analysis of expressive power.\n5 - Ablation experiments to justify the activation function.\n\nWeaknesses:\n- No significant weaknesses are mentioned by the reviewers.\n\nDecision: All the reviewers agree on acceptance, indicating that this is a strong paper. I encourage the authors to use the feedback provided by the reviewers to improve the paper for its camera-ready version.
train
[ "CtGgEqxWF2J", "ksppUzbRjHL", "-U4KPktD-v", "2A65hmkWyAK", "UaY0d0GpBnc", "h0P5w9vY-jx", "lK6AkAK4dK_", "F0aYdHk5z3P", "E_35onA-aTY", "AOAlSsurbx-", "yJDA-iu5cyi" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply. I'm happy with all the answers.\n\nI would suggest to incorporate the reply to Q4 into the paper (i.e. that the notation is overloaded and that $F(x) = \\\\{y \\; | \\; (x, y) \\in F\\\\}$). This would help to avoid confusing authors without a background in monotone operator theory.\n\nAfter reading the other reviews and replies, I keep my rating at 7 and recommend the acceptance of the paper.", " Thank you for answering in detail to all my questions. While monotone operators are a fresh taking on normalizing flows I appreciate that the authors pledge to tone down the SOTA claim. I believe this is an interesting addition to the existing flow literature. Correspondingly, I'm increasing my original score from a 6->7.", " We would like to thank you for handling the review of our manuscript entitled *“Invertible Monotone Operators for Normalizing Flows”* and for unanimous support on the novelty of our work. During the revision, we have addressed all suggestions/comments that have been made in the review. Also, we have updated our supplementary material (.zip file) for an additional figure.", " We would like to thank the reviewer for the positive assessment of our work and the insightful comments. During the revision, we have tried to address all comments, and we believe the paper has been further strengthened as a result. In the following, we provide our point-by-point response to the review comments:\n\n> Q1) Thm 4 says $\\mathcal{M}_L = \\frac{1+L^2}{1-L^2} \\mathcal{R}_K$, $K = \\frac{2L}{1+L^2}$. Since $K < 1$ when $L < 1$, is choosing a function from $\\mathcal{G}_L$ and applying your formulation equivalent to choosing a function from $\\mathcal{G}_K$ and applying from the residual formulation, if we disregard the constant $\\frac{1+L^2}{1-L^2}$? If these are the same then what is the advantage of your method? If there is a difference then what is the key difference here?\n\nWhile $\\mathcal{M}_L$ and $\\mathcal{R}_K$ are equivalent theoretically, it is not possible, in practice, to take $L$ (or $K$) close to one. In fact, the variance of the log determinant estimator (6) diverges to infinity if $L$ exceeds a threshold $L_T < 1$ in the Russian roulette estimator. One of the advantages of our method is expressivity. For instance, with the same value of $L$, the residual formulation or the inverse residual formulation yields $\\mathbb{R}^+ \\mathcal{R}_L$, whereas our formulation yields $\\mathbb{R}^+ \\mathcal{R}_\\frac{2L}{1+L^2}$ which is clearly more expressive than $\\mathbb{R}^+ \\mathcal{R}_L$. For more details, please refer to the explicit discussion in our paper, in Line 212-222.\n\n> Q2) Thm2: what is the relationship between $F$ and $G$? Is there a typo here?\n\nYes, it is a typo. Thank you for pointing this out. The equation should be written as $C_F = 2(\\mathrm{Id}+F)^{-1} - \\mathrm{Id}$. We will fix the typo in the final version of the paper.\n\n> Q3) Sec3.4: what is the advantage of using concatenation?\n\nIt helps to ameliorate the vanishing gradient problem, as suggested by i-DenseNets paper [2]. The vanishing gradient can be especially problematic for ResNet-based normalizing flows since they are subject to Lipschitz constraints. As in [2], which provided the experimental and theoretical analysis comparing LipSwish and concatenated LipSwish (CLipSwish), we have introduced Pila and concatenated Pila (CPila). 
Furthermore, we have provided a clear presentation showing that CPila attenuates the Lipschitz constant less than CLipSwish [2], as visualized in Figure 2 (b) of our main paper.\n\n> Q4) What is the complexity of your algorithms (log-det estimator, forward and back-prop)? How does it compare to previous methods, especially ResFlow?\n\nOur log-determinant estimator has exactly the same complexity, since it is identical to that of Residual Flows (or i-DenseNets) except that the estimator is evaluated at $w = \\left(\\frac{Id + G}{2}\\right)^{-1}(x)$ instead of at $x$. However, our forward and backward passes are both lengthened by fixed-point iterations. Overall, our model has about 60% longer training time compared to i-DenseNets, and 40% longer than Residual Flows. However, note that the sampling speed of our model is only 4% slower than Residual Flows. We will include the running time analysis in the final version of the paper.\n\n> Q5) What is the sampling speed of your method in experiments? What about quality evaluations such as FID? These numbers will make the experiment section more complete.\n\nRegarding the sampling speed, we have attached the time to sample a single batch of 100 images (trained on CIFAR-10) using a single RTX 3090 GPU. The numbers are averaged over ten batches. The full model takes approximately the same time as i-DenseNets, clearly demonstrating that the computational overhead induced by our method is negligible.\n\n| Model | Time (in seconds) |\n|------------------------------------|------------------------------|\n| i-DenseNets | 2.66 s |\n| i-DenseNets run with our implementation | 4.46 s |\n| Monotone Flows (ablate monotone formulation) | 4.59 s |\n| Monotone Flows (ablate CPila) | 2.58 s |\n| Monotone Flows (full model) | 2.70 s |\n\nNote: For a fair comparison, both the benchmark option and the non-deterministic computation were disabled while obtaining the times.\n\nRegarding FID, we have attached the FID values for our model and related models. While our model is not free of the general tendency that normalizing flows do not excel at generating low-FID images, it does attain better FID values compared to its predecessors, especially Residual Flows and i-DenseNets.\n\n| Model | CIFAR-10 Train FID (lower is better) |\n|----------------|--------------------------------------|\n| PixelCNN | 65.93 |\n| PixelIQN | 49.46 |\n| i-ResNet | 65.01 |\n| Glow | 46.90 |\n| Residual Flows | 46.37 |\n| i-DenseNets | 41.29 |\n| Monotone Flows | 40.66 |\n\n[1] Cornish et al. Relaxing bijectivity constraints with continuously indexed normalising flows. In ICML, 2020.\n\n[2] Perugachi-Diaz et al. Invertible densenets with concatenated lipswish. In NeurIPS, 2021.\n\n[3] https://openreview.net/forum?id=btfPZRDdP2F", " We would like to thank the reviewer for the constructive feedback on our work and the insightful comments. We will address all the concerns raised by the reviewer below.\n\n> Q1) One of the claims in the abstract is that Monotone Flows outperform SOTA flows. Looking at the presented numbers and comparing them with the relevant literature, this is not true. 
Here is a non-exhaustive list of papers that outperform Monotone Flows (albeit using other advances that could also potentially be complementarily used here): Flow++ [1], Augmented Normalizing Flow [2], Densely Connected Normalizing Flows [3].\n\nYes, we acknowledge the comparison should have been made from a broader perspective, including Augmented Normalizing Flow, which learns invertible transformations in an augmented data space. We appreciate your inspiring comment on the additional related works. We will tone down the SOTA claim and add the discussion about the suggested papers in the experiment section of the final version.\n\n> Q2) The authors should also include FFJORD [4] as a relevant baseline.\n\nWe will include the work as a relevant baseline. Thanks for pointing this out.\n\n> Q3) The use of Russian Roulette, Neumann series, and Hutchinson's trace estimator for training and inference borrows much of what is already known in the flow literature. The authors should try to do a better job of disambiguating the main technical differences in Monotone Flows from the rest of the Residual Flow models.\n\nThank you so much for your sincere comment on the presentation in the manuscript. In fact, we have already stated that the Russian Roulette estimator, Neumann series, and Hutchinson's trace estimator are due to prior works including [2], in Lines 156 and 158. However, for clear presentation, we will add a sentence to our paper, stating that our key contributions are the monotone formulation and the activation function CPila, while the Russian Roulette estimator, Neumann series, and Hutchinson's trace estimator are adapted from prior work such as [2].\n\n> Q4) The reported tables should have standard deviations as the numbers are very close.\n\nWe agree that running the experiment multiple times and reporting performance with standard deviations is ideal. However, due to the limited computing resources and the notoriously long training time of normalizing flows, previous works [1,2,3,4] reported their performance with a single run, and it is the de facto standard, as Reviewer BzEz mentioned. In addition, we observe that the training curves of i-DenseNets and Monotone Flows remain well-separated throughout the whole training session, which can be found in the updated supplementary material (.zip file). We will include the learning curves in the appendix of the final version, although Reviewer BzEz said a single run is a minor problem in the field of normalizing flows.\n\n> Q5) Line 131, $T$ hasn't been introduced yet.\n\nYes, it is a typo. Thank you for pointing this out. Line 131 should be written as “Theorem 1 then implies $F$ is invertible.” (not $T$).\n\n> Q6) Definition 4, $A$ is the set of what function? Be more precise.\n\nYes, $A$ could mean one of the function classes, $\\mathcal{G}_L$, $\\mathcal{R}_L$, $\\mathcal{I}_L$, or $\\mathcal{M}_L$. We will update the paper accordingly. Thanks for pointing this out.\n\n> Q7) Why do Residual Flows have higher precision---i.e. one more digit after the decimal---in Table 2 for Cifar10?\n\nWe quoted the numbers as reported in the respective papers. The Residual Flows paper reports the numbers up to three decimal places, whereas all the other papers report the numbers up to two decimal places. In the final version of our paper, we will display all numbers up to two decimal places.\n\n> Q8) In the original Residual Flows paper there was a precise characterization of the memory cost (Fig 3). 
How does Krasnoselskii-Mann iteration affect this?\n\nWe have conducted an ablation study on the code-level. The GPU memory consumption is decreased by about 1.3% when the fixed-point routines in the forward and backward passes are disabled while all other components are left intact.\n\n> Q9) An effect of the activation through training like Fig 8 in the ResFlow paper could be illuminating. Can you please consider including it?\n\nThanks for the suggestion. We have included in the updated supplementary materials (.zip file) the training curves from the ablation study which compares the models with and without the activation function CPila. We will update the paper accordingly.\n\n[1] Behrmann et al. Invertible residual networks. In ICML, 2019.\n\n[2] Chen et al. Residual flows for invertible generative modeling. In NeurIPS, 2019.\n\n[3] Durk P. Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In NeurIPS, 2018.\n\n[4] Perugachi-Diaz et al. Invertible densenets with concatenated lipswish. In NeurIPS, 2021.\n\n[5] Lu et al. Implicit normalizing flows. In ICLR, 2021.", " First of all we would like to express our sincere gratitude for taking your time to review our paper and having a positive view on our proposal. We will try to address your questions below:\n\n> Q1) Some notations in the paper are quite confusing. In Theorem2, the authors say $C_F = \\left(\\frac{\\mathrm{Id}+F}{2}\\right)^{-1} - \\mathrm{Id}$ is its Cayley operator, which is quite confusing and different from Line81. I think the definition in Line81 is the correct one and then the Definition 2 is the 'inverse' Cayley operator so $G = C_F$.\n\nYes, in Theorem2, Line 118 has a typo. Thank you for pointing this out. The equation should be written as $C_F = 2(\\mathrm{Id}+F)^{-1} - \\mathrm{Id}$. We will fix this typo in the final version of our paper.\n\n> Q2) The paper claims that they achieve the SOTA in density estimation task, but to my knowledge, there are some works not included in the experiment section such as Flow++ [1], VFlow [2], Scoreflow [3].\n\nYes, we acknowledge the comparison should have been made from a broader perspective, including Scoreflow, which is a hybrid of the diffusion model and normalizing flows. We appreciate your inspiring comment on the additional related works. We will tone down the SOTA claim and discuss the suggested papers in the experiment section of the final version.\n", " First of all we would like to express our sincere gratitude for taking your time to review our paper and having a positive view on our proposal. We will try to address all of your concerns below:\n\n> Q1) The paper says that The resulting model, Monotone Flows, exhibits an excellent density estimation performance and outperforms existing state-of-the-art normalizing flow models on multiple density estimation benchmarks (MNIST, CIFAR-10, ImageNet32, ImageNet64). This is true when uniform dequantization is used during training. However, the Flow++ paper ([34]) reports an even lower bits-per-dimension score when using variational dequantization. Therefore, the proposed model should also be trained with variational dequantization and compared to Flow++ before making the state-of-the-art claim.\n\nYes, we acknowledge that the comparison should have been made from a broader perspective. We appreciate your inspiring comment on variational dequantization. 
We will tone down the SOTA claim in the final version of our paper.\n\n> Q2) Each of the experiments was only run a single time, which makes it harder to assess the significance of the reported numbers. However, this is a standard practice in the field of normalizing flows because each training run can take a lot of time. So this is only a minor problem, given the state of the field.\n\nWe agree that it is ideal to run experiments multiple times in order to more rigorously assess the significance of improvement. However, as you have mentioned, normalizing flows usually take a significant amount of time to train, and it is a standard practice to report single-run results [1,2,3,4]. Nevertheless, we would like to note that the training curves of ablated models remain well-separated from the training curve of the full model throughout the whole training session, which can be checked in the updated supplementary material (.zip file).\n\n> Q3) If possible with time, it would be nice if the authors could run their model with variational dequantization on CIFAR10 for the rebuttal.\n\nWe would like to run our model with varitional dequantization but unfortunately, training normalizing flows takes a significant amount of time. We are sorry that it would not be possible to run additional experiments due to the limited time period.\n\n> Q4) Some parts of the definition of monotone and maximally monotone operators are a bit confusing. Regarding monotone and maximally monotone operators: On line 79, what is meant by $u \\in F(x)$, $v \\in F(y)$? From my understanding, $F(x)$ is an $n$-dimensional vector and not a set, so how can it have elements? The same applies on line 108: Given that $F$ is a function, what does it mean for not to be a proper subset of any monotone operator?\n\nThank you for the careful reading. In general, the function $F$ is considered as a single-valued function defined at every point. However, in monotone operator theory, we tried to be more general by allowing $F$ to be an “operator”, which is a subset of $\\mathbb{R}^n\\times\\mathbb{R}^n$, and may have some points where F is undefined or has multiple values. Following mathematical convention, we overload the notation $F(x)$ to mean the set $F(x) = \\lbrace y | (x,y) \\in F \\rbrace$. Please refer to [5, section 2.1] for more details.\n\n> Q5) Line 118: What is $G$ in the definition of $C_F$? Should there be an $F$ in the numerator?\n\nIt is a typo. The equation should be written as $C_F = 2(\\mathrm{Id}+F)^{-1} - \\mathrm{Id}$. We will correct the typo in the final version.\n\n[1] Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, and Jörn-Henrik Jacobsen. Invertible residual networks. In ICML, 2019.\n\n[2] Ricky T. Q. Chen, Jens Behrmann, David K. Duvenaud, and Jörn-Henrik Jacobsen. Residual flows for invertible generative modeling. In NeurIPS, 2019.\n\n[3] Durk P. Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In NeurIPS, 2018.\n\n[4] Yura Perugachi-Diaz, Jakub Tomczak, and Sandjai Bhulai. Invertible densenets with concatenated lipswish. In NeurIPS, 2021.\n\n[5] Ryu, E., & Yin, W. (2022). Large-Scale Convex Optimization: Algorithm Analysis via Monotone Operators. Cambridge: Cambridge University Press.", " The paper proposes a new type of ResNet-based Normalizing Flows. In contrast to prior studies, which required the Lipschitz constant $L$ of each layer to be less than 1, the authors use monotone operators, which they show to be strictly more expressive. 
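To spell out how such a block can be evaluated, here is my toy reading of the forward and inverse computations via fixed-point iteration (plain Banach iteration is shown, which converges because $\\mathrm{Lip}(G) = L < 1$ makes both update maps contractions; the paper's Krasnoselskii-Mann averaging serves the same purpose, and all names below are mine):

```python
import torch

def fixed_point(T, w, tol=1e-6, max_iter=500):
    # Banach iteration for a contraction T.
    for _ in range(max_iter):
        w_next = T(w)
        if (w_next - w).abs().max() < tol:
            break
        w = w_next
    return w_next

def forward_F(G, x):
    # F(x) = ((Id+G)/2)^{-1}(x) - x: solve (w + G(w))/2 = x, i.e. w = 2x - G(w).
    w = fixed_point(lambda w: 2 * x - G(w), x.clone())
    return w - x

def inverse_F(G, y):
    # F^{-1}(y) = ((Id-G)/2)^{-1}(y) - y: solve (w - G(w))/2 = y, i.e. w = 2y + G(w).
    w = fixed_point(lambda w: 2 * y + G(w), y.clone())
    return w - y

G = lambda z: 0.9 * torch.tanh(z)  # toy G with Lipschitz constant 0.9 < 1
x = torch.randn(4, 8)
assert (inverse_F(G, forward_F(G, x)) - x).abs().max() < 1e-4  # round trip
```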
A new activation function called Concatenated Pila (CPila) is also proposed. The suggested model is evaluated on multiple toy datasets as well as standard image datasets, outperforming the baseline i-DenseNet model. ### Strengths\n\n**Originality**\n\nThe formulation of monotone operators is something I haven’t encountered previously in the field. The concept seems intriguing and well-conceived.\n\n**Quality**\n\nI did not check the proofs in the appendix, but the mathematical theory in the main part is sound and comprehensible. The experiments and ablations follow the established standard of the field.\n\n**Clarity**\n\nThe paper is very clearly written and easy to read. The authors do a good job at separating the main mathematical theory from the details and the proofs which are presented in the appendix.\n\n**Significance**\n\nGiven the originality of the monotone formulation, the paper may inspire future work in the field of normalizing flows. The reported improvements are small, but consistent over multiple datasets.\n\n### Weaknesses\n\n* The paper says that *The resulting model, Monotone Flows, exhibits an excellent density estimation performance and outperforms existing state-of-the-art normalizing flow models on multiple density estimation benchmarks (MNIST, CIFAR-10, ImageNet32, ImageNet64).* This is true when uniform dequantization is used during training. However, the Flow++ paper ([34]) reports an even lower bits-per-dimension score when using variational dequantization. Therefore, the proposed model should also be trained with variational dequantization and compared to Flow++ before making the state-of-the-art claim.\n* Some parts of the definition of monotone and maximally monotone operators are a bit confusing. (Further details in the Questions section below.)\n* Each of the experiment was only run a single time, which makes it harder to assess the significance of the reported numbers. However, this is a standard practice in the field of normalizing flows because each training run can take a lot of time. So this is only a minor problem, given the state of the field.\n * If possible with time, it would be nice if the authors could run their model with variational dequantization on CIFAR10 for the rebuttal.\n* Regarding monotone and maximally monotone operators: On line 79, what is meant by $u \\in F(x), v \\in F(y)$? From my understanding, $F(x)$ is an $n$-dimensional vector and not a set, so how can it have elements? The same applies on line 108: Given that $F$ is a function, what does it mean for $F$ not to be a proper *subset* of any monotone operator?\n* Line 118: What is $G$ in the definition of $C_F$? Should there be an $F$ in the numerator?\n Yes, the limitations are adequately addressed.", " The paper \n* constructs normalizing flows based on monotone operators. Specifically, by Definition2, the paper defines $F(x)=(\\frac{Id+G}{2})^{-1}(x) - x$ for a $L$-Lipschitz ($L<1$) function $G$ (thus $G=C_F$ is the Cayley operator of $F$). Next from the Cayley operator identity $C_F+C_{F^{-1}} = 0$, $F^{-1}$ can be defined by \n$$F^{-1}(x) = C_{-G}(x) = (\\frac{Id-G}{2})^{-1}(x) - x.$$\nFor $G$, the authors use the architecture in i-DenseNets and propose a new activation function, CPila. For training and inference, log determinant can be computed via\n$$\\log \\det J_F = \\text{tr}[\\log (I-J_G) - \\log (I+J_G)]$$\nand estimated with some existing unbiased estimators. Training and inference schemes are also developed well using existing techniques in this work. 
\n\n* analyzes the expressive power of Monotone Flows theoretically, and compares with residual flows and implicit flows. Experimentally, the paper evaluates Monotone Flows on image density estimation tasks and shows the superiority to some normalizing flow models. \n\n\n\n **Strengths**:\n\nA. To the best of my knowledge, constructing normalizing flows with monotone operator is novel. \n\nB. Theoretic analysis of expressive power is valid. Experimental results also shows its superiority to some residual flows. Training and inference schemes based on existing techniques are also valid. Moreover, based on the observation that LipSwish suffers from the vanishing gradient problem, the paper proposes a new activation function CPila, which is also a novel contribution. Ablation studies also support the effects of CPila. \n\nC. I think Monotone flows is a valid and novel contribution to the community of normalizing flow models.\n\n**Weaknesses**:\n\nA. Some notations in the paper are quite confusing. In Theorem2, the authors say $C_F = (\\frac{Id+F}{2})^{-1} - Id$ is its Cayley operator, which is quite confusing and different from Line81. I think the definition in Line81 is the correct one and then the Definition2 is the 'inverse' Cayley operator so $G=C_F$. \n\nB. The paper claims that they achieves the SOTA in density estimation task, but to my knowledge, there are some works not included in the experiment section such as Flow++ [1], VFlow [2], Scoreflow [3]\n\n**Minor**:\n\n**Reference**\n\n[1] Ho, J., Chen, X., Srinivas, A., Duan, Y., and Abbeel, P. Flow++: Improving flow-based generative models with variational dequantization and architecture design. In International Conference on Machine Learning, pp. 2722– 2730, 2019.\n\n[2] J. Chen, C. Lu, B. Chenli, J. Zhu, and T. Tian. Vflow: More expressive generative flows with variational data augmentation. In International Conference on Machine Learning, pages 1660–1669. PMLR, 2020.\n\n[3] Song, Y., Durkan, C., Murray, I., and Ermon, S. Maximum likelihood training of score-based diffusion models. arXiv e-prints, pp. arXiv–2101, 2021. * As mentioned in the weakness.B, some works are omitted in the experiments. Limitations seen above. Societal impact: none", " This paper proposes Monotone Flows which uses monotone operator theory to parameterize Residual Flows. The authors also introduce a novel activation function in Pila and CPila that improves upon the LipSwish activation function. As a theoretical contribution, the authors demonstrate the Monotone Flows using the Cayley operator are able to express a larger family of functions when compared to vanilla Residual and Inverse Residual Flows. Empirically the authors show the benefits of Monotone Flows over standard baselines in qualitative samples and density estimation. Strengths\n- The approach to parametrize Residual Flows using Monotone Operators and Cayley Operators is both an interesting and novel direction. \n- The presented theory is both clear and integrates nicely with the overall goal of building a novel class of Normalizing Flow.\n- The ablation experiments help justify the choice of the activation function.\n- The experiments do enough to demonstrate that Monotone Flows are extremely expressive and powerful.\n- Theorem 4, is especially crisp at outlining the main advantages of the Cayley Formulation from a theoretical point of view.\n\nWeakness:\n- One of the claims in the abstract is that Monotone Flows outperform SOTA flows. 
Looking at the presented numbers and comparing them with the relevant literature, this is not true. Here is a non-exhaustive list of papers that outperform Monotone Flows (albeit using other advances that could also potentially be complementarily used here): Flow++ [1], Augmented Normalizing Flow [2], Densely Connected Normalizing Flows [3].\n- The authors should also include FFJORD [4] as a relevant baseline.\n- The use of Russian Roulette estimation, Neumann series, and Hutchinson's trace estimator for training and inference borrows much of what is already known in the flow literature. The authors should try to do a better job of disambiguating the main technical differences in Monotone Flows from the rest of the Residual Flow models. \n- The reported tables should have standard deviations as the numbers are very close.\n\n\nMinor:\n- line 131, T hasn't been introduced yet.\n- Definition 4, A is the set of what function? Be more precise.\n- Why do Residual Flows have higher precision---i.e. one more digit after the decimal---in Table 2 for Cifar10?\n\n[1] Ho, Jonathan, et al. \"Flow++: Improving flow-based generative models with variational dequantization and architecture design.\" International Conference on Machine Learning. PMLR, 2019.\n\n[2] Huang, Chin-Wei, Laurent Dinh, and Aaron Courville. \"Augmented normalizing flows: Bridging the gap between generative flows and latent variable models.\" arXiv preprint arXiv:2002.07101 (2020).\n\n[3] Grcić, Matej, Ivan Grubišić, and Siniša Šegvić. \"Densely connected normalizing flows.\" Advances in Neural Information Processing Systems 34 (2021): 23968-23982.\n\n[4] Grathwohl, Will, et al. \"Ffjord: Free-form continuous dynamics for scalable reversible generative models.\" arXiv preprint arXiv:1810.01367 (2018). 1. In the original Residual Flows paper there was a precise characterization of the memory cost (Fig 3). How does the Krasnoselskii-Mann iteration affect this?\n2. An effect of the activation through training like Fig 8 in the ResFlow paper could be illuminating. Can you please consider including it? N/A", " The paper proposes a bigger class of invertible residual-type blocks based on monotone operators. Theoretically, the paper shows that, with the same Lipschitz constraints on the network, their blocks can express more functions than the simple residual formulation. They also provide algorithms to estimate the log-det and compute gradients. They then introduce the Pila activation to tackle vanishing gradients. Experimentally, they show the proposed flow achieves better density estimation results than other flow-based methods on some standard image datasets. The paper is very clearly written. The application of monotone operators to invertible flows is novel to my knowledge. This paper is interesting to researchers who work on theory and applications of flow-based models, in the area of generative modeling. \n\nStrengths:\n- The biggest strength is that they derived $\mathcal{M}_L$ with the monotone formulation, and show this class is rigorously larger than the ones in i-ResNet or implicit NFs. \n- The derivations of the log-det estimator and back-prop are mostly standard. These provide ways to train their model with similar techniques as previous work (e.g. ResFlow). \n- They introduced the new activation function called Pila. I am not an expert in activation functions but it seems to work well according to Table 3. \n- They show better NLL than other flow-based models on several standard image datasets. 
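The log-det estimation these reviews repeatedly reference can be illustrated with a short sketch. This is a minimal, hedged illustration, not the paper's implementation: it uses a fixed series truncation (the unbiased variants mentioned above would replace this with Russian-roulette truncation), a single Rademacher probe, and the identity $\log \det J_F = \mathrm{tr}[\log(I-J_G) - \log(I+J_G)]$ quoted in the earlier summary; `logdet_estimate`, `g_fn`, and the toy map `W` are all illustrative names. Expanding both logarithms in power series (valid since $\mathrm{Lip}(G) < 1$), the even powers of $J_G$ cancel and the odd powers double, leaving $-\tfrac{2}{k}\,\mathrm{tr}(J_G^k)$ for odd $k$.

```python
import torch

def logdet_estimate(g_fn, x, n_terms=10):
    """Single-probe estimate of tr[log(I - J_G) - log(I + J_G)] at x.

    Each trace tr(J_G^k) is estimated with one Rademacher probe v via
    repeated vector-Jacobian products; fixed truncation, so this is biased.
    """
    x = x.clone().requires_grad_(True)
    gx = g_fn(x)
    v = torch.randint_like(x, 0, 2) * 2 - 1              # Rademacher probe
    w, est = v, torch.zeros(())
    for k in range(1, n_terms + 1):
        w = torch.autograd.grad(gx, x, w, retain_graph=True)[0]  # w <- J_G^T w
        if k % 2 == 1:                                   # even powers cancel
            est = est - (2.0 / k) * (w * v).sum()        # E[v^T J^k v] = tr(J^k)
    return est

W = 0.2 * torch.randn(4, 4)                              # toy Lipschitz G (assumption)
print(logdet_estimate(lambda z: 0.5 * torch.tanh(z @ W), torch.randn(4)))
```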
\n\nWeakness:\n- As mentioned in Appendix F, their model has similar weaknesses as previous models such as computation and generation quality. \n- In experiments, improvement on density estimation is very marginal, and cannot compete with other generative methods outside normalizing flows. For example, some score-matching based models achieved NLL<2.5 on CIFAR10.\n- The experiments lack some other metrics such as sampling speed and FID of generated samples. - Thm 4 says $\\mathcal{M}_L=\\frac{1+L^2}{1-L^2} \\mathcal{R}_K, K=\\frac{2L}{1+L^2}$. Since $K<1$ when $L<1$, is choosing a function from $\\mathcal{G}_L$ and applying your formulation equivalent to choosing a function from $\\mathcal{G}_K$ and applying the residual formulation, if we disregard the constant $\\frac{1+L^2}{1-L^2}$? If these are the same then what is the advantage of your method? If there is a difference then what is the key difference here?\n- Thm2: what is the relationship between $F$ and $G$? Is there a typo here?\n- Sec3.4: what is the advantage of using concatenation? \n- What is the complexity of your algorithms (log-det estimator, forward and back-prop)? How is it compared to previous methods especially ResFlow? \n- What is the sampling speed of your method in experiments? What about quality evaluations such as FID? These numbers will make the experiment section more complete. \n\n\n----------------------------------------------\n\nAfter rebuttal:\n\nThanks for the author response. All my questions are answered. I think this is a very solid paper. I'll raise my score to 7. Yes, they discussed limitations in the appendix. " ]
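A minimal 1-D numerical sketch of the monotone construction summarized in these reviews, assuming the resolvent $((Id+G)/2)^{-1}$ is computed by plain fixed-point iteration $y \leftarrow 2x - G(y)$ (a contraction whenever $\mathrm{Lip}(G) < 1$; the paper reportedly uses Krasnoselskii-Mann iteration instead). The toy $G$ below is an assumption, not the paper's network; the sketch checks the inverse identity $F^{-1}(F(x)) = x$ implied by $C_F + C_{F^{-1}} = 0$.

```python
import torch

def G(y):                                  # toy 0.5-Lipschitz map (an assumption)
    return 0.5 * torch.tanh(y)

def resolvent(h, x, n_iter=60):
    # Solve (y + h(y)) / 2 = x, i.e. y = 2x - h(y), by Banach iteration;
    # this converges because Lip(h) < 1.
    y = x.clone()
    for _ in range(n_iter):
        y = 2 * x - h(y)
    return y

def F(x):                                  # F(x) = ((Id + G)/2)^{-1}(x) - x
    return resolvent(G, x) - x

def F_inv(x):                              # F^{-1}(x) = ((Id - G)/2)^{-1}(x) - x
    return resolvent(lambda y: -G(y), x) - x

x = torch.linspace(-2.0, 2.0, 5)
print(torch.allclose(F_inv(F(x)), x, atol=1e-5))   # True: F_inv inverts F
```

For the Theorem 4 constants quoted in the last question set, note that $K = 2L/(1+L^2) < 1$ follows from $(1-L)^2 > 0$ for $L \in (0,1)$, while the prefactor $(1+L^2)/(1-L^2)$ blows up as $L \to 1$; this is exactly why the reviewer asks whether the gain over the residual class $\mathcal{R}_K$ is more than a scalar rescaling.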
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "lK6AkAK4dK_", "UaY0d0GpBnc", "nips_2022_USoYIT4IQz", "yJDA-iu5cyi", "AOAlSsurbx-", "E_35onA-aTY", "F0aYdHk5z3P", "nips_2022_USoYIT4IQz", "nips_2022_USoYIT4IQz", "nips_2022_USoYIT4IQz", "nips_2022_USoYIT4IQz" ]
nips_2022_Ya9lATuQ3gg
Large-Scale Retrieval for Reinforcement Learning
Effective decision making involves flexibly relating past experiences and relevant contextual information to a novel situation. In deep reinforcement learning (RL), the dominant paradigm is for an agent to amortise information that helps decision-making into its network weights via gradient descent on training losses. Here, we pursue an alternative approach in which agents can utilise large-scale context-sensitive database lookups to support their parametric computations. This allows agents to directly learn in an end-to-end manner to utilise relevant information to inform their outputs. In addition, new information can be attended to by the agent, without retraining, by simply augmenting the retrieval dataset. We study this approach for offline RL in 9x9 Go, a challenging game for which the vast combinatorial state space privileges generalisation over direct matching to past experiences. We leverage fast, approximate nearest neighbor techniques in order to retrieve relevant data from a set of tens of millions of expert demonstration states. Attending to this information provides a significant boost to prediction accuracy and game-play performance over simply using these demonstrations as training trajectories, providing a compelling demonstration of the value of large-scale retrieval in offline RL agents.
Accept
This paper uses nearest neighbor methods to retrieve and exploit information from similar games during planning, whilst playing the game of Go (although the method is extensible to other environments which support MuZero-style agents). The reviewers found this approach interesting and ultimately worth publishing, although there was a range of scores. However, the emerging consensus during the review phase seemed to lean towards acceptance, a recommendation I am happy to support, having read the discussion.
val
[ "fhyj8wqrppS", "_L5TOR5QWxd", "QbCWNEUSwIm", "6F0V7ID6Hcf", "6Bs87cX8FVt", "p2mG3uYsXpy", "kTLJS81kISt", "JZstx1fY-G", "oxmQWYza7t", "FNgcnFVhKOT", "iMPS0Xkay-Y", "GR6SvKdgZQW" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comments. We continue to strongly believe that Go is if anything a particularly challenging domain for Offline RL. We note that the existence of superhuman Atari agents does not preclude studying offline RL in Atari (as in the RL Unplugged suite).", " Thank you for your helpful responses! After reading the author comments as well as the concerns from other reviewers I currently intend to keep my weak accept rating. Fundamentally I think this paper would be much stronger if:\n\n1. Instead of using an AlphaGo-generated training set, you instead used a human-expert-generated dataset. While I appreciate your note regarding injecting noise at training time and agree that it is suggestive, I do not find it to be strong enough evidence to allay my concerns.\n2. As suggested by reviewer LtnM, you also showed results using different environments (e.g. Atari). That Go may, hypothetically, be a highly challenging environment for retrieval doesn't seem to be a compelling reason to only show results on Go. While I understand the concern that you may simply be able to recall a near-identical experience, it seems like this simply suggests that datasets should be split intelligently or that non-Atari environments should be used (e.g. the BabyAI tasks in the MiniGrid environment).\n\nIf this paper is accepted, I would also echo reviewer LtnM's sentiments that the title be made more narrow/specific.", " My rating remains \"borderline reject.\" **If this paper gets accepted anyway, then I request that (1) the title be changed to be more modest (eg \"Retrieval for Reinforcement Learning in 9x9 Go\"), and (2) figures 4 and 5 be modified to include the parameter count of the retrieval network.**", " The title of this work is grand -- \"Large-Scale Retrieval for Reinforcement Learning\" -- and the abstract claims that the experiments:\n> provid[e] a compelling demonstration of the value of large-scale retrieval in offline RL agents.\n\nHowever, the experiments do not support this claim. Go is already a solved problem: we already have superhuman Go playing RL agents. Furthermore, Go is not an offline RL benchmark. The authors don't compare against any offline RL algorithms on the actual benchmarks (eg, D4RL or RL Unplugged) where offline RL algorithms are being studied.\n\nIn another response, the authors argue that they chose Go instead of offline RL benchmarks because\n> Go... has a much larger state space than e.g. Atari. We hypothesise that this should provide the strongest challenge to retrieval-based mechanisms\n\nIf the authors' hypothesis is right and Atari is easy for retrieval-based methods, then they should demonstrate this with experiments. The authors should actually show that retrieval-based methods lead to an improvement in offline RL benchmarks, such as RL Unplugged, that use Atari.\n\nI appreciate that the authors clarify that they are using MuZero Unplugged as the baseline. It would be helpful if they modify the figures so that they actually refer to MuZero Unplugged by name rather than just saying \"baseline.\" I also appreciate the author's discussion that the ~7% win-rate boost they achieve is nearly 2 dan levels. I now believe this win-rate boost is more significant than I originally thought.\n\n> these parameters are fixed in pretraining and not further updated. Hence they cannot change to improve the training objective\n\nEven if the parameters are fixed after pretraining, they should still be counted as they are needed for inference. 
When we pretrain a large language model with billions of parameters and then finetune only the last layer of the language model, we still need to count all of the billions of model parameters in our final analysis.\n\nI remain \"borderline reject\" because the experiments in the paper don't support the paper's claims (see my discussion above).", " Thank you for your review. \n\n*About the choice of the Go domain:*\n\nWe choose to use Go not because it is a domain where only offline RL is possible, but rather because it is a challenging test bed where basic memorisation is not sufficient. Other offline research domains, such as RL Unplugged, are also studied using both online and offline RL since e.g. Atari/MuJoCo simulators are available. Moreover, we believe that these domains are more limited than Go for studying retrieval due to the lower complexity of the state space - there are more opportunities to directly match expert data to a given state, which is a less convincing test of retrieval even if it leads to score improvements. \n\n*Why did the authors not directly use MuZero Unplugged?:*\n\nWe did use Muzero Unplugged as our baseline. In the case that we studied, there is no difference between the MuZero algorithm and MuZero Unplugged - it simply labels whether it is used Offline or Online.\n\n*About capacity:*\n\nThe parameters g_phi are utilised in retrieval. However, these parameters are fixed in pretraining and not further updated. Hence they cannot change to improve the training objective. Nevertheless, in the most conservative assessment of considering $\\phi$ as extra usable/trainable parameters for the main network, this would shift the relative retrieval model size by ~0.4. It is clear from Figs. 4&5 that this does not change the conclusions - all retrieval model sizes will still perform significantly better than their capacity matched baselines in all metrics apart from the win rate for the very smallest 0.5 model size. \n\n*About the opponent:*\n\nA 5% win-rate increase against our reference opponent on 9x9 is significant given the opponent is very strong (amateur play was referring to dan level which requires years of human experience, Pachi in 9x9 plays at high-dan level if not pro level). Go ranks are denoted in terms of “1-dan”, “2-dan” etc, separated by steps of 30 Elo (a relative strength metric) - a boost in win rate of ~7% as we achieve using retrieval constitutes an increase of 50 Elo, so almost 2 dan levels. Our opponent, Pachi, is a well-known and standard Go program used as a reference player in many publications (e.g for AlphaGo, it is also a standard opponent in the OpenAI Gym Go environment). \n", " We thank you for your review. You are right that the proposed approach should be able to benefit from other forms of data than trainable trajectories in the same format as agent trajectories. We are excited to explore these forms of data in future work. In this first investigation, we decided to focus on a scenario that most clearly highlights the unique advantages of retrieval, while minimising additional complexities: retrieval from high quality trajectory data. In this scenario, an agent can straightforwardly be trained by performing behavior cloning or offline RL on this trajectory data. It is therefore less obvious that a retrieval-based approach will have an edge. However, as we show, retrieval allows for a distinctly different training modality, in which other trajectory data are directly used to inform decisions about related trajectories. 
BC and Offline RL do not have this capacity. As we find, this leads to a significant boost in performance, while further enabling performance improvements without retraining by augmenting the retrieval dataset. \n\nTo clarify the concern about baseline capacity, this is already controlled for in the experiments, since the retrieval-disabled model has the same number of trainable parameters as the retrieval-enabled one (the retrieval-disabled model, i.e., the baseline, can leverage the neighbour-processing part of the network as extra capacity, with its input being only the main observation). We also look at the condition where the baseline has more capacity than the retrieval model in Fig 5b, where, for example, the baseline with model size 2 only just matches the retrieval model with model size 1.\n\nLearning the query vector is outside the scope of this work, but will be an important aspect of subsequent studies and will enable non-trajectory auxiliary data to be better leveraged. The REALM (retrieval in language modelling) paper we cited ([12]) shows one possible implementation for learning queries.", " Thank you for your review.\n\n### Reviewer question - Why not use AlphaZero?\n\nAs the reviewer notes, the reason for our choice of dataset in this domain was purely practical - we had access to expert-level AlphaZero 9x9 Go gameplay data. However, we believe a strong argument for the broader validity of our results arises from our choice of 9x9 Go as a domain. Unlike for simpler environments, we specifically chose Go as it has a much larger state space than e.g. Atari. We hypothesise that this should provide the strongest challenge to retrieval-based mechanisms, as it is not sufficient to just retrieve a near-identical experience from the dataset; instead, generalisation from related but distinct experiences to a novel setting is required. Indeed, as we discuss in the paper, there is a strong distributional shift from the training data to our evaluation regime, but we nevertheless see a benefit from retrieval. As part of our training process, we actually introduce noise into our retrieved neighbours, which in a way emulates the noisier dataset that the reviewer alludes to - training the model with this noisy data makes it more effective in novel situations. This gives us confidence that our approach is likely to also prove useful in other domains and with more mixed datasets. We are excited to test and extend our retrieval implementation in new domains to validate this in subsequent works.\n\n### Reviewer comment - scope of this study\n\nWe did not endeavour to create a false impression of the scope of this work. We believe that our study represents a concrete initial step towards an exciting longer-term vision, and simply aim to motivate our work in this way (while being clear about the scope of what we study). We have modified our abstract to clarify that in this work we study offline reinforcement learning:\n\n> We study this approach for *offline reinforcement learning (RL) in 9x9 Go*, a challenging game for which the vast combinatorial state space privileges generalisation over direct matching to past experiences [….] Attending to this information provides a significant boost to prediction accuracy and game-play performance over simply using these demonstrations as training trajectories, providing a compelling demonstration of the value of large-scale retrieval in *offline RL agents*. 
\n\nWe note that, despite the agent being trained using Offline RL, we do see evidence of online policy improvement *without further training* when games played by the agent are incorporated into the retrieval dataset. \n\n### Reviewer further questions\n\n* In Figure 1, we want to emphasise that there *is* gradient flow, in contrast to previous works using hand-crafted methods for neighbour synthesis. We therefore chose to have explicit gradient propagation lines. We believe a stop-gradient indicator would not convey this point as clearly.\n* Learning the query vector is outside the scope of this work, but will be an important aspect of subsequent studies. The REALM (retrieval in language modelling) paper (citation [12]) shows one possible implementation for learning queries.\n* All observations from a given game are within the same portion of the dataset split. We have clarified this in the manuscript.\n* It is not possible to give a single number for the number of MCTS rollouts that approximately equal one retrieved neighbour, as it depends on the baseline you start from. The results in Fig. 10 suggest that, in this particular regime, adding a single retrieved neighbour is approximately equivalent to an additional 10 MCTS simulations.\n* AlphaZero agent architectures trained in our codebase achieve similar accuracies to our baseline when matched to the same parameter count. \n* We agree with the reviewer that it is interesting that networks trained with retrieval show a residual performance gain even without neighbour inputs. As the paper states, this is only when using MCTS simulations, showing that the network is still sensitive to these inputs. We hypothesise that the neighbour inputs favourably alter the training dynamics, but ultimately a fraction of the information gained from these neighbours is predictable from the base observations and hence is amortised by the network. We are interested in further investigating this effect in subsequent work.\n* The use of large-scale retrieval in NLP is a rapidly evolving field. We cited the most relevant papers that we were aware of at the time (cf. [7] and [12] and pointers from these papers), but we will investigate whether there are key citations missing. \n", " Thank you for your review and comments. \n\nAs you observe, beyond a certain point, adding more neighbours leads to a decrease in agent performance. This phenomenon is also observed in e.g. the RETRO language modelling work. We hypothesise that this is likely due to the bottleneck after the neighbour processing, at which each neighbour's data stream is summed together. Beyond a certain number of neighbours, the fixed channel capacity may not be sufficient to prevent unwanted collisions between these data streams. We intend to investigate this in more detail in subsequent studies.", " This paper mainly experimented on Go. It leverages fast, approximate nearest neighbor techniques in order to retrieve relevant data from a set of tens of millions of expert demonstration states. This information provides a significant boost to prediction accuracy and game-play performance over simply using these demonstrations as training trajectories, providing a compelling demonstration of the value of large-scale retrieval in reinforcement learning agents. 1. The paper is well organized and easy to follow.\n\n2. The motivation of this paper is really interesting. It shows that retrieval can help RL training. It also shows that retrieval can be effectively combined with model-based search. 
The benefits from retrieval and search are synergistic, with increasing numbers of retrieved neighbors and increasing simulations both leading to performance increases.\n\n 1. Figure 5a shows the retrieval networks using varying numbers of retrieved neighbors and a baseline non-retrieval network. However, I found that a larger number of neighbors (10 vs 20) actually hurts the performance. Do you have an explanation for that? None", " The goal of the paper is to enable an RL agent to retrieve large amounts of information from external sources using nearest neighbor queries over experience datasets. As a test bench the authors rely on the game of Go, where the agent can retrieve information from an auxiliary source of 50M board-state observations. The work studies the trade-off between leveraging such external information repositories and storing all necessary information in the network as weights to optimize a policy in an offline RL setting. An extension for online RL is left open for future work. \nTo retrieve information, the model first precomputes embeddings for all possible observations based on hidden activations of a surrogate model, which are compressed by PCA. The embeddings are used as query and key for NN search over an external dataset. Lastly, the model output is then generated using the NN results and the current observation. Strengths:\n- In my opinion, augmentation of RL agents with \"any kind of\" external data is a great idea, since it can deliver benefits for real-world use cases. For instance, by altering the external data an operator can have direct and immediate influence on a deployed RL policy. It would be interesting to further study how the auxiliary data influences the behavior of the RL policy. \n- Evaluation against a human competitor and analysis over different retrieval data sizes demonstrate a strong benefit.\n\nMain concern: Evaluation does not support some of the claims of the paper\n- The authors claim that potential auxiliary sources can be diverse, e.g. text or video, and that the approach is versatile regarding the type of auxiliary data for training. However, in the experiments they only use trajectories as auxiliary data. To be fair, other kinds of external data should be included in the evaluation to assess whether this versatility claim really holds and how general the approach is. \n- For estimating the impact of the retrieval, the paper only compares a retrieval-enabled model against a retrieval-disabled model with the same architecture. However, how do we know that the improvements are not only due to an increased parameter space? The evaluation would benefit from a comparison where only the number of parameters in the retrieval-disabled model is increased, to study the claim that a retrieval-augmented network can utilise more of its capacity for computation instead of having to amortise all relevant information into its network weights. \n\nMinor concerns:\n- A lot is left for future experiments, such as the online RL setup or learning of query vectors, indicating that the paper is still preliminary. How would one need to extend the presented approach to incorporate non-trajectory auxiliary data? yes", " This work investigates the impact of integrating a large-scale, nearest-neighbors retrieval mechanism into a MuZero-inspired model-based RL agent with application to playing the game of Go. In particular, a large dataset of 50M Go board states is first embedded by a pretrained network. 
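The embed-compress-search step described in the summary above can be sketched as follows. This is a minimal stand-in rather than the paper's implementation: `embed`, the PCA projection matrix, and the brute-force top-k (standing in for the approximate-NN index) are all illustrative assumptions.

```python
import torch

def build_index(states, embed, pca):
    # Precompute one compressed key per board state in the retrieval set.
    with torch.no_grad():
        keys = embed(states) @ pca                      # (N, d_pca)
    return torch.nn.functional.normalize(keys, dim=-1)

def retrieve(obs, keys, embed, pca, k=16):
    # The same frozen embedder and PCA projection produce the query; a real
    # system would use an approximate-NN index instead of a dense matmul.
    with torch.no_grad():
        q = torch.nn.functional.normalize(embed(obs) @ pca, dim=-1)
    return (keys @ q.squeeze(0)).topk(k).indices        # neighbour ids

embed = torch.nn.Linear(81, 256).eval()                 # toy embedder, 9x9 boards
pca = torch.randn(256, 32) / 16                         # stand-in PCA projection
keys = build_index(torch.randn(10_000, 81), embed, pca) # toy retrieval dataset
print(retrieve(torch.randn(1, 81), keys, embed, pca, k=4))
```

The retrieved neighbour states (and their metadata) would then be fed alongside the current observation into the agent, as both summaries describe.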
During training, an input board state (representing the current state of the game) is itself embedded by the same network and used to perform a fast approximate nearest-neighbor (NN) lookup across the retrieval dataset. The top-K NNs are (along with relevant metadata) given as input to the model-based agent who may use the context of these games to both improve its prior policy as well as inform its rollouts during Monte-Carlo tree search. A number of experiments show that this retrieval-based agent (RBA) outperforms related baselines trained without retrieval and that, interestingly, the RBA agent can even adapt, without additional training, to new retrieval datasets.\n ## Strengths\n\nResearch on semi-parametric approaches that enable agents to reuse training data beyond what they can distill into their model weights is an exciting direction that is underdeveloped in the reinforcement learning area. Differing from past work in this direction, this work integrates retrieved experience using a simple, but highly generic, mechanism that can, in principle, be extended to retrieving information beyond training data (e.g. an embodied agent might retrieve YouTube videos related to completing some household task). The empirical results presented are limited in scope but are convincing in that there is clear evidence that the RBA agent makes effective use of the retrieved data and that this leads to measurable performance improvements. Finally, the paper is well-written and all details are well explained.\n\n## Weaknesses\n\nWhile I am generally positive about this work there are some weaknesses, detailed below, that keep me from a more enthusiastic recommendation.\n\n- Why not use AlphaZero?\n\nAs noted on line 189, the training dataset used in this work is generated by an AlphaZero-style agent which raises a somewhat unfair question: if AlphaZero-level performance is what we're attempting to achieve, why not just use the AlphaZero agent? I.e. what is to be gained from training a model to imitate a model that we already have? I assume the rebuttal to this is that this dataset choice was purely practical and that anyone looking to apply these ideas to a new domain where no such expert is available should instead use the likely human-generated, expert demonstrations they have available. My worry is then that the use of this AlphaZero-generated dataset may not be a fair stand-in for multi-modal, error-rich, human-generated expert datasets; because of this, the performance of the retrieval-based agent using this artificially generated dataset may be a \"best-case\" and others may find achieving such gains more difficult.\n\nWhile the results from Sec. 3.4 with the augmented retrieval dataset provides some evidence that the above is not the case, I do believe that this work would be much stronger if it had used a human-generated expert dataset or if there were additional results applying this method to other domains.\n\n- Grand aims but limited scope\n\nThis paper (especially the title and abstract) suggests a grand scope: reinforcement learning augmented with retrieval towards a \"vision in which an agent can flexibly draw on diverse and large-scale information sources.\" As one reads deeper into the paper, however, the scope is iteratively narrowed until we are left with an agent that is trained to play 9x9 Go with expert supervision in an off-policy manner with a uni-modal dataset being used for retrieval (namely the training dataset). 
This is a failure in expectation management; while I still find the work interesting, it is difficult not to feel let down going from the grand vision to the reality. This work would be significantly stronger if any step was made along:\n\n1. Using on-policy training data.\n2. Using multi-modal data or, at the very least as noted above, not artificially generated expert trajectories.\n3. Showing applicability to other games / environments.\n\n## Line-by-line comments\n\n- Fig. 1\n * I find the use of lines to denote gradient flow to be a little confusing. I would suggest instead only showing the forward computation and using a symbol along these lines to denote a stop-gradient (often represented as a line break, e.g. ---| |---> ).\n\n- Lines 142-143\n * End-to-end learning of the query vector seems challenging, can you expand on how you intend to accomplish this?\n\n- Lines 198-199\n * To confirm, does this mean that all observations from a particular game will be in the same portion of the dataset split? I.e. if o_t is in split 1 then o_{t+1} will also be?\n\n- Lines 224-225\n * Is it possible that the gap between the retrieval-based method and the baseline can be closed simply by doing more MCTS rollouts? If so, how many are needed to do so? I believe Fig. 10 begins to get at this question but not completely. It would be interesting to know how many rollouts approximately equal one retrieved neighbor.\n \n- Fig. 4\n * If you evaluate the AlphaZero agent on this dataset, what is its Top-1 accuracy?\n\n- Lines 271-273\n * This is quite surprising to me, doesn't this suggest that the majority of your gains over the baseline are actually not related to retrieval at test-time? I would be interested in seeing more analysis of this.\n\n- Lines 275-291\n * While I am not an expert in this domain, it seems as though this related work is missing a discussion of how people use retrieval-based ideas outside of RL. E.g. a quick search on Google Scholar shows a large number of works related to building NLP models with access to large-scale knowledge bases. My main concerns in this work are outlined in the weaknesses section and largely center around the applicability of this method to other domains and more diverse retrieval datasets. I would be happy to increase my rating if given strong arguments addressing these weaknesses or if new relevant results can be presented in the rebuttal period. I have also listed a number of questions in the \"Line-by-line comments\" section in the above answer.\n There is some note of limitations throughout the paper but no dedicated section for their discussion. I believe there is room to more emphasize some of the weaknesses I've addressed above (assuming I have not identified them incorrectly). I don't see any direct potential for negative societal impact from this work.\n", " The authors propose a method for retrieving relevant training experience and then making decisions using that experience as context. They run experiments in the game of Go, showing that when they equalize model sizes, their model looking at retrieval information performs better than a baseline method that doesn't see retrieval information. Specifically, the retrieval information helps predict values (Figure 4) and helps win the game (Figure 5). ## Strengths\nThe motivation of this work – conditioning on prior experience – is creative. 
I am not aware of any state-of-the-art algorithms that condition on prior experience, so it would be neat to see that this helps in offline RL.\n\nAs far as I can tell, the authors propose a method that can easily be generalized to other domains. If the authors’ method helps in diverse settings, that would be significant!\n\n## Weaknesses\nMy major issue with this paper is that I don’t see the significance of the authors’ experimental setup. The authors motivate their work via offline RL, but then they study the game of Go. Go is precisely the type of domain where we _don’t need offline RL_ because we can easily simulate the environment dynamics! If the authors want to demonstrate the relevance of their method for offline RL, they should use an offline RL benchmark such as D4RL or RL Unplugged. This would allow them to compare against other offline RL algorithms in suitable offline RL environments.\n\nAs it stands, the authors only compare against a baseline that they themselves came up with. However, there is little analysis to justify that this is a good baseline. Why did the authors not directly use MuZero Unplugged ([36] in the authors’ list of references) as a baseline? Is this not possible?\n\nI am also unconvinced that the proposed method does that much better than the baseline. Figure 5 would be the most important figure in the paper, showing that the proposed method helps win the game. However, if I understand correctly, the “model size” on the x-axis of Figure 5 is an unfair comparison because it is leaving out the parameter count of $g_\\phi$ which is needed for retrieval. It also seems that the largest model size only shows a 5% increase in winrate over the baseline. This doesn’t seem very impressive to me as the opponent is a fixed program with “strong amateur level of play” (and I’m also not confident in the baseline).\n\nFinally, I think the title of the work is overclaiming because it is too broad. Since the authors only evaluate in Go, I think the title should make this clear, e.g., “Large-Scale Retrieval for Reinforcement Learning in the Game of Go.”\n\n## Minor issues / typos\nFigure 2 shows $D_r$ and describes it as having “keys $k$”. I believe the keys $k$ are generated using $g_\\phi$, but there is no indication of this in the figure. This confuses me.\n\n## I Request Title and Figure Changes\nUpdate after author response: My rating remains \"borderline reject.\" **If this paper gets accepted anyway, then I request that (1) the title be changed to be more modest (eg \"Retrieval for Reinforcement Learning in 9x9 Go\"), and (2) figures 4 and 5 be modified to include the parameter count of the retrieval network.** Why do you consider an offline RL version of Go, a game where we can easily simulate environment dynamics and already have superhuman algorithms such as MuZero? Why did you not use an offline RL benchmark such as D4RL or RL Unplugged?\n\nWhy did you not directly use a baseline from prior work? Is MuZero Unplugged not directly applicable to your setting? While the title of the work makes it seem like a general-purpose RL study, the experiments are only in the game of Go. The authors motivate the work via offline RL, but Go is a setting where it is very easy to simulate environment experience, so we don't need offline RL in Go." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 4 ]
[ "6F0V7ID6Hcf", "kTLJS81kISt", "6Bs87cX8FVt", "6Bs87cX8FVt", "GR6SvKdgZQW", "FNgcnFVhKOT", "iMPS0Xkay-Y", "oxmQWYza7t", "nips_2022_Ya9lATuQ3gg", "nips_2022_Ya9lATuQ3gg", "nips_2022_Ya9lATuQ3gg", "nips_2022_Ya9lATuQ3gg" ]
nips_2022_88_wNI6ZBDZ
Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach
Deep neural networks often suffer from poor generalization caused by complex and non-convex loss landscapes. One of the popular solutions is Sharpness-Aware Minimization (SAM), which smooths the loss landscape via minimizing the maximized change of training loss when adding a perturbation to the weight. However, we find the indiscriminate perturbation of SAM on all parameters is suboptimal, which also results in excessive computation,~\emph{i.e.}, double the overhead of common optimizers like Stochastic Gradient Descent~(SGD). In this paper, we propose an efficient and effective training scheme coined as Sparse SAM (SSAM), which achieves sparse perturbation by a binary mask. To obtain the sparse mask, we provide two solutions which are based on Fisher information and dynamic sparse training, respectively. In addition, we theoretically prove that SSAM can converge at the same rate as SAM,~\emph{i.e.}, $O(\log T/\sqrt{T})$. Sparse SAM not only has the potential for training acceleration but also smooths the loss landscape effectively. Extensive experimental results on CIFAR10, CIFAR100, and ImageNet-1K confirm the superior efficiency of our method to SAM, and the performance is preserved or even better with a perturbation of merely 50\% sparsity. Code is available at \url{https://github.com/Mi-Peng/Sparse-Sharpness-Aware-Minimization}.
Accept
This paper proposes a new scheme to improve the computational efficiency of the SAM optimizer. The original SAM requires assessing the loss value at a perturbed point. The perturbation lives in the full parameter space. This paper shows that computing the perturbation in every direction is not necessary. By only selecting a sparse subset of the parameters to undergo a perturbation, the authors are able to maintain the original generalization performance of SAM while significantly reducing the number of FLOPS. The paper also presents a convergence analysis for the proposed scheme. There was an active discussion between authors and reviewers, and the authors provided very thorough responses to the questions and issues raised by the reviewers. As a result of this, multiple reviewers increased their original scores. In the end, 3 out of 4 reviews are on the positive side, and the remaining one is a borderline reject. In concordance with the majority of the reviewers, I believe improving the computational cost of SAM is of huge practical interest, and this paper is a step in this direction. I recommend accept. As the thorough replies from the authors contain interesting details, I strongly suggest that the authors include these details in their submission (perhaps in the supplementary material).
train
[ "f_qElCoEags", "k6vzSXUG9jS", "WY11JyKGsQc", "IcJzL5QfML9", "lVBc-t7FBvk", "HtUS3R_usz", "OPFozuHC_O", "I8ASUvzgB9", "BAFHKJ2ETfY", "KgyuYnUVWTC", "EDVGJaecPsbP", "DSj5YCtertH", "2gTU92j7M3q", "kterOJSghnSD", "m5MYUSlLR1I", "oHs-ctrB0m", "pH0s9GyVkPx", "HSyTEc5YM76", "ed-lcCrSo_4" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you for your reply. Your suggestions make our paper better. Using Pytorch API \"requires_grad\", we extend our SSAM-F into Block-wise SSAM-F, i.e., the unperturbed weights do not compute the gradient and save the training time. The Block-wise SSAM-F achieves the comparable performance on CIFAR and the training time decreases. Compared to the Block-wise SSAM-F, the extra time for computing Fisher Information is negligible.\n\n | Model | Datasets | Optimizer | Sparsity | Test Acc | Training for One Epoch(Speed up) |\n | :--: | :--: | :--: | :--: | :--: |:--: |\n |ResNet18|CIFAR10|SAM| / | 96.83% | 13.60s(x1) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 1% | 96.93% | 13.74s(x0.989) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 5% | 96.76% | 13.90s(x0.978) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 10% | 96.86% | 13.83s(x0.983) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 25% | 96.84% | 14.05s(x0.967) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 50% | 96.80% | 13.69s(x0.993) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 80% | 96.64% | 13.61s(x0.999) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 90% | 96.55% | 13.56s(x1.002) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 95% | 96.65% | 13.20s(x1.030) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 98% | 96.66% | 12.79s(x1.063) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 99% | 96.53% | 12.62s(x1.077) |", " In light of the very thorough author response, wherein all my questions were answered, I've increased my score from 4 to 5. Given the low signal to noise ratio in the experimental results, the added complexity of this method, and the fact that the sparsity cannot be leveraged too well given current hardware / training setups (as the authors mention), I hesitate to score the paper as a 6. I, however, commend the authors for the comprehensive author response.", " Dear Reviewer ym8S\n\nWe thank for your time and suggestions. We have responded to your questions in detail accordingly. Would you mind checking it and confirming if you have further questions? Because there is only few hours left for us to answer your questions.\n\nBest,\n\nAuthors\n\n", " Dear Reviewer BQW5\n\nWe thank for your time and suggestions. We have responded to your questions in detail accordingly. Would you mind checking it and confirming if you have further questions? Because there is only few hours left for us to answer your questions.\n\nBest,\n\nAuthors", " Dear reviewer kk3C, \n\nWe thank for your time and suggestions. We have responded to your questions in detail accordingly. Would you mind checking it and confirming if you have further questions? Because there is only few hours left for us to answer your questions.\n\nBest,\n\nAuthors", " We thank all reviewers for your time and suggestions, and we expect to have a further discussion. We have responded to your questions in detail accordingly. If you have further questions or concerns, we still reply before the end of the author-reviewer discussion. Thank you very much for your review time and efforts.", " Thanks for the response. The authors have clearly addressed my questions and concerns, I increase my score to 6. ", " ### Dear Review ym8S:\n\nThank you for your all valuable suggestions and meticulous comments. We will incorporate your suggestions in the revision. Below we respond to your key concerns point by point. Please let me know if there are further more questions.\n\n- **Q1: Although SSAM does in theory reduce SAM's FLOPS, it's unclear whether it would actually help in practice. 
Due to the nature of how gradients are computed in practice, computing gradients for a sparse subset of weights is often not much faster than computing the full set, especially when the subset is 50% and the weights could be from any of the layers. While the paper shows theoretical FLOPS, it does not have any numbers on actual wall-clock speed, etc, though I do understand that reporting wall-clock speed has some issues of its own.**\n\n A1: We agree that computing a sparse gradient is not much faster than computing the dense one. Since weight-wise sparse training is limited by current hardware support, we do not achieve an actual weight-wise sparse speedup. At present, Xu et al. [1] have implemented the relevant sparse matrix calculation based on NVIDIA's 2:4 sparse operator [2], which means the sparse operator is gradually becoming supported. \n\n We report the wall-clock time of our weight-wise Sparse SAM in the following table. Due to the absence of an actual speedup for weight-wise sparse SAM and the extra computation of Fisher information, the training time is longer.\n | Model | Datasets | Optimizer | Sparsity | Test Acc | Time for One Epoch (Speed up) |\n | :--: | :--: | :--: | :--: | :--: |:--:|\n |ResNet18|CIFAR10|SAM| / | 96.83% | 13.60s(x1) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 1% | 96.74% | 14.21s(x0.957) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 5% | 96.93% | 15.08s(x0.901) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 10% | 96.69% | 14.60s(x0.931) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 25% | 96.82% | 14.76s(x0.921) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 50% | 96.81% | 15.55s(x0.874) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 80% | 96.64% | 14.30s(x0.951) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 90% | 96.75% | 14.43s(x0.942) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 95% | 96.66% | 15.15s(x0.897) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 98% | 96.55% | 14.42s(x0.943) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 99% | 96.52% | 14.26s(x0.953) |\n\n To achieve a real speedup for our Sparse SAM-Fisher, we also conduct block-wise sparse Fisher training, in which we actually compute sparse gradients in the gradient ascent step (used to find the perturbation). Specifically, we implement the block-wise sparse Fisher variant in PyTorch: we treat a block, *e.g.*, a kernel of a convolution layer, as the unit and use the `requires_grad` API in PyTorch to stop gradient computation. According to PyTorch's automatic differentiation mechanism, a parameter will no longer have its gradient calculated if none of the parameters preceding it require gradients. In this case, our block-wise SSAM-Fisher achieves an actual speedup.\n\n The wall-clock time of block-wise sparse training is shown in the following table. The table shows that our block-wise sparse SAM-Fisher is able to actually accelerate training and maintain comparable performance. 
\n\n | Model | Datasets | Optimizer | Sparsity | Test Acc | Training for One Epoch(Speed up) |\n | :--: | :--: | :--: | :--: | :--: |:--: |\n |ResNet18|CIFAR10|SAM| / | 96.83% | 13.60s(x1) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 1% | 96.93% | 13.74s(x0.989) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 5% | 96.76% | 13.90s(x0.978) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 10% | 96.86% | 13.83s(x0.983) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 25% | 96.84% | 14.05s(x0.967) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 50% | 96.80% | 13.69s(x0.993) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 80% | 96.64% | 13.61s(x0.999) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 90% | 96.55% | 13.56s(x1.002) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 95% | 96.65% | 13.20s(x1.030) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 98% | 96.66% | 12.79s(x1.063) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 99% | 96.53% | 12.62s(x1.077) |\n", " - **Q2: Key ablations missing. What happens if you invert the mask? For SSAM-D, instead of the dropping the flattest weights from the mask, drop the sharpest weights. Furthermore, what if a random subset is dropped?**\n\n A2: Following your suggestion, we performed more ablations, including (1) Sparse SAM, whose mask is generated completely random, shown in the 4-th row corresponding to \"SSAM(Random)\" as \"Optimizer\" and \"/\" as \"Drop Criterion\"; (2) SSAM-D, which drops a random subset, shown in the 5-th row corresponding to \"SSAM-D\" as \"Optimizer\" and \"Random\" as \"Drop Criterion\"; (3) SSAM-D, which drops the sharpest weights, shown in the 6-th row corresponding to \"SSAM-D\" for Optimizer and \"Sharpnest weights\" for Drop Criterion. In ablation, we use ImageNet as dataset and ResNet50 as model with 0.5 sparsity. The results of ablation are shown below. For the convenience of comparison, we also show the results in the original paper in the following Table, *i.e.*, the first three lines and the 7-th line.\n\n | Optimizer | Drop Criterion | Acc |\n | :---: | :---: | :-: |\n | SGD | / | 76.67% |\n | SAM | / | 77.25% |\n | SSAM-F | / | 77.31% |\n | SSAM(Random) | / | 77.08% |\n | SSAM-D | Random | 77.08% |\n | SSAM-D | Sharpest weights | 76.68% |\n | SSAM-D | Flattest weights | 77.25% |\n\n From the above table, we can see that performance of SSAM-D drops a lot when SSAM-D drops the sharpest weights, even worse than SSAM-D dropping randomly, which is consistent with our conjecture.\n\n- **Q3: What's the influence of m-sharpness (doing m perturbations and averaging the gradients at the perturbations -- see original paper) on SSAM?**\n\n A3: Thanks for this suggestion. Similar to SAM, we use a small ResNet on CIFAR10 using SSAM 50% with a range of values of $m$. The results are reported below.\n\n |Optimizer|$m$| Acc |\n |:--:|:--:|:--:|\n |SSAM-D |1 |94.89%|\n |SSAM-D |4 |94.96%|\n |SSAM-D |16 |95.21%|\n |SSAM-F |1 |94.60%|\n |SSAM-F |4 |94.95%|\n |SSAM-F |16 |95.35%|\n\n- **Q4: Another strategy to reduce compute is using a 20% subset of the minibatch for the ascent gradient calculation (e.g. to find the perturbation). This strategy reduces the FLOPs significantly. How does SSAM compare to SAM when this approximation is used? Reference: https://arxiv.org/abs/2110.08529.**\n\n A4: Insightful question. We run the experiment which takes the 25% subset of minibatch to compute the perturbation with 50% sparsity. We keep the original settings unchanged except that only a part of minibatch is used to calculate the perturbation. 
We use ResNet18 on CIFAR10 in this part and the results are shown below. \n\n | Optimizer | Data for Ascent Gradient | Acc | Training for One Epoch(Speedup)\n | :---: | :---: | :-: | :--: |\n | SAM | 100% minibatch | 96.83% | 13.60s(x1) |\n | SSAM-F | 100% minibatch | 96.81% | 15.55s(x0.87) |\n | SSAM-D | 100% minibatch | 96.87% | 14.67s(x0.92)\n | SAM | 25% minibatch | 96.55% | 12.56s(x1.08) |\n | SSAM-F | 25% minibatch | 96.57% | 13.38s(x1.01) | \n | SSAM-D | 25% minibatch | 96.66% | 13.47s(x1.009)|\n\n Since the data to be computed is smaller, the training time decreased. However, we find a performance drop when taking 25% subset to compute the perturbation.\n\n- **Q5: The claim that SSAM can outperform SAM is weak: looking at Table 1, cifar100 95% and 98% have 0.40% gap for SSAM-F, which is larger than SSAM-F 50%'s improvement over SAM, suggesting the claimed improvement may just be noise.**\n\n A5: Since we have finetuned the experimental setting of SAM on CIFAR100 carefully, *e.g.*, 81.03%(SAM in ours) *v.s.* 80.17%(SAM in [3]), the 0.21% improvement of SSAM-F 50% is hard to achieve. As for the high sparsity, *i.e.*, 95% and 98%, the model is closer to being optimized by SGD. The robustness of high sparsity is different from that of 50%. We rerun the ResNet18 with SSAM-F 50% on CIFAR100, and get 81.18%, 80.95% and 81.21% as accuracy.", " - **Q6: Tables start sparsity at 50%. Interested in seeing 1%, 5%, 10%,.. etc. At low sparsities it should be nearly identical to SAM.**\n\n A6: Great idea. Following your suggestions, we report more experimental results with different sparsity , *e.g.*, 1%, 5%, 10%, 25% on CIFAR and ResNet18 in the following table. For the convenience of comparison, we also show the existing results in the original paper.\n\n | Model | Optimizer |Sparsity|Acc(CIFAR10)| Acc(CIFAR100)|\n |:--:|:--:|:--:|:--:|:--:|\n |ResNet18|SGD |/ | 96.07% | 77.80% |\n |ResNet18|SAM | 0% | 96.83% | 81.03% |\n |ResNet18|SSAM-F/SSAM-D| 1% |96.74%/96.71%|81.01%/80.90%|\n |ResNet18|SSAM-F/SSAM-D| 5% |96.93%/96.71%|81.47%/80.98%|\n |ResNet18|SSAM-F/SSAM-D| 10% |96.69%/96.72%|81.07%/80.97%|\n |ResNet18|SSAM-F/SSAM-D| 25% |96.82%/96.83%|80.95%/80.84%|\n |ResNet18|SSAM-F/SSAM-D| 50% |96.81%/96.87%|81.24%/80.59%|\n |ResNet18|SSAM-F/SSAM-D| 80% |96.64%/96.76%|80.47%/80.43%|\n |ResNet18|SSAM-F/SSAM-D| 90% |96.75%/96.67%|80.02%/80.39%|\n |ResNet18|SSAM-F/SSAM-D| 95% |96.66%/96.56%|80.50%/79.79%|\n |ResNet18|SSAM-F/SSAM-D| 98% |96.55%/96.61%|80.09%/79.79%|\n |ResNet18|SSAM-F/SSAM-D| 99% |96.52%/96.59%|80.07%/79.61%|\n\n> ### Reference\n> - [1] [Towards Fully Sparse Training: Information Restoration with Spatial Similarity](https://aaai-2022.virtualchair.net/poster_aaai376)\n> - [2] [Accelerating Sparsity in the NVIDIA Ampere Architecture](https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s22085-accelerating-sparsity-in-the-nvidia-ampere-architecture%E2%80%8B.pdf)\n> - [3] [Efficient Sharpness-aware Minimization for Improved Training of Neural Networks](https://arxiv.org/abs/2110.03141)", " ### Dear Reviewer kk3C:\n\nThank you for the constructive comments which help us improve the paper. We have prepared our responses to each of your point.\n\n- **Q1: I don't quite understand how the FLOPs reduction are calculated in Table 1, 2, 3. From the algorithm, SSAM still requires twice full back-propagation. The only difference is multiplying the mask with the perturbations. 
Can the authors explain the FLOPs, and it would be better to show the wall-clock time of the training process.**\n\n A1: Sorry for our unclear presentation; we counted the time used in an iteration, and the ratio of forward to backward computation in one iteration is about 3:7. Based on this, the FLOPs reduction would be $0.7\times s$. We acknowledge that computing a sparse gradient is not much faster than computing the dense one. Since weight-wise sparse training is limited by current hardware support, we do not achieve an actual weight-wise sparse training speedup. At present, Xu et al. [1] have implemented the relevant sparse matrix calculation based on NVIDIA's 2:4 sparse operator [2], which means the sparse operator is gradually becoming supported. \n\n We report the wall-clock time of our weight-wise Sparse SAM in the following table. Due to the absence of an actual speedup for weight-wise sparse SAM and the extra computation of Fisher information, the training time is longer.\n\n | Model | Datasets | Optimizer | Sparsity | Test Acc | Time for One Epoch (Speed up) |\n | :--: | :--: | :--: | :--: | :--: |:--:|\n |ResNet18|CIFAR10|SAM| / | 96.83% | 13.60s(x1) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 1% | 96.74% | 14.21s(x0.957) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 5% | 96.93% | 15.08s(x0.901) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 10% | 96.69% | 14.60s(x0.931) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 25% | 96.82% | 14.76s(x0.921) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 50% | 96.81% | 15.55s(x0.874) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 80% | 96.64% | 14.30s(x0.951) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 90% | 96.75% | 14.43s(x0.942) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 95% | 96.66% | 15.15s(x0.897) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 98% | 96.55% | 14.42s(x0.943) |\n |ResNet18|CIFAR10|Weight-Wise SSAM-F| 99% | 96.52% | 14.26s(x0.953) |\n\n To achieve a real speedup for our Sparse SAM-Fisher, we also conduct block-wise sparse Fisher training, in which we actually compute sparse gradients in the gradient ascent step (used to find the perturbation). We implement the block-wise sparse Fisher variant in PyTorch. Specifically, we treat a block, *e.g.*, a kernel of a convolution layer, as the unit and use the `requires_grad` API in PyTorch to stop gradient computation. According to PyTorch's automatic differentiation mechanism, a parameter will no longer have its gradient calculated if none of the parameters preceding it require gradients. In this case, our block-wise SSAM-Fisher achieves an actual speedup.\n\n The wall-clock time of block-wise sparse training is shown in the following table. The table shows that our block-wise sparse SAM-Fisher is able to actually accelerate training and maintain comparable performance. 
\n\n | Model | Datasets | Optimizer | Sparsity | Test Acc | Training for One Epoch(Speed up) |\n | :--: | :--: | :--: | :--: | :--: |:--: |\n |ResNet18|CIFAR10|SAM| / | 96.83% | 13.60s(x1) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 1% | 96.93% | 13.74s(x0.989) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 5% | 96.76% | 13.90s(x0.978) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 10% | 96.86% | 13.83s(x0.983) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 25% | 96.84% | 14.05s(x0.967) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 50% | 96.80% | 13.69s(x0.993) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 80% | 96.64% | 13.61s(x0.999) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 90% | 96.55% | 13.56s(x1.002) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 95% | 96.65% | 13.20s(x1.030) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 98% | 96.66% | 12.79s(x1.063) |\n |ResNet18|CIFAR10|Block-Wise SSAM-F| 99% | 96.53% | 12.62s(x1.077) |", " - **Q2: I don't quite understand why option 2 is faster than option 1 when generating the mask (the first sentence in Section 4.3). From my understanding, they both rely on the gradient scale and require twice gradient computations.** \n\n A2: The option 1, *i.e.*, SSAM-F, is based on Fisher Information, which needs an additional computation. SSAM-F requires sampling $N_F$ data from dataset and calculate their respective gradients through an additional backpropagation. The option 2, *i.e.*, SSAM-D is based on dynamic sparse training and the gradients used to generate the mask comes directly from the gradient in the iteration, which means SSAM-D does not need additional gradient computations.\n\n- **Q3: Can the authors further demonstrate that the weight they select to perturb coincide with those appear to the right half in Figure 1? Just want to confirm that the masking strategy is align with the gradient information.**\n\n A3: Thank you for this suggestion. We count the statistics of ratio, *i.e.*, $ratio=\\log |(g_{SAM}-g_{SGD})/g_{SGD}|$, in SSAM. Using SAM, the parameters with ratio greater than 0 account for 19% of the perturbated parameters, and the SSAM's ratio is 21.28%. \n\n- **Q4: A random masking baseline should be included to show the effectiveness of SSAM.**\n\n A4: Thank you for pointing the absent of this ablation. Following your suggestion, We experimented the SSAM with a random mask, *i.e.*, the **last** line with an accuracy of **77.08%**. We report the results as shown in the following table.\n\n |Model|Dataset| Optimizer | Sparsity |Detail |Acc |\n |:--:|:--:|:--:|:--:|:--:|:--:|\n |ResNet50|ImageNet|SGD|/| baseline | 76.67%|\n |ResNet50|ImageNet|SAM|0| baseline | 77.25%|\n |ResNet50|ImageNet|SSAM|50%|SSAM-D/SSAM-F|77.31%/77.25%|\n |ResNet50|ImageNet|SSAM(Random)| 50% |Random Mask | 77.08% |\n\n\n- **Q5:** Potential limitations are limited results on other architectures besides ConvNets like Transformers.\n\n A5: Thanks for this suggestions. We experiment the Vit-Tiny training on CIFAR10. Limited to the long training time and hardware requirements, we only train Vit-Tiny by SSAM-F. The results are shown below. 
It should be noted that ViT-Tiny already achieves 98% accuracy on CIFAR10, so a further 0.16% improvement from SSAM is hard to obtain.\n \n | Model | Optimizer | Acc |\n | :--: | :--: | :--: |\n |ViT-Tiny| AdamW | 97.81%(-0.21%) |\n |ViT-Tiny| SAM | 98.02%(+0.00%) |\n |ViT-Tiny| SSAM-F 50% | 98.18%(+0.16%) |\n\n> ### Reference\n> - [1] [Towards Fully Sparse Training: Information Restoration with Spatial Similarity](https://aaai-2022.virtualchair.net/poster_aaai376)\n> - [2] [Accelerating Sparsity in the NVIDIA Ampere Architecture](https://developer.download.nvidia.com/video/gputechconf/gtc/2020/presentations/s22085-accelerating-sparsity-in-the-nvidia-ampere-architecture%E2%80%8B.pdf)", "### Dear Reviewer KeX3:\n\nThank you for your suggestions. Please see our follow-up responses below; we hope they further address your concerns. We always welcome further discussion. \n\n- **Q1: The experimental settings are not sufficient compared with the SAM paper. It would be better to have further experiments and a few ablation studies to confirm the model-agnostic characteristic of this technique.**\n\n A1: Thanks for your suggestion. We conduct more experiments and ablations below.\n\n - **1.** To confirm the effectiveness of our SSAM, we try to (1) randomly determine whether each weight is perturbed or not, corresponding to the 4th row with 77.08% accuracy; (2) randomly drop weights for SSAM-D, corresponding to the 5th row with 77.08% accuracy; and (3) drop the sharpest weights, *i.e.*, weights with the smallest gradients, corresponding to the 6th row with 76.68%. We also show the results reported in the original paper in the first three rows. Recall that the performance of SSAM-D, which drops the flattest weights, is 77.25%. From the following table, we can see that dropping the sharpest weights is suboptimal for SAM and degrades performance. \n\n |Model|Dataset| Optimizer | Sparsity |Detail |Acc |\n |:--:|:--:|:--:|:--:|:--:|:--:|\n |ResNet50|ImageNet|SGD|/| baseline | 76.67%|\n |ResNet50|ImageNet|SAM|0| baseline | 77.25%|\n |ResNet50|ImageNet|SSAM|50%|SSAM-D/SSAM-F|77.31%/77.25%|\n |ResNet50|ImageNet|SSAM(Random)| 50% |Random Mask | 77.08% |\n |ResNet50|ImageNet|SSAM-D | 50% | Random Drop| 77.08% |\n |ResNet50|ImageNet|SSAM-D | 50% | Drop Sharpest | 76.68% |\n\n\n - **2.** To confirm the model-agnostic characteristic of the proposed method, we also test a VGG-style architecture on CIFAR. Following [https://github.com/weiaicunzai/pytorch-cifar100](https://github.com/weiaicunzai/pytorch-cifar100), we use SSAM to train VGG11-BN on CIFAR. For SAM and SSAM training, we set $\rho=0.05$ on CIFAR10. 
The results are shown in the following table.\n\n |Model|Dataset| Optimizer | Sparsity| Acc |\n |:--:|:--:|:--:|:--:| :--:|\n |VGG11-BN|CIFAR10|SGD | / | 93.42% |\n |VGG11-BN|CIFAR10|SAM | 0% | 93.87% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 1% | 93.85%/93.76% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 5% | 94.01%/93.78% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 10%| 93.92%/93.70% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 25%| 93.88%/93.86% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 50%| 94.03%/93.79% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 80%| 93.83%/93.95% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 90%| 93.76%/93.85% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 95%| 93.77%/93.48% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 98%| 93.54%/93.54% |\n |VGG11-BN|CIFAR10|SSAM-F/SSAM-D| 99%| 93.47%/93.33% |\n\n - **3.** To confirm the model-agnostic characteristic of our SSAM, we also conduct experiments training a Vision Transformer on CIFAR10. Due to the long training time and hardware requirements, we only train ViT-Tiny with SSAM-F. The results are shown below. It should be noted that ViT-Tiny already achieves 98% accuracy on CIFAR10, so the 0.16% improvement from SSAM is significant.\n \n | Model | Optimizer | Acc |\n | :--: | :--: | :--: |\n |ViT-Tiny| AdamW | 97.81%(-0.21%) |\n |ViT-Tiny| SAM | 98.02%(+0.00%) |\n |ViT-Tiny| SSAM-F | 98.18%(+0.16%) |\n\n\n- **Q2: There is a misunderstanding between the definitions of sparsity in line 176. The sparsity s should be the percentage of parameters to be perturbed, i.e., s=1 means all the parameters should be perturbed. However, s=1 in Table 1 means no parameters will be perturbed. Could the authors check the definition?**\n\n A2: Thanks for pointing out this issue. We admit the original presentation contains a typo. The sparsity $s$ denotes *the fraction of parameters for which no perturbation is computed*, *i.e.*, $s=1$ means no parameters are perturbed and $s=0$ means all parameters are perturbed. We will fix this typo in the revision.", "- **Q3: The results are shown only from sparsity 50% and only for ResNet models. Could the authors provide more experimental results on different sparsities and models with different architectures?**\n\n A3: Following your suggestion, we first report more experimental results at different sparsities, *e.g.*, 1%, 5%, 10%, and 25%, on CIFAR with ResNet. We keep the setting fixed and only change the sparsity of SSAM; the results are shown in the following table. To facilitate comparison, we also repeat the sparsities above 50%, which were already reported in the original paper. \n\n | Model | Optimizer |Sparsity|Acc(CIFAR10)| Acc(CIFAR100)|\n |:--:|:--:|:--:|:--:|:--:|\n |ResNet18|SGD |/ | 96.07% | 77.80% |\n |ResNet18|SAM | 0% | 96.83% | 81.03% |\n |ResNet18|SSAM-F/SSAM-D| 1% |96.74%/96.71%|81.01%/80.90%|\n |ResNet18|SSAM-F/SSAM-D| 5% |96.93%/96.71%|81.47%/80.98%|\n |ResNet18|SSAM-F/SSAM-D| 10% |96.69%/96.72%|81.07%/80.97%|\n |ResNet18|SSAM-F/SSAM-D| 25% |96.82%/96.83%|80.95%/80.84%|\n |ResNet18|SSAM-F/SSAM-D| 50% |96.81%/96.87%|81.24%/80.59%|\n |ResNet18|SSAM-F/SSAM-D| 80% |96.64%/96.76%|80.47%/80.43%|\n |ResNet18|SSAM-F/SSAM-D| 90% |96.75%/96.67%|80.02%/80.39%|\n |ResNet18|SSAM-F/SSAM-D| 95% |96.66%/96.56%|80.50%/79.79%|\n |ResNet18|SSAM-F/SSAM-D| 98% |96.55%/96.61%|80.09%/79.79%|\n |ResNet18|SSAM-F/SSAM-D| 99% |96.52%/96.59%|80.07%/79.61%|\n\n For different architectures, we conduct further experiments on CIFAR with VGG and test the Vision Transformer on CIFAR10. The detailed results can be found in Q1&A1. 
\n\n- **Q4: Why does it appear that employing fewer perturbations in Table 1 is unable to enhance the performance if, according to your claim, you can find certain parameters that make the loss flatter without perturbation?**\n\n A4: We count the ratio, *i.e.*, $ratio=\log |(g_{SAM}-g_{SGD})/g_{SGD}|$, of the parameters selected by SSAM for perturbation (an illustrative sketch of this top-k masking appears after the reviews below). Under SAM, parameters with a ratio greater than 0 account for 19% of the perturbed parameters, while for SSAM this fraction is 21.28%. \n\n- **Q5: As stated in Section 3.2, "merely about 5% of the parameter space is sharp while the rest is flat". Do you believe you have correctly identified, through your methods, the 5% of parameters that do not require perturbation, and do the experimental results demonstrate this?**\n\n A5: We agree with your point: we did not derive the rigorous 5% statistic ourselves. This statement comes from the reference https://arxiv.org/pdf/1609.04836.pdf, quoting the description in the fourth and fifth lines on page 7 of that paper: "we observe that it rises steeply only along a small dimensional subspace (e.g. 5% of the whole space); on most other directions, the function is relatively flat." Motivated by this observation, we investigate the difference between the gradients calculated by SGD and SAM across various models and datasets, *i.e.*, $\log|(g_{sam}-g_{sgd})/g_{sgd}|$. As we can see in Fig. 1 of our original paper, for most parameters the gradient calculated by SGD is not much different from that of SAM.\n\nWe hope our answers resolve your concerns. If anything remains unclear, we always welcome further discussion.", "### Dear Reviewer BQW5:\n\nThank you for your review and affirmation of our work. We answer your questions one by one below and are honored to share our understanding with you. \n\n- **Q1: I am very interested in why SAM makes the originally flat gradients worse.**\n\n A1: Interesting and insightful question. SAM is more effective than SGD, which has been confirmed across many different tasks and models [1,2,3]. Our conjecture is that not every dimension needs to be smoothed, *i.e.*, not every parameter needs to be perturbed, since most of them are already flat. Given the experimental result that performance improves when the flat parameters are not disturbed, SAM does appear to make the originally flat gradients worse compared to SSAM. We will study why SAM worsens flat gradients in future work.\n\n- **Q2: The ArgTopK operation selects $k$ parameters to be perturbed. How sensitive is the performance of SSAM with respect to $k$ in a certain task? How sensitive is the value of $k$ in different tasks (if training SOTA classifiers)?**\n\n A2: $k$ is the total number of model parameters to be perturbed. Since the number of parameters differs across models, we use the sparsity ratio $s$, *i.e.*, the proportion of parameters that are not perturbed, to determine $k$: $k=(1 - s)\cdot |w|$, where $|w|$ is the number of model parameters. We report SSAM with different sparsities $s$ for ResNet and WideResNet on CIFAR10, CIFAR100, and ImageNet in our original paper, *e.g.*, Tables 1, 2, and 3. 
\n\n> ### Reference\n> - [1] [Sharpness-Aware Minimization for Efficiently Improving Generalization](https://arxiv.org/abs/2010.01412)\n> - [2] [When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations](https://arxiv.org/abs/2106.01548)\n> - [3] [Sharpness-Aware Minimization Improves Language Model Generalization](https://arxiv.org/abs/2110.08529)\n", "The paper proposes to make the recent Sharpness-Aware Minimization (SAM), which tries to find flatter minima (which tend to generalize better), more efficient. Vanilla SAM incurs 2x compute, as two gradient computations (one for the perturbation and one at the perturbation) are needed at each training step. The paper proposes sparsifying SAM (SSAM) by only applying SAM to some fraction $s$ of the model weights. The $s$ fraction is determined either in a cheap greedy fashion (SSAM-D) or via Fisher Info (SSAM-F). Non-convex analysis is done showing that SSAM converges at the same rate as SAM. SSAM is shown to perform nearly as well as SAM on benchmark tasks like CIFAR10, CIFAR100, and ImageNet using a ResNet model of modest size.\n Strengths:\n* The paper is well-motivated -- SAM is a powerful method but its 2x performance hit may hinder its widespread adoption, and this paper shows how it can be sped up with little loss in performance.\n* Paper is clear, easy-to-follow, and mostly free of typos.\n* Theoretical analysis supports the proposed method.\n\nWeaknesses:\n* Although SSAM does in theory reduce SAM's FLOPs, it's unclear whether it would actually help in practice. Due to the nature of how gradients are computed in practice, computing gradients for a sparse subset of weights is often not much faster than computing the full set, especially when the subset is 50% and the weights could be from any of the layers. While the paper shows theoretical FLOPs, it does not have any numbers on actual wall-clock speed, etc., though I do understand that reporting wall-clock speed has some issues of its own.\n* Key ablations missing. What happens if you invert the mask? For SSAM-D, instead of dropping the flattest weights from the mask, drop the sharpest weights. Furthermore, what if a random subset is dropped?\n* What's the influence of m-sharpness (doing m perturbations and averaging the gradients at the perturbations -- see original paper) on SSAM? \n* Another strategy to reduce compute is using a 20% subset of the minibatch for the ascent gradient calculation (e.g. to find the perturbation). This strategy reduces the FLOPs significantly. How does SSAM compare to SAM when this approximation is used? Reference: https://arxiv.org/abs/2110.08529.\n* The claim that SSAM can outperform SAM is weak: looking at Table 1, cifar100 95% and 98% have a 0.40% gap for SSAM-F, which is larger than SSAM-F 50%'s improvement over SAM, suggesting the claimed improvement may just be noise.\n* Tables start sparsity at 50%. Interested in seeing 1%, 5%, 10%, etc. At low sparsities it should be nearly identical to SAM.\n\n * Ablations would greatly strengthen the paper.\n* More experiments / datasets would also help. Yes", "The problem addressed in this study is that the indiscriminate perturbation applied by SAM to all parameters is suboptimal. The authors give two interesting observations: there is little difference between the update gradients of SGD and SAM in most dimensions; and there exist certain parameters that will still be flatter in some dimensions even if SAM is not used. 
\nBased on these observations, they propose an efficient and effective training scheme that obtains sparse perturbations by crafting a binary mask. The paper employs Fisher information and dynamic sparse training to select the top-K most significant and relevant parameters to craft a sparse binary mask for perturbation. \nThe authors verify the effectiveness of their proposed training scheme on the CIFAR10/CIFAR100 and ImageNet-1K datasets. The experimental results demonstrate that even with very large sparsity, both SSAM-F and SSAM-D can still obtain competitive performance against SAM. The authors also provide visualizations of the landscape and Hessian spectra to show the better generalization of SSAM.\n Strengths: \n\n1. The motivation is transparent and derived from the experimental observations. The authors accurately identify the indiscriminate perturbation in previous sharpness-related works. The motivational analysis and experiments are persuasive to me. \n2. The writing is straightforward and easy to understand. In addition, the landscape visualization is also informative from a motivational standpoint. \n\nWeakness: \n1. The experimental settings are not sufficient compared with the SAM paper. It would be better to have further experiments and a few ablation studies to confirm the model-agnostic characteristic of this technique. \n2. There is a misunderstanding between the definitions of sparsity in line 176. The sparsity s should be the percentage of parameters to be perturbed, i.e., s=1 means all the parameters should be perturbed. However, s=1 in Table 1 means no parameters will be perturbed. Could the authors check the definition? \n3. It would be more convincing if the authors could provide more analysis of the experimental outcomes. Several inquiries regarding the experiments are presented in the section titled "Questions." \n\n 1. The results are shown only from sparsity 50% and only for ResNet models. Could the authors provide more experimental results on different sparsities and models with different architectures? \n\n2. Why does it appear that employing fewer perturbations in Table 1 is unable to enhance the performance if, according to your claim, you can find certain parameters that make the loss flatter without perturbation? \n\n3. As stated in Section 3.2, "merely about 5% of the parameter space is sharp while the rest is flat". Do you believe you have correctly identified, through your methods, the 5% of parameters that do not require perturbation, and do the experimental results demonstrate this? \n\nI am willing to increase my rating if my questions are addressed properly. \n The authors haven't addressed the limitations and potential negative societal impact of this work. It would be better to discuss SSAM's effectiveness on more datasets and model structures, as well as potential future work on SSAM. ", "This paper rethinks the indiscriminate perturbations of SAM and claims that not all parameters need to be perturbed. The conclusion is supported by two observations --- (i) little difference between SGD and SAM gradients in most dimensions, and (ii) some dimensions are flatter without SAM. Inspired by these observations, this paper proposes to improve the effectiveness and efficiency of SAM via a sparse perturbation scheme (SSAM). Two realizations of SSAM are provided, namely SSAM-Fisher and SSAM-Dynamic. The convergence of SSAM is also discussed. 
Extensive experiments, including visual and numerical results, demonstrate the effectiveness and efficiency of SSAM. Strength:\n\n- The paper is well-written and organized. \n- Interesting observations on the flatness of SGD and SAM. It is surprising that the difference between SGD and SAM gradients is small and that flat dimensions become flatter without SAM.\n- It is quite natural to apply Fisher information and a dynamic strategy to encourage sparsity.\n- The effectiveness of SSAM is comparable to SAM; the computational cost is clearly reduced.\n\nWeakness:\n- I am very interested in why SAM makes the originally flat gradients worse.\n- The ArgTopK operation selects $k$ parameters to be perturbed. How sensitive is the performance of SSAM with respect to $k$ in a certain task? How sensitive is the value of $k$ in different tasks (if training SOTA classifiers)?\n\n Please refer to the weakness part. N.A.", "This paper presents a sparse version of sharpness-aware minimization (SAM). The proposed SSAM does not apply the perturbation to all the model parameters; the selection criterion is based on the Fisher Information or on dynamically generating a mask through training. The authors claim improved generalization and training speed over SAM. - Strengths:\n - The idea and observation that not all the model weights need to be perturbed in SAM is generally novel. The authors also provide empirical justification that the difference between $g_{SAM}$ and $g_{SGD}$ is not very significant for lots of weights.\n - I like the provided training curves, as they show that SSAM behaves very similarly to SAM in terms of the training dynamics.\n\n\n- Weaknesses:\n - I don't quite understand how the FLOPs reduction is calculated in Tables 1, 2, and 3. From the algorithm, SSAM still requires two full backpropagations. The only difference is multiplying the mask with the perturbations. Can the authors explain the FLOPs? It would also be better to show the wall-clock time of the training process.\n - I don't quite understand why option 2 is faster than option 1 when generating the mask (the first sentence in Section 4.3). From my understanding, they both rely on the gradient scale and require two gradient computations.\n - Can the authors further demonstrate that the weights they select to perturb coincide with those that appear in the right half of Figure 1? Just want to confirm that the masking strategy is aligned with the gradient information.\n - A random masking baseline should be included to show the effectiveness of SSAM.\n Please see the weaknesses for questions. The authors haven't covered the limitations. Potential limitations are the limited results on architectures other than ConvNets, such as Transformers." ]
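The mask construction discussed throughout this thread (per-weight Fisher scores, ArgTopK with $k=(1-s)\cdot|w|$, and a masked SAM perturbation) can be summarized in a short PyTorch-style sketch. This is an illustrative reconstruction from the descriptions above, not the authors' released code; the diagonal empirical-Fisher estimate, the single-tensor view of the parameters, and all function names are simplifying assumptions.

```python
import torch

def fisher_topk_mask(per_sample_grads: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask over weights: 1 = perturb, 0 = leave untouched.

    Scores each weight by a diagonal empirical-Fisher estimate (mean squared
    per-example gradient) and keeps the top k = (1 - s) * |w| entries.
    """
    fisher = (per_sample_grads ** 2).mean(dim=0)         # one score per weight
    k = int((1.0 - sparsity) * fisher.numel())           # k = (1 - s) * |w|
    mask = torch.zeros(fisher.numel())
    mask[torch.topk(fisher.flatten(), k).indices] = 1.0  # ArgTopK
    return mask.view_as(fisher)

# Toy usage: N_F = 8 sampled examples, a 10-weight "model", 50% sparsity.
per_sample_grads = torch.randn(8, 10)
mask = fisher_topk_mask(per_sample_grads, sparsity=0.5)

# Masked SAM ascent step: only the selected weights are perturbed.
w = torch.randn(10)
g = per_sample_grads.mean(dim=0)
rho = 0.05
epsilon = rho * g / (g.norm() + 1e-12)
w_perturbed = w + mask * epsilon
```

In a full training loop, the descent gradient would then be computed at `w_perturbed`, exactly as in dense SAM; only the ascent perturbation is sparsified.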
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "k6vzSXUG9jS", "HtUS3R_usz", "oHs-ctrB0m", "HSyTEc5YM76", "ed-lcCrSo_4", "nips_2022_88_wNI6ZBDZ", "kterOJSghnSD", "oHs-ctrB0m", "oHs-ctrB0m", "oHs-ctrB0m", "ed-lcCrSo_4", "ed-lcCrSo_4", "pH0s9GyVkPx", "pH0s9GyVkPx", "HSyTEc5YM76", "nips_2022_88_wNI6ZBDZ", "nips_2022_88_wNI6ZBDZ", "nips_2022_88_wNI6ZBDZ", "nips_2022_88_wNI6ZBDZ" ]
nips_2022_bk8vkdQfBS
Explainability Via Causal Self-Talk
Explaining the behavior of AI systems is an important problem that, in practice, is generally avoided. While the XAI community has been developing an abundance of techniques, most incur a set of costs that the wider deep learning community has been unwilling to pay in most situations. We take a pragmatic view of the issue, and define a set of desiderata that capture both the ambitions of XAI and the practical constraints of deep learning. We describe an effective way to satisfy all the desiderata: train the AI system to build a causal model of itself. We develop an instance of this solution for Deep RL agents: Causal Self-Talk. CST operates by training the agent to communicate with itself across time. We implement this method in a simulated 3D environment, and show how it enables agents to generate faithful and semantically-meaningful explanations of their own behavior. Beyond explanations, we also demonstrate that these learned models provide new ways of building semantic control interfaces to AI systems.
Accept
This paper proposes causal self-talk (CST) as a means to obtain more explainable AI systems. The work lists a set of desiderata for explainable AI and argues that CST satisfies this set. The paper is well written, and the experimental results, although in a toy setting called "DaxDucks", are reasonably convincing. The reviewers lean towards acceptance, with Reviewer g2gH in particular strongly advocating for the work. I like that this approach is generally applicable in principle, and I strongly dislike that it is only showcased on one (non-open source, toy) task. The other qualm I have is that the introduction reads way too much as if the authors came up with this all on their own, and I strongly encourage them to pay proper respect to prior work in this field, rather than stashing all that work away as a long enumeration in the related work section, which most people will gloss over.
train
[ "zbh7f02RCC3", "NTU-RQd_TyD", "v7fbbTP-Et", "dO929dyN49B", "kwHSnNenLeJI", "6m5OVO4wUUT", "EdHzQPPaRxJ", "ITqJnNnzwBr", "EAZr21q_-f1", "CpAeWbveU4C", "ddiv2XW_cvJ", "OyJBosXDyHs", "eO8YOBTFHY", "GhUF4ugzR1g", "N8dgj6jjHNL" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As promised, we have amended our previous submission to return the page count to 9 pages. This has involved merely minor changes to the text.", " Thanks the authors for their clarifications. I will keep my original ratings.", " Thank you authors for engaging with my comments and taking steps to improve the clarity of the paper overall. I will increase the presentation score accordingly.", " Thank you for your response. I think the first step in algorithmic development is understanding how an approach scales to varying problem complexities. Having said that, I do agree that there is some novelty in the approach and that there is some benefit in publishing the approach in its current form. For this reason I am happy to increase my score. ", " Thank you for the response to my comments. As promised, I will increase my rating to 9.", " We thank the reviewer for their comprehensive and thoughtful review. We are especially grateful for the characterization of our work as original, clear, and significant. We also appreciate providing a clear and accurate summary of our work – we couldn’t have done this better!\n\n> **I thought the experiments were useful for demonstrating the efficacy, but they also felt rather contrived relative to real-world problems where explainability is needed.**\n\nThe goal of this paper is to clearly articulate the nature of the problem, present a viable path forwards, and to show how it works. As the reviewer indicates, the experiments were targeted directly at these problems, and we are glad that the reviewer appreciated their success in achieving this.\n\nApplying our technique in more complex and real-world environments is definitely a necessary future step, but there is unfortunately already too much to be done within 9 pages. Our environment was chosen to facilitate a clean demonstration that our algorithms are doing the right thing. We hope that the solid foundation of results made in this work convinces the community to extend our techniques to real-world domains.\n\n> **Only variations of the proposed system were studied. This has the advantage of provided well-controlled comparisons, but it doesn't establish to what extent the system outperforms previously proposed systems.**\n\nIn developing this work, we undertook extensive literature review, which we summarize in Sections 2-3. Our conclusion was that existing XAI techniques fail on the criteria we care about _a priori_. To the best of our knowledge, there aren’t any existing techniques to compare against.\n\nThe only comparison we really were able to do were against ordinary decoders (and Ord-Dec) and re-routed decoders (ReR-Dec). We show that CST outperforms these in causal faithfulness.\n\nWe sincerely hope and expect CST to be outperformed in future.\n\n> **I would have thought that human judgement of explanatory adequacy is important for evaluating real-world explanation systems.**\n\nWe agree! And we are buoyed by recent empirical works (e.g. [Abramson et al. (2022)](https://arxiv.org/abs/2205.13274); [Thoppilan et al (2022)](https://arxiv.org/abs/2201.08239); [Kiela et al, (2021)](https://arxiv.org/abs/2104.14337)) that introduce techniques for incorporating human judgements in benchmarking an agent’s or a language model’s performance, as well as recent sociological and psychological perspectives on this question (e.g. 
[Miller (2017)](https://arxiv.org/abs/1706.07269) and [Miller (2020)](https://arxiv.org/abs/1811.03163)).\n\nNonetheless, we argue that there is a set of technical requirements for an explanation system that must be satisfied before human judgment can even come into play. The focus of our work in this paper is on addressing these.\n\n---\n\nWe appreciate that the reviewer gave actionable guidance as to how we could convince them to raise their rating. We hope that these responses adequately address the questions specified by the reviewer; if so, we ask the reviewer to consider increasing their rating as suggested.", " _(note that this is part 2/2)_\n\n> **Q1: Do you think the intervention could hamper policy learning in complex environments?**\n\nThe reviewer is correct that online resetting interventions could indeed have such detrimental effects. There are two straightforward solutions to this.\n\nFirst, for simplicity, we focus our results on resetting interventions (i.e. where the memory state is reverted to the state at time `t' = 0`). However, this is just the simplest case, and one can also intervene by "rewinding" memory back to some earlier time (`t' > 0`). This is a less drastic intervention. We have done some experiments with this more general case, and it indeed works. We haven't included any results because the paper is already packed enough as it is; also, in the simpler environment we consider, there's no additional benefit in using the more general method.\n\nSecond, and perhaps more importantly, if there's a concern about such interference, one should consider using CST-PD rather than CST-RL. In the RL variant, interventions are executed online, which, as the reviewer intuits, can disrupt the data distribution on which the policy is trained. The PD variant, in contrast, operates entirely in replay, so disruption to policy learning is at most indirect, and far less likely. Moreover, as discussed in the main text, it is possible to contain the gradient flow from the CST-PD objective to mitigate or prevent interference with policy learning altogether.\n\n> **Q2: Do you have any plans for releasing code + environment?**\n\nWe would very much like to release both the code and the environment, and are actively looking into making this possible. We are currently blocked while waiting for an upstream library dependency to be open-sourced.\n\n> **Q3: Have you considered how the method might be used for language-conditioned imitation learning?**\n\nAn excellent question, and very similar to one raised by Reviewer KW43. We address the topic at length in our response to their review, which we hope this reviewer finds satisfying. \n\n---\n\nWe hope that each of the reviewer's concerns has been addressed. If so, we ask that the reviewer reconsider their recommendation that this paper be rejected.\n", "We thank the reviewer for their thoughtful review. 
We are especially grateful for the recognition of our work as well-structured and important.\n\n> **There seems to be a disconnect between the evaluation and the desiderata which can be bridged through thorough experiments.**\n\nWe take this to be the headline of the reviewer's critique, and that this, in turn, breaks down into two major pieces:\n\n* **Weakness #4: We don't sufficiently explain the relationship between the experiments and the desiderata.**\n* **Weaknesses #2 & 3: The experiments themselves are too simplistic to show that the desiderata are satisfied.**\n\nRegarding Weakness #4, we recognize that we did not spell this out clearly enough in the initial submission of the paper. Nonetheless, this is easily addressed:\n\n* Because the CST algorithm builds upon decoders (which are standard architectural components in Deep RL), CST inherits decoders' properties of groundedness, flexibility, and scalability _by design_, and thus these properties do not require experimental validation. We do demonstrate two forms for the self-model (one-hot and language), but this is merely to illustrate flexibility and grounding, not to experimentally validate that these are satisfied.\n* CST also largely inherits the properties of minimal interference from decoders _by design_. Like decoders, one can tune the degree to which CST alters the weights of the base system (for better or for worse) by choosing how to propagate the additional gradients. The only experimental test required is validating that the introduced changes do not disrupt policy learning. This indeed was our finding.\n* Faithfulness is the major desideratum that requires experimental testing. This is the focus of our experiments in Sections 5 and 6.\n\nWe have clarified this by changing the text at the end of Section 4, under the heading "CST and the five desiderata".\n\nRegarding Weaknesses #2 & 3, we appreciate the reviewer's point, and agree that there is much to do to push this technique further toward richer forms of self-model and richer environments. From what we have seen so far, we are confident that the method will scale to larger problems. \n\nHowever, we argue that a first paper on this approach should not be about scaling up. It is first necessary to clearly articulate the nature of the problem, to present a viable path forwards, and to demonstrate how and why it works. We explicitly designed an evaluation setting that would foreground the problems at hand, and allow us to carefully measure whether the algorithms are doing the right thing. We ask the reviewer to consider that the experiments themselves are pitched at just the right level to evaluate the fundamentals: that CST satisfies the desiderata at hand.\n\nWe agree that there is much to do to push on scaling up and performance. Nonetheless, trying to cram more into the already-packed 9 pages would come at a significant detriment to clarity, readability, and accessibility (it is already pushing it to the limit!). We believe that the paper as it is covers the appropriate scope in establishing the fundamentals. 
We hope that the reviewer will give us the opportunity to publish this, so that we and the community can steer our focus to exactly these concerns.\n\n> **Weakness 1: No pictorial representation of architecture.**\n\nTo assist in exposition, we have now: (1) amended the pictorial representation in Figure 1 to clarify that `z_t` is fed back into the agent on the next time step, and (2) included an expanded version of this figure in the supplement, as Figure S1. \n\n> **Weakness 5: Missing citing work.**\n\nWe thank the reviewer for these suggestions. We looked into them, but do not think they are ultimately directly related. These studies involve building an agent's knowledge of the causal structure of the environment, either through curiosity about causal relationships in MDPs ([Sontakke et al (2020)](https://arxiv.org/abs/2010.03110)), measuring how much causal influence one's actions have at any time ([Seitzer et al (2021)](https://proceedings.neurips.cc/paper/2021/file/c1722a7941d61aad6e651a35b65a9c3e-Paper.pdf)), or constructing state abstractions that foreground the causally relevant variables in environment dynamics ([Lee et al (2021)](https://ieeexplore.ieee.org/document/9561439)). In contrast, CST does not aim to build causal knowledge about the environment. Rather than learning about some causal relationships in the extrinsic environment, CST aims to induce causal relationships inside of the agent itself (i.e. make the actions not just correlated with the messages `z_t`, but causally dependent on them).", "We thank the reviewer for their careful review and thoughtful comments. We especially appreciate the highlighting of our core contributions as the major strengths of the paper.\n\n> **Relationship to grounded representation learning**\n\nWe understand the reviewer's primary concern to be that CST may bear a relationship to existing techniques in language-conditioned and grounded policy learning, which may impact the relative significance of our contribution.\n\nIndeed there is a relationship! – but also two major points of difference.\n\n(1) While both our work and this other literature seek to ground part of an agent's representations using pre-existing structures, the goals are very different.\n\nIn the grounded representation learning literature, the goal is to scaffold agent representations using these pre-existing structures (typically language) in order to yield better-performing policies and more systematic generalization. This can take the form of grounding the agents with language inputs (e.g. [Hermann et al (2017)](https://arxiv.org/pdf/1706.06551.pdf), [Hill et al (2020)](https://arxiv.org/pdf/2009.01719.pdf)), language outputs (e.g. [Zellers et al (2022)](https://arxiv.org/pdf/2106.00188.pdf), [Lampinen et al (2022)](https://proceedings.mlr.press/v162/lampinen22a/lampinen22a.pdf)), or structured intermediate representations (e.g. [Koh et al (2020)](http://proceedings.mlr.press/v119/koh20a/koh20a.pdf)).\n\nIn contrast, the primary goal of our paper is not to improve policy performance or systematic generalization, but rather to build an explanatory model of the agent. With this goal, the grounding of the model's representation is a basic necessity (otherwise external users can't use it), but it is by no means sufficient for success. In fact, in the domain we consider, grounding is one of the easier problems to solve, and we largely take it for granted in the paper. 
The far more interesting and novel contributions of our paper are with respect to the other desiderata, particularly faithfulness, which are not issues generally addressed by this other literature.\n\n(2) While both our work and this other literature consider "semantic control" interfaces to agents, these take on very different forms and play very different roles in the respective research.\n\nIn the grounded representation learning literature, semantic control is often a primary objective in and of itself: researchers want to equip agents with language inputs so that users can issue instructions in a familiar medium. The agents must thus consume these instructions as _inputs_.\n\nIn contrast, in our work, semantic control is not a goal in itself, but rather a means for validating the faithfulness of the agent's outputs. Rather than explicitly training the agent to follow instructions as an _input signal_, we are training the agent to make a strong enough commitment to its _outputs_ such that if they were counterfactually changed, they could function as a control mechanism. Thus the site of control is different (inputs vs outputs) as well as when control occurs (at training time vs at evaluation time only in our case).\n\n(Finally, the fact that this control mechanism is "semantic" is really just a bonus from having grounded the outputs; the interesting achievement here is that counterfactual interventions on an agent's _output_ can be made into a control mechanism in the first place. This is what CST enables.)\n\n> **It would also be interesting if the authors could discuss whether the composition of the three methods is feasible and would perform well.**\n\nThe reviewer raises an interesting question. Our intuition is that the composition of the three methods is feasible, and might be desirable in some domains, as the different objectives have distinct strengths and weaknesses.\n\nFor example, CST-RL delivered the best performance in the data regime we explored, but this comes with the requirement of external reward signals and online learning. CST-MR generates dense learning signals while also permitting offline learning, but may be prone to overfitting as the agent attempts to reconstruct the entirety of its internal state. It is also less appropriate for high-dimensional agent representations, such as transformers. CST-PD strikes an interesting balance between the two, as it privileges reconstructing those features in the internal representation that actually have behavioral consequences. One can also imagine other variants that attempt to distill value functions, or generalized value functions, or that use contrastive objectives or learned discriminators.\n\nThese are interesting ideas to speculate upon. For the moment, we believe it is best to focus the paper on simple and straightforward demonstrations of the methods in isolation due to space constraints. \n\n---\n\nWe hope that we have resolved the reviewer's only stated concern with the paper. If so, we ask that the reviewer consider raising their score accordingly.\n", "We thank the reviewer for their careful review and thoughtful comments. We especially appreciate the recognition of our work as thorough and original.\n\n> **There's lots of parts (good) but they're all kind of underdeveloped (bad). 
The main limitation is that it's very hard to follow.**\n\nFirst, we feel it is important to address the reviewer's concerns regarding presentation, especially with respect to the density of the paper, its overall structure, and the rationale behind writing it in this particular way. We acknowledge that there's a fair amount of ground to cover!\n\nWe believe that to be impactful, this paper must achieve two goals:\n\n* (A) Identify conceptual issues in the XAI field, and iron them out (Sections 1-3)\n* (B) Once this is in place, identify novel solutions to the XAI challenge (Sections 4-6)\n\nWe believe that (A) is an important contribution, but not enough for a technical paper; however, (B) cannot be done unless (A) is already in place. We agree that (A) and (B) together make for a dense 9 pages, but limiting it to either (A) or (B) would make for a substandard paper. \n\nNonetheless, we recognize that the reviewer's concerns about clarity are valid, and so have taken several steps to make the presentation more digestible to the reader:\n\n* Updated the final paragraph of the introduction to better prepare the reader for the transition from theory in Sections 1-3 to practice in Sections 4-6.\n* Improved Figure 1, as suggested, so that the message `z_t` is more clearly depicted as an input to the network.\n* Added an additional supplemental Figure S1 that illustrates the CST approach in more detail.\n* Updated Section 2 to explicitly flag that the desiderata definitions are more fleshed out in Section 3.\n* Added more subsection headings in Section 4.\n* Reduced the (admittedly!) "creative" writing throughout the paper, especially at the beginning of Section 4.\n\n### Other comments:\n\n> **Q3) Does this method only work for single time-step instances?**\n\n> **Their limitation that memory states are not currently workable for general multi-time steps is a big one.**\n\nThis is not actually a limitation of the method. We simply chose to focus on always reverting to `t'=0` in our presentation in order to simplify our (already complex) paper and clarify our evaluation metrics. Indeed, we have experimented with the "multi-time step version" of the technique (where `t'` could take on any value), and it indeed works, though it doesn't offer any particular advantage in the evaluation setting we present. Exploring the multi-time step version would be more interesting and useful in more complex domains, and we have updated the Discussion to make clear that this is a valuable and immediately viable next step.\n\n> **Section 2, for instance, seems pretty important considering their claimed contribution but does not look fully fleshed out.**\n\nAs discussed above, there are many things to cover in our 9 pages. We agree that Section 2 is particularly important, and we elected to briefly introduce these concepts there before unpacking them in a more concrete context in Section 3. We have updated Section 2 to make this flow clearer to the reader from the start.\n\n> **Maybe Figure 1 can be improved to better relay the idea of time-step inputs?**\n\nWe interpret this comment to mean that the message `z_t` depicted in Figure 1 should be more clearly shown as an input (like `x_t`). We agree, and have updated the figure in the main text. 
We also introduced a new expanded Figure S1 in the supplement that relays these ideas in more detail, which we hope clarifies the core design.\n\n> **Q1) …this seems to be a big claim …**\n\nWe agree: this strong claim slipped in from an earlier draft of the paper. We have amended this.\n\n> **Q2) Section 2: I would elaborate this section more. Especially “faithful” since that seems to be multiple desiderata in and of itself and vague. Also maybe add in-line citations.**\n\nWe find that faithfulness is best understood in the context of decoders, specifically in how they fail to be faithful. This is discussed in context at the end of Section 3 (with 10 in-line citations). Nonetheless, we agree that it would be useful for the reader to at least be aware that this fuller explanation is forthcoming and have updated Section 2 accordingly.\n\n> **Typos and Writing suggestions**\n\nWe agree with all the reviewer's suggestions and have updated the paper accordingly. In particular, we agree that the introduction could mention "feed[ing] the model's outputs as inputs" as a key step of CST.\n\n---\n\nPlease let us know if these changes to the paper are not what was intended, as we are eager to make our paper as clear as possible. Given that the primary objection to the work was due to the presentation, we would ask that the reviewer reconsider their recommendation that this paper be rejected in light of these revisions, such that the contributions of this work can reach the larger NeurIPS audience.", "We greatly appreciate the reviewers' valuable and insightful comments and suggestions. \n\nWe have edited the paper accordingly, and believe it has improved significantly. We hope each reviewer considers adjusting their score upward in light of these improvements.\n\nNote that, given the edits, our page count goes slightly over 9 pages. We intend to address this as the reviewers reach consensus.", "This paper introduces a new approach to explainable models called CST, or causal self-talk. There are 4 variations presented. The paper also defines several “desiderata” that should guide explainable AI systems, such as faithfulness and scalability. The authors claim only self-models can maximize the desiderata, and CST is an example of that. There are experiments with CST-based models in a virtual environment called DaxDucks. Originality:\nThis paper is original in introducing and testing out the new concept of causal self-talk for explainable AI. It gives a pretty thorough coverage (concepts, methods, and experiments) of the new idea and offers several variations. The paper also defines concepts that are good to optimize in explainable AI.\n\nQuality:\nThis paper has mixed quality. Some sections look underdeveloped. Section 2, for instance, seems pretty important considering their claimed contribution but does not look fully fleshed out. I can tell the authors did a lot of work for this paper, and perhaps there are many valuable insights here, but they are lost in the presentation and explanation.\n\nClarity:\nThe paper is well written and clear in some sections (lit review and historical overview) but not others (methods and experiment). The writing is somewhat experimental/creative(?) in some parts. Section 4, in particular, is confusing. Maybe Figure 1 can be improved to better relay the idea of time-step inputs? \n\n\nSignificance:\nTheir newly introduced concept is interesting and paves a path forward for the XAI community on causal methods. 
Their limitation that memory states are not currently workable for general multi-time steps is a big one. Overall, it's an interesting early idea and implementation that can be built on with future work.\n Questions:\n1. Pg. 1, line 30 “We argue that only the AI system itself is capable of supplying a model that fulfills all the key desiderata”: this seems to be a big claim. I’m not convinced this is argued considerably in the paper or really the focus. The paper does propose one such method but doesn’t build a case for it being the only such system. \n2. Section 2: I would elaborate this section more. Especially “faithful” since that seems to be multiple desiderata in and of itself and vague. Also maybe add in-line citations.\n3. Does this method only work for single time-step instances?\n\nTypos and Writing suggestions:\n1. Pg. 3, line 86, “au naturale”: odd word choice\n2. Pg. 1, line 58, “to apply it to any network” change to “to apply it to any size network”\n3. Pg. 1, line 30 “key desiderata”: I would flesh out these desiderata (which are discussed in Section 2). \n4. Pg. 3, line 117 “feed the model’s outputs as inputs”: this seems to be the crux of the whole paper. I would introduce this way earlier.\n5. Pg. 2, line 59 “accurate, factual, and truthful”: seems to all mean the same thing?\n I can tell this paper was a lot of work to write. There’s lots of parts (good) but they're all kind of underdeveloped (bad). The main limitation is that it’s very hard to follow. The methods and experiments sections can be clarified. If the authors can do that successfully, this could be a huge improvement. Also, the paper tries to do a lot of things in just 9 pages. There are the desiderata definitions, and the paper starts off sounding like a conceptual/theoretical paper. Then it introduces and experiments with four different variations of CST. ", "This paper focuses on the problem of improving the faithfulness and explainability of AI systems. The authors propose an architecture based on a POMDP and add a decoder to the internal representation to generate semantically meaningful information that can be further leveraged for internal state updates and future policies. The authors showed that by intervening on the semantically grounded information, models can learn from the causal self-talk by incorporating prior internal representations with new semantic information. The experimental results show improvement of the model's causal faithfulness (i.e., when passed new semantic information, how often will the model act following the guidance) in a synthetic environment. 
The current definition provided by the authors is to what degree would the model act consistently with the intervention of semantic input $z_t$. This definition to me seems directly related to some current studies in language-conditioned policies learning or grounded policy learning. As the overall goal is to have model that not only acts with its own semantic output but also act consistently given new semantic output, there should be other offline learning methods that solves similar issue. This makes potentially makes the self-talk process less significant. As stated previously, my questions are as follows:\n1. How significant is the self-talk process compared to grounded representation learning that requires $z_t$ and $m_t$ to be aligned?\n2. The proposed 3 causal-self talk models have similar performance under current experimental settings. I was wondering if there will be specific scenarios where some of them would be best (e.g., is CST-MR always subordinate compared to the other two or could it be useful in some specific scenario). It will also be interesting if the authors could discuss if the composition of the three methods is feasible and will perform well (since each of them works toward the same goal)\n The authors have stated their limitations thoroughly in the paper.", " The authors delineate a set of desiderata for interpretabiliy of RL systems and then build a causally inspired interpretable solution in which the agent generate a textual output to describe its internal state and utilizes it as a form of memory to update its internal state during the course of a trajectory rollout. The causal aspect stems from interventions on the internal state of the agent being set to its initialization value. The authors evaluate their method on their own benchmark. Strengths:\n1. Well Written paper: The authors describe not only the intuition behind their work but also how the intuition came into being. This is useful when learning to do research in a new field. \n2. Important problem: The authors have selected an important problem. Explainability of RL agents is key to making RL agents safer and interpretable. \n3. New evaluation environment. The authors have built their own RL environment which (if open-sourced) could yield future results. \n4. Detailed Desiderata: The authors define their own set of desiderata for interpretable RL systems. \n\nWeaknesses:\n1. No pictorial representation of architecture. The authors should provide a figure depicting a rollout of the RL agent in the environment. I can conceive of this by representing an MDP along with a rollout of the policy for a few steps. \n2. Biggest Gains are in the most simple settings: Providing the textual input as a one-hot encoded vector generates most gains. The authors could try to pretrained embeddings for text to improve this.\n3. Highly simplistic evaluation setting: There exist multiple environments with textual annotations, e.g., https://openreview.net/forum?id=eCPCn25gat which could allow for similar experiments. Experiments on them would be useful. My worry is that the approach will not scale.\n4. Not enough explanation delineating the relationship between experiments and desiderata: While defining desiderata is helpful, a callback to the desiderata is necessary in the experiments sections describing how they were achieved. This is missing. \n5. 
Missing citing work: https://arxiv.org/abs/2010.03110 and https://proceedings.neurips.cc/paper/2021/file/c1722a7941d61aad6e651a35b65a9c3e-Paper.pdf and https://ieeexplore.ieee.org/document/9561439 \n\n 1. Do you think the intervention could hamper policy learning in complex environments? It appears that "resetting" the internal state constantly to the initialization value could potentially be harmful. \n2. Do you have any plans for releasing code + environment?\n3. Have you considered how the method might be used for language-conditioned imitation learning? One could feed in instructions instead of z from the previous time step. This could work like a conversational RL agent where it takes an instruction and performs some actions and then returns a string of text for conversation. \n I believe the paper in its current form needs significant improvement. There seems to be a disconnect between the evaluation and the desiderata which can be bridged through thorough experiments. Additional work is necessary before the work is ready for publication. I wish the authors all the best for this. ", "In this paper, the authors define 5 desiderata for AI explainability: grounding, flexibility, minimal-interference, scalability, and faithfulness. They show how these desiderata are satisfied by a system that learns an internal model via "causal self-talk" (a data augmentation strategy in which externally grounded information is decoded and then rerouted back into the internal state). Intuitively, the decoded information functions as an explanation for why the system behaved in a particular way. To enforce "causal faithfulness" (the decoded information exerts a causal influence on the system's output), the authors train the decoder to communicate information to earlier versions of the same agent. This creates the information asymmetry necessary for the agent to use the decoder output. The authors explore 3 different objective functions for training the decoder (based on reinforcement learning, memory reconstruction, and policy distillation). They find that the reinforcement learning variant performed best in terms of causal faithfulness and "semantic control" (using decoded information to obtain reward). Strengths:\n- The work is original, as far as I can tell (I am not an expert in this area).\n- The paper is clearly written and logically argued.\n- The ideas are interesting.\n- The experiments demonstrate the efficacy of the proposed method.\n- Given that explainability is an important real-world problem, the results are potentially significant.\n\nWeaknesses:\n- I thought the experiments were useful for demonstrating the efficacy, but they also felt rather contrived relative to real-world problems where explainability is needed.\n- Only variations of the proposed system were studied. This has the advantage of providing well-controlled comparisons, but it doesn't establish to what extent the system outperforms previously proposed systems. \n- This is perhaps less important, but I would have thought that human judgment of explanatory adequacy is important for evaluating real-world explanation systems. This matters at the end of the day, right? If the authors can address the two main weaknesses raised above (somewhat contrived experiments and lack of comparison to other approaches) then I will be happy to increase my rating.\n\nUPDATE: the authors have addressed these concerns. 
Although nothing really substantive has changed in the paper, I understand their reasoning as well as the page constraints. I don't see any major negative societal impact of this work. I thought the authors did a good job addressing limitations and future directions." ]
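The CST mechanism debated in these reviews (a decoder emits a grounded message `z_t` that is fed back to the agent as an input on the next time step, and faithfulness is probed by resetting memory to its `t'=0` state while injecting a chosen message) can be illustrated with a schematic sketch. All module shapes and names here are hypothetical placeholders inferred from the discussion, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CSTAgent(nn.Module):
    """Schematic CST agent: the decoded message z_t is routed back
    into the recurrent core as an input on the next time step."""

    def __init__(self, obs_dim=8, mem_dim=32, msg_dim=4, n_actions=3):
        super().__init__()
        self.core = nn.GRUCell(obs_dim + msg_dim, mem_dim)
        self.decoder = nn.Linear(mem_dim, msg_dim)  # emits z_t
        self.policy = nn.Linear(mem_dim, n_actions)

    def step(self, obs, memory, prev_msg):
        memory = self.core(torch.cat([obs, prev_msg], dim=-1), memory)
        z = torch.tanh(self.decoder(memory))        # grounded self-message
        return memory, z, self.policy(memory)

agent = CSTAgent()
mem, z = torch.zeros(1, 32), torch.zeros(1, 4)
for t in range(5):
    mem, z, logits = agent.step(torch.randn(1, 8), mem, z)

# Faithfulness probe: reset memory to its t'=0 state but inject a chosen
# message; a causally faithful agent should now act on z, not on the
# discarded memory.
mem_reset, z_injected = torch.zeros(1, 32), torch.randn(1, 4)
_, _, logits_after = agent.step(torch.randn(1, 8), mem_reset, z_injected)
```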
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 3 ]
[ "ddiv2XW_cvJ", "EAZr21q_-f1", "CpAeWbveU4C", "ITqJnNnzwBr", "6m5OVO4wUUT", "N8dgj6jjHNL", "GhUF4ugzR1g", "GhUF4ugzR1g", "eO8YOBTFHY", "OyJBosXDyHs", "nips_2022_bk8vkdQfBS", "nips_2022_bk8vkdQfBS", "nips_2022_bk8vkdQfBS", "nips_2022_bk8vkdQfBS", "nips_2022_bk8vkdQfBS" ]
nips_2022_t0VbBTw-o8
Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks
Randomized smoothing is one of the most promising frameworks for certifying the adversarial robustness of machine learning models, including Graph Neural Networks (GNNs). Yet, existing randomized smoothing certificates for GNNs are overly pessimistic since they treat the model as a black box, ignoring the underlying architecture. To remedy this, we propose novel gray-box certificates that exploit the message-passing principle of GNNs: We randomly intercept messages and carefully analyze the probability that messages from adversarially controlled nodes reach their target nodes. Compared to existing certificates, we certify robustness to much stronger adversaries that control entire nodes in the graph and can arbitrarily manipulate node features. Our certificates provide stronger guarantees for attacks at larger distances, as messages from farther-away nodes are more likely to get intercepted. We demonstrate the effectiveness of our method on various models and datasets. Since our gray-box certificates consider the underlying graph structure, we can significantly improve certifiable robustness by applying graph sparsification.
Accept
The paper proposes a novel approach to certify the robustness of graph neural networks via randomized smoothing. It does so by treating the networks as ``gray-box'' models and leveraging message passing routines. This yields an improved lower bound on probabilistic certification. All reviewers recognized the technical quality of the work and its novel perspective on adversarial robustness for graph neural networks. Some concerns regarding the probabilistic certification and the experimental details have been successfully addressed during the rebuttal. The paper is recommended for acceptance, conditioned on the inclusion of the additional experiments and discussions that arose during the rebuttal.
train
[ "kdNaPYdZq6c", "z5gUZozcBRK", "cKwU-NdVEh", "_2wNvAIfSDk", "ST1r1n7csHK", "QudunzAfTM", "Ztum2Og--2j", "eCVtujtxj-M", "CZpoAj7MyPd", "XvyO-xPQO0W", "xz_WM8EiKk5", "JiRxqFGj19Y", "ik8kYZ_P3dP", "lFropGBgXpC", "IVszeRwJNT", "IhFLWBkPgr0" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed explanations from the author. My questions are solved and I would raise my score.", " Thank you for the clarification and updating the paper. I am satisfied with the response. Therefore, I decided to keep my support of acceptance of this paper.", " Thank you for your response!\n\nAlthough the confidence level $\\alpha$ has just a minor effect on the strength of our certificates, we agree with your concerns regarding probabilistic certificates. Please note that there are already first works that \"derandomize\" probabilistic certificates [1,2]. The main idea is to evaluate the smoothed classifier exactly, which leads to deterministic certificates.\n\nOne naive approach would be to simply iterate over the support of the smoothing distribution (all possible edge deletions and node ablations). However, similar to how we use the message-passing structure for certification, we can also leverage it to partition the support of the smoothing distribution into a smaller number of equivalence classes. While derandomization is not the focus of our work - and in fact orthogonal to the task of certification - we will conduct additional derandomization experiments and add a discussion of this approach to the camera-ready version. We believe that future work can build upon it towards even more efficient and scalable derandomization schemes.\n\nThank you.\n\n### References\n[1] Alexander Levine, Soheil Feizi: (De)Randomized Smoothing for Certifiable Defense against Patch Attacks. NeurIPS 2020.\n\n[2] Alexander Levine, Soheil Feizi: Improved, Deterministic Smoothing for L1 Certified Robustness. ICML 2021.", " Thank you for your follow-up question!\n\nWe also delete edges and ablate node features during training (using different probabilities $p_d$ for edge deletion and $p_a$ for node feature ablation during training and inference). Please note that only the probabilities at inference eventually determine the strength of our certificates, while the probabilities used during training are just hyperparameters that we can optimize to improve the robustness-accuracy trade-offs. We consider this in our extensive experiments (Lines 297-304), where we explore the joint space of training-time and inference-time smoothing parameters.\n\nIn response to your comment, we now explicitly state in the paper that we also delete edges and ablate node features during training (Lines 223-224). In addition, we specify the values of $p_d$ and $p_a$ that we use to train specific models (see added Appendix N). Moreover, we will also consider your comment for improving our camera-ready version by adding additional experiments on the effect of choosing different $p_d$ and $p_a$ during training.\n\nThank you.\n", " Thank you for providing results for different levels of alpha. Although I do have concerns about this kind of probabilistic guarantee (as there is always a non-negligible chance the result is wrong), I understand this is also what other people claimed and not an issue of this single paper. Please do add these discussions to the paper. I keep my positive score and support the acceptance of this paper. Thank you.", " Thanks for the authors for the clear response and efforts on paper revision. Most of my concerns are addressed. There is just one concern remaining:\n\n- What is the model's training $p_d$ and $p_a$ throughout the experimental evaluation? The manuscript does not mention that they use node or edge ablation in \"Section 6 - Datasets and models'' paragraph. 
So I am confused about whether node and edge ablation are actually in use during training. Though by inferring from context, such as the training of the $t$ mask or \"different $p_d$ and $p_a$ during training and inference\", I can tell that $p_d$ and $p_a$ ablations are used during training, this point does not seem to be made clear, and the actual values of $p_d$ and $p_a$ in training are either omitted or not shown in an obvious place.\n", " Thank you for the reply. My questions have been answered so far, I will keep monitoring the discussion with other reviewers.", " Thank you for your review!\n\n### Concerning our focus on feature perturbations (Comment 1)\nThank you for pointing out that our method can technically be extended to certify robustness against edge-modification attacks. In this paper, we focus on the novel problem of arbitrary feature manipulations of entire nodes since (1) there are already certificates against edge-modification attacks in the literature (see e.g. [1]), and (2) such feature-based attacks have become significantly stronger in recent years and represent realistic scenarios [2,3]. \n\n### Concerning the choice of adversarial nodes (Comment 2)\nAs you correctly pointed out, the threat model \"given a target node, we can allow arbitrary N nodes to be attacked\" may be a more common setting in practice. In fact, we discuss both scenarios in our work: given a specific set of adversarial nodes, or given N arbitrary adversarial nodes (compare Lines 137-139). We agree with you that the latter certificate may be more useful in general, and we therefore present a thorough evaluation of such certificates in Section 6.\n\n### How are the subgraphs generated?\nWhile edges are randomly deleted, nodes are only randomly ablated (i.e. we replace their features with a special token). In practice, we sample edges to delete and nodes to ablate independently. If we did not sample an edge for deletion, it remains in the graph even if its incident nodes are ablated, since we do not delete nodes but only ablate their features.\n\n### How do we guarantee that the graph is still connected and not too sparse?\nWe do not guarantee that the graph remains connected, which is not required for the base models or for computing certificates. We control the sparsity of the resulting graphs by adjusting the edge deletion probability p_d. With larger p_d, the sampled graphs become sparser and our certificates become stronger. In our extensive experiments with varying edge deletion probabilities (Section 6), we analyze the trade-offs between accuracy and robustness of the models under randomized edge deletion (Figure 6).\n\n### Why is our certificate a \"gray-box\" certificate?\nThe term \"gray-box\" does not refer to the information the model has about its input data, but to the information the certificate has about the model. Existing certificates differ in their knowledge about the model (Section 7):\n\nFor example, the certificate proposed in [4] considers the specific form of message-passing used in Graph Convolutional Networks and cannot be applied to other GNNs. Their certificate is a white-box certificate since it has full knowledge about the model.\n\nAnother example is the method proposed in [1], which can certify any GNN model, even any function $f: \\{0, 1\\}^D \\rightarrow \\mathbb{R}^C$, since it does not take any properties of the function $f$ into account (like the fact that it is a message-passing GNN). Their certificate is a black-box certificate since it has no knowledge about the model. 
\n\nIn contrast, our paper represents a novel contribution to combine the best of both worlds: our certificates are model-agnostic *and* consider the underlying message-passing structures of GNNs. As a result, we obtain significantly stronger robustness guarantees compared to existing certificates.\n\n### References\n[1] Aleksandar Bojchevski, Johannes Gasteiger, Stephan Günnemann. Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More. ICML 2020.\n\n[2] Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei. Towards More Practical Adversarial Attacks on Graph Neural Networks. NeurIPS 2020.\n\n[3] Xu Zou, Qinkai Zheng, Yuxiao Dong, Xinyu Guan, Evgeny Kharlamov, Jialiang Lu, Jie Tang. TDGIA: Effective Injection Attacks on Graph Neural Networks. KDD 2021.\n\n[4] Daniel Zügner, Stephan Günnemann. Certifiable Robustness and Robust Training for Graph Convolutional Networks. KDD 2019.\n", " Thank you for your review!\n\n### Concerning real-world examples for our threat model (Comment 1)\nPlease consider that we motivate our threat model in the introduction where we also provide a real-world example (Lines 22-24). Specifically in social networks, adversaries typically have full access to features of a few nodes in the graph. Further real-world examples include (1) vandalism against entities in public knowledge graphs [1], (2) fraud in online reviews [2], and (3) many security-related applications in financial or medical domains [3] where adversaries can also control entire nodes in the graph. Beyond that, our certificates also provide helpful information on the robustness of GNNs under non-adversarial perturbations including incomplete data, random perturbations, noisy signals, and data measuring errors. We included these additional examples and justifications for the threat model in Appendix L, and we will consider them for the introduction of the camera-ready version.\n\n### Concerning computational cost (Comment 1)\nOur certificates are more sample-efficient than existing probabilistic certificates for GNNs. As we discuss in Lines 263-273, our method is significantly more efficient than existing work since 2,000 samples are sufficient to obtain strong guarantees, which takes only a few seconds in practice. We believe future work could even improve upon our results towards more robust models in online settings. Moreover, an offline analysis is also useful for the defender since e.g. knowing how robust a model is can help defenders to decide whether the model is ready to run in production.\n\n### Concerning r-robustness (Comment 2)\nThank you for pointing out this interesting connection to the r-robustness literature. We are not familiar with this field so far and the short answer period is not enough time to carefully consider all connections between the two fields and potential implications. From an introductory read of the literature, r-robust graphs have the property that we can remove r-1 nodes from the neighborhood of every remaining node such that the resulting graph is still connected, and certain algorithms can still find consensus under r-robust graphs even if there are malicious nodes [4,5]. They have very different goals as r-robustness is about reachability, while we want to limit the chances of adversarial messages to reach target nodes. Moreover, it appears that high r-robustness implies low certifiable robustness since higher connectivity yields lower probability to intercept messages. 
But we will need to look deeper into the literature about r-robustness and consider the connection to consensus algorithms more carefully until the camera-ready deadline.\n\n### References\n[1] Stefan Heindorf, Martin Potthast, Benno Stein, Gregor Engels: Vandalism Detection in Wikidata. CIKM 2016.\n\n[2] Bryan Hooi, Neil Shah, Alex Beutel, Stephan Günnemann, Leman Akoglu, Mohit Kumar, Disha Makhija, Christos Faloutsos. BIRDNEST: Bayesian Inference for Ratings-Fraud Detection. SDM 2016.\n\n[3] Leman Akoglu, Hanghang Tong, Danai Koutra: Graph based anomaly detection and description: a survey. Data Min. Knowl. Discov. 29(3): 626-688 (2015)\n\n[4] Haotian Zhang, Elaheh Fata, Shreyas Sundaram: A Notion of Robustness in Complex Networks. IEEE Trans. Control. Netw. Syst. 2(3): 310-320 (2015)\n\n[5] Heath LeBlanc, Haotian Zhang, Xenofon D. Koutsoukos, Shreyas Sundaram: Resilient Asymptotic Consensus in Robust Networks. IEEE J. Sel. Areas Commun. 31(4): 766-781 (2013)\n\t", " Thank you for your review!\n\nWe choose $\\alpha=0.05$ since this is a standard value used in the literature (see e.g. [1]). Nonetheless, we share your concerns and therefore conducted additional experiments using smaller confidence levels (see added Appendix M). Notably, our method still yields strong guarantees for significantly smaller confidence levels such as alpha=0.0001, for which our robustness guarantees hold with probability 99,99%. We believe that we can present all results for smaller confidence levels in the camera-ready version. Please also consider our additional elaboration in Appendix M where we explain that alpha has just a minor effect on the strength of our certificates.\n\n### How strong is the method with smaller alpha and how many Monte-Carlo samples are required?\nWe obtain strong guarantees for smaller confidence levels, requiring little computational efforts. We found that the difference in certifiable robustness for alpha=0.05 and alpha=0.0001 is already extremely small when drawing just 2,000 Monte-Carlo samples (Figure 20). Drawing 2,000 samples takes only 12 seconds on Cora-ML on average. This is significantly faster compared to all previous probabilistic certificates for GNNs that use up to 10^6 Monte-Carlo samples [2].\n\n### What is the certification accuracy for different levels of alpha?\nIn additional experiments with smaller confidence levels, we also found that the classification accuracy is high for just a few thousand Monte-Carlo samples. Please find more exhaustive experiments regarding accuracy for different levels of alpha and Monte-Carlo samples in Figure 21 (Appendix M).\n\n### References\n[1] Alexander Levine, Soheil Feizi: Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation. AAAI 2020. \n\n[2] Aleksandar Bojchevski, Johannes Gasteiger, Stephan Günnemann. Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More. ICML 2020.", " Thank you for your review!\n\n### Concerning further extensions of our threat model (Comment 1)\nThank you for pointing out that our method can be extended to a wider range of threat models. Please consider that we already discuss extensions like adversarial node insertion and deletion in Lines 206-210, where we point out that evaluating such certificates may not be possible for general graphs without further assumptions about how inserted/deleted nodes are connected to target nodes. 
Moreover, there are already robustness certificates for adversarial edge modification [1] (see also our response to Reviewer EH8E). We believe that sharing our core contribution is more important than introducing all its possible extensions, but we will discuss future work including such extensions more carefully in the camera-ready version.\n\n### Concerning more training techniques (Comment 2)\nWe agree with you that other training techniques may yield even stronger models in terms of accuracy and robustness. We will experiment with more training techniques (including the ones you mentioned) until the camera-ready deadline. Beyond that, we believe our main contribution to the scientific community is our robustness certificate, which (1) holds independently of the training technique, and (2) may further promote novel architectures and training methods towards more robust GNNs in the future.\n\n### How is the node mask $t$ optimized?\nDuring training, we sample one different graph each epoch, where each sampled graph contains nodes with features replaced by the ablation representation $t$. For optimization, we implement $t$ as a parameter of our models: We initialize $t$ using Xavier initialization and we optimize it as we optimize the GNN weights during training. We added the corresponding details to Appendix I to better describe the optimization process, and also note that we uploaded our code as supplemental material.\n\n### What does $\\epsilon$=0.022 stand for?\nGraph Diffusion Convolution (GDC) [2] is a graph preprocessing technique that we use to sparsify the graph (Lines 274-284). We set the hyperparameter $\\epsilon$ of GDC to 0.022, which controls the sparsification strength of the preprocessing technique. We consider this detail as an important hyperparameter for reproducing our result that sparsification improves certifiable robustness. We agree that this was unclear and we therefore added clarifying words.\n\n### Lines 252-262 and Figure 4a)\nThank you for pointing out potential for saving space, and we will consider this for the camera-ready version. We further believe that Lines 252-262 are important to discuss the difference between our method and existing certificates for GNNs. \n\n### References\n[1] Aleksandar Bojchevski, Johannes Gasteiger, Stephan Günnemann. Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More. ICML 2020.\n\n[2] Johannes Gasteiger, Stefan Weißenberger, Stephan Günnemann: Diffusion Improves Graph Learning. NeurIPS 2019.", " We replied to all reviewers and changed our initial submission in response to their comments as follows:\n\n- Reviewer KAVi: Additional optimization details in Appendix I \n- Reviewer KAVi: Additional training details (training-time smoothing parameters) in Appendix N\n- Reviewer dqSE: Additional experiments using smaller confidence levels in Appendix M\n- Reviewer 51To: Additional motivation for our threat model in Appendix L", " This paper proposes a novel method based on randomized smoothing to achieve robustness for graph neural networks. The task here is to certify the classification accuracy of GCN under any possible attacker who can perturb a bounded number of node features. Compared to the existing method, the proposed one considers random edge ablation and random node-level ablation. As a result, the approach achieves higher certified accuracy than existing baselines by a large margin. 
Strengths:\n- Solid technical contributions: To the best of my knowledge, this is the first paper that applies random **edge** ablation. Existing approaches for certifying smoothed GCNs only consider random **node** ablation. When edge ablation comes into play, the certification can take the graph topology into consideration to achieve a better robustness certification. This paper studies this problem in depth, by providing an upper bound of $\\Delta$ for certification and characterizing the computing methods for exact certification for general graphs or tree-structured graphs.\n\n- Great writing quality: the paper is easy to follow, with just a few typos/ambiguities. In particular, the paper studies the proposed smoothing scheme in depth, beyond just showing good empirical performance. This in-depth investigation includes theoretically characterizing the limitations and advantages of different graph smoothing protocols, proposing more expensive though tighter computing methods, and presenting quite a few ablation studies, especially those revealing the benefits of sparsity.\n\nWeaknesses:\n- The proposed method is a bit specific to graph neural networks with node perturbations. In other words, the method could further benefit from extensions to a wider range of threat models, i.e., node addition/removal, edge feature perturbations, etc.\n\n- More training techniques can be applied to further strengthen the certified robustness guarantee. For example, SmoothMix training (Jeong et al), SmoothAdv training (Salman et al), and DRT training (Yang et al).\n\nOverall, I think these weaknesses are not critical. Disclaimer: I'm not actively working with GNNs. So I didn't evaluate the impact of this paper on the GNN community, nor whether the evaluation follows standardized protocols. I will refer to other reviewers' comments on these aspects to adjust my score if necessary. In terms of suggestions for the authors, see weaknesses.\nQuestions on writing:\n- Line 234 and Appendix I: in the training process, how is the node mask t optimized?\n- Line 284: what does $\\epsilon=0.022$ stand for?\nSuggestions on writing:\n- Line 252-262: from the equation formulation, it is straightforward to see that the proposed certification is independent of the perturbed feature dimension $d$. Maybe we can move Figure 4(a) and Lines 252-262 to the appendix, since they consume much space. See weaknesses.", " This paper presents a new gray-box certification method for graph neural networks (GNNs) using smoothing. Unlike previous approaches, it utilizes the underlying message-passing principle (e.g., deleting edges, ablating nodes, etc.) to obtain a stronger guarantee. To achieve certification, the authors first derive the worst-case change in label probability and then propose a practical way to give a sound lower bound.\n Strength:\n\n1. This method can give stronger certification compared to previous approaches.\n2. The theoretical results presented in this paper are novel and practical.\n3. Experimental evaluation shows both speed and certification benefits compared to pure black-box methods.\n\nWeaknesses:\n\n1. I am a bit concerned about the confidence level alpha=0.05 used for certification, because it means that the guaranteed robustness only holds with a probability of 95%. 
Some existing works, such as (Zugner and Gunnemann 2019), that use deterministic certification methods do not have this limitation.\n Can you show results with different confidence levels and see how strong the method is with a smaller alpha? How many samples are required and what is the certification accuracy for different levels of alpha?\n Yes, the authors discussed limitations of this work explicitly and I see no obvious negative societal impacts.", " This paper presents a technique to certify the robustness of Graph Neural Networks. Inspired by randomized smoothing, the proposed method takes advantage of the graph structure to obtain various certificates (e.g., on accuracy). For the studied threat model, these certificates are tighter than those proposed in the literature so far. The threat model consists of allowing an adversary to arbitrarily manipulate the features (i.e., emitted messages) of a subset of nodes. Strengths:\n* The authors are the first to truly take advantage of network structure to certify GNNs (at least from a smoothing perspective).\n* The results clearly demonstrate that they can certify a larger proportion of cases than prior methods (when the number of features increases).\n* The threat model is fairly broad and could have practical applications.\n\nWeaknesses:\n* It would help the reader to explain a bit why the studied threat model is interesting and when it arises in the real world. The certification method, which relies on smoothing, can only be used for offline analysis or, if used online (let's say on a network of machines), will be prohibitively expensive to run (i.e., sampling of various network configurations).\n* My knowledge of graph robustness is fairly limited, but I assume there have been a number of works (not specifically designed for machine learning applications) that also provide robustness to the threat model considered (e.g., r-robustness). Maybe expanding the literature survey slightly and expanding the experiments to demonstrate that r-robust networks are 100% certifiable (using the proposed approach) would help make a stronger and broader point. The paper is well-written. I would appreciate it if the authors addressed the weaknesses stated above. Yes, the limitations are reasonably addressed.", " This paper proposes a robustness certification against adversarial attacks on graph data. The model randomly ablates some nodes and deletes some edges so that the adversarial impact on certain nodes will not propagate to the target node with high probability. They show that the pipeline achieves good robustness, especially at larger distances. ## Strengths\n\n* The robustness certification can be verified against arbitrary perturbations on the adversarial nodes. In the pipeline, the certification is achieved by blocking the messages from the adversarial nodes. Therefore, it can provide certification without restricting the perturbation magnitude on each node. This seems like a realistic setting to me - in practical graph data settings, considering the number of adversarial nodes (users) is more important than considering the magnitude of adversarial perturbation on each node. \n\n* The experimental evaluation is comprehensive. The authors evaluate their approach in different settings, e.g. with skip-connections and under different sparsifications. These settings show the flexibility of the approach in different cases.\n\n\n## Weaknesses:\n\n* Only feature-based attack robustness is considered. 
Actually, a wide range of attacks on GNNs focus on edge-modification attacks (e.g. [a,b]). The paper does not discuss the applicability of their approach against this type of attack. Actually, it feels to me that the proposed approach can still be adapted as long as both endpoints of an edge are considered as adversarial nodes, although such a bound seems like a loose one.\n\n[a] Zügner D, Günnemann S. Adversarial Attacks on Graph Neural Networks via Meta Learning[C]//International Conference on Learning Representations. 2018.\n\n[b] Xu K, Chen H, Liu S, et al. Topology attack and defense for graph neural networks: an optimization perspective[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence. 2019: 3961-3967.\n\n* The certification result is node-specific and not applicable in practice. If I understand correctly, the exact certification provided by the algorithm is \"given a target node, the adversarial perturbation on a specific node set can/cannot be certified to be robust\", but not \"given a target node, we can allow arbitrary N nodes to be attacked\". The latter is a more useful certificate that can be achieved by previous approaches, while the current one is limited to a specified node set. In the experimental setup, how are the subgraphs generated exactly? Since the nodes are randomly sampled, does an edge only exist if both of its nodes are sampled? How do you guarantee that the graph is still connected and not too sparse?\n\nI am curious why the authors keep mentioning gray-box certification in contrast to black-box. Since the certification is performed by the model itself, the model will for sure know the exact data as well as other information. So I can only imagine the white-box (which also satisfies gray-box and black-box) setting of the certification. As mentioned in the weaknesses, I think the authors can improve the work by considering edge-based attacks." ]
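Since several of the exchanges above turn on the confidence level $\alpha$, here is a minimal sketch of the standard ingredient behind such probabilistic certificates: a one-sided Clopper-Pearson lower bound on the smoothed classifier's success probability estimated from $n$ Monte-Carlo samples. This is the usual construction in the randomized-smoothing literature and is shown only for illustration; it is not necessarily the exact routine in the paper's code.

```python
from scipy.stats import beta

def clopper_pearson_lower(k, n, alpha):
    """One-sided (1 - alpha) lower confidence bound on a Bernoulli parameter,
    given k successes in n Monte-Carlo samples. With alpha = 1e-4 the
    resulting certificate holds with probability 99.99% over the sampling."""
    return 0.0 if k == 0 else beta.ppf(alpha, k, n - k + 1)

# e.g. 1960 of n = 2000 smoothed predictions agree with the majority class:
p_lower = clopper_pearson_lower(1960, 2000, alpha=1e-4)  # roughly 0.97
```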
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 4 ]
[ "eCVtujtxj-M", "_2wNvAIfSDk", "ST1r1n7csHK", "QudunzAfTM", "XvyO-xPQO0W", "xz_WM8EiKk5", "CZpoAj7MyPd", "IhFLWBkPgr0", "IVszeRwJNT", "lFropGBgXpC", "ik8kYZ_P3dP", "nips_2022_t0VbBTw-o8", "nips_2022_t0VbBTw-o8", "nips_2022_t0VbBTw-o8", "nips_2022_t0VbBTw-o8", "nips_2022_t0VbBTw-o8" ]
nips_2022_djOANbV2zSu
UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs
We present an approach to quantifying both aleatoric and epistemic uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs). While most works in the literature that use GANs to generate out-of-distribution (OoD) examples only focus on the evaluation of OoD detection, we present a GAN based approach to learn a classifier that produces proper uncertainties for OoD examples as well as for false positives (FPs). Instead of shielding the entire in-distribution data with GAN generated OoD examples which is state-of-the-art, we shield each class separately with out-of-class examples generated by a conditional GAN and complement this with a one-vs-all image classifier. In our experiments, in particular on CIFAR10, CIFAR100 and Tiny ImageNet, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training based classifiers. Furthermore, we also find that the generated GAN examples do not significantly affect the calibration error of our classifier and result in a significant gain in model accuracy.
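Reading this abstract together with the author responses below, the two uncertainty read-outs admit a compact sketch. Note that the normalization of the one-vs-all scores into class probabilities and the max-aggregation here are our simplifications; the paper's exact aggregation is its Eq. (4).

```python
import numpy as np

def uncertainties(in_scores):
    """in_scores[y] ~ C(in | x, y): one-vs-all probability that (x, y) is
    in-distribution. Per the rebuttal, aleatoric uncertainty is the Shannon
    entropy of the (normalized) class distribution and epistemic uncertainty
    is 1 - C(i | x); the max below is only a stand-in for that aggregation."""
    p_y = in_scores / in_scores.sum()               # proxy for p(y | x)
    aleatoric = -np.sum(p_y * np.log(p_y + 1e-12))  # Shannon entropy
    epistemic = 1.0 - np.max(in_scores)             # stand-in for 1 - C(i | x)
    return aleatoric, epistemic
```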
Accept
The authors propose a new approach for training image classifiers with complete uncertainty quantification based on generative adversarial networks. The main idea is to use GANs to "shield" each class separately from the out-of-class (OoC) regime. This is done in combination with a one-vs-all classifier in the final DNN layer trained jointly with a class-conditional generator for out-of-class data in an adversarial framework. Finally, these classifiers are then used to model class conditional likelihoods. The empirical validation shows improved OoD detection and FP detection performance when compared to SOTA in this setting. The reviewers appreciated the clarity of exposition and the positioning with respect to the related works. The unified approach applicable both to FP detection and OoD detection was deemed novel. On the negative side, the method seems to be extremely involved in terms of the required architectural pieces, distinction between low-dim and high-dim settings, primarily low-resolution data used for evaluation, and the number of hyperparameters. During the discussion the authors addressed the main questions raised by the reviewers. Nevertheless, given that all of the reviewers are leaning positive, I'll recommend the acceptance of this work. Please do a full pass in terms of formatting of the whole manuscript, including removing inline tables and figures, removing things like double parenthesis, bolding specific letters (e.g. L247), clarify the flow of information in figure 1 so that one can grasp the high-level overview of the algorithm, and incorporate the remaining points raised during the discussion.
train
[ "LysUvSOUmwy", "0TeK_QGlfPi", "LBgploJ5QkZ", "GVLDtijmSqp", "hfKN5IJ6oE", "r4bRgx9aXkw", "Li7tyqogcP", "rZqYATMYyLj", "CwoCIOVEE-v", "XXWfjMpFsLQ", "vWJUxLW3hE", "NCivu_8FmQm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your feedback, which resolves most of my concerns. My rating remains unchanged.", " Thanks for the response. My questions have been answered, and the additional experiments do reinforce that the setting is likely to be robust. My ratings remain unchanged.", " Thanks for the response. \nMost of my concerns have been addressed and I will keep my rating. \nWe suggest the authors polish the final version according to the response if it is accepted. ", " I would like to thank the authors for the rebuttal and providing additional experiment on TinyImageNet. I therefore raise my score from 4 to 5.", " We included additional toy examples on a 3x3 grid of Gaussians in the appendix of the revised paper version. To give more insight into the performance on higher dimensional data we also included results on TinyImageNet (64x64 images and 200 classes) at the end of the appendix. For the camera ready version the new table 10 will then be moved to the main part of the paper.\n\nTraining the conditional autoencoder jointly with the other models was done in GEN, which indicates that this might be viable. However, GANs are usually difficult to train and this can already be observed when the data generating distribution is stationary. Results from statistical learning theory on the convergence of GANs and AEs however support our choice. We also noticed in our experiments on the toy examples that the low dimensional regularizer prevents the mode collapse of the generator. In the appendix you can now also find additional OoC examples with and without the low-dimensional regularizer applied. Especially on the MNIST 0-4 dataset one can notice that the samples are much more diverse when using the regularizer.\n", " Our parameter studies in the appendix show that the exact choice of hyperparameters is insignificant, reducing the amount of hyperparameter tuning (please see also our additionally added results on the higher dimensional TinyImageNet dataset which used the same hyperparameters as for CIFAR10/CIFAR100). On the toy examples we observed that the low dimensional regularizer is preventing a mode collapse of the generator. In general, we observed that the training remains stable even when extending the number of epochs. As we observed during our studies, using a Wasserstein formulation for the GAN also helps to stabilize the training. Altering the weight between real and generated OoC examples (coefficient $\\lambda_\\textrm{real}$) did not affect the stability during training which is why we did not have a look at alternative formulations. We would still like to thank the reviewer for the interesting idea.\n\nPlease also have a look at the additional OoC examples, showing qualitatively the influence of the low-dimensional regularizer on the generated samples, as requested by other reviewers. For the camera ready version the new table 10 will be moved to the main body of the paper.", " Although conformal prediction methods are conceptually quite remote from our approach, we are ok with extending our related work by citing [a], [b] and [c].\n\nGenerated samples $\\tilde{x}$ are used in equation (9) as a proxy for out-of-class examples. We compute aleatoric uncertainty as the Shannon entropy over the predicted class probabilites. This is a valid definition because we showed that $\\tilde{C}(y|x) \\xrightarrow[|S|\\to\\infty]{} p(y|x)$. If the real $p(y|x)$ is close to a uniform distribution over the classes (which maximizes the Shannon entropy) we have a high uncertainty and vice versa. 
For epistemic uncertainty $\\tilde{C}(i|x)$ can be considered as a proxy for $p(i|x)$, which results in the epistemic uncertainty by $1-\\tilde{C}(i|x)$. We thank the reviewer for the remark and will make this clearer by adding the aforementioned explanation in line 161.\n\nFor our approach a one-vs-all classifier is necessary to formulate equation (4) ($\\tilde{C}(i|x)$). This also allows $\\tilde{C}(i|x,y)$ to predict arbitrarily small in-distribution probabilities, which would not be possible with softmax. Additionally this formulation allows us to utilize the out-of-class examples. As these OoC examples can be in-distribution for other classes, this would also not be possible with a multi-class approach.\n\nWe apply MC-Dropout to our approach because it is complementing our method in a canonical way by additionally quantifying model uncertainty. If we omit the MC-Dropout we are still improving on the more challenging benchmarks of CIFAR10 0-4 vs. CIFAR10 5-9 and CIFAR100 0-49 vs. CIFAR100 50-99 over the closely related methods \"Confident Classifier\" and \"GEN\" especially when considering AUROC, AUPR-In and AUPR-Out.\n\nWe choose a GAN for our method as we are relying on the min-max game between the discriminator and classifier to let the generator distribution converge to the boundary of our training distribution. Utilizing other generative models would require changing the respective objective drastically to be able to sample from the boundary. This might even result in additional optimization and preprocessing steps in the pipeline, reducing the end-to-end character of our approach.\n\nIn the revised paper version we add additional experiments on the TinyImageNet dataset. You can find the corresponding tables at the end of the appendix. For the camera ready version we will add the new table 10 to the main part of the paper.\n\n[a] Soundouss Messoudi, Sylvain Rousseau, Sébastien Destercke: Deep Conformal Prediction for Robust Models. IPMU 2020 \n[b] Maxime Cauchois, Suyash Gupta, John C. Duchi: Knowing what You Know: valid and validated confidence sets in multiclass and multilabel prediction. JMLR (2021) \n[c] Sangdon Park, Osbert Bastani, Nikolai Matni, Insup Lee: PAC Confidence Sets for Deep Neural Networks via Calibrated Prediction. ICLR 2020\n", " In terms of the influence of quality of generated OOD examples, we tested generator networks with different numbers of parameters, which should also alter the quality of generated samples, and noticed low to no influence on the OoD detection performance of the final classifier. We will address this with a short note in the appendix. \nBy design of the min-max game between the discriminator and the classifier in the GAN objective, the generator produces samples on the boundary of the training distribution which can then be utilized as OoC examples. To some extent the loss weight for the classifier, $\\lambda_\\textrm{cl}$, is also controlling the closeness to the training distribution. To provide some insight into the influence of the number of classes, we will add an ablation study showing the performance of the UQGAN classifier in comparison to the entropy baseline dependent on the number of classes.\n\nOur related work is organized such that we reference common methodology and publications that are related to the different parts of our method in dedicated paragraphs. 
For the camera-ready version we will add paragraph headings to make this structure clearer and add a small introduction as follows:\n\n\"In this section we give an overview of common uncertainty quantification methods as well as publications related to the different parts of our method. In this context the task of uncertainty quantification is to assign scalar values to predictions, quantifying aleatoric (in-distribution) and epistemic (out-of-distribution) uncertainty.\"\n\nWe thank the reviewer for the constructive remark.\n\nPlease see the appendix for more generated OoC examples with and without the low-dimensional regularizer. Additionally, we supplied results on the TinyImageNet dataset (64x64 images and 200 classes), which is well beyond the evaluation of the cited publications. For the camera-ready version the new Table 10 will then be moved to the main body of the paper.\n", " This paper proposes a GAN-based approach to quantify both out-of-distribution and in-distribution uncertainty in image classification. Specifically, the authors use GANs to generate examples from the out-of-class regime several times, and each time then use the examples to train a one-vs-all classifier in the final DNN layer. Finally, the resulting classifiers are used to model class conditional likelihoods. The authors conduct experiments on MNIST, CIFAR10, CIFAR100 and show state-of-the-art performance in terms of OoD detection and FP detection. Strengths: \n1. The presentation of this paper is clear and easy to follow. \n2. Existing GAN-based approaches mostly only predict a single score for OoD examples, while this paper proposes to quantify both in-distribution uncertainty and out-of-distribution uncertainty. This paper achieves this by repurposing the training scheme of the classifier and the generation of OoD examples. The idea is interesting and can inspire others in this community. \n3. The authors show strong results on several datasets, including MNIST, CIFAR10, CIFAR100.\n\nWeaknesses:\n1. There are no ablation studies to verify the effectiveness of each design choice of the proposal, for example, the number of classes used for training the classifier each time, the image quality of GAN-generated examples, etc. \n2. It would be better to clarify the necessity and importance (or practical significance) of quantifying both in-distribution and out-of-distribution uncertainty in a single model.\n 1. Typically, GANs are trained to model the distribution of real data, and sometimes even humans cannot distinguish real data from examples generated by state-of-the-art GANs [1]. My question is how to determine whether a GAN-generated example is out-of-distribution or in-distribution? How to choose the GAN models? How does the quality of GAN-generated examples affect uncertainty quantification?\n\n[1] Karras T, Laine S, Aittala M, et al. Analyzing and improving the image quality of stylegan[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 8110-8119.\n\n2. Although the authors provide a survey [16] on uncertainty quantification methods in Line 28, we suggest a brief introduction to uncertainty quantification in the section of Related Work. Besides, we suggest breaking the section of Related Work into several subsections, which can make it clearer.\n 1. The organization of the section of Related Work is not clear. \n2. It would be better to show more GAN-generated examples for reference. 
Besides, it would be better to conduct experiments on a relatively large dataset, e.g., ImageNet. \n", " This paper proposes a method to estimate both aleatoric and epistemic uncertainties. It does so by training a conditional GAN in the latent space of a pretrained autoencoder to shield each class with out-of-class samples. ## Strength\n- Evaluation metrics are comprehensive.\n- The provided toy dataset is illustrative.\n\n## Weakness\n- Some related works are missing, e.g., conformal prediction.\n- It is hard to assess the effectiveness of the proposed method without experiments on ImageNet. I understand that limited computational resources might be a concern, but any form of ImageNet, like a low resolution of 64x64 or TinyImageNet (which has 200 classes), would be very helpful.\n- Sections 3.1 and 3.2 are disconnected. How exactly are aleatoric and epistemic uncertainties computed? How are the generated $\\tilde{x}$ used? Why are they valid definitions?\n- Why is a one-vs-all classifier necessary here? Can similar results be achieved by normal multiclass classifiers?\n- MC-Dropout is basically a Bayesian ensemble and can be plugged into other methods as well. It is known that ensembling can drastically improve performance. So the comparisons in Tables 3, 4 seem not very fair. Without MC-Dropout, the improvements of the proposed method are not always significant. Please see my questions in Weakness. Additional question:\n- We know that GANs usually produce much sharper images than VAEs. However, given that the images in Fig. 3 are so blurry, and image quality is not the main focus of this paper, can the authors justify their design choice of using a GAN here? Can other generative models be used here? N/A", " The paper presents UQGAN, a model for uncertainty quantification, including both OoD and FP detection.\nThe proposed model contains a classifier C that outputs for a sample-label pair $(x,y)$ the probability that the pair is in-distribution. \n\nFor training C, the in-distribution pairs consist of real pairs $(x,y)$ directly drawn from the dataset, while out-of-distribution pairs are of two types: either $(x,y')$ where $y'$ is not the correct label associated with $x$, accounting for class uncertainty, or $(\\hat{x}, y)$ where $\\hat{x}$ is a sample generated by a cGAN conditioned on $y$.\nThe cGAN's generator is trained in the latent space of a pretrained conditional auto-encoder jointly with a discriminator, but also with the classifier C, so that the generated data pair $(\\hat{x}, y)$ is considered to be out-of-distribution by C.\nAdditionally, a regularizer pushes the generated samples to cover a large range of directions in the latent space.\n\nThey also combine their method with MC dropout and observe further improvements. Strength:\n- The paper is well organized and didactic. As someone not particularly familiar with the uncertainty literature, I especially appreciate the effort put into the related work section.\n- It proposes a novel unified approach for both OoD and FP detection.\n- The experiments are comprehensive, with detailed breakdowns and hyper-parameter analysis in the supplementary.\n\nWeakness:\n- The method seems fairly complex to use as it requires different parts (cAE, Generator/Discriminator, Classifier) and hyper-parameters to tune. Looking at the generator objective, it includes a GAN objective, so that the generated distribution p(G(e,y), y) is similar to the real latent distribution p(z,y). 
This would imply that it pushes $p(\\hat{x}, y)$ close to $p(x,y)$, according to a metric induced by the latent space and the discriminator.\nThe other objective of the generator is given by C, and makes it so that the generated data $(\\hat{x}, y)$ is out-of-distribution, essentially pushing $p(\\hat{x}, y)$ and $p(x,y)$ apart.\nIn some sense, the overall objective of the generator is a compromise between being in-distribution (according to D) and out-of-distribution (according to C).\n- Can the authors comment on how much those conflicting objectives are a problem when training the generator? I would guess it would be quite unstable when training for a long time, but that the use of a pre-trained decoder alleviates the issue.\n- If we consider the overall objective of the generator as a compromise between the two losses, perhaps another way to balance them would be to have the generator target intermediate values for $C(o | \\hat{x},y)$ and $D(\\hat{x},y)$, such as .75 for instance, instead of binary classification. Of course, that would require a different loss for the GAN (and not the Wasserstein loss). Have the authors considered something in this direction?\n\n The authors have addressed societal impact in the appropriate section.", " This paper introduces cGANs to help deep classifiers model uncertainty. The generator is encouraged to generate features that lie in the real distribution (evaluated by the discriminator) and in out-of-class regions (assessed by the classifier). A low-dimensional regulariser is introduced to generate diverse features. The authors verify their ideas on several benchmarks. The improved performance on out-of-distribution data splits shows the effectiveness of the proposed method. Ablation studies verify the importance of the proposed regulariser. Strengths:\n\n-The authors conduct experiments on several datasets, with results showing that the proposed methods are better than previous baselines.\n\n-The authors provide comprehensive ablation studies in the supplementary material to support the soundness of the design of the method.\n\n-The idea of generating out-of-class samples is interesting.\n\n\nWeakness:\n\n-The visualization experiments are conducted on simple datasets which only contain 2 classes. The authors are encouraged to show some visualization results on datasets with multiple classes.\n\n-All experiments are conducted on small-scale and small-resolution datasets. The authors are encouraged to verify their method on more challenging datasets, such as the ImageNet-Dog and ImageNet-O [1] datasets.\n\n[1] Hendrycks, et al. \"Natural adversarial examples.\" CVPR 2021.\n -What will happen if the cAE is also trained (not pretrained) during training?\n\n-It seems that the discriminator can easily tell the real features from the fake ones since the distribution of the real features is far away from the fake distribution. In such a scenario, the generator would easily suffer mode collapse, in my experience. However, mode collapse does not occur in the proposed method. Maybe it's because the low-dimensional regularizer alleviates this issue. Can the authors provide the generated images of the variant in which the low-dimensional regularizer is removed? NA" ]
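The "compromise" that the last two reviews describe can be written schematically in our own notation (the precise losses, signs, and the low-dimensional regularizer are given in the paper; $\lambda_{\textrm{cl}}$ is the classifier weight mentioned in the rebuttal):

$$\mathcal{L}_G \;=\; \underbrace{\mathcal{L}_{\textrm{WGAN}}(D,G)}_{\text{stay close to } p(z,y)} \;+\; \lambda_{\textrm{cl}}\,\underbrace{\mathcal{L}_{\textrm{OoD}}(C,G)}_{\text{appear out-of-class to } C} \;+\; \mathcal{L}_{\textrm{low-dim}},$$

so, at the balance point, the generator places its mass on the boundary of the class-conditional data distribution, exactly the region the rebuttal identifies as the source of useful out-of-class examples.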
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2, 4 ]
[ "hfKN5IJ6oE", "r4bRgx9aXkw", "rZqYATMYyLj", "Li7tyqogcP", "NCivu_8FmQm", "vWJUxLW3hE", "XXWfjMpFsLQ", "CwoCIOVEE-v", "nips_2022_djOANbV2zSu", "nips_2022_djOANbV2zSu", "nips_2022_djOANbV2zSu", "nips_2022_djOANbV2zSu" ]
nips_2022_WOuGTb9QswS
Oscillatory Tracking of Continuous Attractor Neural Networks Account for Phase Precession and Procession of Hippocampal Place Cells
Hippocampal place cells of freely moving rodents display an intriguing temporal organization in their responses known as `theta phase precession', in which individual neurons fire at progressively earlier phases in successive theta cycles as the animal traverses the place fields. Recent experimental studies found that in addition to phase precession, many place cells also exhibit accompanied phase procession, but the underlying neural mechanism remains unclear. Here, we propose a neural circuit model to elucidate the generation of both kinds of phase shift in place cells' firing. Specifically, we consider a continuous attractor neural network (CANN) with feedback inhibition, which is inspired by the reciprocal interaction between the hippocampus and the medial septum. The feedback inhibition induces intrinsic mobility of the CANN which competes with the extrinsic mobility arising from the external drive. Their interplay generates an oscillatory tracking state, that is, the network bump state (resembling the decoded virtual position of the animal) sweeps back and forth around the external moving input (resembling the physical position of the animal). We show that this oscillatory tracking naturally explains the forward and backward sweeps of the decoded position during the animal's locomotion. At the single neuron level, the forward and backward sweeps account for, respectively, theta phase precession and procession. Furthermore, by tuning the feedback inhibition strength, we also explain the emergence of bimodal cells and unimodal cells, with the former having co-existed phase precession and procession, and the latter having only significant phase precession. We hope that this study facilitates our understanding of hippocampal temporal coding and lays foundation for unveiling their computational functions.
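For orientation, a generic rate formulation of a CANN with feedback inhibition takes the schematic form below; this is the standard textbook shape of such models, and the paper's exact parametrization may differ:

$$\tau \frac{\partial U(x,t)}{\partial t} = -U(x,t) + \rho\int J(x,x')\,r(x',t)\,dx' - V(x,t) + I_{\mathrm{ext}}(x,t), \qquad \tau_v \frac{\partial V(x,t)}{\partial t} = -V(x,t) + m\,U(x,t),$$

where $J$ is a translation-invariant recurrent kernel, $V$ is the slow feedback-inhibition variable with strength $m$, and $I_{\mathrm{ext}}$ is the moving external drive. For sufficiently large $m$ the bump moves intrinsically, and its competition with the extrinsic drive produces the oscillatory tracking described in the abstract.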
Accept
This paper presents non-trivial and novel theoretical and computational modeling that accounts for experimentally observed phenomena: the theta phase procession and precession. These phenomena are implicated in the neural representation and learning of neuronal networks involving hippocampus. Although the current manuscript does not address the learning and the presentations are limited to a linear track environment, it represents a clear advance in linking spatial and temporal information representations by extending the standard continuous attractor models that do not exhibit phase procession/precession. Furthermore, the model elegantly explains the biologically observed unimodal and bimodal cells. The authors are encouraged to improve the clarity of some parts of the writing.
train
[ "OHqZqhVrR1a", "tiZnl1YiphN", "f0UjaT07PRD", "ZAi8_pGYYG0x", "P6__NJpkfg", "bOXVvJFNvr-", "fPKMYyWUKhg", "HJGQG2DMad" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear All:\n\nWe appreciate the reviewers for their questions/comments and their time and effort in reviewing the paper. Since we haven't got any feedback from the reviewers (the deadline is Aug 09 '22 08:00 PM UTC), we are wondering if the updates to the manuscript and replies to the corresponding questions resolve the raised concerns. If this is indeed the case, we hope the reviewer could consider updating the score accordingly. If not, we hope the reviewer could clarify any remaining concerns/questions and we are happy to engage in further discussion.\nWe again thank the reviewers for the thoughtful comments and for the time and effort in reviewing the paper.\n\nYours sincerely,\n\nPaper3363 Authors\n\n", " \n**References:**\n\n[1] Bi, G. Q., & Poo, M. M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of neuroscience, 18(24), 10464-10472.\n\n[2] Vafidis, P., Owald, D., D'Albis, T., & Kempter, R. (2022). Learning accurate path integration in ring attractor models of the head direction system. Elife, 11, e69841.\n\n[3] Urbanczik, R., & Senn, W. (2014). Learning by the dendritic prediction of somatic spiking. Neuron, 81(3), 521-528.\n\n[4] Blum, K. I., & Abbott, L. F. (1996). A model of spatial map formation in the hippocampus of the rat. Neural computation, 8(1), 85-93.\n\n[5] George, T. M., de Cothi, W., Stachenfeld, K., & Barry, C. (2022). Rapid learning of predictive maps with STDP and theta phase precession. bioRxiv.\n\n[6] O'keefe, J., & Burgess, N. (2005). Dual phase and rate coding in hippocampal place cells: theoretical significance and relationship to entorhinal grid cells. Hippocampus, 15(7), 853-866.", " **We acknowledge the valuable and encouraging comments of the reviewer and would like to address them in the below.**\n\n**About the 2D Case:**\n\n1) First of all, we would like to point out that under the 2D case of modeling the animals’ moving behavior in an open field, the mechanism for generating theta phase precession and procession is the same as that in the 1D case, i.e., the interplay between intrinsic mobility (arising from the feedback inhibition) and the extrinsic mobility (arising from the external input) generates oscillatory tracking in the network. Our simulation result (not shown in the current manuscript) shows that the network bump in the 2D arena sweeps back and forth around the external input along the tangent of the trajectory.\n\n2) We would love to emphasize that the study of the 1D case is not simple, which matches the setting of most rodent experiments on studying theta phase coding, that is, these experiments were done when animals run on a linear track (a few of them were done on the Z-shaped or W-shaped track). Therefore, the study of the 1D case can link directly to the experimental data. \n\n3) Experimentally, it is unclear yet what theta weeps exactly look like when an animal runs in the open field, due to the lack of recorded neurons, making it hard to accurately decode the animals’ position (via private communication with the authors of the paper, Wang et al. Science 2020). Some experimental scientists believe that theta sweeps in the 2D space is similar to the 1D case, as predicted by our model. But Edvard I. Moser gave a talk at the Cosyne conference several year ago, indicating that theta sweeps of grid cells in the 2D space may consist of an intermittent switch between left and right sweeps in consecutive theta cycles. 
It will be very interesting to model theta sweeps in the 2D CANN in a more accurate way when more concrete experimental data are available. \n\n**About the synaptic plasticity:**\n\nIt is an interesting issue to explore what kind of synaptic plasticity drives the form of a CANN, as well as the effect of synaptic plasticity during theta phase shift. Indeed, synapses in the hippocampus can be very plastic during the postnatal period or during reward-based learning. However, in the present study, we focused on investigating the mechanism of generating theta phase shift of place cells, a phenomenon widely observed during free locomotion. Therefore, we consider that the CANN has been preconfigured through previous training (see below on how a CANN can be learned through experiences). Notably, the Gaussian form of synaptic connections in our model is chosen for the convenience of theoretical analysis. In practice, our main results remain unchanged when the synaptic connections are not of the idealized form or fluctuated due to continuous plasticity, as long as the synaptic strength distribution follows the basic principle, that is, nearby cells have stronger connections and faraway cells have weaker connections, as well as the feedback inhibition exists (remarkably, in the case of non-idealized synaptic connections, the feedback inhibition, which tends to destabilize the bump state, has a positive contribution to avoid that the network dynamics is trapped in local minimums.). \n\nThe issue of “learning a CANN through synaptic plasticity” has been largely studied previously. It has been demonstrated that the synaptic distribution needed by a CANN can be learned through local Hebbian learning or STDP [1]. For example, this study [2] demonstrated that a ring attractor for the head direction system can be learned by a very simple biological plausible learning rule [3]. Also, an early work by Blum & Abbott [4] and a recent work [5] demonstrated that the STDP rule can lead to the translation-invariant synaptic connections.\n\n**On the link between firing rate based CANN model and the precise spike timing of theta phase precession and procession in place cells’ firing:**\n\nIn our model, phase precession and procession of a neuron are quantified by the shifts of the timing of peak firing rate of the neuron in consecutive cycles. The similar measurement has been used in modelling studies in the field for many years, see, e.g., the classical dual oscillatory model for explaining phase precession [6]. Turning the firing rate into Poisson spikes in our model is straightforward, which generates the same phase shift phenomenon, although it is a bit noisy. Notably, in the experiments, theta phase precession was displayed by aggregating spikes over many theta cycles (since the timing of individual spikes is very noisy), which is effectively a rate-based plot, similar to Fig.5C in our paper. \n\nWe hope our replies have clarified the concerns of the reviewer.\n\n", " **We thank the reviewer for encouraging comments about our paper and valuable suggestions for improving the presentation.**\n\n**About the formulation of the model, i.e., the detailed realization of feedback inhibition through the interplay between the hippocampus (HPC) and the medial septum (MS).**\n\nThanks for pointing out this issue of biological implementation. The short answer is that we are still not very sure about the biologically detailed implementation of feedback inhibition for theta phase shift. 
In reality, there are two possible ways to implement such feedback inhibition: one is the spike frequency adaptation (SFA) widely observed in neuronal firing in the brain [1,2], whose effect is to counterbalance neural firing, and the other is the interplay between HPC and MS. The reason for us to consider the interplay between HPC and MS in the present study is that MS acts as a pacemaker to generate hippocampal theta in the experiments: lesions of MS abolished theta oscillations in the hippocampus (and hence theta phase precession) [3], and cooling MS slowed down the theta rhythm [4]. Experimental data also identified an interaction pathway between HPC and MS [5,6] (as discussed in the paper, lines 313-323): HPC pyramidal cells (place cells) project to GABAergic neurons in HPC which further project to GABAergic neurons in MS; in return, GABAergic neurons in MS project back to GABAergic neurons in HPC which inhibit place cells’ activities. Such an interaction loop might implement the feedback inhibition effect used in our model. Nevertheless, we would like to point out that the current experimental data do not exclude the possibility of SFA (feedback inhibition at the single neuron level) in generating theta sweeps, and more experimental data are needed for establishing a detailed model for the HPC-MS pathway. \n\nWe will include more discussions about the biological implementation of feedback inhibition in the revised paper.\n\nAs recognized by the reviewer, we adopted a simple mathematical form of feedback inhibition with the aim of elucidating the possible mechanism of generating theta phase shift clearly. We expect that no matter what kind of detailed mechanism the hippocampus adopts to implement such feedback inhibition, as long as the feedback inhibition exists, the CANN can generate the oscillatory tracking behavior, which leads to theta phase shift. \n\n**Regarding the presentation in lines 217-231:**\n\nThanks very much for the suggestion! We removed the absolute form of h(t) and updated the descriptions in lines 217-231 to present the results more clearly (see lines 217-232 in the rebuttal pdf). \n\n**About theta sequences and theta sweeps:**\n\nThanks for the suggestion! We will differentiate them clearly in the revised paper to avoid confusion. Basically, theta sequences refer to the sequential firing of place cells in each theta cycle, while theta sweeps refer to the phenomenon that the decoded position (typically reconstructed based on theta sequences by using a standard Bayesian decoding algorithm) sweeps back and forth around the physical position of the animal. They are two sides of the same coin. \n \nWe hope our replies have clarified the concerns of the reviewer.\n\nReferences:\n\n[1] Alonso, A., & Klink, R. (1993). Differential electroresponsiveness of stellate and pyramidal-like cells of medial entorhinal cortex layer II. Journal of Neurophysiology, 70(1), 128-143.\n\n[2] Ha, G. E., & Cheong, E. (2017). Spike frequency adaptation in neurons of the central nervous system. Experimental Neurobiology, 26(4), 179.\n\n[3] Partlo, L. A., and Sainsbury, R. S. (1996). Influence of medial septal and entorhinal cortex lesions on theta activity recorded from the hippocampus and median raphe nucleus. Physiol. Behav. 59, 887–895. doi: 10.1016/0031-9384(95)02208-2\n\n[4] Petersen, P. C., and Buzsáki, G. (2020). Cooling of medial septum reveals theta phase lag coordination of hippocampal cell assemblies. Neuron 107, 731–744.e3. 
doi: 10.1016/j.neuron.2020.05.023\n\n[5] Toth, K., Borhegyi, Z., & Freund, T. F. (1993). Postsynaptic targets of GABAergic hippocampal neurons in the medial septum-diagonal band of Broca complex. Journal of Neuroscience, 13(9), 3712-3724.\n\n[6] King, C., Recce, M., & O’keefe, J. (1998). The rhythmicity of cells of the medial septum/diagonal band of Broca in the awake freely moving rat: relationships with behaviour and hippocampal theta. European Journal of Neuroscience, 10(2), 464-477.\n", " **We thank the reviewer for encouraging comments and valuable suggestions for improving the presentation.**\n\nAccording to the reviewer’s suggestions, 1) we have replaced Fig.1B with a new figure to better illustrate theta phase precession and procession (please see the new Figure 1 in our rebuttal pdf); 2) we will add more descriptions to explain the plot of Fig.5C in the revised paper. Basically, Fig.5C follows the same form as Fig.3B in the experimental paper (Wang et al., Science 2020), where the authors aligned phase shifts of neurons according to the normalized position of the animal in the place field of each neuron and then calculated the probability of phase shift (the detailed calculation process is introduced in Sec 4.2 in SI).\n\n**About publishing our work at NeurIPS:**\n\n**We strongly believe that NeurIPS is the right venue to publish our work. The reasons are below:**\n\n1) Since it was founded in 1987, NeurIPS (formerly NIPS) has accepted computational neuroscience papers. Especially in the early years, it was mainly targeted at neuroscience modeling. Nowadays, even though it is flooded with deep learning papers, it still accepts many papers on neuroscience and the brain per year, e.g., about 140 submissions in 2018 and 170 in 2019. We ourselves have been publishing and reviewing computational neuroscience papers in NeurIPS for many years.\n\n2) Closely related to our study on the topic of hippocampal/entorhinal modeling, here are some papers published recently at NeurIPS:\n\na)\tWhittington, J., Muller, T., Mark, S., Barry, C., & Behrens, T. (2018). Generalisation of structural knowledge in the hippocampal-entorhinal system. Advances in Neural Information Processing Systems, 31.\n\nb)\tSorscher, B., Mel, G., Ganguli, S., & Ocko, S. (2019). A unified theory for the origin of grid cells through the lens of pattern formation. Advances in Neural Information Processing Systems, 32.\n\nc)\tVértes, E., & Sahani, M. (2019). A neurally plausible model learns successor representations in partially observable environments. Advances in Neural Information Processing Systems, 32.\n\nd)\tGao, R., Xie, J., Wei, X. X., Zhu, S. C., & Wu, Y. N. (2021). On Path Integration of Grid Cells: Group Representation and Isotropic Scaling. Advances in Neural Information Processing Systems, 34, 28623-28635.\n\ne)\tNayebi, A., Attinger, A., Campbell, M., Hardcastle, K., Low, I., Mallory, C. S., ... & Yamins, D. (2021). Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks. Advances in Neural Information Processing Systems, 34, 12167-12179.\n\n3) Although the current study mainly focuses on modeling the firing properties of place cells, it has strong implications for learning algorithms in AI. As we stated in lines 34-38, theta sequences of place cells essentially compress the temporal structure of animal experiences from a behavioral time scale (seconds) to a time scale compatible with the STDP learning rule (<100 ms), which enables learning in neural systems. 
Our model unveils the underlying mechanism of implementing this compression. This may inspire us to develop biologically plausible learning algorithms in AI to process similar learning tasks, e.g., navigation in the real world, planning, and decision making. \n\n**CANN vs. Hopf bifurcation:**\n\nWe agree with the reviewer about the terminology concern. Indeed, strictly speaking, when the negative feedback is relatively strong, the network is no longer a “CANN”, since the bump is no longer stationary in the attractor space. However, we prefer to keep using the term CANN throughout the paper (as this is conventionally used in the neuroscience community) to avoid confusing readers with only a neuroscience background. We will, however, clarify this point in the revised paper.\n\n**On the stochasticity of I^{ext}:**\n\nIf only a white noise input (setting I^{ext}=0) is applied to the network, the bump will experience diffusive dynamics (as the reviewer mentioned) or super-diffusive dynamics (depending on the strength of negative feedback). If white noise is added on top of the external input, our main results will not change. The external input I^{ext} in our model mimics the physical location of the animal during locomotion, which usually changes continuously over time. Including white noise in I^{ext} will induce small fluctuations of the animal’s location, but the oscillatory tracking behavior of the network remains unchanged. \n\nWe hope our replies have clarified the concerns of the reviewer.\n", " The authors of the current submission set out to provide a biologically inspired \"neural field\" (herein a PDE model) model of the experimental data reported in Wang et al. (2020).\n The proposed model, unlike those previously proposed, accounts for \"phase procession\" (cf. phase precession), as reported in Wang et al.\n To incorporate the above phenomenon, the usual PDE equations are combined with \"feedback inhibition\".\n The resulting system is shown to undergo a Hopf bifurcation giving rise to \"oscillatory tracking\", which is proposed as an explanation for the coexistence of phase precession and procession.\n Additionally, neurons with \"unimodal\" and \"bimodal\" tuning functions (as measured w.r.t. the theta oscillation) are shown to arise by varying model parameters, thus giving a unified description of the two. Admittedly I was not familiar with Wang et al. (2020), which attests to my limited ability to judge the significance of this submission.\nThat being said, I have since familiarized myself with the works citing Wang et al. and I believe the contribution offered by the current submission to be novel.\nOverall, I found the writing and derivations to be very clear. Passages and figures requiring further clarification are mentioned in the `Questions` field below.\n\nIn sum, I believe this to be a technically sound, interesting contribution.\nNevertheless, I hesitate to enthusiastically recommend it for acceptance for the following reasons:\n\n1. As stated, I cannot accurately gauge the significance of this submission, especially to the NeurIPS community. \n2. Because the submission is devoted to deriving a particular model of place cell function (i.e., without relying on statistical or machine learning methods, and without any immediate implications for those communities), it might be an equally good if not better fit in specialized journals like PLoS CB, Neural Computation, eLife, etc. 
\nNaturally, this is no grounds for dismissal but I would like to allow fellow reviewers and the authors to discuss whether NeurIPS is the appropriate venue for this work. Regarding clarity:\n1. Having read Wang et al., I found their subfigure 1A (which is also reproduced in the appendix of the current submission, Figure SI 1) to be a clearer explanation of phase precession vs procession. I believe a schematic in the vein of SI 1B would make for an easier explanation than the current Fig. 1B from the main text.\n\n2. There is little information as to what the heatmap in Fig 5C represents, and a fortiori how this normalized histogram was computed.\n\nTerminology:\nThe authors say the model is a \"continuous attractor neural network\". Please correct me if I am wrong, but the incorporation of feedback inhibition intentionally de-stabilizes the bump attractor; it appears that feedback inhibition was *specifically* added to allow for a Hopf bifurcation.\nIf I understood it correctly, I believe it ought to be clarified that, though derived from a CANN, the model is not a CANN.\n\nMisc:\n$I^{ext}$ is assumed to be a linear function of time. I would find it interesting to compare numerically how the proposed model deals with stochastic $I^{ext}$ in comparison with previously proposed models.\nNeutrally stable dynamics in conjunction with a white noise drive ought to give rise to a diffusing receptive field. I believe the proposed model will not be susceptible to such behavior.\n Not applicable", " The authors propose an attractor network model which includes a “feedback inhibition” term that jointly exhibits phase precession and procession in hippocampal place cells. The model is defined via standard neural network rate equations in one dimension using recurrent connections with a Gaussian profile. Feedback inhibition to a neuron encoding location x is modeled as a dynamic term that depends only on the activity of that neuron (it is a low-pass filter of the synaptic input to neurons encoding location x). This is described qualitatively as the “overall effect of the interplay between the hippocampus and the medial septum”. The consequent second-order dynamics give rise to traveling wave solutions. The competing effects of the external input and intrinsic traveling wave dynamics induce oscillatory behavior in a region of phase space that accounts for the jointly observed precession and procession. The authors also show how this model can be tuned to account for both “unimodal” and “bimodal” cells. This paper presents a simple and elegant model that incorporates multiple observations into a single framework (unimodal and bimodal cells). It is clearly presented and the figures illustrate all main conceptual points well (the example movies in the supplementary material are also excellent and helpful). The structure of the model is sufficiently detailed, with all important steps of the construction and derivation of results presented. The paper appears to exhibit overall high quality and clarity (though there are a few places where writing can be cleaned up and polished).\n\nThe results also appear original (to me, though I am less familiar with this section of the literature on hippocampal place cells than I used to be) and significant (in that they demonstrate a model that jointly exhibits phase precession and procession - again, to me). 
\n\n The primary weaknesses I see are addressed in “Questions” below.\n\n I have a couple of presentation-related points and one main issue regarding the formulation of the model. \n\nRegarding the model: Using a term like equation 3 brushes a lot under the rug in terms of how the system itself works. The second-order dynamics that arises seems to exhibit the desired effect, and it is great to have identified a dynamical system that shows the appropriate empirical effects, but this formulation leaves as a mystery exactly how the “interplay between the hippocampus and medial septum” generates this effective inhibition term. As it is, the term essentially just introduces the simplest second-order temporal dynamics into equation (1) that (along with the Gaussian profile weights) turns it into an effective wave equation. This is all great in terms of identifying the possible mechanism, but it would be nice to have some idea of what model of such interactions would give rise to such a term. (Note that as I write this, this criticism sounds more severe than I intend it.)\n\nRegarding presentation: I think lines 217-231 could be greatly improved so the reader has to do less work. In particular, it could be more immediately clear how h(t) relates to the position relative to the probe neuron (you erase this by defining h(t) as an absolute value). It was easier for me to think in terms of the actual distance from the probe, personally. Putting the sign back and considering the separate cases was an easier way to think about it for me.\n\nFinally, the language around “forward/reverse theta sequences” and “forward and backward” sweeps of decoded position is likely very confusing for readers who are not sufficiently familiar with the place cell literature. (There is some probability that they have confused this reviewer and I have fundamentally misunderstood the paper.) I agree with the authors that there are negligible societal impacts of this work. ", " This paper proposed a CANN model with feedback inhibition to elucidate the mechanism underlying phase precession and procession of the firing of hippocampal place cells. Compared to standard CANN models, a feedback inhibition mechanism is used to perturb the steady state of a standard CANN, so that the activity bump oscillates around the true location of a rat. This paper derived analytic formulas of the CANN under different input and feedback conditions. This paper derived the expression of the trajectory of the oscillatory state. This paper simulated the proposed model and observed that the activity bump sweeps back and forth across the true location, similar to what was observed in previous experiments. The firing rate dynamics of the model show behaviors similar to phase precession and procession. Strengths:\n+ The presentation and writing of this paper are good.\n+ The mechanism underlying phase precession and procession is still unknown. It is meaningful to propose computational models to understand the mechanism.\n+ The theoretical results are sound and comprehensively show the steady state behaviors of the proposed model.\n+ Both analysis and simulations show that the proposed model provides a reasonable and relatively abstract explanation of both the phase precession and procession phenomena.\n\nWeaknesses:\n- The CANN model with feedback inhibition is itself an existing model of oscillatory behaviors of activity bumps. 
The main novelty is the analysis and applications of such models to understand the phase precession and procession.\n- The animal movement model is too simple (movement along a line) to account for many more realistic movement scenarios.\n- The synaptic strength distribution of the model is assumed to be an idealized form without considering learning effects.\n- Phase precession and procession are defined in terms of spiking times during consecutive theta cycles. The proposed model does not simulate the generation of individual spikes.\n\nUpdate after rebuttal:\nThank you to the authors for their responses. From their responses, the major difficulty of generalizing the current framework lies in the lack of experimental records. Regardless, it would be helpful to generalize their framework or experimental records to 2D or learning scenarios. Such generalization will increase the impact of this paper and might be very helpful to the machine learning community.\n 1. The 1D model can only explain the phase precession and procession of a rat running on a linear track. How can the proposed model be generalized to 2D cases where a rat can run in an open field? \n2. The synaptic strength of the CANN is preconfigured through an ideal Gaussian form. In reality, such synaptic strengths might fluctuate continuously due to plasticity. Do the main results still hold if synaptic strengths become somewhat non-ideal? How are the synaptic strengths learned through experience?\n3. Is there a missing link between the firing rate model of a CANN and precise spike firing times observed during phase precession and procession? Yes." ]
[ -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "nips_2022_WOuGTb9QswS", "f0UjaT07PRD", "HJGQG2DMad", "fPKMYyWUKhg", "bOXVvJFNvr-", "nips_2022_WOuGTb9QswS", "nips_2022_WOuGTb9QswS", "nips_2022_WOuGTb9QswS" ]
nips_2022_gd7ZI0X7Q-h
ELASTIC: Numerical Reasoning with Adaptive Symbolic Compiler
Numerical reasoning over text is a challenging task of Artificial Intelligence (AI), requiring reading comprehension and numerical reasoning abilities. Previous approaches use numerical reasoning programs to represent the reasoning process. However, most works do not separate the generation of operators and operands, which are key components of a numerical reasoning program, thus limiting their ability to generate such programs for complicated tasks. In this paper, we introduce the numEricaL reASoning with adapTive symbolIc Compiler (ELASTIC) model, which consists of RoBERTa as the Encoder and a Compiler with four modules: Reasoning Manager, Operator Generator, Operands Generator, and Memory Register. ELASTIC is robust when conducting complicated reasoning. Also, it is domain agnostic by supporting the expansion of diverse operators regardless of the number of operands they contain. Experiments show that ELASTIC achieves 68.96 execution accuracy and 65.21 program accuracy on the FinQA dataset and 83.00 program accuracy on the MathQA dataset, outperforming previous state-of-the-art models significantly.
Accept
The paper presents a new model called Numerical Reasoning with Adaptive Symbolic Compiler that is able to perform numerical reasoning tasks. One of the key ideas in this model is to separate the generation of operators and operands and to include a memory register to remember intermediate values. The method is compared against FinQANet and shows good results on both the MathQA and FinQA datasets. In the reviews and rebuttal, it emerged that the performance of this model is marginally lower than the performance of state-of-the-art large language models on this task; however, the difference in size and training resources required by this model compared with the LLM is so vast that this still represents a significant advance over the state of the art. It would nevertheless be good for this to be mentioned in the paper itself. Overall, this is a novel approach that advances an important problem.
train
[ "-s7DWja2kNO", "-LorF9aSAlb", "YWLv9tgEibp", "CmrLWSAEoWe", "52p1szSO7gz", "Xe9UFGW1s2q", "1zIHA2raYr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The writing has been improved in the revised version and most of the questions have been resolved. Although the authors' response still does not convince me of an excellent technical novelty, it's reasonable to raise the overall rating (borderline accept).", " We thank the R_2nDD for providing two references and one link. We uploaded a revised paper and supplementary material version, highlighting changed parts in blue. In addition, We address the comments as follows.\n\n- R_2nDD_c1: “The writing needs to be improved … which is somewhat confusing.”\n - We rewrote the introductory part of Section 3 and provided an example in Appendix F to show how different modules of ELASTIC work. In addition, the added Figure 3 in Appendix F describes how the Memory Register stores and uses the cache token. To make Table 2 clear, we added Appendix E to illustrate the notations of Table 2 using an example. Finally, we described the limitation of ELASTIC in terms of more data required in the conclusion section. The modifications could be observed in the revised version of the paper and supplementary material, which are uploaded for the rebuttal phase.\n\n- R_2nDD_c2: “Is seems that the method proposed … has not been discussed.”\n- R_2nDD_c3: “Although the authors say ELASTIC is domain agnostic … MathQA DSL with python?”\n - Thanks for providing the paper and code. We decide to answer the above questions in one block.\n - The domain-agnostic refers to our model’s generalizability to different data from different domains. For example, FinQA is from the finance domain, and MathQA is from the math domain. Both datasets require numerical reasoning abilities but involve different operators and operands. Table 3 shows that ELASTIC achieves SOTA results on two datasets: FinQA and MathQA.\n - We have looked at the data from MathQA-python, and found the difference between original MathQA with MathQA-python is that they use an alternative way to represent the intermediate reasoning process. As a result, ELASTIC could solve MathQA-python by adopting several prerequisite steps. Take one example from the MathQA-python, which represents the intermediate reasoning process as “n0 = 5, n1 = 31.1, t0 = n1 + 100.0, t1 = 100 – n0, t2 = t0 * 100, t3 = t2 / t1, answer = t3 -100”. To make ELASTIC generate this, we should create a new DSL for MathQA-python. For example, replacing the “add” from DSL defined in the paper with “+”. Next, we should discard the previous two equations, where the values of n0 and n1 have been extracted by ELASTIC during the pre-processing step. Consequently, the intermediate reasoning process of this example from the MathQA-python is changed to “+(n1, 100), -(const_100 - #0), *(#0, 100), /(#2, #1)”, which could be generated by ELASTIC directly. As a result, the generation processes for MathQA and MathQA-python by ELASTIC are the same, and the only difference between generation processes is how to define the operators and operands in DSL.\n\n- R_2nDD_c4: “Comparison with large language models … and generalization ability, etc.?”\n - Recent large language models show promising results in numerical reasoning tasks. However, the parameter size of the smallest model in Table 6 from paper [1] provided by R_2nDD is 8B, which is 1600 times larger than our biggest model ELASTIC-RoBERTa-large (approximately 500M in parameter size). 
Despite this vast gap in model size, ELASTIC-RoBERTa-large still outperforms the 8B model in paper [1].\n - It is true that the large language models with sizes of 68B/137B in paper [1] achieved 2.8%/3.8% incremental accuracies compared to ELASTIC-RoBERTa-large on the MathQA dataset. However, ELASTIC-RoBERTa-large could be trained on a single A100 GPU, which hugely saves computational cost and time.\n\n- R_2nDD_c5: “Some details introduced in Section 3 … when the memory is full [2]?”\n - We have read paper [2], particularly on why they train a network to predict whether the variable needs to be dropped. The reason for [2] introducing this network is that they have a fixed maximum number of operands for an operator. For ELASTIC, such a limitation doesn’t exist. The Operands Generator of ELASTIC stops once it produces a “none” token after an operator, and the numbers of operands following different operators in one reasoning program could be different. Thus, we do not need to discard the cache token since we will never encounter a full memory. For example, the cache token #n refers to the n-th previous sub-program’s result, and #n is just like an operand with its own embedding vector. After each sub-program is generated, the embedding vector of #n is updated by the Memory Register. As a result, their embeddings are not static. For example, given two programs “add(3,2),subtract(#0,1)” and “exp(x), add(1,#0)”, both #0s refer to the first sub-program of their respective reasoning programs; however, #0 in different programs refers to a different sub-program. In this example, it refers to either “add(3,2)” in the first program example, or “exp(x)” in the second program example. Since the embedding vector of #0 is replaced by the information of the sub-program once the first sub-program is generated, the embedding vector of #0 is unique in different reasoning programs.", " We thank R_vH6J for the constructive comments. We uploaded a revised version of the paper and supplementary material, highlighting changed parts in blue. In addition, we provide some specific responses and clarifications as follows.\n\n- R_vH6J_c1: “The motivation and effectiveness … long reasoning programs.”\n - Separating the operators generation and operands generation allows ELASTIC to support diverse operators with any number of operands. Methods without separating the generations, like FinQANet, can only support a fixed number of operands following an operator, and this is usually hardcoded in the implementation. One example of such hardcoding can be observed in the FinQANet code: https://github.com/czyssrs/FinQA/blob/main/code/generator/Model_new.py#L277.\n - Figure 2(c) shows ELASTIC achieves a higher score if only considering the accuracy of generated operators (orange line), compared to considering both operators and operands (blue line). This indicates that wrongly generated operands could result in a wrong generated program. However, ELASTIC can escape from the cascading error caused by wrongly generated operands and generate the correct operators.\n - Table 4 is not used to demonstrate the effectiveness of separating the operators generation and operands generation. 
However, Figure 2 compares ELASTIC with FinQANet (which does not separate the generation processes) and demonstrates the improvements brought by the separate-module design of ELASTIC.\n\n- R_vH6J_c2: “The improvement achieved by memory register seems to be marginal (Table 4).”\n- R_vH6J_c3: “The experiments might be designed … if possible.”\n - The most significant advantage of ELASTIC is the separation of the operators generation and operands generation, which is the integral and core component of the model. If we remove this component from ELASTIC, it will lose the advantage of being adaptable to the number of operands following an operator. However, we compare ELASTIC with FinQANet, the model that doesn’t separate the generators. As shown in Table 3, ELASTIC outperforms FinQANet.\n - Also, FinQANet has a Memory Register by nature (we mentioned this in lines 256-257 of the original paper). Therefore, we have compared against FinQANet with Memory Register as R_vH6J requested. Table 2(a)(b) compare ELASTIC (w Memory Register) with FinQANet (w Memory Register); the better results achieved by ELASTIC (w Memory Register) show the superiority of our Memory Register design. Finally, Table 4 points out the necessity of the Memory Register, by showing ELASTIC (w Memory Register) performs better than ELASTIC (wo Memory Register).\n - The reason for the marginal improvement shown in Table 4 is that it compares models’ performances on the entire test data, where some examples contain reasoning programs that do not use the results from the previous sub-programs. The proportions of these programs are 57% and 10% in the FinQA and MathQA datasets, respectively.\n\n- R_vH6J_c4: “The writing should be improved … Please clarify.”\n- R_vH6J_c5: “To make section 3 easier to follow … when possible.”\n- R_vH6J_c6: “Typos: line 107-108, 144, 262.”\n - We apologize for the typos, and sincerely appreciate your pointing them out.\n - As suggested, we added one sentence to explain the different inputs to the Reasoning Manager. Also, we rewrote the introductory part of Section 3 and provided an example in Appendix F to show how different modules of ELASTIC work. In addition, the added Figure 3 in Appendix F describes how the Memory Register stores and uses the cache token. To make Table 2 clear, we added Appendix E to illustrate the notations of Table 2 using an example. The modifications could be observed in the revised version of the paper and supplementary material, which are uploaded for the rebuttal phase.\n\n- R_vH6J_c7: “Results in Figure 2 are a bit confusing … M-MDD=3 in figure 2 (a), why?”\n - Figure 2(a) shows that both ELASTIC and FinQANet encounter this problem. This is because the small size of testing data causes a large variance. In particular, the numbers of examples in the FinQA test data with M-MDD equal to 2 and 3 are only 20 and 13, respectively, whereas the numbers of examples with M-MDD equal to 0 and 1 are 657 and 456, respectively.\n\n- R_vH6J_c8: “What’s the possible reason … exist in the other dataset?”\n - The MathQA test data contains examples with M-MDD of 7, 8, 9, and 10 in sizes of 28, 7, 12, and 7, respectively. The small size of the examples leads to fluctuations (or high variance), which can be observed for both ELASTIC and FinQANet. 
The reason why the performance of ELASTIC starts to increase when the number of program steps becomes bigger than 10 is also that the test examples with more than 10 program steps are limited in size (64, 38, and 29 examples, respectively).\n", " We thank the reviewer R_3k7j for the positive and constructive feedback. We uploaded a revised version of the paper and supplementary material, highlighting changed parts in blue. In addition, we address the comments as follows.\n\n- R_3k7j_c1: “Why is NumNet+ the only … Table 3 stronger.”\n - Besides NumNet/NumNet+, NeRd (in Table 3) is one of the strong baselines for the DROP dataset. Paper [1] evaluated NeRd on both DROP and MathQA datasets. The reasons for us to select NeRd and NumNet/NumNet+ from the DROP leaderboard are as follows:\n 1. We couldn’t evaluate our model against other models in the DROP leaderboard whose codes, papers, or both are not provided. Here is a list showing the status of codes and papers for models from the DROP leaderboard which achieve accuracies higher than 0.8: OPERA (no paper, no code), QDGAT (has paper, no code), sna_albert + Ensember (no paper, no code), na_albert+ (no paper, no code), ALBERT-Calculator (based on [2], both it and [2] have no code).\n 2. We compared against NeRd, which adopts a different method than NumNet/NumNet+. NeRd represents the reasoning program as a symbolic nested program, whereas NumNet/NumNet+ assign “+/-” signs to numbers, which turns the task into a classification problem. As a result, we think that they are two strong baselines.\n 3. Other models, such as BERT-Calculator [2], MTMSN, and NAQANet+, are considered similar to NumNet/NumNet+ but with lower performances. Particularly, NumNet/NumNet+ adopt the same arithmetic head for numerical question types as them. Therefore, we decided not to use them as baselines. \n\n- R_3k7j_c2: “Please rewrite Section 3 … to read through.”\n- R_3k7j_c3: “Similarly, I feel it … for completeness.”\n- R_3k7j_c4: “Please add a figure … Table 2 clearly.”\n- R_3k7j_c5: “Please try to add a section … along these lines.”\n - We decided to answer the above four comments in one block. We rewrote the introductory part of Section 3 and provided an example in Appendix F to show how different modules of ELASTIC work. In addition, the added Figure 3 in Appendix F describes how the Memory Register stores and uses the cache token. To make Table 2 clear, we added Appendix E to illustrate the notations of Table 2 using an example. Finally, we described the limitation of ELASTIC in terms of more data required in the conclusion section. The modifications could be observed in the revised version of the paper and supplementary material, which are uploaded for the rebuttal phase. \n\n- References:\n - [1] Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension\n - [2] Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension\n", " The authors propose ELASTIC, a multi-modular network for numerical reasoning on MathQA and FinQA datasets. It consists of an encoder and a compiler with four different modules. Each module is specifically designed to solve a part of the numerical reasoning task. For example, the operator generator decides the next operation, and then the operand generator picks the operands to be used. The memory register stores some intermediate computations for future use. Overall, they achieve strong improvements over prior baselines. Additionally, this method generalizes better to higher program steps. 
Strengths:\n* The paper is very nicely motivated, with good supporting experiments and ablations\n* The design of each component is generally well described and easy to follow through (except some parts mentioned below)\n* This direction of using specific modules for a fixed set of tasks (similar to NMNs) is indeed very interesting for this area\n\nWeakness:\n* Why is NumNet+ the only baseline from the DROP dataset leaderboard used in the paper? There are several other meaningful baselines that can be tried by just keeping the arithmetic heads at classification time. Please try adding more of these baselines, as applicable, to make Table 3 stronger. Writing suggestions:\n* Please rewrite the Section 3 introduction using less technically repetitive terms and more general ones/synonyms. Currently, it's a bit hard to read through.\n* Please add a figure demonstrating the terms in Table 2 clearly.\n* Similarly, I feel it would help to have a motivating figure to explain the need and functionality of the memory register, for completeness. Please try to add a section specific to the limitations of the method. Other than the generalization issues, I feel these modular approaches might need more training data compared to the end-to-end frameworks. You may consider discussing some points along these lines.", " The authors present a novel model (ELASTIC) for numerical reasoning problems. The proposed model consists of an encoder and a compiler which separates the generation of operators and operands. The authors claim that the separated generators enhance the adaptability and elasticity of the model. The proposed model also incorporates a memory register to improve the model’s performance on numerical problems with large M-MDD. The proposed model achieves state-of-the-art performance on two datasets: FinQA and MathQA.\n Strengths: Numerical reasoning is an important area in machine learning and natural language processing. This work proposes novel methods to enhance the numerical reasoning ability of the model. On one hand, the generation of operators and operands is separated. On the other hand, a memory register is introduced to make it easier for the model to use the previous sub-program results. The proposed model outperforms FinQANet on both FinQA and MathQA. \n\nWeaknesses: \n1. The motivation and effectiveness of separating the generation of operands and operators is not clear. The authors claim that previous work that does not use separated generators suffers from cascading errors when encountering complicated tasks (line 36-37), but the results shown in figure 2 (c) and Table 4 may not be sufficient to support this claim. The proposed model does not consistently stabilize the performance on long reasoning programs. \n\n2. The improvement achieved by memory register seems to be marginal (Table 4). \n\n3. The writing should be improved. The approach section is not easy to follow. The authors use notations throughout this section and some terms and notations are not well explained at their first appearance. \n\nFor example, DSL (line 108), NGS (line 136), W_{op} in eq 6; and the index i in eq 6 and 7 has different meanings. \n\nFigure 1 shows the reasoning manager generates g^{op} and g^{oe} in two different flows and the inputs are different, but the difference is not explained in lines 134-143. Although the following introduction of the two generators explains the difference, it might be better to move this content to lines 134-143. 
Figure 1 and lines 160-172 show how the results from the RM update the memory register but do not show how the vectors stored in the memory register are used during the generation. I guess there should be another arrow from the memory register to the generator in Figure 1. Please clarify. \n Results in Figure 2 are a bit confusing. The accuracy with M-MDD=2 is much lower than that with M-MDD=3 in figure 2 (a), why? What’s the possible reason for the large fluctuations in figure 2 (b) and 2(c)? Why does the performance of ELASTIC start to soar when the program steps are bigger than 10 in figure 2 (c)? Does a similar phenomenon exist in the other dataset?\n The experiments might be designed more carefully. When analyzing the necessity of the memory register, the effect of separated generators is not disentangled from the effect of the memory register. For more effective analysis, there should be a setting ELASTIC w/o separated generator, or FinQANet w memory register if possible.\n\nTo make section 3 easier to follow, it might be better to use more textual examples instead of notations when possible.\n\nTypos: line 107-108, 144, 262\n", " This paper proposes a new model called numEricaL reASoning with adapTive symbolIc Compiler (ELASTIC) to solve numerical reasoning QA problems. ELASTIC separates the generation of operators and operands, and is adaptable to the number of operands, making it more accurate and flexible. They show promising results on two datasets including FinQA and MathQA. Numerical reasoning is an important and challenging task in AI. Although the idea of generating operators and operands separately has already been adopted in other areas like program synthesis, it is still novel and valuable in numerical reasoning to alleviate the cascading error problem. Experimental results are solid and promising, which supports the authors’ claims well.\n\nThe writing needs to be improved, especially the method part, which is somewhat confusing. It seems that the method proposed cannot be applied to QA problems without labeled programs. Also, the influence of different programming languages has not been discussed. 1. Comparison with large language models. The authors claim that pretrained language models fall short of numerical reasoning from the text. However, recent advances in program synthesis show that large language models perform well in this area: they achieve 83.8% on MathQA, which outperforms the 83.00% in this paper. I agree that this paper is valuable, but for completeness could you make more comparisons with these large language models in terms of number of parameters, runtime costs, and generalization ability, etc.? \n\n2. Some details introduced in Section 3 are unclear. For example, what does “NGS” mean? What’s the size of the memory register (size = max program steps or size = 1)? Is it possible to train another network to predict which memory slot should be overwritten when the memory is full [2]?\n\n3. Although the authors say ELASTIC is domain agnostic, it seems that the performance of these program-based models can be significantly influenced by the programming language. Can ELASTIC solve MathQA-python [3], which replaces the original MathQA DSL with Python?\n\n[1] Program Synthesis with Large Language Models\n\n[2] Automatic Program Synthesis of Long Programs with a Learned Garbage Collector\n\n[3] https://github.com/google/trax/blob/master/trax/examples/MathQA_Python_generation_notebook.ipynb Please see questions listed above." ]
[ -1, -1, -1, -1, 8, 5, 6 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "YWLv9tgEibp", "1zIHA2raYr", "Xe9UFGW1s2q", "52p1szSO7gz", "nips_2022_gd7ZI0X7Q-h", "nips_2022_gd7ZI0X7Q-h", "nips_2022_gd7ZI0X7Q-h" ]
nips_2022_Y0Bm5tL92lg
Adaptation Accelerating Sampling-based Bayesian Inference in Attractor Neural Networks
The brain performs probabilistic Bayesian inference to interpret the external world. The sampling-based view assumes that the brain represents the stimulus posterior distribution via samples of stochastic neuronal responses. Although the idea of sampling-based inference is appealing, it faces a critical challenge of whether stochastic sampling is fast enough to match the rapid computation of the brain. In this study, we explore how latent stimulus sampling can be accelerated in neural circuits. Specifically, we consider a canonical neural circuit model called continuous attractor neural networks (CANNs) and investigate how sampling-based inference of latent continuous variables is accelerated in CANNs. Intriguingly, we find that by including noisy adaptation in the neuronal dynamics, the CANN is able to speed up the sampling process significantly. We theoretically derive that the CANN with noisy adaptation implements the efficient sampling method called Hamiltonian dynamics with friction, where noisy adaptation effectively plays the role of momentum. We theoretically analyze the sampling performances of the network and derive the condition under which the acceleration has the maximum effect. Simulation results confirm our theoretical analyses. We further extend the model to coupled CANNs and demonstrate that noisy adaptation accelerates the sampling of the posterior distribution of multivariate stimuli. We hope that this study enhances our understanding of how Bayesian inference is realized in the brain.
Accept
The reviewers agree that the paper makes an interesting contribution, connecting inference in probabilistic models with network models from computational neuroscience.
test
[ "RflrZneAuD", "jTe45k97nY", "2BFz2RINkmX", "35nV-RCFgiy", "bbfrUEslFNM", "nbh53ZrFpaG", "wrz-BXo5bXC", "soV-imBAEMC", "fNYOruOzvau", "pO13zsDeDVU", "lB_dTpnDShW", "rwZez-Qg4Yr", "6J2TvEYPutq", "0C2KfUkk8sN", "lXVInxUzEPW", "FFEY-x_ia9Q", "wcpRF9YAnJV" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the comments. Yes, we will add the points on the extensions to multimodal distributions and experimental predictions in the revised manuscript.", " Thank you for the detailed responses to my questions. After taking these into consideration, I have increased my rating by 1 point.\nIt would be useful to include some points on the extensions to multimodal distributions and experimental predictions in the discussion. ", " Thank you for your questions/comments and your time and effort in reviewing the paper. Just wondering if our comments resolved your question/concerns. If this is not the case, we hope the reviewer could clarify any remaining concerns/questions and we are happy to engage in further discussion. ", " We hope our replies and corresponding changes in the updated paper have addressed all concerns/questions raised by the reviewer. If this is not the case, we hope the reviewer could clarify any remaining concerns/questions and we are happy to engage in further discussion.\n\nWe again thank the reviewer for the thoughtful comments and for the time and effort in reviewing the paper.\n\n", " We thank the reviewer for the comments. Yes, we will add the assumptions and limitations of the model in the revised manuscript.", " thanks to the authors for answering my questions: I think these points regarding the assumptions and limitations of the model in the context of the biological brain are important to mention in the discussion, and I am satisfied with their response otherwise.", " We acknowledge the encouraging and insightful comments of the reviewer, which concisely summarize the contributions of our work. In the below, we would like to address the questions of the reviewer. \n\n**Eq. 15: no $x$ on RHS and Eq. 15/16 $G(\\cdot)$ vs $G[\\cdot]$**\n\nThanks for pointing out these mistakes. We will correct them in the revised manuscript.\n\n**The projection seems like the key step. Can we give some flavour of how that works in the main text, and whether any assumptions go in there (e.g. linearity).**\n\nYes, the projection method is the key strategy in our theoretical analysis. This method comes from the unique property of CANNs, that is, due to the translation-invariant recurrent connections between neurons, the dynamics of a CANN is dominated by very few motion modes (the eigenvalues of motion modes decay exponentially), and the first two dominating motion modes are the position and height changes of the bump. Utilizing this property, we can then use the projection method to reduce the dimension of the network dynamics significantly, and the reduced dynamics allows us analyze the model behaviors theoretically. The rigorous analysis of the projection method can be found in [1], and the use of the projection method to simplify the model dynamics is given in SI. As suggested by the reviewer, we will add more descriptions about the project method in the revised manuscript.\n\n*Reference:*\n\n[1] Fung, CC Alan, KY Michael Wong, and Si Wu. \"A moving bump in a continuous manifold: a comprehensive study of the tracking dynamics of continuous attractor neural networks.\" Neural Computation 22.3 (2010): 752-792.\n\n**It would be worth exploring the biology. Is there really this kind of adaptation present in e.g. head-direction cells? That would seem strange, as it would destabilize the bump. This mechanism would be a potential explanation.**\n \nAdaptation is a very general property of neural systems widely found in the brain [1]. 
A number of mechanisms can lead to spike-frequency adaptation (SFA). At the single neuron level, there are three mechanisms: 1) the current caused by voltage-dependent, high-threshold potassium channels; 2) the current mediated by calcium-dependent potassium channels; 3) the current caused by slow recovery from inactivation of fast sodium channels. In brain areas where the CANN is involved, adaptation (SFA) was reported by experiments, e.g., in the hippocampus [2] and in the visual cortex [3].\n\n*Reference:*\n\n[1] Ha, Go Eun, and Eunji Cheong. \"Spike frequency adaptation in neurons of the central nervous system.\" Experimental neurobiology 26.4 (2017): 179.\n\n[2] Gu, Ning, Koen Vervaeke, and Johan F. Storm. \"BK potassium channels facilitate high‐frequency firing and cause early spike frequency adaptation in rat CA1 hippocampal pyramidal cells.\" The Journal of physiology 580.3 (2007): 859-882.\n\n[3] Sanchez-Vives, Maria V., Lionel G. Nowak, and David A. McCormick. \"Cellular mechanisms of long-lasting adaptation in visual cortical neurons in vitro.\" Journal of Neuroscience 20.11 (2000): 4286-4299.\n\n**We hope that our replies have addressed the concerns of the reviewer.**\n", " We acknowledge the valuable comments of the reviewer and would like to clarify the concerns of the reviewer below. \n\n**On sampling Gaussian vs. arbitrary distributions**\n\n1. It is still an open question whether the brain samples arbitrary complex distributions, and if so, when and where the brain does this. There are reports that the brain perceives complex multimodal distributions, while there are also reports that people tend to mistakenly recall complex multimodal distributions as unimodal [1]. Interestingly, in visual perception where the CANN is widely used as a computational model (see, e.g., [2]), it was reported that people tend to ignore higher-order statistical moments of inputs [3]. It seems that different brain areas have different requirements on representing distributions, and in some areas (such as the visual system), sampling a Gaussian distribution may already be adequate, as suggested by [3]. \n2. It is worth mentioning that even for the Gaussian distribution, sampling stimulus values is still necessary in the brain. It remains debated in the field how the brain represents the uncertainty of information (often considering the Gaussian distribution as the example). The theory of probabilistic population code (PPC) assumes that the brain extracts the mean and variance of the Gaussian distribution explicitly by utilizing the Poisson statistics of neuronal responses, but exactly how the brain reads out this information remains unclear. The sampling-based theory assumes that the neural system reads out the stimulus information through sampling the posterior distribution, which is consistent with the irregular firing of neurons. Here, we adopt the sampling point of view. \n3. In the current study, we consider that the neuronal recurrent connection profile is Gaussian, which leads the network dynamics to sample the Gaussian posterior. Alternatively, if we set the neuronal connection profile to be the Von Mises distribution, then the network dynamics samples the Von Mises posterior. In theory, if we can set the neuronal recurrent connections properly, the network can sample an arbitrarily given distribution. However, a single CANN with a fixed connection pattern can only sample a fixed form of distribution. 
Applying a single neural circuit to sample many distributions may be computationally interesting, but may not be biologically relevant, since the brain can easily employ different neural circuits to sample different distributions. \n4. In the case that the CANN samples the Gaussian posterior while the true posterior is non-Gaussian, the network performs a Gaussian approximation, similar to the idea of Variational Inference in machine learning. This mismatch may explain the experimental findings in [3], and we will explore this issue in future work (we thank the reviewer for raising this interesting question). \n5. Overall, the current study has focused on studying the case of sampling the Gaussian posterior (and so did most previous sampling works in computational neuroscience, see, e.g., [4-5]), and the simple model has the advantage of allowing us to analytically elucidate how the neural circuit realizes the optimal Hamiltonian Monte Carlo sampling. Nevertheless, since the CANN with Gaussian tuning is a canonical network model in neural systems, our study already has the potential to explain the sampling of a number of stimulus features, including orientation, head direction, and spatial location, etc. Thus, we believe that our work is an important step towards understanding how sampling inference is accelerated in the brain. \n\n*References:*\n\n[1] Nisbett, Richard E., and Ziva Kunda. \"Perception of social distributions.\" Journal of Personality and Social Psychology 48.2 (1985): 297.\n\n[2] Ben-Yishai, Rani, R. Lev Bar-Or, and Haim Sompolinsky. \"Theory of orientation tuning in visual cortex.\" Proceedings of the National Academy of Sciences 92.9 (1995): 3844-3848.\n\n[3] Waskom, M. L., J. Asfour, and R. Kiani. \"Perceptual insensitivity to higher-order statistical moments of coherent random dot motion.\" Journal of Vision 18.6 (2018): 9-9.\n\n[4] Echeveste, Rodrigo, et al. \"Cortical-like dynamics in recurrent circuits optimized for sampling-based probabilistic inference.\" Nature neuroscience 23.9 (2020): 1138-1149. \n\n[5] Savin, Cristina, and Sophie Denève. \"Spatio-temporal representations of uncertainty in spiking neural networks.\" Advances in Neural Information Processing Systems 27 (2014).\n", " **On the external input Eq. 14.**\n\nThe external input in Eq.14, $I(x)=\Lambda \exp[-(x-s^{\rm o})^2/(4a^2)]$, can actually be regarded as the neuronal responses in the sensory layer which convey the stimulus information to the CANN, where the neuronal tuning function has the Gaussian shape and $x$ is the preferred stimulus value of the neuron. Note that the standard deviation in the exponential is the tuning width $a$, not the peak firing rate $\Lambda$. In the framework of PPC, the inverse of $\Lambda$ is interpreted as the uncertainty of the stimulus information. \n\nNotably, in the high-dimensional case of our model (coupled-CANNs), the external input conveys the likelihood distribution, while the prior distribution of stimulus features is stored in the reciprocal connections between CANNs (Eq.29), and no posterior distribution is given. It is the dynamics of coupled-CANNs “computing” the posterior and performing sampling in the posterior space (see lines 274-280). \n\n**On the clarity of theoretical derivations**\n\nThe projection method and some fundamental properties of CANNs were introduced in previous works. 
Here, our new contributions are on applying the projection method to formally derive how the CANN dynamics with adaptation implements the standard HMC and analyzing the sampling behaviors of the network systematically. As suggested by the reviewer, we will add more descriptions in the revised manuscript to make the derivations clearly. \n\n**We hope that our replies have clarified the concerns of the reviewer.**\n", " We acknowledge the encouraging and insightful comments of the reviewer, and would like to address the questions of the reviewer in details below. \n\n**On the generality of the results**\n\nTo pursue theoretical analysis, we have considered a canonical network CANN, a simple form of adaptation dynamics, and an external input in the form of Gaussian tuning function. This simple model allows us to solve the network dynamics analytically and elucidate the sampling acceleration mechanism clearly. All these simplifications are believed to be biologically plausible, as they are widely used in modelling studies in the field. Relaxing these assumptions to a large extent will not change our main results: as long as the network can hold a family of attractors and that the adaptation can destabilize these attractor states, then the network can perform the HMC dynamics approximately. \n\n**On the extension to multimodal, non-Gaussian distributions**\n\nThank the reviewer for raising this important question. \n1. On the computational theory perspective, it is still an open question whether the brain processes complicated distributions, and if so, when and where the brain does this. There are reports that the brain perceives multimodal distributions, while there are also reports that people tend to mistakenly recall multimodal distribution to be unimodal [1]. Interestingly, in visual perception where the CANN is widely used as a computational model (see, e.g., [2]), it was reported that people tend to ignore higher-order statistical moments of inputs [3]. It seems that different brain areas have different requirements on representing distributions, and in some areas (such as the visual system), sampling a Gaussian distribution may already be adequate. \n2. For our network model, a single CANN can only sample a unimodal distribution, and the form of distribution depends on the profile of neuronal recurrent connections. For instance, in the current study, we set the neuronal connectional profile to be Gaussian, then the network dynamics samples the Gaussian posterior; if we set the neuronal connection profile to be the Von Mises distribution, then the network dynamics samples the Von Mises posterior. \n3. We did observe that in coupled-CANNs, when the range of reciprocal connections between CANNs is much smaller than the range of neuronal recurrent connections in individual CANNs, the network dynamics can sample a multimodal distribution. However, we have not investigated this issue carefully, so we would prefer to report the result in the future work. \n4. Overall, our work model can implement the sampling dynamics for a unimodal distribution, and particularly, the Gaussian posterior. This may appear to a limitation in computation, however, consider that we are studying brain functions and that the CANN is a canonical network in the brain, our model can already explain the sampling of a number of stimulus features, including orientation, head-direction, and spatial location. \n\n*References:*\n\n[1] Nisbett, Richard E., and Ziva Kunda. 
\"Perception of social distributions.\" Journal of Personality and Social Psychology 48, no. 2 (1985): 297.\n\n[2] Ben-Yishai, Rani, R. Lev Bar-Or, and Haim Sompolinsky. \"Theory of orientation tuning in visual cortex.\" Proceedings of the National Academy of Sciences 92, no. 9 (1995): 3844-3848.\n\n[3] Waskom, Michael L., Janeen Asfour, and Roozbeh Kiani. \"Perceptual insensitivity to higher-order statistical moments of coherent random dot motion.\" Journal of Vision 18, no. 6 (2018): 9-9.\n\n**On the sampling speed**\n\nIn the neuroscience community, the exact time cost of the brain performing Bayesian inference remains unknown. The only known fact is that this process is extremely fast [1]. In monkey experiments, electrophysiology studies revealed that the visual cortex can accomplish a probabilistic decision-making task in less than 800ms [2] and a contour integration task in less than 200ms [3]. If we plug the biologically relevant parameter value $\tau_s=$5-10ms into the model, we obtain that the sampling time of HDF is around 100-200ms, while that of FLD is around 2-4s in a single CANN (see Fig.2B-C). Hence, HDF is about 20 times faster than FLD.
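To illustrate the generic mechanism behind this speed-up with something self-contained, here is a minimal 1D toy sketch (this is not the CANN model, and all parameter values below are ours, chosen purely for illustration): a first-order Langevin chain and a second-order (momentum) chain sampling the same wide Gaussian, with the latter decorrelating roughly ten times faster.

```python
# Plain 1D toy (not the CANN model; all values are ours): first-order Langevin
# vs. a second-order (momentum) chain on the same wide Gaussian target N(0, 1/eps).
import numpy as np

rng = np.random.default_rng(0)
eps, dt, T = 0.01, 0.05, 200_000
xi1, xi2 = rng.standard_normal(T), rng.standard_normal(T)

xs = np.empty(T); x = 0.0
for t in range(T):                    # dx = -eps*x dt + sqrt(2) dW
    x += -eps * x * dt + np.sqrt(2 * dt) * xi1[t]
    xs[t] = x

gamma = 2 * np.sqrt(eps)              # near-critical friction
ys = np.empty(T); y = p = 0.0
for t in range(T):                    # dy = p dt; dp = (-eps*y - gamma*p) dt + sqrt(2*gamma) dW
    y += p * dt
    p += (-eps * y - gamma * p) * dt + np.sqrt(2 * gamma * dt) * xi2[t]
    ys[t] = y

def acf(z, lag):                      # empirical autocorrelation at a given lag
    z = z - z.mean()
    return float(z[:-lag] @ z[lag:] / (z @ z))

lag = int(40 / dt)                    # lag of 40 time units
print(f"first-order acf(40) = {acf(xs, lag):.2f}")   # ~ e^(-0.4) = 0.67, slow
print(f"momentum    acf(40) = {acf(ys, lag):.2f}")   # ~ 0.1, mixes much faster
```

Of course this toy only mirrors the generic first- vs. second-order mechanism; the quantitative factor of ~20 above comes from the CANN dynamics themselves.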
\n\n*References:*\n\n[1] Knill, David C., and Whitman Richards, eds. Perception as Bayesian inference. Cambridge University Press, 1996.\n\n[2] Nienborg, Hendrikje, and Bruce G. Cumming. \"Decision-related activity in sensory neurons reflects more than a neuron’s causal effect.\" Nature 459, no. 7243 (2009): 89-92.\n\n[3] Chen, Minggui, Yin Yan, Xiajing Gong, Charles D. Gilbert, Hualou Liang, and Wu Li. \"Incremental integration of global contours through interplay between visual cortical areas.\" Neuron 82, no. 3 (2014): 682-694.\n", " **Is the support of $s$ finite?**\n\nIn the theoretical analysis, we consider the support of $s$ to be infinite ($s \in \mathbb{R}$). For some stimulus features, such as orientation, $s \in (-\pi, \pi]$ with a periodic boundary. Considering that the neural interaction range $a$ is much smaller than $2\pi$ in our model, treating the support of $s$ as infinite is still a good approximation.\n\n**On the experimental predictions**\n\nOur model can generate testable experimental predictions. We list two of them below.\n\n1. Our model predicts that reaction times of neurons are positively correlated with their oscillation frequencies in sampling-based Bayesian inference. According to our theoretical analysis, when the SFA strength $m$ is larger than the critical value for realizing the maximum sampling speed (this always occurs due to adaptation noise), the eigenvalues of the drift matrix become complex numbers (i.e., they have imaginary parts). This leads to oscillations of neurons in the network dynamics, and the sampling speed decreases with the oscillation frequency. Thus, in a task involving Bayesian inference, e.g., probabilistic decision making, we may record the oscillation frequencies of neurons and their reaction times. Our model predicts that the reaction time is positively correlated with the oscillation frequency.\n2. Our model predicts that the sampling speed increases with the SFA strength $m$ for $m<m_{max}$, where $m_{max}$ is the critical value for achieving the maximum sampling speed. Suppose one can manipulate the SFA strength in the experiment, then we can test this prediction. There are a number of mechanisms responsible for SFA, and three of them are: 1) the current caused by voltage-dependent, high-threshold potassium channels; 2) the current mediated by calcium-dependent potassium channels; 3) the current caused by slow recovery from inactivation of fast sodium channels. Consider the calcium-dependent potassium channels as an example. We can perform whole-cell recording of neurons in a brain slice in vitro to test whether calcium-gated potassium channels could reduce the effect of SFA by applying a channel blocker, e.g., iberiotoxin (a BK / $K_{Ca}1.1$ channel blocker), apamin (an SK / $K_{Ca}2$ channel blocker), or Lei-Dab 7 (a high-affinity and selective $K_{Ca}2.2$ (SK) blocker). If indeed the SFA effect can be controlled by applying ion channel blockers, we can then test its effect on the sampling speed as predicted by our model.\n\n**We hope that our replies have addressed all concerns of the reviewer.**\n\n", " We acknowledge the encouraging and valuable comments of the reviewer, and would like to address the questions of the reviewer in detail below. \n\n**On the sampling speed**\n\n1. As suggested by the reviewer, we plug in the biologically relevant parameter value $\tau_s=$5-10ms, and obtain that the sampling time of HDF is around 100-200ms, while that of FLD is around 2-4s in a single CANN (see Fig.2B-C). This means that HDF is about 20 times faster than FLD in the 1D feature case. Notably, for sampling high-dimensional stimulus features (the case of coupled CANNs), FLD is further slowed down compared to HDF (see Fig.3C, N=5).\n\n2. To the best of our knowledge, we could not find a formal report about the time cost of perceptual inference in the brain, but a large volume of studies has indicated that this is an extremely fast process, see, e.g., the studies in the book [1]. In human experiments, the reaction time of subjects in a perceptual inference task is typically around 1-2s [2,3], which includes the extra time for processing sensory inputs and accomplishing motor behaviors. In monkey experiments, electrophysiology studies suggest that the visual cortex can perform a decision-making task in less than 800ms (Fig.2c in [4]) and a contour integration task in less than 200ms (Fig.3 in [5]). \n\n3. Overall, the experimental evidence suggests that sampling acceleration is needed if the brain does employ sampling-based Bayesian inference.\n\n*References:*\n\n[1] Knill, David C., and Whitman Richards, eds. Perception as Bayesian inference. Cambridge University Press, 1996.\n\n[2] Fetsch, Christopher R., Gregory C. DeAngelis, and Dora E. Angelaki. \"Bridging the gap between theories of sensory cue integration and the physiology of multisensory neurons.\" Nature Reviews Neuroscience 14, no. 6 (2013): 429-442.\n\n[3] Laquitaine, Steeve, and Justin L. Gardner. \"A switching observer for human perceptual estimation.\" Neuron 97, no. 2 (2018): 462-474.\n\n[4] Nienborg, Hendrikje, and Bruce G. Cumming. \"Decision-related activity in sensory neurons reflects more than a neuron’s causal effect.\" Nature 459, no. 7243 (2009): 89-92. \n\n[5] Chen, Minggui, Yin Yan, Xiajing Gong, Charles D. Gilbert, Hualou Liang, and Wu Li. \"Incremental integration of global contours through interplay between visual cortical areas.\" Neuron 82, no. 
3 (2014): 682-694.\n\n**On the stochasticity of the adaptation dynamics**\n\nIn our model, we consider spike frequency adaptation (SFA) to implement adaptation. A number of mechanisms in biological systems can realize SFA, and three of them are often studied [1]: 1) the current caused by voltage-dependent, high-threshold potassium channels; 2) the current mediated by calcium-dependent potassium channels; 3) the current caused by the slow recovery from inactivation of fast sodium channels. All these mechanisms depend on ion concentrations, release of neural transmitters, activation/inactivation of ion channels, buffering and diffusion, and all these processes are very noisy (see Fig.9 in [2]). Recent works also show that adaptation noise can play important computational roles (see, e.g., [3]). Indeed, previous modeling works rarely consider adaptation noise, since they focused on questions other than the functional role of adaptation noise. Here, we show that adaptation noise can actually contribute to accelerating stochastic sampling. \n\nAs suggested by the reviewer, we will add discussions about the biological plausibility of adaptation noise in the revised paper.\n\n*References:*\n\n[1] Benda, Jan, and Andreas VM Herz. \"A universal model for spike-frequency adaptation.\" Neural Computation 15, no. 11 (2003): 2523-2564.\n\n[2] Alonso, Angel, and Ruby Klink. \"Differential electroresponsiveness of stellate and pyramidal-like cells of medial entorhinal cortex layer II.\" Journal of Neurophysiology 70, no. 1 (1993): 128-143. \n\n[3] Fisch, Karin, Tilo Schwalger, Benjamin Lindner, Andreas VM Herz, and Jan Benda. \"Channel noise from both slow adaptation currents and fast currents is required to explain spike-response variability in a sensory neuron.\" Journal of Neuroscience 32, no. 48 (2012): 17332-17344.\n", " **On the differences to Refs. 16 & 18**\n\nThere are a number of significant differences between our work and Refs. 16 and 18, which are:\n1. The encoding paradigms are different. In Refs. 16 and 18, the authors consider that the sampled features are encoded by individual neurons, while we consider that the sampled feature is encoded by a neural population (i.e., the CANN). See descriptions in lines 46 & 154 in the paper.\n2. The models used are different. Refs. 16 and 18 consider linear networks, while we include nonlinearity in the model (Eq. 12). \n3. The details of the sampling dynamics are different. In Refs. 16 and 18, to inject noise, the authors use a modified HMC with mixed first-order Langevin dynamics, while our model realizes the standard HDF sampling dynamics. \n4. The momentum terms in HMC are different. In Refs. 16 and 18, the authors consider that the dynamics of a group of inhibitory neurons serves as the momentum term, while in our model, SFA effectively induces the momentum effect in HMC.\n\n**Limitations**\n\nAs suggested by the reviewer, we will add more discussion about the limitations and restrictions of our model in the revised manuscript.\n\n**On the bioRxiv paper**\n\nSince the reviewer mentioned a bioRxiv paper in the comments,\n(https://www.biorxiv.org/content/10.1101/2021.03.15.434192v2)\nwe read this paper and confirmed that it is about evidence accumulation in decision-making and is not relevant to our study. 
To the best of our knowledge, we have included all references on sampling acceleration and also compared them to our model.\n\n**We hope that we have clarified all concerns of the reviewer.**", " In this work, the authors:\n- review Bayesian inference for linear Gaussian models, as well as HMC\n- derive how the equations of a CANN with noisy adaptation are identical to those of HMC, with the appropriate substitutions, and how parameter regimes translate to sampling behavior\n- demonstrate how a CANN with noisy adaptation converges to the true posterior (in mean, variance, and DKL) more quickly than without, in single and coupled networks\n\noriginality: spiking networks have been shown to approximate HMC sampling (in refs 16 & 18). The current authors extend this investigation to rate-based bump attractor networks, which are commonly used in computational neuroscience. I don't know of other works on this particular topic of adaptation, though I'm not well read in this literature (but see https://www.biorxiv.org/content/10.1101/2021.03.15.434192v2)\n\nquality: at a superficial level and without re-deriving the proofs, I believe the work is of high quality, and that the results fully support the central claim of efficiency (but not necessarily the claim of biologically relevant speed, expanded below)\n\nclarity: overall, I find this paper to be of high quality, and very clearly presented, even for a relative outsider of the literature. The brief but concise review of Bayesian inference, HMC and CANNs touched on all the relevant background without unnecessary length, and I commend the authors for writing such an enjoyable paper without assuming basic knowledge from the reader, since I learned much from reading it. The figures are also well-presented and with readable font sizes.\n\nsignificance: I believe this paper links two very interesting ideas: adaptation and neural implementation of probabilistic sampling, and will be of significance for a segment of the computational neuroscience community - the work is motivated by the claim that naive Langevin sampling cannot work at perceptually relevant speeds due to its slow dynamics around the posterior mode. This is not strictly true, since speed can come at a cost of certainty, and it's unclear if FLD performs unreasonably (and unrealistically) poorly compared to human performance. While it's convincingly shown that higher adaptation (up to the derived max speed) increases sampling efficiency, I'm curious how both algorithms perform when measured against absolute time, instead of in units of the synaptic time constant tau_s (e.g., in Figure 2B). In other words, if one plugs in a reasonable value for tau_s (e.g., 5-10ms for AMPA receptors), would HDF actually achieve reasonable performance in reasonable time?\n\n- a critical component of the model seems to be the stochasticity in the adaptation dynamics (Eq13). To my limited knowledge, this is not a common assumption for CANNs, and is typically not included for adaptation dynamics in other models (such as spiking neurons). I would ask the authors to provide some evidence for why this assumption is justified (other than the currently cited \"biological systems is noisy\"). 
While SI3 provides some evidence that small deviations from the exact value of $\sigma_V$ are fine, perhaps a broader treatment is necessary, since one typically assumes the cell has near-perfect access to its own spike history (through calcium dynamics, etc.)\n\n- my only concern with respect to originality / significance is, how different is this work from refs 16 & 18? In other words, does the CANN need an entirely different treatment than spiking networks, which is arguably a more difficult problem? This question stems from my naivety, but it would be good if the authors further discuss why the CANN should be significantly different, perhaps from the perspective of adaptation (or something else)\n\nvery brief discussion of limitations is included, and while the robustness of model parameters (in relation to deriving an exact match to HMC) is discussed in the SI, it's worth mentioning again in the limitations, as well as other strong assumptions of the model. No discussion of societal impact (and none from my perspective)", " In this paper, the authors address the question of how sampling-based probabilistic inference could be implemented in the brain. An important concern about sampling-based theories is that the sampling speed is too slow to be compatible with brain function. \n\nOne potential approach that circumvents the speed issue is Hamiltonian Monte Carlo (HMC) sampling. Previous works have shown that HMC can be implemented by a biologically plausible neural network with balanced excitation-inhibition among neurons. Here, they present an alternate neural implementation of HMC using a canonical network model called continuous attractor neural networks (CANNs). \n\nUsing a linear Gaussian generative model, the authors analytically show how the dynamics of a CANN are equivalent to HMC. Further, they show that adding a noisy adaptation term in the CANN dynamics accelerates the sampling process. They also show how this model could be extended to sampling from a multivariate Gaussian posterior. Finally, they present simulation results that confirm their theoretical analyses. This work demonstrates how continuous attractor neural networks (CANNs) could implement Hamiltonian Monte Carlo (HMC) sampling. The authors show that (i) when the dynamics of the adaptation current in CANNs take the form of spike frequency adaptation, and (ii) CANN dynamics are projected onto the first two dominating motion modes, the resulting dynamics of the latent features are equivalent to HMC.\nIn Figure 2 they also clearly show how varying the adaptation strength results in different sampling regimes. In particular, the simulation results show how the sampling is sped up as the adaptation strength increases in the HMC regime. Overall, this is an interesting implementation of HMC with friction using CANNs. \n\nAs the authors mention in the discussion, the results presented here are for a linear Gaussian model. Real-world distributions are more complicated and often multimodal. It would be interesting to consider how the current approach could be extended to perform inference on more complicated distributions. \nIt would also be useful to provide a sense of how the speedup obtained here compares with inference timescales observed in the brain. \n\nThe paper is quite clear and the figures are very neat and well-annotated. \nHowever, there are a few minor typos and grammatical errors. 
- There are certain assumptions placed on the form of the neuronal connection strengths, firing rate, adaptation, and external input for the CANN dynamics to be equivalent to HMC sampling for a linear Gaussian model. Are all of these biologically plausible? How would the dynamics change if any of these assumptions are relaxed? \n- Are there any thoughts about how this approach could be extended to multimodal, non-Gaussian distributions? \n- The speed-up observed here is relative to first-order Langevin dynamics. Could you provide a sense of how this compares with inference time scales observed in neuroscience experiments?\n- Clarifying question: in eq 2, where the prior is assumed to be uniform, is the support of $s$ finite? \n- Are there any concrete experimental predictions that could arise from this work? \n There is no potential negative societal impact - the authors state that their study is fundamental in neuroscience and not tied to applications. \n\nThe authors also discuss the limitations of their work - the analytical results derived here and the simulations correspond to a linear Gaussian generative model. It would be interesting to see how their work could be extended to performing inference on more complicated, multimodal distributions. ", " In this manuscript the authors describe how adaptation can speed up posterior sampling in continuous attractor neural networks, a neural population model known to be capable of posterior sampling. Concretely, they show that the dynamics of networks with adaptation are equivalent to Hamiltonian dynamics with friction. This equivalence gives a theoretical justification for why the methods with adaptation perform better than the ones without adaptation, which effectively implement (first-order) Langevin dynamics. Furthermore, their theoretical analysis allows the authors to compute good parameter settings to sample from simple one- and two-dimensional Gaussians, which are confirmed in simulations. \nThe observation that an adaptation mechanism that pushes the sampling chain away from previously visited states can improve convergence speed makes intuitive sense, and the derivations in the paper seem correct. Also, the formal connection to Hamiltonian sampling with friction is interesting. However, parts of the theory and all of the evaluation are specific to Gaussian probability distributions, for which sampling is not strictly necessary. Thus, this paper is an interesting step towards understanding the dynamics of sampling networks, but not a big breakthrough.\n\nConcrete points:\n1) I am generally worried that all theory here is implemented only for Gaussians and some of the parameters like the optimal and threshold adaptation strength depend on the input distribution. Computing with Gaussians really does not require sampling or a whole population of neurons arranged along the feature space. To make sampling sensible, at least arbitrary distributions should be covered, I think.\n2) The setup for the CANNs discussed here seems a bit odd, because the input current is in itself already a representation of the distribution, which allows computation as discussed in the probabilistic population coding framework. It is unclear to me why one would want to sample from a distribution given in this format.\n3) Without a deep dive into the CANN literature, some of the derivations are extremely hard to follow. 
In particular, which properties are first derived here, which ones are taken from which older literature, and under which assumptions these results hold is impossible to tell from the manuscript alone. Here, the authors could definitely communicate more clearly.\n\n\nSmall points:\nIn (14) the standard deviation in the exponential is $a$, but should be $\Lambda$.\n If the authors can show robustness in some way, i.e. that this works for other distributions, this would go a long way towards convincing me that their sampling method could work for other distributions where sampling is necessary. ---", " The paper shows that a particular model of continuous attractors very relevant in neuroscience performs HMC sampling. Strengths:\n* It is actually surprisingly hard to make rigorous connections between HMC sampling and real neural circuits. In the Aitchison and Lengyel paper, they were for instance forced to use linear networks, without the spiking nonlinearity. But they managed to make such a connection here! The key innovation here was to consider HMC sampling over higher-level variables such as the location of the bump of activity, rather than considering e.g. firing rates themselves as the variables that HMC samples.\n\nThe key weakness is the need for clarification of the actual derivation of the equivalence, which is currently quite unclear. Here are some specific issues (but my comments are not limited to these issues):\n* Eq. 15: no x on RHS. \n* Eq. 15/16 G() vs G[] (i.e. sometimes G has curly, sometimes square brackets)\n* The projection seems like the key step. Can we give some flavour of how that works in the main text, and whether any assumptions go in there (e.g. linearity)?\n* It would be worth exploring the biology. Is there really this kind of adaptation present in e.g. head-direction cells? That would seem strange, as it would destabilise the bump. This mechanism would be a potential explanation. N/A It isn't clear how far the basic ideas here will extend (e.g. to learned representations, visual system etc.) But it is definitely an approach worth considering in future." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "jTe45k97nY", "lXVInxUzEPW", "FFEY-x_ia9Q", "lXVInxUzEPW", "nbh53ZrFpaG", "6J2TvEYPutq", "wcpRF9YAnJV", "FFEY-x_ia9Q", "FFEY-x_ia9Q", "lXVInxUzEPW", "lXVInxUzEPW", "0C2KfUkk8sN", "0C2KfUkk8sN", "nips_2022_Y0Bm5tL92lg", "nips_2022_Y0Bm5tL92lg", "nips_2022_Y0Bm5tL92lg", "nips_2022_Y0Bm5tL92lg" ]
nips_2022_hYx-xr1wdo
Subspace clustering in high-dimensions: Phase transitions \& Statistical-to-Computational gap
A simple model to study subspace clustering is the high-dimensional $k$-Gaussian mixture model where the cluster means are sparse vectors. Here we provide an exact asymptotic characterization of the statistically optimal reconstruction error in this model in the high-dimensional regime with extensive sparsity, i.e. when the fraction of non-zero components of the cluster means $\rho$, as well as the ratio $\alpha$ between the number of samples and the dimension, are fixed, while the dimension diverges. We identify the information-theoretic threshold below which obtaining a positive correlation with the true cluster means is statistically impossible. Additionally, we investigate the performance of the approximate message passing (AMP) algorithm analyzed via its state evolution, which is conjectured to be optimal among polynomial algorithms for this task. We identify in particular the existence of a statistical-to-computational gap between the algorithm that requires a signal-to-noise ratio $\lambda_{\text{alg}} \ge k / \sqrt{\alpha}$ to perform better than random, and the information-theoretic threshold at $\lambda_{\text{it}} \approx \sqrt{-k \rho \log{\rho}} / \sqrt{\alpha}$. Finally, we discuss the case of sub-extensive sparsity $\rho$ by comparing the performance of the AMP with other sparsity-enhancing algorithms, such as sparse-PCA and diagonal thresholding.
Accept
The reviewers appreciate the solid theoretical results concerning the statistical-to-computational tradeoff in subspace clustering. The exact asymptotics and clear presentation make the paper stand out. Therefore, I recommend acceptance. Meanwhile, please carefully revise the paper according to the reviews to highlight its strengths and originality.
train
[ "T5uDZmw7KOh", "EqGlM7m3YV-", "Xqevl_M01xC", "37j7DYFdl9f", "ILCp1dcPaCg", "phnG4XgHFD", "8Mo8n9Mp4S", "pq77t80jMx", "ZoUlENfju0s", "tYfFZIryCrz", "vABRFKQbDl", "9SPuGlBiKid", "j1rU9DOj7y_", "U1O-q2-pMvV", "qC8sIpmDbIw", "ybPQRBgsj1gP", "ZKhLrqvsS10", "3_2I4mesXQ", "sQfH4tGo9Ju", "CZwr5R3TerX", "LSEjJ5hnRpc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. ", " All of my concerns have been addressed by the authors. Based on the comments of the other reviewers and the authors' responses, I'd like to keep the same rating as in my previous review: 8: Strong Accept.", " Thanks for the rapid reply! Your note clears up my misunderstanding. I see now that the supports for all of the *columns* of V are actually equal to each other. ", " We thank the reviewer for the specific question; we hope that this (very fast) answer will help to address his/her concern on the setting of our work.\n\nWe detail more explicitly how the matrix $VU^T$ is built: let us consider a fixed sparsity $\rho = s/d$ and that datapoint $\nu$ belongs to the first cluster, hence the label vector is $u_{\nu} = ((k-1)/k, -1/k, \dots, -1/k)$. The vector $v_i$, for all $i \in [d]$, will be sampled from a Gauss-Bernoulli (GB) distribution: it will be the null vector with probability $1-\rho$ or a Gaussian vector with unit covariance with probability $\rho$.\nNow let us consider the $i$-th element of column $\nu$ of the matrix $VU^T$, which is equal to $v_i^T u_{\nu}$: this element will be zero with probability $1-\rho$, following the reasoning above on the sampling of the vector $v_i$. Repeating this reasoning for all $i \in [d]$, we have that the number of non-zero elements in column $\nu$ of the matrix $VU^T$ will be $s$ (on average).\n\nWe hope that a practical example could help to clarify the confusion. For the sake of simplicity we consider a low-dimensional case, although the results shown in our work are valid only in high dimensions.\nTake $(d=3,n=2,k=2)$ and fix a sparsity level of $\rho = 33\%$, meaning that “two features out of three will be irrelevant”. Now consider that our mixture is formed by two datapoints, one in the first cluster and the other in the second one. \n\nWe sample the vectors $(v_i)_{i \in [d]}$ from a GB distribution and the resulting matrices are:\n\n$V = \begin{pmatrix} 0&0 \\\\ z_1&z_2 \\\\ 0&0 \end{pmatrix}$ and $U = \begin{pmatrix} 1/2 & -1/2 \\\\ -1/2 & 1/2 \end{pmatrix}$\n\nwhere the vector $z = (z_1,z_2)$ is a unit covariance Gaussian vector.\n\nWe considered in this simple example that the number of non-zero elements in the columns of $V$ is exactly equal to $s=1$, but in high dimensions we will have $s = \rho d$ relevant features only on average.\n\n
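As a complementary check, here is a minimal numpy sketch of exactly this construction (the values of $d,n,k,\rho$ below are ours, chosen purely for illustration); it samples $U$ from the label encoding and $V$ from the GB prior, and verifies that a column of $VU^T$ has a fraction $\rho$ of non-zero entries on average:

```python
# Minimal sketch of the construction above; d, n, k, rho are illustrative values.
import numpy as np

rng = np.random.default_rng(0)
d, n, k, rho = 3000, 1500, 4, 0.1

labels = rng.integers(k, size=n)           # cluster assignment of each datapoint
U = np.eye(k)[labels] - 1.0 / k            # row nu is the label vector u_nu

support = rng.random(d) < rho              # v_i is the null vector w.p. 1 - rho
V = rng.standard_normal((d, k)) * support[:, None]

M = V @ U.T                                # column nu is the centroid of x_nu
print("non-zero fraction of a column:", np.mean(np.abs(M[:, 0]) > 0))  # ~ rho
```

Since the entries of $u_{\nu}$ are never zero, the support of every column of $VU^T$ coincides with the support of the rows of $V$, which is exactly why the supports of all the columns are equal to each other, as noted in this discussion.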
We thank the reviewer for his/her critical reading and for pointing out this source of confusion; we will explain the construction above in more detail in the revised version.", " We thank the reviewer for the answer to our rebuttal and we are happy that we could clarify most of the points. We do hear that the reviewer has still some concerns on the clarity of our work, and we address his/her question below.\n\n- *I have another question regarding computation of the IT threshold. Is this done via Theorem 1, using a brute-force evaluation of the minimizer in (9)? It would be nice to have some discussion of this optimization, as minimizing over the space of all $k \times k$ matrices quickly becomes infeasible as $k$ grows.*\n\nThe computation of the IT threshold is greatly simplified thanks to the discussion in Sec. 5. We find a suitable parametrization of the $k \times k$ matrices $(M_u, M_v)$ in eq. (19) which is invariant under the state evolution updates and reduces the number of parameters to be tracked, even when $k$ is large. The details of the computation of the IT threshold are given in Appendix B: we need to evaluate the difference of the free energy potential between the informed and uninformed fixed points and find the SNR such that this quantity is zero, see eq. (38) and Fig. 6.\n\n- *I am now more persuaded about the novelty of the contribution, but the paper needs to be rewritten to clearly highlight the novel aspects and implications in the main text.*\n\nWe thank the reviewer for the remarks on the clarity of our work. We will make sure to clearly state the novel implications of our work in the revised version. We are aware that if the paper is accepted this discussion will be made public, and we would not be making it if we were not to stand by it.", " Thanks very much for the fast and thoughtful reply.\n\nSorry, I'm still a little confused about the sparsity. If I understand correctly, a column of $VU^T$ takes the form (up to relabeling indices) $\mu = \frac{k-1}{k} v_1 - \frac{1}{k} \sum_{i=2}^{k} v_i$, where the $v_i$ are sampled as in (2) and have (roughly) $s = \rho d$ nonzero entries.\n\nIf $k$ is large (say $k \gg \sqrt{d}$), then the second term should \"average out\", resulting in a mean vector $\mu$ with roughly $s = \rho d$ nonzero entries. But for smaller $k$, the supports of the $v_i$ will not entirely overlap -- so in principle the number of nonzero entries of $\mu$ could be something like $k s$. \n\nIs my understanding correct here? (Also, I acknowledge that we are quite close to the author discussion deadline -- if you are not able to reply in time, I will address this question with the other reviewers.)\n\n", " Thanks very much for your answers and clarifications. It is indeed interesting that the IT threshold does not coincide with the one corresponding to AMP with fully informed initialization. The subtleties in the phase diagram are easy to miss, as much of the interesting discussion on the thresholds is relegated to the supplementary material. \n\nI have another question regarding computation of the IT threshold. Is this done via Theorem 1, using a brute-force evaluation of the minimizer in (9)? It would be nice to have some discussion of this optimization, as minimizing over the space of all $k \times k$ matrices quickly becomes infeasible as $k$ grows.\n\nI am now more persuaded about the novelty of the contribution, but the paper needs to be rewritten to clearly highlight the novel aspects and implications in the main text. Thanks again for your helpful responses.", " We thank the reviewer for the answer to our rebuttal and we are happy that we could clarify most of the points. We do hear that the reviewer's biggest concern remains clarity, and we address his/her question below. \n\n- *My understanding is that $VU^T$ has columns that comprise the cluster means specified as in (1). Usually, the community labels are one-hot vectors, but here a different choice of labels is used as in (3). Can you tell me roughly what is the sparsity of the columns of $VU^T$?*\n\nThe reviewer is correct. While the one-hot encoding is more common, the mathematical treatment would be more cumbersome. We have exploited the parametrization freedom for the community labels to judiciously choose an encoding that simplifies the analysis.\nThe columns of $VU^T$ do comprise the cluster means. 
The number of non-zero elements in the columns of the matrix $VU^T$ is determined by the sparsity of the centroids (quantified by $\rho = \frac{s}{d}$), and is therefore equal to $s$ (on average).\n\n- *I have taken a closer look at the Appendix. My understanding is that the thresholds stated in the theorem are derived in Appendix C based on taking limits as $\rho \to 0^+$ and $k \to \infty$. Is it true that the results only apply for a specific limiting scenario? I am not clear about the `rate' at which these limits are taken with respect to each other (eg, does $\rho k \to \infty$?). In this limit, are we still guaranteed that the cluster means (columns of $VU^T$) are sparse?*\n\nWe thank the reviewer for the critical reading of the appendix. The mathematical derivation in order to compute the analytical scaling of the IT threshold is: a) first take the proportional high-dimensional limit $n,d,s\to\infty$ at fixed $\alpha=n/d$, $\rho=s/d$ and $k$; b) then take the high-sparsity limit $\rho \to 0^{+}$; c) finally consider large $k\to\infty$ for approximating the integral appearing in eq. 45. \nIn this limit we are still guaranteed that the columns of $VU^T$ are sparse: their sparsity depends on the value of $\rho$, as we discussed in the first point. However, as shown in Fig. 9 (left), the large-$k$ approximation of eq. 45 for the IT transition is fairly accurate even at moderate $k \sim 10$.\n\n- *I am fine with the level of rigor described in your response, as long as the paper is precise about what is mathematically proven and what results are shown in the calculations. For example, I would appreciate some further clarity in 2 above regarding what asymptotic regimes in $\rho,k$ your results apply to.*\n\nWe thank the reviewer for pointing this out - we agree that we might not have stressed enough which results are rigorous or not. We will make sure to clarify which results are rigorous and which are conjectured in a revised version. We are aware that if the paper is accepted this discussion will be made public, and we would not be making it if we were not to stand by it.", " Thank you for the detailed and thoughtful response. I have a few more comments and questions below. \n\n1. \"*The sparsity of the matrix is associated only to the sparsity level in the cluster means...*\"\n\nMy understanding is that $VU^T$ has columns that comprise the cluster means specified as $\mu_c$ in (1). Usually, the community labels are one-hot vectors, but here a different choice of labels is used as in (3). Can you tell me roughly what is the sparsity of the columns of $VU^T$?\n\n2. *Regarding threshold derivations*\n\nI have taken a closer look at the Appendix. My understanding is that the thresholds stated in the theorem are derived in Appendix C based on taking limits as $\rho \to 0^+$ and $k \to \infty$. Is it true that the results only apply for a specific limiting scenario? I am not clear about the `rate' at which these limits are taken with respect to each other (eg, does $\rho k \to \infty$?). In this limit, are we still guaranteed that the cluster means (columns of $VU^T$) are sparse? \n\n\n3. \"*Note, however, that we choose to write \"theoretical results\" instead of \"theorem\". This is because...*\"\n\nI greatly appreciate the clarity on this. A remark along these lines in the main text would help readers understand the level of rigor of this work. \n\n4. 
*Regarding the level of rigor* \n\nI am fine with the level of rigor described in your response, as long as the paper is precise about what is mathematically proven and what results are shown in the calculations. For example, I would appreciate some further clarity in 2 above regarding what asymptotic regimes in $\rho, k$ your results apply to. \n\nAgain, thank you for the helpful reply. ", " We thank the referee for her/his detailed comments and for the appreciation of our work. We are particularly grateful for her/his critical reading; the suggestion of an open direction to explore is a valuable comment. We address her/his questions in the answers below. \n\n**Reply to points in \"Questions\"**\n- *Please briefly explain the meaning of the indicator vector in Eq.(3)*\n\nThe indicator vector $\bf{u}_{c}^{\nu}$ in eq. (3) mathematically encodes that datapoint $\bf{x}_\nu$, for some $\nu \in [n]$, is associated with cluster $c \in \mathcal{C}$. Its mathematical form, assumed without loss of generality, is just a convenient encoding for the theoretical analysis. \n\n- *In the impossible phase, the Bayes-optimal MMSE is not better than a random guess. Does this statement hold only for the two-cluster problem, or does the general mixture case also apply?*\n\nThe statement is valid in general for any number of classes $k$. In Main result 1 we characterize the MMSE for any number of clusters $k$. \n\n- *Can the results in this paper apply to more general right-unitarily-invariant noise matrices? Recently, there are some new AMP-type algorithms such as OAMP/VAMP, CAMP, and MAMP that solve more general signal recovery problems with right-unitarily-invariant matrices. The authors may consider extending their work to right-unitarily-invariant matrices.*\n\nWe agree with the reviewer that the Gaussian assumption is one of the limitations of our work. Indeed, a natural idea to go beyond this limitation is to consider other flavours of message passing algorithms, such as VAMP for rotationally invariant matrices. We thank the referee for this interesting suggestion; this is definitely an open direction to explore. We added in the revised \"Conclusion\" section a comment about this and other interesting open directions.", " We thank the reviewer for the positive comments and for the interest in our work. We address each of her/his questions below. \n\n**Reply to points in \"Questions\"**\n\n- *[...] For example, what is the ’s-sparse vector’ in the first paragraph in page 2?*\n\nThe presence of a typo in that line could have led to confusion, and we have now corrected it in the revised version. We refer to the cluster means $\bf{\mu}_c$ as the \"s-sparse vectors\". The number of non-zero components $s$ in the centroids is determined by the parameter $1-\rho$, which we refer to as \"the sparsity level\". On average the centroids will have only $s$ components different from zero, hence the name $s$-sparse vectors.\n\n- *I would like to see more about the originality and the significance of the work. In the current writing, it gives the impression that the current work is merely an interpretation of existing results [3-5, 14]. The model should be explained in further detail* \n\nThe starting point of our work is to map the subspace clustering problem to low-rank matrix factorization. This is a fundamental problem in statistics with a rich literature. Indeed, in our work we leverage this connection to characterize the statistical-to-computational gaps of subspace clustering. 
However, we believe our work is not a \"simple interpretation of existing results\". First, the results derived are novel, and provide original insight into the computational hardness of subspace clustering, an important problem of interest. Second, the derivation of the sharp thresholds from eqs. (8-12) is technically involved, as the subspace clustering problem combines two hard factors: sparsity and multi-class structure. For this reason, the analysis required a series of new insights to make the equations amenable to an analytical treatment.", " **Reply to points in \"Questions\"**\n\n- *Could the authors explain how the subspace clustering problem is exactly equivalent to Eq. (4)?*\n\nIn the subspace clustering problem we are interested in finding the relevant features in our dataset. In order to study that, we consider the data model in eq. (1), in which we have a low-dimensional subspace of relevant features, determined by the non-zero components of the cluster means, embedded in a much higher-dimensional space, defined by the acquisition space $d$. Mapping eq. (1) (subspace clustering) to eq. (4) (low-rank matrix factorization) requires finding the appropriate parametrization, given here by the vectors $(\bf{v}_i, \bf{u}_c^{\nu})$ with $i \in [d], \nu \in [n]$. \n\n- *Does the equivalence of subspace clustering to low-rank estimation hold even in the unbalanced case, i.e., when $p_c$ does not necessarily equal $1/k$? Analyzing this general case and exploring the effect of unbalancedness would be one way to increase the novelty of the contribution.*\n\nAlthough it would be possible to treat the unbalanced case by exploiting the low-rank estimation mapping, moving away from the balanced case would amount to considering an easier problem. Since the main goal of this work is to shed light on the statistical-to-computational gap and the presence of hard phases in the subspace clustering problem, we believe the balanced case is the most relevant regime. However, we completely agree with the reviewer that the extension to the unbalanced case is an interesting open direction, and we added a comment about it in the conclusions. \n\n- *Abstract and l.107: could you explain what subextensive means in this context? (is it related to sublinear?)*\n\nThe referee is correct: it is indeed related to sublinear in this context, with respect to the dimension $d$. In our setting we consider the number of non-zero elements $s$ in the cluster means to scale with the dimension $d$, which is taken to infinity. Hence, we have $\rho = \frac{s}{d} = O(1)$. In the sub-extensive (or sub-linear) regime we allow the number of non-zero components $s$ to scale sub-linearly, hence $\rho = \frac{s}{d} = o(1)$. 
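For a concrete instance of the two regimes (the numbers below are ours, purely illustrative), take $d = 10^4$:

$$\underbrace{s = \rho d = 10^{3},\ \rho = 0.1}_{\text{extensive: } \rho = O(1)} \qquad \text{vs.} \qquad \underbrace{s = \sqrt{d} = 10^{2},\ \rho = d^{-1/2} = 10^{-2} \to 0}_{\text{sub-extensive: } \rho = o(1)}$$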
\n\n- *I think the references to Eq. (38) should be to Eq. (9) [....] I think Algorithm 3 refers to Algorithm 1?*\n\nWe thank the reviewer for the suggestions and for pointing out typos. We welcomed all of the suggestions and corrected the typos in the revised version. We confirm that his/her interpretation of this sentence was correct: in the new version eq. (38) is replaced by eq. (9), and Algorithm 3 by Algorithm 1.\n\n**Reply to points in \"Limitations\"**\n- *Connections and references to prior work are good overall, but it would be nice to have at least a brief discussion of open questions and generalizations.*\n\nWe thank the referee for this comment. The mapping of subspace clustering to low-rank matrix factorization is flexible and opens up interesting open directions. We added a comment in the conclusions, clarifying the limitations of this work and highlighting prospective future directions.", " We thank the reviewer for his/her comments that help us see how to improve the presentation of our results. We address some points in more detail in the answers below.\n\n**Reply to points in \"Weaknesses\"**\n- *The main weakness of the paper is novelty. As the authors acknowledge, all the results in the paper are obtained by evaluating known results for this particular model. In particular, the information-theoretic limits for low-rank matrix estimation (Theoretical result 1) have been derived in a series of papers including [2,3,4,14], and a perusal of the supplementary material did not reveal any technical challenges in evaluating the known formulas for the subspace clustering model studied here.*\n\nWe politely disagree with the reviewer that our results are mere \"formula evaluations\". Indeed, the starting point of our analysis is to map the subspace clustering problem to a low-rank matrix factorization problem, a classical problem in high-dimensional statistics with a rich literature. However, the subspace clustering problem combines two hard technical problems: sparsity and multi-class structure, and obtaining the thresholds from the generic formulas available involves a considerable tour de force, such as finding the right reduction of the equations and closing them on an ansatz for the scaling at large sparsity. We detail these technical challenges also in the reply to *Reviewer U268*.\n\n- *[...] Similarly, the AMP and the state evolution characterization of its performance (Theoretical result 2) are also well known from previous works. Comparing the IT limits with the AMP overlap just involves running the state evolution equations from two distinct initial conditions -- perfect initialization to obtain the IT limit, and uninformative initialization to obtain the AMP overlap. The behavior in the high sparsity regime is novel and a nice result, but the overall novelty of the paper still falls short.*\n\nThe IT threshold cannot be computed by simply running the state evolution equations from uninformed and informed initializations. In fact, a close inspection of Fig. 2b shows that *only after* the IT threshold (dashed light blue line) is the MMSE achieved by the informed initialization (solid orange line). Indeed, the exact computation of this threshold requires evaluating the free energy potential, which is technically more involved than running state evolution (see App. B for the details). \n\nWe thank the referee for his/her positive comments on the results in the high sparsity regime, although we politely disagree, for all the reasons above, that the novelty of this paper falls short. \n\n", " We thank the referee for the appreciation of our work; we also like very much the illustrative power of the phase diagram in Fig. 1. We thank the reviewer for the list of suggestions and typos; we welcome all of them, and they are corrected in the revised version. We address his/her question below.\n\n**Reply to points in \"Questions\"**\n- *It would be nice if a definition of \"s-sparse vectors\" can be given here: is it in terms of expectation or deterministic?*\n\nThank you for the valuable comment; indeed, the definition of \"s-sparse vector\" is given in terms of expectation. We added a comment in the revised version to clarify this point at line 51. Thanks to the assumption of the Gauss-Bernoulli distribution in eq. 
(2) with parameter $\rho = \frac{s}{d}$, the cluster means $\bf{\mu}_c$ will have, in expectation, a number of non-zero components equal to $s$. \n", " **Reply to points in \"Minor issues\"**\n\nWe thank the referee for the list of suggestions and typos in this section; we welcome all of them, and they have been fixed in the revised version. We make sure that there is no ambiguity left by answering some comments in detail below. \n\n- *Line 143-144: this comment is hard to understand because (38) is in the Appendix, and then I'm having trouble seeing the connection to (6).*\n\nThere was a hitch in the cross-referencing; the comment in lines 143-144 will not create any confusion in the revised version: eq. (38) is replaced with the correct reference, i.e. eq. (9), and the minimisation problem in eq. (6) is strongly different from the one in eq. (9), since the latter involves high-dimensional quantities that are intractable in the limit considered, $n,d \to \infty$. \n\n- *Equation (2) defines vectors that are (i) standard Gaussian with probability $\rho$ OR (ii) the zero vector with probability $1-\rho$. I think what is meant is for there to be a random subset of zero entries, with the rest of the entries being Gaussian.*\n\nIndeed, there was a typo in the definition of the vector $\bf{v}_i$. This is corrected in the revised version, and there should be no ambiguity left: the vector $\bf{v}_i \in \mathbb{R}^k$ is the selector of the relevant features in the mixtures. For a given direction $i \in [d]$, the vector $\bf{v}_i$ is the null vector with probability $1-\rho$, meaning that feature $i$ is irrelevant, as all the centroids have zero component along that direction. \n\n**Reply to points in \"Questions\"**\n\n- *Is it possible to understand intuitively why the algorithmic threshold does not depend on $\rho$?* \n\nWe completely agree that this is an important and unintuitive point. We study the algorithmic threshold in Sec. 5; in particular, by linearizing the SE equations around the uninformative fixed point, we found that $\lambda_{\text{alg}} = \frac{k}{\sqrt{\alpha}}$, i.e. the so-called BBP threshold [13]. Thanks to this identification we can build intuition on this result from the standpoint of Random Matrix Theory. If we work at low enough SNR ($\lambda \ll 1$), the data matrix $X$ in eq. (4) effectively resembles a Gaussian matrix, whose spectral density is close to the Wigner semicircle distribution. As we increase the SNR, the BBP transition is associated with the emergence of an informative eigenvalue, due to the low-rank part of the matrix $X$, from the \"Gaussian bulk\" of the spectrum. Therefore, we note that we can tune the strength of the two terms in $X$ (low-rank vs. high-rank) by varying the SNR $\lambda$. We also intuitively understand that as we decrease $\rho$, i.e. increase the sparsity of the centroids, the clustering problem becomes *statistically* easier. Nevertheless, at fixed SNR, decreasing $\rho$ does not change the relative strength of the two competing terms in the matrix $X$, hence it does not change the behaviour of the algorithmic transition.
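To make this intuition tangible, here is a small numerical sketch (a generic rank-one spiked Wigner toy with parameter values of our choosing, not the actual clustering matrix $X$): the top eigenvalue detaches from the semicircle bulk only once the spike strength crosses the BBP point, and making the spike direction sparse does not move this transition.

```python
# Illustrative spiked-Wigner toy for the BBP picture above (all values are ours).
import numpy as np

rng = np.random.default_rng(0)
d = 1000
W = rng.standard_normal((d, d))
W = (W + W.T) / np.sqrt(2 * d)                 # Wigner bulk, edge at ~2

v = rng.standard_normal(d)
v /= np.linalg.norm(v)
for lam in (0.5, 1.5, 3.0):
    top = np.linalg.eigvalsh(lam * np.outer(v, v) + W)[-1]
    print(f"lam = {lam}: top eigenvalue = {top:.2f} (bulk edge ~ 2.0)")

# A sparse spike direction (10% non-zeros) leaves the transition unchanged:
u = rng.standard_normal(d) * (rng.random(d) < 0.1)
u /= np.linalg.norm(u)
print("sparse spike, lam = 3.0: top eigenvalue =",
      round(float(np.linalg.eigvalsh(3.0 * np.outer(u, u) + W)[-1]), 2))
```

In this toy, for spike strength $\lambda > 1$ the top eigenvalue sits at $\lambda + 1/\lambda$ regardless of how sparse the spike direction is, which mirrors the $\rho$-independence of $\lambda_{\text{alg}}$ discussed above.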
\n\n- *\"The state evolution equations (17) coincide exactly with running gradient descent on the potential defined in eq. (40)!\" Is this connection known for other problems? Why is this observation crucial for the results here?*\n\nThis connection is indeed at the roots of the conjectured link between hardness and AMP. This has been explored in many references, e.g. [7,14], or very recently in [BAH$^+$22]. \n\nThe observation is crucial in order to analyze the phase transitions for the subspace clustering problem:\na) the linearization of the SE equations around the uninformative fixed point, done in Sec. 5, is equivalent to studying the gradient of the potential $\Phi_{\text{rs}}$ at the origin; b) the statistical-to-computational gap opens up as the global minimum of the potential is attained far from a locally stable region around the uninformative fixed point, where the AMP algorithm gets stuck (see Figs. 5 and 7). \n\n[BAH$^+$22]: Afonso S. Bandeira, Ahmed El Alaoui, Samuel B. Hopkins, Tselil Schramm, Alexander S. Wein, and Ilias Zadik. The Franz-Parisi criterion and computational trade-offs in high-dimensional statistics, 2022.\n\n**Reply to point in \"Limitations\"**\n- *The weak recovery problem studied here is primarily of theoretical interest, and it is not clear if the AMP algorithm is useful for non-Gaussian problems. So practical impact may be limited*\n\nWe agree with the referee that our work is primarily of theoretical interest. However, to the best of our knowledge this work is the first to analyse the statistical-to-computational gap of subspace clustering for extensive sparsity, and therefore we think that it is a natural choice to present the results in the Gaussian case. We agree with the reviewer that considering other distributions is an interesting direction. For instance, rotationally invariant distributions could be amenable to a similar treatment using VAMP [36]. We have added this interesting prospective direction in the \"Conclusion\" section of the revised manuscript.", " We thank the referee for her/his comments that help us see where the discussion of our paper should be improved. Below, we address the concerns and questions raised by the reviewer.\n\n**Reply to points in \"Weaknesses\"**\n\n- *Originality: Main Result 1 relies on known formulas for low-rank matrix factorization. It is not clearly explained what are the major technical challenges, if any, in obtaining this result.*\n\nThe main goal of our work is to analytically characterize the statistical-to-computational gap in subspace clustering at large sparsity. Our first result is to identify the connection between the problem of interest (subspace clustering) and a low-rank matrix factorization problem. Indeed, this connection allows us to leverage the rich literature from this field, and in particular a set of very generic formulas for the MMSE. However, using these formulas for subspace clustering and computing the respective thresholds analytically is far from trivial, and it constitutes a considerable technical challenge.\n\nThe two major difficulties are (details are given in Secs. 5 and 6): (a) find a suitable parametrization of the $k\times k$ matrices $(M_u,M_v)$ in eq. (19) which is invariant under the state evolution updates and reduces the number of parameters to be tracked; (b) find the right scaling ansatz in the high-sparsity limit $\rho \to 0^{+}$ which allows us to close the equations on amenable quantities. Thanks to these two non-trivial steps, we are able to find closed expressions for the relevant thresholds, which are plotted in the phase diagram of Fig. 1, and to analytically characterize the statistical-to-computational gap in the large sparsity regime of interest.\n\nWe definitely agree with the reviewer that these challenges were not stressed enough, and we will add a discussion in the revised version. 
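As a schematic illustration of what the reduced description buys us, here is a minimal sketch that iterates a scalar state-evolution-like update from the two relevant initializations (the map F below is a toy stand-in with the same first-order structure, not the actual update of eqs. (16,17)):

```python
# Toy reduced state evolution: iterate a scalar overlap m from uninformed and
# informed initializations (F is illustrative, not the actual eqs. (16,17)).
def F(m, lam):
    # m = 0 is always a stable fixed point; an informative one exists for lam >= 2
    return lam**2 * m**2 / (1.0 + lam**2 * m**2)

def run_se(m0, lam, iters=1000):
    m = m0
    for _ in range(iters):
        m = F(m, lam)
    return m

lam = 2.5
print("uninformed init:", run_se(1e-6, lam))  # stuck at the trivial fixed point
print("informed init:  ", run_se(1.0, lam))   # reaches the informative one (0.8)
```

In the hard phase the two initializations converge to different fixed points, which is precisely the gap between the AMP performance and the information-theoretically optimal one discussed in this thread.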
\n\n- *Clarity: The community labels in (3) and the model (4) are such that $\mathbb{E} X$ does not have sparse columns if $k$ is small. For this reason, I feel the paper is more about the large $k$ version of the sparse clustering problem.*\n\nThe sparsity of the matrix $\mathbb{E} \left[X \right]$ is associated only with the sparsity level in the cluster means ($1-\rho$) and not with the parameter $k$; therefore the model is well defined for any number of clusters $k \geq 2$.\n\n- *Clarity: From the main text alone, it is unclear how the information-theoretic threshold is obtained.*\n\nThe detailed discussion of how to compute the information-theoretic threshold is given in App. B, and in particular in eq. (38). The key idea is to compute the difference in the free energy potential $\Phi_{\text{rs}}$ between the \"informative\" and \"uninformative\" fixed points, and to search for the SNR for which the difference is zero. This is illustrated in Figs. 5 and 7 (a schematic sketch of this root-finding step is given at the end of this reply). \n\n- *Clarity: [...] The formula of the MSE is difficult to interpret, so I can’t see if the threshold is a consequence of this. Some further explanation is needed here*\n\nWe completely agree with the reviewer that it is not immediate to get the thresholds from the formulas, which in full generality are quite involved. As we discussed above, although eqs. (8-12) are all you need at first glance, it is hard to approximate them numerically with good precision, and technically challenging to compute the thresholds analytically from eqs. (8-12). The computation takes a few pages, and for this reason we detail it in App. B.\n\n- *Quality/Clarity: It is difficult to assess the rigour of both main results, especially Main Result 2. This is because the appendix is not organized in a conventional way with a clearly demarcated proof of Main Results 1 and 2.* \n\nOnce the mapping to low-rank matrix factorization is done, closed-form formulas characterizing the MMSE have been proven in a series of works [3-5,14]. The second main result comes from the general rigorous connection between the AMP algorithm and its associated SE [31]. \nNote, however, that we choose to write \"theoretical results\" instead of \"theorems\". This is because, when we evaluated our closed-form formulas, we did not take all the necessary precautions to claim full rigor, e.g. prove formally that the minimum was unique (though we checked all these points both analytically and numerically).\n\n- *[...] For Main Result 2, I do not see in the Appendix any explanation of how the asymptotic algorithmic MSE is computed. I only see plots rather than arguments*\n\nThe asymptotic algorithmic MSE is computed by iterating the SE equations defined in eqs. (16,17) until convergence. We plug the fixed point of the iteration into eq. (18) in order to find the asymptotic performance. \n\n- *Quality/Clarity: [...] I also do not see any derivation of the Bayes-optimal MSE or reference to known (rigorous) formulas*\n\nThe formula for the MMSE in eq. (8) is derived directly from the definition of the MSE in eq. (5) by exploiting the prior distribution on the encoding labels $P_u$. The expression is rigorously proven in a series of works [3-5,14].
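As announced above, here is a schematic sketch of the root-finding step behind the IT threshold, run on a toy scalar potential (the potential $\phi$ below is ours and purely illustrative; the real computation uses the potential of eq. (38) in App. B). The toy shares the first-order structure of the problem: an informative branch that appears at a spinodal, an IT point where the free-energy difference changes sign, and an uninformative minimum that remains locally stable.

```python
# Toy version of the IT-threshold computation: find the SNR at which the
# free-energy difference between the informed and uninformed fixed points
# vanishes (phi is an illustrative scalar potential, not eq. (38)).
import numpy as np
from scipy.optimize import brentq

def phi(m, lam):
    return m**2 / 2 - lam * m**3 / 3 + m**4 / 4   # uninformative minimum at m = 0

def delta_phi(lam):
    m_informed = (lam + np.sqrt(lam**2 - 4)) / 2  # informed branch (lam >= 2)
    return phi(m_informed, lam) - phi(0.0, lam)

lam_it = brentq(delta_phi, 2.0, 5.0)              # sign change locates lambda_IT
print(f"toy lambda_IT (root of delta_phi): {lam_it:.3f}")
```

Above the root of $\Delta\phi$ the informative minimum is the global one, yet the uninformative fixed point stays locally stable in this toy, which is the mechanism behind the hard phase between $\lambda_{\text{it}}$ and $\lambda_{\text{alg}}$.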
", " This work explores statistical-computational gaps in the problem of obtaining non-trivial correlation (sometimes referred to as `weak recovery') with the cluster labels of a balanced mixture of isotropic Gaussian vectors that have sparse means drawn from a Gaussian-Bernoulli prior. Via a reduction to a low-rank matrix factorization problem, explicit formulas are derived for the asymptotic mean-squared error of the Bayes estimator, which is statistically optimal for this problem although likely hard to compute. On the algorithmic side, approximate message passing is proposed and an exact asymptotic formula for its mean-squared error is derived. Information-theoretic and algorithmic thresholds are described, and for large enough sparsity levels, these two thresholds differ. Numerical experiments explore what happens when the sparsity level is sub-extensive and show that the phase transitions have a different behavior in this regime. \n\n# Strengths\n\n- *Significance/Quality*: The theorems give a roughly complete understanding of the weak recovery problem, from an information-theoretic and algorithmic point of view, in the linear-sparsity, linear-dimension regime. In particular the algorithmic threshold is identified with a sharp constant factor.\n\n- *Clarity*: The simulations for the large-sparsity regime and discussion are helpful for understanding the complex behavior of this problem as the sparsity varies.\n\n# Weaknesses\n\n- *Originality*: Main Result 1 relies on known formulas for low-rank matrix factorization. It is not clearly explained what are the major technical challenges, if any, in obtaining this result. \n\n- *Clarity*: The community labels in (3) and the model (4) are such that the $\mathbb{E}X$ does not have sparse columns if $k$ is small. For this reason, I feel the paper is more about the large $k$ version of the sparse clustering problem. \n\n**Edit 08/19:** After discussion with authors, the previous point is resolved.\n\n- *Clarity*: From the main text alone, it is unclear how the information-theoretic threshold is obtained. The formula of the MSE is difficult to interpret, so I can't see if the threshold is a consequence of this. Some further explanation is needed here. \n\n- *Quality/Clarity*: It is difficult to assess the rigour of both main results, especially Main Result 2. This is because the appendix is not organized in a conventional way with a clearly demarcated proof of Main Results 1 and 2. For Main Result 2, I do not see in the Appendix any explanation of how the asymptotic algorithmic MSE is computed. I only see plots rather than arguments. I also do not see any derivation of the Bayes-optimal MSE or reference to known (rigorous) formulas. \n\n# Minor issues\n\n- Line 39: Equation (2) defines vectors that are (i) standard Gaussian with probability $\rho$ OR (ii) the zero vector with probability $1 - \rho$. I think what is meant is for there to be a random subset of zero entries, with the rest of the entries being Gaussian. \n\n- Main Theorem 1: Please take a careful proofread over this. Here $\mathbf{v^*}, \mathbf{u^*},$ and $\mathbf{w}$ have not been defined in (10). Also $Z_u$ does not appear in (10).\n\n- Consider making the boundaries bolder in Figure 1. Also I found the color of $\lambda_{it}$ to be hard on the eyes.\n\n- Line 70: Typo \"statitiscal\"\n\n- Line 85: Typo \"analyis\"\n\n- Line 91: Typo \"Statistics\", change to \"statistics\"\n\n- Line 111: Change to \"In particular, [10] conjectured and [11] proved...\"\n\n- Line 143-144: This comment is hard to understand because (38) is in the Appendix, and then I'm having trouble seeing the connection to (6). 
\n\n- Line 195: Typo \"Invextigate\"\n\n- Line 261: Replace \"Despite of this fact\" with \"Despite this fact\"\n\n# Summary of score\n\nMy score is due to concerns mostly about the rigor and partially about the novelty of this submission. I also feel there is a lack of clarity in explaining how the main results are obtained.\n\n# Update of score 08/19\n\nThe authors' rebuttal addressed my concerns about rigor and somewhat about novelty. I agree with other reviewers that the strengths and technical challenges of this paper are not highlighted enough in the main text. I also think further clarity is needed on the level of rigor and the asymptotic regime to which the results apply (which seems to be for k growing large and rho going to 0). I have raised my overall score from 3 to 4 because further serious revision is needed. I have also upgraded the soundness from 1 to 2. \n **1.** Is it possible to understand intuitively why the algorithmic threshold does not depend on $\\rho$?\n\n**2.** \"The state evolution equations (17) coincide exactly with running gradient descent\n175 on the potential defined in eq. (40)!\" Is this connection known for other problems? Why is this observation crucial for the results here? **1.** I agree with the authors' assessment that there are no apparent potential negative societal impacts.\n\n**2.** The weak recovery problem studied here is primarily of theoretical interest, and it is not clear if the AMP algorithm is useful for non-Gaussian problems. So practical impact may be limited.", " This paper studies statistical and computational limits of subspace clustering, focusing on high-dimensional Gaussian mixture model with sparse centroids. The paper considers the regime where both the fraction of non-zero components of the cluster means and the ratio between the number of samples and the dimension are fixed (which differs from previous literature). The information-theoretic threshold (below which reconstruction is statistically impossible) and algorithmic limitations (in terms of an approximate message passing algorithm) are obtained, which indicate a statistical-to-computational gap. The results are illustrated and compared with previous works through simulations. \n\n\n\n The analysis of statistical and computational limits and the resulting phase diagram in this paper is an interesting contribution to the study of high-dimensional Gaussian mixture models and subspace clustering. The regime that the paper focuses on (extensive sparsity: the fraction of non-zero components of the cluster means is fixed) is different from that in the previous literature (where the fraction of non-zero components of the cluster means goes to 0). Many of the theoretical results are adapted from existing analysis on statistical and computational limits of low-rank matrix factorization, although the idea that statistical and computational limits of Gaussian mixture model can be connected to low rank matrix factorization is new. 
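(Concretely, one common way to write this reduction -- with my own conventions, which may well differ from the paper's eq. (4) -- is as a spiked low-rank model $X = \sqrt{\lambda/n}\, U V^{\top} + Z$, where the rows of $U \in \{0,1\}^{n \times k}$ are one-hot cluster memberships, the columns of $V \in \mathbb{R}^{d \times k}$ are the sparse centroids, and $Z$ has i.i.d. standard Gaussian entries; recovering the clusters is then exactly a rank-$k$ matrix factorization task.)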
The ideas and results are clearly presented in the paper, and the phase diagram and simulation studies give good illustrations of the statistical-to-computational gap.\n I have the following comments on details:\n(1) Line 12 in the Abstract: \"require\" might be \"requires\"?\n(2) Line 36: it would be nice if a definition of \"s-sparse vectors\" can be given here: is it in terms of expectation or deterministic?\n(3) Line 53: the second \"is\" might be \"in\"?\n(4) Line 84: \"arise\" might be \"arises\"?\n(5) Line 138: \"allow\" might be \"allows\"?\n(6) Line 177: is it \"closest to\"?\n(7) Line 195: \"invextigate\" might be \"investigate\"?\n(8) Line 204: is it \"have access to\"? The limitations of the work are already discussed in the paper.", " This paper studies the performance of subspace clustering in a high-dimensional Gaussian mixture model. The cluster centers are assumed to be sparse. The problem is equivalent to one of low-rank matrix estimation. Using this connection, the information-theoretic limit for this problem is derived and compared with the performance of AMP (generally thought to be the best among poly-time algorithms for this problem). The results are specialized to the setting of sparsity ratio approaching one, and the statistical-computational gap is found to be of the order of $\sqrt{\rho}\log(1/\rho)$. The paper is generally well written and pleasant to read. Having exact, computable expressions for the asymptotic MMSE as well as the asymptotic overlap of AMP that are validated by numerical simulations is welcome. The scaling behavior in the large sparsity regime and the characterization of the stat-comp gap in this regime is nice.\n\nThe main weakness of the paper is novelty. As the authors acknowledge, all the results in the paper are obtained by evaluating known results for this particular model. In particular, the information-theoretic limits for low-rank matrix estimation (Theoretical result 1) have been derived in a series of papers including [2,3,4,14], and a perusal of the supplementary material did not reveal any technical challenges in evaluating the known formulas for the subspace clustering model studied here. Similarly, the AMP and the state evolution characterization of its performance (Theoretical result 2) are also well known from previous works. Comparing the IT limits with the AMP overlap just involves running the state evolution equations from two distinct initial conditions -- perfect initialization to obtain the IT limit, and uninformative initialization to obtain the AMP overlap. The behavior in the high sparsity regime is novel and a nice result, but the overall novelty of the paper still falls short. -- Could the authors explain how the subspace clustering problem is exactly equivalent to the Eq. (4)?\n\n-- Does the equivalence of subspace clustering to low-rank estimation hold even in the unbalanced case, i.e., when $p_c$ does not necessarily equal $1/k$? Analyzing this general case and exploring the effect of unbalancedness would be one way to increase the novelty of the contribution.\n\n-- Abstract and l.107: could you explain what subextensive means in this context? (is it related to sublinear?)\n\n-- Typos: l.85: \"analysis\" misspelled; l.128: unpractical --> impractical; l.137-147: I think the references to Eq. (38) should be to Eq. (9)? l. 162, 172 etc. : I think Algorithm 3 refers to Algorithm 1? 
l.195: \"investigate\" misspelled.\n\n\n Connections and references to prior work are good overall, but it would be nice to have at least a brief discussion of open questions and generalizations.", " The authors consider a clustering problem where the cluster means are sparse vectors and the noise is Gaussian. They prove the information-theoretic threshold in terms of the signal-to-noise ratio (SNR) under which the clustering is impossible statistically. They also analyze a performance of an approximate message passing (AMP) algorithm for the problem. The difference between the informatin-theoretic transition and the algorithmic transition (based on the AMP algorithm) is discussed. Strengths: The authors showed a statistical-to-computational gap for the large (but not very high) sparsity regime where the density of non-zero elements in the cluster mean is of order 1. They provide a detailed analysis between their results and the existing results for the very high sparse regime.\n\nWeaknesses: The originality is somewhat limited as main theoretical results are based on previous results. The writing is not entirely clear. I would like to see more about the originality and the significance of the work. In the current writing, it gives impression that the current work is merely an interpretation of existing results [3-5, 14]. The model should be explained in further detail. For example, what is the 's-sparse vector' in the first paragraph in page 2? Yes.", " This paper considers a subspace clustering problem as a low-rank matrix factorization problem. The performance of the Bayes-optimal estimator is characterized by the high-dimensional limit. Clearly, the computationally hard phase, easy phase, and impossible phase are provided via an information-theoretical threshold and an algorithmic threshold. Authors also show that the statistical-to-computational gap will grow with both the sparsity and the rank. Strengths:\n\n1. This paper provides an exact asymptotic characterization of the MMSE in the high dimensional limit. \n\n2. In the large sparsity regime, the authors uncover a large statistical-to-computational gap as the sparsity level grows and unveiled the existence of a computationally hard phase. \n\n3. It shows that the SNR threshold below which recovery is statistically impossible. \n\n4. The results in this paper solve an apparent contradiction due to the scaling assumption of the sparsity level with the dimension of the features. \n\n5. The findings of this paper are verified by algorithms for subspace clustering such as sparse principal component analysis and diagonal thresholding. \n\nWeaknesses\n\nThis paper only works on a low-rank matrix factorization problem with IID Gaussian noise matrices, which limits the applications of the results in this paper to more general right-unitarily-invariant noise matrices. 1. Please briefly explain the meaning of the indicator vector in Eq.(3).\n\n2. In the impossible phase, the Bayes-optimal MMSE is not better than a random guess. Does this statement hold only for the two-clusters problem? or general mixture case also applies.\n\n3. Can the results in this paper apply to more general right-unitarily-invariant noise matrices? Recently, there are some new AMP-type algorithms such as OAMP/VAMP, CAMP, and MAMP that solve more general signal recovery problems with right-unitarily-invariant matrices. The authors may consider extending their work to right-unitarily-invariant matrices.\n\n[OAMP] J. Ma and L. Ping, “Orthogonal AMP,” IEEE Access, vol. 
5, pp. 2020–2033, 2017.\n[VAMP] S. Rangan, P. Schniter, and A. Fletcher, “Vector approximate message passing,” IEEE Trans. Inf. Theory, vol. 65, no. 10, pp. 6664–6684, Oct. 2019.\n[CAMP] K. Takeuchi, “Bayes-optimal convolutional AMP,” IEEE Trans. Inf. Theory, vol. 67, no. 7, pp. 4405–4428, July 2021.\n[MAMP] L. Liu, S. Huang, and B. M. Kurkoski, “Memory AMP,” IEEE Trans. Inf. Theory, 2022, early access. The results in this paper are limited to IID Gaussian noise matrices. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4, 3 ]
[ "vABRFKQbDl", "tYfFZIryCrz", "37j7DYFdl9f", "phnG4XgHFD", "8Mo8n9Mp4S", "pq77t80jMx", "j1rU9DOj7y_", "ZoUlENfju0s", "qC8sIpmDbIw", "LSEjJ5hnRpc", "CZwr5R3TerX", "j1rU9DOj7y_", "sQfH4tGo9Ju", "3_2I4mesXQ", "ybPQRBgsj1gP", "ZKhLrqvsS10", "nips_2022_hYx-xr1wdo", "nips_2022_hYx-xr1wdo", "nips_2022_hYx-xr1wdo", "nips_2022_hYx-xr1wdo", "nips_2022_hYx-xr1wdo" ]
nips_2022_K-A4tDJ6HHf
Diagnosing failures of fairness transfer across distribution shift in real-world medical settings
Diagnosing and mitigating changes in model fairness under distribution shift is an important component of the safe deployment of machine learning in healthcare settings. Importantly, the success of any mitigation strategy strongly depends on the \textit{structure} of the shift. Despite this, there has been little discussion of how to empirically assess the structure of a distribution shift that one is encountering in practice. In this work, we adopt a causal framing to motivate conditional independence tests as a key tool for characterizing distribution shifts. Using our approach in two medical applications, we show that this knowledge can help diagnose failures of fairness transfer, including cases where real-world shifts are more complex than is often assumed in the literature. Based on these results, we discuss potential remedies at each step of the machine learning pipeline.
Accept
This is a compelling work characterizing some forms of model (non-)robustness to drift through a causal lens, with a focus on performance metrics including group-level fairness. The methodological novelty to the work is a method for discovering structure for that drift, then using that structure to (i) estimate impact on metrics and (ii) mitigate those impacts. That method requires a domain expert to provide a rough causal graph for the application at hand, which is a light negative; yet, the work then presents two in-depth studies in the healthcare space to argue that this is a surmountable requirement, at least in some settings. This is a tough space to operate in, and I appreciated this very deep dive into two "real" cases -- as did the reviewers.
train
[ "KYmbCxLOtV", "o0Eec0ncvWo", "VzRL_ZXp9j0", "upHtjwYKWnM", "cENEpiGuzp", "Jo9xzwSWTI", "YxB0Xbe6_Xs", "hY5uJ20kosG", "2UizF4ntcYQ", "20SqT_A3l1", "K0hpbJGM6y3", "_3600EdWMD", "q_qLAw8-LEN", "IpXQJg104Ul", "d7OZTaulT13", "ybyyAGn7bqO", "z9IsR-su8oy", "UOXdSJWjN7A5", "vVoAW6iUk8", "Kze6JYe6icM", "_Yn2Vaz-DmE" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi folks --\n\nThanks to the reviewers for their initial reviews, to the authors for a thoughtful rebuttal, and to both sides for their discussion in the meantime. Reviewers -- is there anything else that would be helpful to ask the authors before we move to our final deliberative phase? Please do get those final comments posted!\n\n-- AC", " Hi, thanks for following up!\n\nI appreciate your responses and the insight you've been able to provide. I'm happy to see that the paper has been improved, most importantly its clarity, as it has helped me to re-evaluate the work and view it more positively. I feel well informed ahead of the reviewer discussion.", " Dear Reviewer,\n\nThank you again for acknowleding our response and clarifying some of your concerns.\n\nWe hope we have addressed your comments in our point by point responses in each part. If so, we would appreciate it if you would consider revising your score or let us know if there are still remaining issues unaddressed so that we can respond to them during the discussion.\n\nThank you,\nThe authors", " We would like to thank the reviewer for engaging in the discussion and really appreciate their re-assessment of our work.\n\nWe hope that our further changes to the text answer your concerns (see below, in our other responses to specific points, and in the uploaded revised paper).\n\n**Sensitivity of algorithm 1**\n\nWe have amended the text in section 3 to be more explicit about the experiments performed in the Appendix:\n\n*\"We conduct three validation experiments for our testing procedure using data from our dermatology application in Sec. A.3.3. We first assess Type I error in an experiment where we test random splits of the same data against each other. Our test displays a false positive rate similar to our threshold of 5% for hypothesis testing. We however note that the variance of this result increases with smaller numbers of samples. We then confirm non-trivial power in an experiment where we introduce a shift by subsampling younger patients with a particular skin condition in the source dataset (a shift on $A$ and $Y$). Our testing approach correctly identifies the dimensions of $A$ and $Y$ that were affected by $S$ due to the engineered shift. We replicate this test for a shift in $Y$ only, with the same result.\"*\n\nWe have also modified the text in Appendix A.3.3 to be more explicit:\n\n*\"We assess the specificity and sensitivity of our testing procedure (see Algorithm 1) using engineered shifts on the dermatology data. To estimate Type I error, we compare the source data to itself, which should lead to all variables being independent of the environment. More specifically, we select $10,000$ random samples from the dataset to compute the weights, and another $1,000$ random samples to apply the weights on and perform the weighted test. These sets are boostrapped 100 times (separately for the source and target sets) and we assess the proportion of false positives that is obtained for each variable $U$. We expect that we will obtain ~5% of false positives, as defined by our hypothesis testing threshold. As we do not expect any relationship between \\{$\\mathbf V$\\} and $S$, we use a logistic regression to estimate $P(S = s \\mid \\mathbf{V})$.'*\n\n[...]\n\n*\"As the number of samples used to compute the weights is an important component of our testing approach, we repeat the process while varying the number of samples from 100 to 5,000. 
Figure 5(c) shows that the variance of the Type I error (here across conditions) decreases when the number of samples to fit the $P(S = s \mid \mathbf{V})$ classifier increases.\"*\n\n[...]\n\nFor sensitivity: *\"Therefore, the testing approach produces a causal graph that mirrors the graph used to engineer the shift, and our procedure is considered faithful.\"*\n\n**Design of cohorts**\n\nWe have added relevant details for the EHR experiments (see the corresponding response).\n\n**Related work**\n\nWe have added a sentence about Creager et al. (2020) in section 5 (fairness mitigation):\n\n*\"More similar to our settings, the work by \citet{Creager2020} addresses fairness in dynamical systems, although only for low-dimensionality variables.\"*\n\nSimilarly, we have added a sentence about the work referred to by Rev. aaBT in the \"shift detection\" subsection. We believe this section provides most of the context for the development of the tests.", " Thank you for these suggestions. We have amended the writing accordingly and have added details on the cohort selection:\n\n*\"We use the open access, de-identified Medical Information Mart for Intensive Care III (MIMIC-III) dataset \citep{Johnson2016-eu}, which consists of data from admissions to ICUs at the Beth Israel Deaconess Medical Center between 2001 and 2012. This clinical system has the benefit of having separate specialized ICUs and allows us to assess the generalizability of risk scores estimated in one clinical population to another. Based on clinical input, we consider the Medical ICU (MICU), Surgical ICU (SICU) and Trauma Surgical ICU (TSICU) to be generalist ICUs (source data); whereas the Cardiac Surgery Recovery Unit (CSRU) and Coronary Care Unit (CCU) are specialized ICUs (target data). This split leads to 17,641 patients included in the source dataset and 10,442 patients in the target dataset, after selecting adult patients with a length of stay of at least one day and with a recorded care unit. Our goal is to obtain a robustly fair model that predicts prolonged ICU stay (i.e. length of stay $>3$ days, as in \citep{Wang2019-ac,Pfohl2021-xn}) using data from the first 24 hours of the first ICU admission and a recurrent architecture \citep{Tomasev2019-tu,Tomasev2021-uf,Roy2021-rr}. The model is trained on 80% of the source data, tuned using a separate split of 10% and tested on the remaining 10%. See the Supplement for further details.\"*\n\nRegarding the confounding \"reason for visit\": we agree with the reviewer and this is stated in the results of the EHR section. The IPW potentially (partly) corrects for this based on comorbidities and treatments. Our results suggest that there are no significant differences in length of stay after correcting for these factors, although the high-dimensionality of the correction can lead to under-powered results (see the Supplement for a discussion of these specific results). We have added this explicitly below the section above:\n\n*\"Notably, the model does not have access to the `reason for visit'.\"*", " Dear Reviewer,\n\nThank you for clarifying your comments and making suggestions.\n\n**Use of the testing approach**\n\nWe have now clarified the writing with respect to the tests, highlighting that these are used throughout the experiments. 
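For concreteness, the per-variable check that Algorithm 1 performs can be emulated in a few lines. The sketch below runs the null case in which the source is compared against itself, so rejections should stay near the 5% level; the synthetic data, logistic-regression weights and crude z-statistic are simplifications for illustration, not our exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_boot, alpha = 2000, 100, 0.05
false_pos = 0
for _ in range(n_boot):
    V = rng.standard_normal((n, 5))   # conditioning set (parents of U)
    U = rng.standard_normal(n)        # variable under test
    S = rng.integers(0, 2, size=n)    # environment labels: source vs. itself
    p = LogisticRegression(max_iter=1000).fit(V, S).predict_proba(V)[:, 1]
    w = np.where(S == 1, 1 / p.clip(1e-3, 1), 1 / (1 - p).clip(1e-3, 1))
    m1 = np.average(U[S == 1], weights=w[S == 1])
    m0 = np.average(U[S == 0], weights=w[S == 0])
    se = U.std() * np.sqrt(1 / (S == 1).sum() + 1 / (S == 0).sum())
    false_pos += abs((m1 - m0) / se) > 1.96  # ~5%-level two-sided z-test
print(f"empirical Type I error: {false_pos / n_boot:.2f} (target ~{alpha})")
```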
For instance, the amended text now reads:\n\nSection 4.1: To understand the failure of fairness transfer in this context, we investigate the structure of the shift *by referring to Algorithm 1.*\n\nSection 4.2: We now test whether the shift $S$ has direct effects on the variables in this problem *using our proposed approach.*\n\nWe believe that our figures 2(a) and 3(a) serve the purpose of illustrating the results of our testing approach, as we add (colored) links to the causal graphs based on the results of each test (as described in the text). We have amended the caption of these figures to be more explicit:\n\n*\"Colored arrows represent statistical dependence as identified by Algorithm 1.\"*\n\n**fairness transfer framing**\n\nThank you for clarifying your comment; we now understand how this can suggest a \"data only\" framing of the issue. In the text, \"problematic\" means that the variables are affected by the shift in such a way that fairness transfer cannot be guaranteed. As we mention in the discussion, not identifying an issue is however not a guarantee that fairness will transfer. We have clarified this statement and added a (brief due to space constraints) discussion of the broader issues:\n\nline 28-30: ...*identify aspects of the data that hinder the transfer of fairness properties*\n\ndiscussion (algorithm development subsection): *\"We however note that our work does not assess the impact of model architecture or different training strategies, which might lead to compounded bias and/or poor generalization.\"*\n\nWe acknowledge unobserved variables in their general sense in the discussion, based on the suggestion of Rev. aaBT.", " Clarifying and adding additional detail about the cohort selection, experiment design and underlying motivations will greatly improve the paper in this section. It is critical that this is included in the main body of the paper. I apologize for misunderstanding the [noa] reference. Perhaps greater clarity about where that citation fits into the discussion could help? This could be done by amending that sentence to say, \"As reported by the Critical Care Statistics [noa], over half of the hospitals in the US...\".\n\nThis would then set up a much clearer introduction to the use of MIMIC in this paper. Something along the lines of: \"However, BIDMC is unique in that it has several ICU departments. This enables an exploration into the possible dangers of model performance under distribution shift allowing us to investigate the effects of training models on clinical subpopulations and applying them to others.\" --> More transparency about the design and motivation of these experiments is the best choice here. \n\nAlso, while the model doesn't have access to \"reason for visit\", it is a clear confounder of the health presentation of patients. Elective cardiac surgery patients are typically in better health, skew slightly younger (and male), and have shorter stays. Other underlying conditions such as hypotension are more prevalent in surgical ICUs due to blood loss incurred during the procedures. Without correcting for this bias, possible comparisons in model performance may not be valid. It is possible that the IPW component of the proposed statistical test helps account for this; it is, however, an outlying consideration that goes unmentioned in the cohort selection of this part of the empirical results. ", " Thanks for the detailed response. It has clarified my thinking underlying each of my comments. \n\nQuickly re: Nestor, et al... 
I wanted to quickly mention that I drew favorable connections between the cross-sectional analyses presented in each work in the presence of distribution shift. The intention of the analyses differs between the works (not in any way to diminish the expected contributions of this paper), but both could fall under the same model-generalization-under-transfer umbrella, if you will.", " Thanks for addressing the concerns about omitted literature. It has helped me better frame the contributions of this work.\n\nI understand that there is repeated use of the statistical test throughout the paper. My comment here derives from how the paper is written: at times (due to the heavy reliance on the appendix to demonstrate the use of Alg. 1), the statistical test, which is claimed to be the major contribution of this work alongside the causal framing, seems to be an afterthought. I recommend that more explicit references and clearer organization (perhaps through a table comparing results, elucidating comparisons with bolded indications of discovered factors to fairness properties of the models failing to be maintained across the distribution shift) would help improve this aspect of the paper.\n\n**Re: mirroring Chen, et al (2021):** In my estimation, the categorization used as well as some of the recommendations are directly derived from the original paper. The proposed adjustment to the language preceding these recommendations is great. I believe that this type of attribution is more appropriate than what is currently written in the paper, where a reader may be inclined to assume that this categorization is wholly original to this work.\n\n**Re: \"fairness transfer\" framing:** From the very first section (lines 28-30) there are mentions of \"problematic aspects of the data\". As stated in my review, the challenges come through possible insufficiencies in the source or target data to support model generalization, especially when considering cross-sectional performance as analyzed in this work. I think more concrete discussion of the interplay between training and the properties ingrained in the model as a response to characteristics/limitations of the underlying data (possibly acknowledging unobserved confounding) would be appreciated.", " **General Response**\n\nThank you for your patience with my laundry list of considerations, some of which derived from an incomplete understanding of the work. I admit that my comment on the ability of the authors to improve the paper sufficiently for me to raise my score was misguided and incorrect. I apologize for the rash response. There are, however, several issues with the writing and clarity that should be addressed before I would be confident in recommending the paper for publication. I will try to highlight these comments in-line with the responses given by the authors.\n\nI evaluated the paper _as submitted_; the extensive justifications of unclear or missing information with reference to the Appendix or Supplementary material are helpful. I do however contend that the information surrounding the content, intent, and purpose of each portion of the referenced sections in the appendix is not always clear in the main paper. 
This left the impression of the paper being incomplete.\n\nMany of the recommendations/comments I left were made in an attempt to guide the improvement of the paper, despite my general dissatisfaction with the concept of \"fairness transfer\" under distribution shift as defined in this paper (it is not a tangible component of the model and is rather a byproduct of training + data limitations). It is important to be precise and clear about what the sources of the challenges are and what avenues for solution are being explored. Shorthand can be helpful if clearly and explicitly defined. This paper, as submitted, struggles to clearly outline the major contributions (as mentioned previously and graciously addressed by the authors) and mostly uses shorthand to discuss the general challenges and solution approaches at a high level, which hinders full understanding of the specifics of the proposed statistical model. In places, this could be remedied by a simple, single-sentence explanation (specific examples given in-line with the comments and responses above).\n\n\n**Re: Creager, et al (2020) and other related work**\nThanks for your explanation. It might be helpful to include this level of detail when discussing prior approaches to help isolate the contributions of the paper. In total, I find the related work discussions anchoring the conceptual framing of the paper to be uninformative. Understandably, the focus in Section 5 is on the mitigation strategies and performance comparisons. However, it is difficult to make comparisons as the performance metrics are not easily accessed in the paper. Perhaps summarizing the most relevant comparisons in a table would be helpful?\n\n**Sensitivity of Alg. 1**\nThis insight is not apparent from Section A.3. The metrics and procedures used to evaluate the proposed approach are not clearly outlined. I presume that the authors are referring to the analysis underlying Figure 3c. This specific experiment is not sufficiently introduced or outlined. It would be helpful to introduce the procedures in Section A.3.3 more directly (e.g. \"We investigate the effect of domain shift on the observed R.V.s using the proposed statistical test. We evaluate using the proportion of false positives, which indicate that..... We also investigate the effect of the number of samples when estimating the domain of the dataset, a critical component of our proposed statistical test...\"). As someone who is not entirely familiar with the material or type of statistical tests performed in this paper, taking a little more care in discussing what I'm expecting to look at would go a long way in improving the overall clarity of the paper.\n\n**Design of cohorts**\nThe paper, as submitted, is devoid of any information that discusses or explains this. _This is a critical omission._ Without full transparency about the motivations underlying the design of the experiments and the supporting cohorts (including the clinical guidance that directed these efforts), it is very easy to question the intention of the experiments. A summary of this information *needs* to be included in the main body of the paper.", " We thank the reviewers for their comments and suggestions. 
We provide here a summary of our response to the reviewers and of the changes made in the manuscript:\n\n- We have added 2 references that were suggested by reviewers aaBt and 4fVY and have in particular discussed the recent work of Singh et al., 2022 that is directly relevant.\n- We have modified the notation and added details when this led to misunderstandings.\n- We have added to our discussion of the limitations of the method in terms of handling multiple environments or with regards to unobserved demographic variables (Reviewer aaBt).\n- We have clarified that our work includes multiple experiments to validate our proposed testing approach. Due to space constraints, these have been included in the Appendix and detailed results are presented in the Supplementary. (Reviewer 4fVY)\n- We have clarified that all our experiments were designed with input from clinicians, that our models reflect state-of-the-art performance and that all details of the modeling can be found in the Supplementary Materials. (Reviewer 4fVY)\n- We have added another synthetic experiment as suggested by Reviewer Ezbv (Appendix).\n\nWe hope this addresses your concerns and clarifies any misunderstandings. We would be happy to make further changes or respond to more questions if any.\n", " We thank the reviewer for their positive comments and for their suggestions. We respond below, with the changes made to the text.\n\n- **Line 101 -- do Rabanser [53] use bootstrapping techniques to do dimensionality reduction?**\n\nThe bootstrapping only affects the statistical testing, not the dimensionality reduction. This means that dimensionality reduction is performed independently, and that the summaries are then fed into the statistical testing that involves bootstrapping. We have amended the text to make this clearer:\n\nCaption of Algorithm 1:\n*“Algorithm 1: (Conditional) independence testing assessing the nature of shift S on a single variable U ∈ G. U represents the feature values or its summary if high-dimensional.”*\n\nQuestions\n--------------\n\n- **A synthetic experiment can be switching the labels of the test set. In this way one can have a pure concept shift, and if done only with protected attribute subgroups it can affect fairness. Would the method proposed in this paper diagnose this type of shift?**\n\nThank you for suggesting this experiment. Using the dermatology source data, we have randomly shuffled the soft label across the 27 conditions, for each case. This leads to a change in the prevalence of the conditions, not conditioned on $A$. While there are no changes in P(A) due to S (by design), we observe a significant difference between $P(Y | A, S=0)$ and $P(Y | A, S=1)$ for 19 conditions out of 27 (Bonferroni corrected). Thanks to the correction, and given that the images were not modified, there are no significant differences between $P(X | Y, A, S=0)$ and $P(X | Y, A, S=1)$.\n\nWe have added this result in the Appendix:\n*“Finally, we randomly shuffled the labels $y_k$ across the 27 conditions, for each case independently. At the population level, this results in a change in $P(Y)$. As expected, there was no effect of $S$ on $A$. For $Y$, we observed significant differences in $P(Y \\mid A, S=0)$ and $P(Y \\mid A, S=1)$ for 19 conditions out of 27 (Bonferroni corrected). 
Thanks to the correction, and given that the images were not modified, there are no significant differences between $P(X | Y, A, S=0)$ and $P(X | Y, A, S=1)$.”*\n\n- **A strategy used to post-process the predictions to enforce a fairness metric is Alabdulmohsin [4]; the authors observe that fairness mitigation on source data leads to no improvement or worsening on test data. I wonder if simpler methods such as [https://www.nature.com/articles/s42256-021-00396-x] have the same results.**\n\nThank you for this comment. In this work, we have selected a method that comes with a technical justification and guarantees a solution that enforces the desired fairness criterion (tested with demographic parity and equalized odds for EHR). It is possible that different fairness mitigation strategies lead to different fairness transfers (line 336). However, if a strategy does not amend the BCN in a way that guarantees fairness transfer (as specified in our analysis in Appendix A.4), the specific method used for fairness mitigation would not change that result (line 337). ", " Limitations\n-----------------\n\n- **Some other concern is the lack of complete data**\n\nThank you for this comment. We touched upon this phenomenon at line 322 regarding missing unobserved factors across environments, and at line 324 around missingness patterns. We have added the following to line 322:\n*“We are also limited by which factors are observed across all environments, and cannot assess the changes in unobserved variables (e.g. social determinants of health).”*\n\n- **One more concern is when considering multiple sources or multiple target environments.**\n\nThank you for this comment. Indeed, our approach refers to a “source” domain and one target domain. While we can perform pairwise comparisons between the source and every possible target, extending the method to a general form of $S$ would be desirable. This could be considered by using permutation weights as described in [Arbour et al.]. We have added this limitation as “future work”:\n\n*“In addition, extending the testing strategy to compare multiple (discrete or continuous) versions of the environment $S$ would be valuable.”*
This method can only be applied to low-dimensionality datasets.”*\n\n- **\"In the dermatology task, what is the dimension of X considered? How would the dimensionality affect the results?\"**\n\nThe dermatology images are resized to 448 × 448 pixels. We have added this information in the Supplement.\n\nIn both datasets, the dimensionality of X is large. There are two potential effects of large dimensionality: (1) to build a reliable summary of these features to perform the test on, and (2) to use such features to derive the weights based on P(S|X) if needed. For (1), Rabanser et al. investigate different summary strategies. As we mention on line 696, our approach does not impose this choice. The goal is therefore to obtain a reliable summary. In both the EHR and dermatology work, we refer to X → Y models and obtain models that perform well on a separate source test set (see Supplement). However, the summary could become “dominated” by specific dimensions. For (2), we have found that using more features led to higher performance in discriminating between environments (e.g. for EHR: using {A,M}, the performance is 76.9% while using {A,M,T} leads to 87%). However, model performance cannot assess whether the correct / all relationships between features and environments are captured. Therefore, if the variance is high, the test might be underpowered across bootstraps. This is a possible explanation for the EHR tests on X and Y conditioned on >5,000 treatments.\n\nWe have detailed our statement in the Supplement:\n*“We however note that our analysis is limited by the large dimensionality of $\\{A, M, T\\}$ to build the weights. Indeed, the gradient boosted tree model $P(S | A, M, T)$ can then rely on many different signals to build its predictions (D’Amour et al., 2020) and display high variance across bootstraps. This would result in an under-powered test.”*\n\n- **\"In figure 2d, does the X-axis represent the plots for the source and target?\"**\n\nApologies for this confusion: every case can have minimum 1 to maximum 6 images (variable). All images for a case are embedded and their average is taken. Hence, the number of images per case likely affects the signal-to-noise ratio. This plot shows that on average, this number is different across datasets. We have modified the legend to make this clearer:\n*“(d) Distribution of the number of images in cases labeled as `SK/ISK' in older females. Each case includes min. 1, max. 6 images whose embeddings are then averaged.”*\n\n- **Reference missing on line 174**\n\nThank you. This reference is a website. For clarity we have used a footnote instead (noa means no authors).\n\n- **Check if the references for path-specific effects (line 329) are appropriate. [2] is related to this.**\n\nThank you for this. There seemed to be a compilation mishap as we did mean to cite Chiappa and [2].", " Questions\n----------------\n- **“Unfortunately, I am not sure whether the authors will be able to sufficiently address the major concerns that I raised in my review above that would allow me to raise my score without re-submission. There are core issues to the framing and narrative of the paper necessitating a full revision as well as some additional experimental analysis that would need to be re-reviewed before acceptance for publication. “**\n\nWe have not identified any suggestions for modifications of the narrative, nor suggestions of additional experiments that have not already been performed (see Appendix and Supplement). 
Therefore, we kindly ask the reviewer to revise their assessment of our work.\n\n- **“In the authors’ view, what are the major intended contributions of this paper (see first major points in the section above)?”**\n\nAs described in our point by point response, our main contribution is the testing approach.\n\n- **“How does this paper differentiate from Creager, et al (2020 ICML)? Is it only in the use of real-world medical datasets and the corresponding simple prediction tasks?”**\n\nThe setup in Creager et al., 2020 is quite different from ours. First, the causal models are different, with Creager et al. 2020 investigating feedback loops in a system that might affect not only the demographic distribution, but also the decision-making process, i.e. X → Y. Shifts are also observed over multiple time points, rather than seeing a new dataset without adapting the model. Learning a policy is also quite different from predicting an output, leading to different mitigation strategies. Last but not least, our datasets represent real-world applications and include high-dimensional features, which the method proposed in Creager et al., 2020 cannot handle.\n\n- **“How sensitive is Algorithm 1 to the quality/accuracy of the models used internal to the statistical test?”**\n\nThis is a valid technical point that we discuss (line 316). See Appendix A.3 for an investigation of the effect of the number of samples on test Type I error.\n\n- **“What efforts were taken in the construction of the experimental subpopulations to create a fair comparison of model performance?”**\n\nFor all experiments, we consulted with clinicians and domain experts. We used state of the art model architectures. Our results are described in detail in the Supplement, highlighting the good performance of the model on the source, but also on the target populations.\n\nLimitations\n---------------\n\n- **\"However, it is unclear what kinds of assumptions and limitations that exist for the development and execution of the statistical tests in Alg. 1. I can guess that the data needs to admit some form of conditional independence estimation (which might be a fairly strong assumption on its own) as well as be able to adequately estimate the dataset source (predicting).\"**\n\nThe limitation of the method is mostly that we need a causal graph of the application. This is clearly stated in the discussion (line 326).", " - **It's unclear what the training/testing procedure is to assess model fairness before and after the dataset shift. Fairness properties of a model depend significantly on the training mechanism of the model. The absence of any discussion along these lines is concerning.**\n\nThese are briefly mentioned in the experiments, including the numbers of test cases in the source dataset. All training and validation parameters are described in detail in the Supplementary Materials, including hyper-parameters, and the use of a separate validation set. We note however that the overall generalization of the models from source to target is satisfactory, with the EHR model even performing better on the target than on the source. Therefore, the observed fairness gaps are not due to model overfitting, but to changes in the data distribution that directly affect fairness.\n\n- **In the EHR experiments, only 5 comorbidities were accounted for in the target population. How were these chosen? Why only those 5?**\n\nThis is a misunderstanding. We consider 30 comorbidities, as displayed in Figure 3(c) and detailed in Appendix. 
We obtain statistical significance for 5 comorbidities out of the 30 when comparing the source and the target. This was clarified in the text.\n\nEHR experiments\n-----------------------------\n- **“It is stated that many hospitals do not have specialized ICU departments. This assertion is not supported by citation and seems speculative and misleading.”**\n\nThis is not speculative, and we directly refer to the Critical Care Statistics (https://www.sccm.org/Communications/Critical-Care-Statistics , line 172). The Beth Israel medical system (where the MIMIC dataset was recorded) is a special case and actually allows us to investigate the effects of training models on clinical subpopulations and applying them to others.\n\n- **“The poor model performance between the two subsets of data is not at all surprising since the subsets of the data cover very disparate patient populations.”**\n\nThe model performs on par with the literature (performance of 78.6%, reported in Supplement). More importantly, the overall model performance is similar, if not higher, for the target population than for the source. This means that the model is generalizable between the clinical subpopulations if performance is the metric considered. This statement is hence not correct.\n\n- **“The design of these experiments is not very intellectually honest without proper description or justification. [...] it appears that the authors are trying to construct a dramatic example to garner more significance. [...] there are some significant concerns about the intentions of the authors in using these experiments.”**\n\nWe strongly disagree with those comments. First, this experiment was designed based on input from clinicians. Second, we clearly mention the hypothesis we test (line 172), and how this is not a clinically surprising result (line 217). As mentioned in the text, the model does not have access to the ‘reason for visit’ for training, which is typical for such EHR models. As mentioned above, the model performs well on both populations overall and the difference is only in how the distribution shift affects fairness. Therefore, this comparison is not an “extreme case” or dishonest. We use this example to highlight how fairness mitigation in the source does not guarantee the transfer of fairness in the target (section 5).", " - **“ I really enjoyed the attempts this paper makes to 1) identify the challenges to maintaining model fairness that arise due to distribution shift (re-affirming Nestor, et al 2019)”**\n\nNestor et al., 2019 highlight issues of robustness with EHR models, not of fairness transfer. Appendix F describes subpopulation stratification results but does not assess the fairness of the models before and after the EHR system change, and does not guarantee the model fairness before the change.\n\nThe \"diagnosis\" of the distribution shifts in the real-world datasets is underwhelming\n----------------------------------------------------------------------------------------------------------------\n\n- **“It was surprising that after introducing Algorithm 1, it wouldn't be used as readily for testing the independence of each variable in the dataset.”**\n\nWe have used Algorithm 1 for every statistical comparison, apart from the marginal comparisons of P(A|S=0) and P(A|S=1). This is clearly described in the results. Due to space constraints, the details of the tests are reported in the Supplementary Materials. 
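For intuition, each such marginal comparison reduces to a standard contingency-table test; a minimal sketch with purely hypothetical counts (not our data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of a demographic attribute A across environments S.
counts = np.array([
    [520, 310, 170],  # S = 0 (source): counts per category of A
    [480, 350, 200],  # S = 1 (target)
])
chi2, p_value, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.3g}")
```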
We report modeling accuracy for $P(S| X_i)$, as well as describe the construction of the summaries for high-dimensionality features.\n\n- **“ It largely feels that the analysis of distribution shifts could be done by estimating the simple disease prevalence among subgroups or otherwise measuring the homogeneity of the dataset using something like optimal transport dataset distance”**\n\nMeasuring disease prevalence amongst subgroups is not possible for EHR as we need to condition on multiple variables beforehand. For dermatology, this is essentially what the test does, but it is a more generalizable approach as it can deal with multiple demographic attributes without reducing the power of the test (while you would be constrained by the number of cases in an intersection for every condition). Based on the causal graph, the conditioning needs to be performed (which we do with IPW). The statistical test could be different though, as we clearly mention, and as explored in Rabanser et al. This does not decrease the value of the proposed approach.\n\nThe procedure and execution of the proposed statistical test is unclear\n-----------------------------------------------------------------------------------------------\n\n- **“The procedures in setting up and evaluating these models are not made clear...”**\n\nWe describe our testing strategy in detail in Appendix (A3) and in the Supplementary Materials for the medical applications. As mentioned in section 3 (line 103), Appendix A3.3 extensively investigates the tests, assessing their Type I error under different sample conditions.\n\n- **“Additionally, it's unclear whether Algorithm 1 is used on each variable in the BCN individually or all at once.”**\n\nAlgorithm 1 is used on each variable separately, as described on line 93 (“For each variable U in the graph”) and in the table caption (“Independence and conditional independence testing to assess the nature of a shift S on a variable U ∈ G”). We have added a confirmation in the caption of algorithm 1:\n\n“Algorithm 1: (Conditional) independence testing assessing the nature of shift S on a *single* variable U ∈ G.”\n\n- **“Beyond this, it is difficult to adequately gauge the validity or helpfulness of the proposed statistical tool...”**\n\nWe briefly describe our validation strategy in section 3 (line 103), which points to synthetic experiments where we have performed this test and show that not only does our testing strategy perform well, but our causal model assumptions also seem to align with the data.\n\nAdditional points where the clarity of the paper could be improved\n-----------------------------------------------------------------------------------------\n\n- **The use of notation in Algorithm 1 is terse and not introduced anywhere. It's difficult to follow and understand. This is made harder by the overloaded use of 'S' for both the sample from the datasets and the source/target indicator.**\n\nWe have amended the notation for clarity.\n\n- **It would be instructive to outline the BCN basic structure or concepts using a figure early on. 
This could be particularly useful in the design and execution of synthetic examples where the causal graph is modified accordingly.**\n\nBCNs are described in Appendix A.2.\n", " We thank the reviewer for their comments. We hope that our point-by-point response clarifies our intent and results.\n\nContributions\n-------------------\n- **“The causal framing of datasets and the effects of environment (e.g. source and target dataset) has been established previously” [...] “Perhaps the extension of these lines of previous inquiry to real-world medical datasets is the major contribution? This is a noteworthy effort that is to be commended but the tenor of the findings only serves to reinforce well-understood complexities with medical data (e.g. the type of distribution shift is compound and possibly too complex to adequately or feasibly mitigate).”**\n\nWe do not claim to be the first to point out issues with fairness transfer. Line 16 clearly mentions this, citing example papers in the field. However, the implications for the field of medical AI are not well understood by this community. This is further illustrated by the recent publication of Singh et al. (April, 2022) pointed out by reviewer aaBt. Therefore, showing these results on medical applications is important for the field of medical AI.\n\nRegarding references: we aim to illustrate our points with example citations in the field from diverse sources and do not claim to be exhaustive. In the references pointed out by the reviewer, Madras et al., 2019 addresses the issue of fairness through causal modeling, but does not address its intersection with distribution shift. Creager et al., 2020 discusses fair policy learning in dynamical systems, which is closer to the topic of robustly fair models. We have added Creager et al., 2020, and discuss below its differences with our work.\n\n- **“The statistical test has some merit as a potential contribution but is not presented convincingly as such, nor is it used to any extensive effect in this paper.”**\n\nThe tests are not a side point but the main contribution of our work. Their description, usage and implications are made explicit throughout our manuscript (lines 28, 89, 132-167, 192-221, section 5). We report the full methods and validation in the Appendix given space constraints. Most of our results, analysis of related work and discussion reflect the impact these tests can have.\n\n- **“Another issue along the lines of originality here is that the ML-pipeline remedies (lines 293-315) to address fairness challenges (among other issues raised by distribution shift) are almost entirely pulled from Chen, et al (2021) without proper attribution. At first read this section seemed like an odd departure from the narrative and form of the preceding sections of the paper. After re-reading the paper a second time, it comes across as a mea culpa trying to mask technical deficiencies in the work by redirecting the focus away from the results.”**\n\nWe strongly disagree with the reviewer’s interpretation of this section. First, we cite Chen et al. and did not “pull” conclusions from their paper. Second, points 2-5 refer directly to the issue of fairness transfer, reframing suggestions in Chen et al. under this focus, but also including references to our tests. This section is not a mea culpa for deficiencies in our work, but rather aims to broaden the discussion outside of purely technical approaches to fairness transfer issues. 
This is actually going beyond what most technical works discuss, bringing more ethical considerations along the ML pipeline. We believe this is an important addition to the discussion.\n\nTo make attribution clearer, we have modified our text:\n“Our work highlights real-world challenges that prevent the development of robustly fair models. Given that each task leads to different complexities in terms of causal graph, expected shifts, and fairness requirements, these findings support looking beyond purely algorithmic solutions. *Considering the whole ML pipeline, we can re-interpret the remedies discussed in Chen et al. (2021) for fairness transfer: [...]* ” \n\nThe framing of \"fairness transfer\" is odd\n------------------------------------------------------\n- **“While the fairness of trained models is an important characteristic to monitor and assess, it strikes me as an odd framing to place model performance as focused on \"problems\" with the data.”**\n\nWe are not claiming this is a data issue, rather that one or multiple datasets used as source cannot capture the whole data generation process and are hence susceptible to shifts. In the same way, one cannot claim that issues arise from models only. It would be helpful if the reviewer could point out which parts of the text refer to fairness transfer as a data issue, and/or make suggestions for improvement as it is unclear what modifications are requested.", " This paper establishes a causal framing to characterize the effects of distribution shift on a trained model's fairness. A statistical test building from various factors of conditional independence, enabled through the assumed causal structure, is used to understand what factors of a dataset are unduly affected by the distribution shift. This knowledge illuminates the possible reasons a model's fairness qualities are not maintained after transfer which in turn motivate possible mitigation strategies. The proposed statistical test is then used to analyze model performance on two separate real-world datasets.\n\n# UPDATE: After discussion period\n\nI'm grateful to the authors for their patience to address my misunderstandings and willingness to improve the writing of the paper. After re-assessing the paper, I am happy to improve my score and recommend acceptance of this paper. It's clear that a lot of work has been done in pursuing the research underlying this paper. There are thorough empirical investigations as well as substantial scholarship to properly anchor the work. The statistical test, built around predicting the binary indicator source of the dataset, appears to be a helpful tool. It unfortunately isn't given enough space to be sufficiently introduced or analyzed. I will try, to the best of my ability, to outline where I feel that the paper is strong and provide my view on its weaknesses with some recommendations for improvement along the lines recommended in this form (\"originality, quality, clarity, and significance\"). All recommended literature beyond those referenced in this paper will be included at the bottom of this section.\n\n**To be up front, it is unclear what the intended contributions of this work are meant to be.** The causal framing of datasets and the effects of environment (e.g. source and target dataset) has been established previously (see Adragna, et al '20; Madras, et al 2019; Creager, et al '20, '21, _some of these citations were missing surprisingly_). 
The statistical test has some merit as a potential contribution but is not presented convincingly as such, nor is it used to any extensive effect in this paper. Finally, the mitigation approaches outlined in the Appendix are all established in the literature and are discussed, _even in this paper_, to not be realistic or particularly helpful. In this regard, it is unclear how to assess the paper's originality. Perhaps the extension of these lines of previous inquiry to real-world medical datasets is the major contribution? This is a noteworthy effort that is to be commended but the tenor of the findings only serves to reinforce well understood complexities with medical data (e.g. the type of distribution shift is compound and possibly too complex to adequately or feasibly mitigate).\n\nAnother issue along the lines of originality here is that the ML-pipeline remedies (lines 293-315) to address fairness challenges (among other issues raised by distribution shift) are almost entirely pulled from Chen, et al (2021) without proper attribution. At first read this section seemed like an odd departure from the narrative and form of the preceding sections of the paper. After re-reading the paper a second time, it comes across as a mea culpa trying to mask technical deficiencies in the work by redirecting the focus away from the results.\n\n**The framing of \"fairness transfer\" is odd**. While the fairness of trained models is an important characteristic to monitor and assess, it strikes me as an odd framing to place model performance as focused on \"problems\" with the data. The issues at hand are artifacts of the model training and its generalization capacity. Issues of fairness, while being spurred by characteristics of the data (especially under transfer), largely arise through the model overfitting to \"degenerate\" or local modes in the global data distribution. Placing this entire blame on the data doesn't seem appropriate. Perhaps, it might be better to frame this challenge as ways the datasets are insufficient to ensure equitable model performance across distribution shift? I acknowledge that the term \"fairness transfer\" is present in the literature, which we can't control. But, there is some improvement possible to the way this literature frames and addresses these problems. I really enjoyed the attempts this paper makes to 1) identify the challenges to maintaining model fairness that arise due to distribution shift (re-affirming Nestor, et al 2019), 2) evaluate the effects of distribution shift on cross-sectional model performance, 3) propose possible mitigation strategies and 4) recommend possible mediation strategies to the ML pipeline when facing these types of challenges. I just wish that the narrative and structure used in this paper made the actual challenges more apparent without resorting to shorthand that isn't wholly indicative.\n\n**The \"diagnosis\" of the distribution shifts in the real-world datasets is underwhelming.** At worst, the analysis described in the paper can be viewed as elementary data exploration. It was surprising that, after introducing Algorithm 1, it wouldn't be used as readily for testing the independence of each variable in the dataset. Simple distributional analyses of the data and its subgroups seem to dominate the assertion (and pretty well understood fact a priori) that the type of shift is compound. The mention of the statistical tests is a footnote and seems to be largely kept quite vague. 
I suppose that I was hoping for a bit more tangible evidence whether through tables or charts about how the statistical tests were used and how they were assessed to be meaningful or not (especially since they rely on prediction models internally, how accurate or useful were those by the way?!). Parenthetically stating that something was significant or not based on the p-value doesn't necessarily provide enough information about the validity of the underlying statistical test. It largely feels that the analysis of distribution shifts could be done by estimating the simple disease prevalence among subgroups or otherwise measuring the homogeneity of the dataset using something like optimal transport dataset distance (OTDD, Alvarez-Melis and Fusi, 2020). \n\n**The procedure and execution of the proposed statistical test is unclear.** \nAs far as I understand, the statistical test proposed in Algorithm 1 is built around learning predictive models of the data source, conditioned on the causal parents of the dataset. The procedures in setting up and evaluating these models are not made clear, nor are the underlying effects of possibly incorrect models on the overall ability of the statistical test to adequately reject the null hypothesis. As the core technical contribution (as far as I can tell), it is surprising that the mechanisms and analysis of this algorithm are not featured in any substantive manner. Additionally, it's unclear whether Algorithm 1 is used on each variable in the BCN individually or all at once. \n\nBeyond this, it is difficult to adequately gauge the validity or helpfulness of the proposed statistical tool with the datasets used in the paper. Here, the design of synthetic examples where a subset of the variables are affected by the shift (rather than *all* of them) would be very insightful. It would be helpful to see use cases where the proposed test does work well and as designed/intended. Right now, going from idea to implementation in complex settings, the presumed impact of the approach is significantly lacking.\n\nAdditional points where the clarity of the paper could be improved:\n- The use of notation in Algorithm 1 is terse and not introduced anywhere. It's difficult to follow and understand. This is made harder by compounding the use of 'S' to be both the sample from the datasets and the source/target indicator.\n- It would be instructive to outline the BCN basic structure or concepts using a figure early on. This could be particularly useful in the design and execution of synthetic examples where the causal graph is modified accordingly.\n- It's unclear what the training/testing procedure is to assess model fairness before and after the dataset shift. Fairness properties of a model depend significantly on the training mechanism of the model. The absence of any discussion along these lines is concerning.\n- In the EHR experiments, only 5 comorbidities were accounted for in the target population. How were these chosen? Why only those 5?\n\n\n**Re: EHR experiments** It is stated that many hospitals do not have specialized ICU departments. This assertion is not supported by citation and seems speculative and misleading. This is made more apparent when the experiment utilizes MIMIC data that is derived from a hospital that *does* have specialized ICU departments. The poor model performance between the two subsets of data is not at all surprising since the subsets of the data cover very disparate patient populations. 
The design of these experiments is not very intellectually honest without proper description or justification. It can be understood if the subpopulations were designed to be an apparent and extreme example of distribution shift, but without stating this, it appears that the authors are trying to construct a dramatic example to garner more significance. The conclusions made in lines 216-221 are exceedingly obvious. Of course the type of ICU a patient is admitted to affects the underlying performance of a model trained on a separate population. There are basic factors due to underlying health conditions, whether the procedure was elective or urgent, hospital procedures, etc. that confound the presentation of data for use in predictive settings. By transferring between subpopulations designed to be as far apart as possible along some of these confounding dimensions (especially when these things are well understood within MIMIC--CSICU vs. MICU for example), there are some significant concerns about the intentions of the authors in using these experiments. Again, if this was clearly set out and justified from the outset it would be more acceptable (yet, still inappropriate) that these empirical claims were made in this paper.\n\n#### _**Recommended Literature**_\n\nMadras, D., Creager, E., Pitassi, T., & Zemel, R. (2019, January). Fairness through causal awareness: Learning causal latent-variable models for biased data. In Proceedings of the conference on fairness, accountability, and transparency (pp. 349-358).\n\nCreager, E., Madras, D., Pitassi, T., & Zemel, R. (2020, November). Causal modeling for fairness in dynamical systems. In International Conference on Machine Learning (pp. 2185-2195). PMLR.\n\nAlvarez-Melis, D., & Fusi, N. (2020). Geometric dataset distances via optimal transport. Advances in Neural Information Processing Systems, 33, 21428-21439.\n\n\n Unfortunately, I am not sure whether the authors will be able to sufficiently address the major concerns that I raised in my review above that would allow me to raise my score without re-submission. There are core issues to the framing and narrative of the paper necessitating a full revision as well as some additional experimental analysis that would need to be re-reviewed before acceptance for publication. This process is not supported by the reviewing timeline for NeurIPS.\n\nI do have some questions though that may help me understand whether I should reconsider my assessment and if I misunderstood the paper in any way. \n\nIn the authors' view, what are the major intended contributions of this paper (see first major points in the section above)?\n\nHow does this paper differentiate from Creager, et al (2020 ICML)? Is it only in the use of real-world medical datasets and the corresponding simple prediction tasks?\n\nHow sensitive is Algorithm 1 to the quality/accuracy of the models used internally in the statistical test?\n\nWhat efforts were taken in the construction of the experimental subpopulations to create a fair comparison of model performance? The major limitations of this paper have been listed by the authors in the final paragraphs of the paper. They have clearly outlined major challenges from a data perspective that limit the feasibility of mitigation strategies as well as overall performance assurances with regards to transfer when facing distribution shift. \n\nHowever, it is unclear what kinds of assumptions and limitations exist for the development and execution of the statistical tests in Alg. 1. 
I can guess that the data needs to admit some form of conditional independence estimation (which might be a fairly strong assumption on its own) as well as be able to adequately estimate the dataset source (predicting $P(S|pa(U))$).", " The paper proposes an approach to detect distribution shifts between two environments or settings that could lead to the failure of transfer of fairness properties across the settings. The approach adopts the joint causal framework to identify the direct effects of the environment variable on other variables at hand. The paper also suggests some mitigation strategies related to post-deployment, data collection, and outcome definition in case distribution shifts make it challenging to transfer fairness properties. Experiments on two real-world health tasks: 1) predicting skin conditions in dermatology and 2) clinical risk prediction from EHR highlight the merits of the proposed approach. Strengths:\n1. The paper is generally well written with a clear description of the proposed approach and the empirical details.\n2. The work touches on an important aspect of the transfer of fairness notions, especially in healthcare where there is not enough clarity on which of the fairness notions are more suitable. The topic is of importance to the community and the said approach is one such mitigation/investigative tool. There is potential for further advancement in this area that is touched upon here and it is thus significant. \n\nWeakness:\n1. One potential weakness as illustrated is the dependence on sample size which is known to affect conditional independence testing. \n2. There are some parallel approaches that touch upon identifying how shifts relate to model performance in the healthcare setting; it would be useful to discuss the current approach with respect to such methods (more on the approaches below). 1. [1] also discuss detecting shifts across environments using a causal discovery approach (FCI) that also adopts conditional independence tests. This is another line of work closely related to this paper and would be helpful to situate the current approach with regards to this work/discuss the differences.\n\n2. In the dermatology task, what is the dimension of X considered? How would the dimensionality affect the results?\n\n3. In figure 2d, does the X-axis represent the plots for the source and target? It's a little unclear as to what the number of images signifies here.\n\n4. Reference missing on line 174.\n\n5. Check if the references for path-specific effects (line 329) are appropriate. [2] is related to this.\n\n\n\nReferences\n[1] Singh, Harvineet, Vishwali Mhasawade, and Rumi Chunara. \"Generalizability challenges of mortality risk prediction models: A retrospective analysis on a multi-center database.\" PLOS Digital Health 1.4 (2022): e0000023.\n[2] Mhasawade, Vishwali, and Rumi Chunara. \"Causal multi-level fairness.\" Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. 2021.\n The authors have discussed some limitations of the work.\n\nSome other concern is the lack of complete data, especially when considering multiple environments/settings in healthcare such as social determinants of health. This is pertinent since social determinants are known to shift across environments and would thus lead to failure of fairness transfer. \n\nOne more concern is when considering multiple sources or multiple target environments. As the number of environments increases the chances of shift would also increase. 
While the robust training could be done to incorporate multiple source environments, assessing the performance across multiple target environments is crucial to understand under what kinds of settings the model is fair.", " The authors deal with the problem of diagnosing fairness transfer under distribution shifts. For this they use a causal framework and statistical hypothesis testing to identify the type of distribution shift. Their work relies on having the labels of a sample of out-of-distribution data to correctly identify the type of shift. The paper deals with a very relevant topic.\n\nThe formalization of the problem is quite clear, simple, and suits very well the task at hand. The paper is well written and easy to understand. Even if it has a long appendix, the main body is self-contained and does not rely on the appendix. \n\nThe paper structure is not the typical one, with a related work section on page 7 and a discussion section that spans over a page.\nIn the discussion section, there is a set of remedies. I am unsure if a set of remedies can be included in a scientific paper. \n\n\nLine 101 -- do Rabanser [53] use bootstrapping techniques to do dimensionality reduction?\n\n A synthetic experiment can be switching the labels of the test set. In this way one can have a pure concept shift, and if done only with protected attribute subgroups it can affect fairness. Would the method proposed in this paper correctly diagnose this type of shift?\n\nA strategy used in order to post-process the prediction to enforce a fairness metric is Alabdulmohsin [4]; the authors observe that fairness mitigating on source data leads to no improvement or worsening on test data. I wonder if simpler methods such as [https://www.nature.com/articles/s42256-021-00396-x] have the same results. A common underpinning of much work in ML fairness is that the metric selection is not justified from a socio-technical perspective. In this work the authors acknowledge that the chosen metrics do not consider some social factors. \n\nThe authors also acknowledge that the solution proposed depends on the number/quality of samples and that it should be perceived as a tool to diagnose fairness transfer failures." ]
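The source-prediction test at the core of Algorithm 1, as described in the reviews above (predicting the source indicator $S$ from the causal parents $pa(U)$ of a variable $U$), can be illustrated with a minimal classifier-based sketch. Everything below is an assumption made for illustration: the function and argument names, the choice of logistic regression, and the permutation-based null are ours, not the paper's.

```python
# Hypothetical sketch: test whether the dataset-source indicator s is
# predictable from the causal parents pa_u of a variable U; above-chance
# predictability suggests the mechanism generating U differs across sources.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def source_shift_test(pa_u, s, n_permutations=200, seed=0):
    """pa_u: (n, d) array of parent values; s: (n,) binary source labels."""
    rng = np.random.default_rng(seed)
    clf = LogisticRegression(max_iter=1000)
    observed = cross_val_score(clf, pa_u, s, cv=5).mean()
    # Null distribution: permuting s breaks any dependence on pa(U).
    null = np.array([
        cross_val_score(clf, pa_u, rng.permutation(s), cv=5).mean()
        for _ in range(n_permutations)
    ])
    p_value = (1 + np.sum(null >= observed)) / (1 + n_permutations)
    return observed, p_value
```

In practice one would run such a test per variable with its own parent set and correct for multiple comparisons; this is also where the reviewers' concerns about sample size and the quality of the internal prediction models would surface.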
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "nips_2022_K-A4tDJ6HHf", "VzRL_ZXp9j0", "vVoAW6iUk8", "20SqT_A3l1", "YxB0Xbe6_Xs", "2UizF4ntcYQ", "ybyyAGn7bqO", "z9IsR-su8oy", "UOXdSJWjN7A5", "d7OZTaulT13", "nips_2022_K-A4tDJ6HHf", "_Yn2Vaz-DmE", "IpXQJg104Ul", "Kze6JYe6icM", "ybyyAGn7bqO", "z9IsR-su8oy", "UOXdSJWjN7A5", "vVoAW6iUk8", "nips_2022_K-A4tDJ6HHf", "nips_2022_K-A4tDJ6HHf", "nips_2022_K-A4tDJ6HHf" ]
nips_2022_AYQI3rlp9tW
Efficient identification of informative features in simulation-based inference
Simulation-based Bayesian inference (SBI) can be used to estimate the parameters of complex mechanistic models given observed model outputs without requiring access to explicit likelihood evaluations. A prime example for the application of SBI in neuroscience involves estimating the parameters governing the response dynamics of Hodgkin-Huxley (HH) models from electrophysiological measurements, by inferring a posterior over the parameters that is consistent with a set of observations. To this end, many SBI methods employ a set of summary statistics or scientifically interpretable features to estimate a surrogate likelihood or posterior. However, currently, there is no way to identify how much each summary statistic or feature contributes to reducing posterior uncertainty. To address this challenge, one could simply compare the posteriors with and without a given feature included in the inference process. However, for large or nested feature sets, this would necessitate repeatedly estimating the posterior, which is computationally expensive or even prohibitive. Here, we provide a more efficient approach based on the SBI method neural likelihood estimation (NLE): We show that one can marginalize the trained surrogate likelihood post-hoc before inferring the posterior to assess the contribution of a feature. We demonstrate the usefulness of our method by identifying the most important features for inferring parameters of an example HH neuron model. Beyond neuroscience, our method is generally applicable to SBI workflows that rely on data features for inference in other scientific fields.
Accept
The paper presents a method for feature selection in simulation-based inference, that is, for quantifying to what extent each feature (or summary statistic) contributes to reducing posterior uncertainty. As this can be accomplished naively by re-estimating the posterior after systematically omitting features, the focus is on developing a fast method instead. The proposed method is evaluated on parameter estimation of a Hodgkin-Huxley neuron model from neuroscience. The reviewers found the paper to be clearly written and did not voice major concerns regarding its technical quality. The proposed method is simple, efficient and clearly presented, and the evaluation on the Hodgkin-Huxley model is convincing. A potential drawback of the method is that it requires a model that can be analytically marginalized over, and thus may not benefit from current and future advances in generative modelling. Seeing as the paper is of good quality without major problems, I'm happy to recommend acceptance.
train
[ "QL76PFG5Io", "bxAbQptsUUh", "Gn9X5HqKNXY", "Jl23y_4yzIJ", "2viccE_zl3I", "wHbB9R_dsk", "3zRixAq_SmF", "SD6PXx-wR8r", "8bDJO2FgaQ0", "IvAHl64OJE", "ZPTAVPrDxAA", "QLjQkDGsl16", "-MmtLhBBrLw" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their reply and for appreciating our efforts.", " We thank the reviewer for their reply and for appreciating our efforts.", " Thanks for your reply. I think it all makes sense and I'm glad you added the discussion of MI and how a single observation can be important in neuroscience. I will raise my score.", " I would like to thank the authors for the rebuttal comment as well as the revised version of the paper.\n\nIt is great to see the revised version of Fig 4a with a better-constructed null model (distribution over multiple random sequences).\n\nThe additional explanations about the computational cost and efficiency of the algorithm make sense, and help me understand the method better.\n\nMy ratings for the paper remains unchanged; I still think that this is a small but nice technical improvement with clear practical advantage.", " We thank the reviewer for their comments and appreciate the valuable feedback. The reviewer found the importance of evaluating “interpretable summary statistics” crucial for trustworthy inference and our method easy to follow. However they also expressed some concerns regarding the use of mixture density networks (MDNs) with respect to high dimensional problems and in comparison to flows as well as pointing out biases towards recent literature, which we discuss and address below.\n\n__Density estimator architecture and flow-based NLE__: The reviewer asked about details of the NLE architecture and training. To show that FSLM achieves the same posterior accuracy via marginalized likelihood estimates as neural likelihood estimates trained directly on marginal feature data, while being significantly more efficient, we opted to use MDNs for both algorithms. We make this more explicit in the revised version (line 165), expanding on our previous mention in Fig. 1. We also included more detailed information about the number of layers (line 166). \n\nWith regards to the applicability of flows, we now show that running the brute-force feature selection with a flow-based NLE is slower than the MDN-based NLE. We added this information to Table 1. Of course, flows are the state-of-the-art for NLE in terms of performance. Performing new analysis, we show that FSLM yields very similar feature importance estimates as the significantly slower brute-force flow-based NLE approach (see Table 1), indicating that FSLM-derived features are consistent with the features identified with flow-based NLE architectures. We will add this information to Fig. 3 and associated text in the final version of the paper (we cannot do so currently, due to space constraints).\n\n__The use of MCMC for likelihood free simulator models__: We thank the reviewer for pointing out our imprecise wording here, we changed the wording to be more specific in the revised version (line 28).\n\n__Comparing training times__: The reviewer is asking for details about how training time was exactly measured. This is indeed an important piece of information, which we have added to the revised submission (line 168). The training times are compared based on the following convergence criterion: We split the data into a training and a validation set with a 90:10 split. We then stop the training when the log likelihood has not improved over 20 epochs on the validation set. In our experiments, this yielded very accurate posterior estimates and ensured sufficient convergence of the density estimators. 
As correctly pointed out by the reviewer, the neural likelihood estimates obtained on the entire feature set can be reused for FSLM, hence the training times for FSLM are essentially the same, the only difference being that no additional training for further marginal likelihoods is needed. This is also visible in Fig. 1.\n\n__High dimensional covariance matrices__: We used full-rank covariance matrices in this work and did not explore further whether restricted covariance matrices would yield better performance. For higher-dimensional feature spaces, one clearly would have to evaluate whether e.g. diagonal covariance matrices are more suitable. We added these possibilities to the Discussion (lines 277 - 279).\n\n__Focus on one experiment__: The reviewer suggested investigating a different model in addition to the HH model. As the HH-model application is of high value to computational neuroscience and inference in this model is quite hard due to the involved non-linearities, we chose to focus on this example and analyze it in depth. \n\n__Missing references__: We thank the reviewer for the additional references, which we added to the paper.\n\n__Inconsistent notation and missing cross-references__: We thank the reviewer for pointing this out; we have made the notation more consistent in the revised version of our submission and added pointers to the appropriate theorems (line 515).\n", " We thank the reviewer for their comments and appreciate the valuable feedback. The reviewer found our algorithm to be a “compelling advance for the computational neuroscience field”, solving a notoriously hard problem in computational neuroscience. Below, we further justify the choice of KL-divergence as a measure and discuss the importance of the prior.\n\n__The use of KL over MMD__: The reviewer suggests using the MMD as an alternative to the KL divergence. We had run preliminary experiments with both MMD- and KL-estimators and found that the KL-estimates led to a much more consistent ranking of features. Nevertheless, the MMD remains an interesting direction to explore in future work. We added a short statement regarding our initial results with MMD to the text (see lines 146 - 148). \n\n__Importance of prior__: We thank the reviewer for bringing this up. As this is a limitation inherited from NLE, rather than being specific to FSLM, it can be remedied by employing posterior predictive checks and coverage tests applicable to NLE [1-3], which take into account uncertainty and systematic errors in approximation quality. We have added this as a point in our discussion. (see lines 285 - 288)\n\n[1] Talts S, Betancourt M, Simpson D, Vehtari A and Gelman A Validating Bayesian Inference Algorithms with Simulation-Based Calibration 24\n[2] Gabry J, Simpson D, Vehtari A, Betancourt M and Gelman A 2019 Visualization in Bayesian workflow Journal of the Royal Statistical Society: Series A (Statistics in Society) 182 389–402\n[3] Zhao D, Dalmasso N, Izbicki R and Lee A B Diagnostics for Conditional Density Models and Bayesian Inference Algorithms 11\n\n__Typos and misleading notation in formulas__: We apologize and have revised them in the updated submission.\n\n", " We thank the reviewer for their comments and appreciate the valuable feedback. The reviewer found our approach to be interesting and novel, but suggested mutual information-based strategies for feature selection, which we now discuss in more detail. 
Also, they asked for the generalization to flow-based NLE, for which we performed new analyses and show that the identified features generalize well. \n\n__Estimating mutual information between parameters and features__: The reviewer suggested that an alternative to our proposed algorithm would be to estimate the mutual information (MI) between sets of features and parameters for the model. We agree that this would be an interesting strategy, and we have added a paragraph in the Discussion to address this. FSLM and MI are closely related. Using the definition of the MI\n\n$MI = D(p(t, x) || p(t)\\, p(x)) = \\int p(x) \\int p(t | x) \\log(p(t|x)/p(t))\\, dt\\, dx = E_x[D(p(t|x) || p(t))]$\n\nreveals that the mutual information is the average D_KL between the posterior and the prior. Thus, the MI can be recovered by averaging FSLM over the entire dataset and using the D_KL between posterior and prior as the distance metric. We note that FSLM could easily be extended to the entire dataset by averaging over many observations (the trained neural network is amortized and, thus, requires no retraining for different observations).\n\nIn many real-world scenarios such as computational neuroscience, one is particularly interested in the importance of features for individual observations. This is because different features may be important in different cell types or cell families. We, therefore, opted to employ FSLM as a method to identify important features of individual observations, but as stated above, our method could easily be extended to average over the entire dataset (and, in that case, corresponds to estimating the MI). We emphasize that, in general, estimating MI in simulation-based models is difficult because these models have no closed-form likelihood and, thus, would have to rely on sample-based approximations to the mutual information. We have highlighted the similarities and differences between MI and FSLM in the discussion and hope that this allows users to pick the ideal method for the problem at hand.\n\n__The choice of MDNs over Flows__: We agree with the reviewer that flows have advantages with regard to performance when compared to MDNs, as we discuss in our paper. However, we also note that “even in times of normalizing flows” they still remain a relevant tool in similar settings; see e.g. [1], which used MDNs for inference on HH models. Nonetheless, we have now performed an additional new analysis of how well our MDN-derived features generalize to an NLE with flows. To this end, we estimated feature importance in the brute-force setting using a flow-based NLE. We found that MDN-derived features generalize well, with high agreement between the two architectures and only one additional feature (mean resting potential) showing up in the flow-based NLE. We will add this information to Fig. 3 and the associated text and to Table 1 in the final version of the paper (we cannot do so currently due to space constraints), as well as discuss the limits of this correspondence, the details of which will have to be investigated in future work. Together, this argues for the usefulness of our method also for flow-based models, exploiting the fact that MDNs allow for efficient and exact computation of the marginal likelihoods. 
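To make this last point concrete, the analytic marginalization that MDNs afford can be sketched in a few lines: for a Gaussian mixture, dropping features amounts to slicing the component means and covariances. The snippet below is an illustrative sketch of this standard property with placeholder names, not our actual implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

def marginal_mixture_logpdf(x_keep, weights, means, covs, keep):
    """Log-density of a Gaussian mixture marginalized onto the features `keep`.

    weights: (K,) mixture weights; means: (K, d); covs: (K, d, d);
    keep: indices of the retained features; x_keep: values of those features.
    """
    keep = np.asarray(keep)
    log_comp = np.array([
        multivariate_normal.logpdf(x_keep, mean=m[keep], cov=C[np.ix_(keep, keep)])
        for m, C in zip(means, covs)
    ])
    # log sum_k w_k N(x_keep; mu_k[keep], Sigma_k[keep, keep])
    return np.logaddexp.reduce(np.log(weights) + log_comp)
```

Since the slice requires no retraining, scoring any feature subset costs no more than scoring the full set, which is what makes FSLM so much cheaper than the brute-force comparison above.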
\n\n[1] Gonçalves P J, Lueckmann J-M, Deistler M, Nonnenmacher M, Öcal K, Bassetto G, Chintaluri C, Podlaski W F, Haddad S A, Vogels T P, Greenberg D S and Macke J H 2020 Training deep neural density estimators to identify mechanistic models of neural dynamics eLife 9 e56261, https://elifesciences.org/articles/56261.pdf\n\n__Additional references__: We thank the reviewer for pointing us to the additional references, some of which we have now included in the revised version of our paper. (see line 40)", " We thank the reviewer for their comments and appreciate the feedback. The reviewer found our work a ‘useful technical improvement to simulation-based inference’. They suggested providing a more informative baseline for our greedy feature search, which we added to Fig. 4a. This additional analysis clearly shows that our greedy feature selection is more efficient than the baseline. Furthermore, we reply to a few questions regarding the computational cost of our algorithm - here, we emphasize that the speed-up achieved enables a much more efficient search for good features for simulation-based inference.\n\n__Better “chance level” for greedy search__: We thank the reviewer for this suggestion. We agree that multiple random sequences are much more informative and we included this in the revised version of Fig. 4a. The revised figure clearly shows the advantage of our method in selecting features to obtain an accurate posterior.\n\n__Parallelization of the algorithm__: Regarding the computational efficiency of the algorithm, we would like to stress that our algorithm achieves a 12x speedup in training time and a 5x speedup in total time (including sampling) over the naive implementation. As the reviewer noted, these are quite significant. Further speed-ups could be achieved by running the sampling for all subsets of features in parallel. This would reduce the total time (training + sampling) for FSLM from $20.48 \\pm 2.08$ h to $8.42 \\pm 1.67$ h. Furthermore, the sampling process itself could be spread out across several threads, by running multiple MCMC chains in parallel. \n\n__Constraint on the number of features__: The reviewer asks whether our algorithm enables exploration of a very long list of features to minimize human intervention. Indeed, our algorithm makes that possible. The longer the list of features, the greater the advantage of our method from a computational standpoint, compared to repeated runs of NLE. In that case, however, it would be advisable to use the greedy feature search set-up. Here, more features only expand the search space, but the initial improvement in KL for the first few features will be large, so selection can be robustly done. In contrast, for the setup of testing the effect of removing individual features from a large feature set, the differences in KL estimates and the IQR ratios of the marginals become increasingly small such that detecting small changes in uncertainty can become difficult. \n\n__Feature combination for HH model__: The reviewer asks about the computational cost of evaluating feature combinations. Indeed, there is no additional cost in investigating combinations of features, since the likelihood can be marginalized analytically with regard to one, two or more features. This also implies that instead of only expanding the most promising node in the search tree, as was done in our greedy implementation, the scheme is easily extensible to a full beam search, where the N-most promising nodes are expanded at each iteration. 
This would allow us to keep track of potentially combined effects of different features. We added a remark regarding this point in the respective section (lines 233 - 235).\n", " We thank all reviewers for their comments and appreciate the feedback, which helped us improve the manuscript. They found our work interesting, novel and clearly written, providing a “useful technical improvement to simulation-based inference” and a “compelling advance for the computational neuroscience field”. In particular, they highlighted the importance of evaluating “interpretable summary statistics” as crucial for trustworthy inference. We thank the reviewers for this very positive assessment.\n\nOf course, the reviewers also voiced criticism, which we will address in detail in the point-by-point replies below. We would like to highlight two key issues, for which we performed new analyses prompted by the reviews. First, one reviewer criticized our choice of baseline for the greedy feature selection pipeline from Fig. 4. We added a new baseline to this figure, averaging over multiple random runs, which now clearly shows that our method is able to select informative features much more efficiently than random search. Second, two reviewers suggested that MDNs are not state-of-the-art for NLE since flow-based NLE typically performs better. We address this point with a new analysis which shows that flow-based NLE is slower than MDN-based NLE when used in the brute-force setting and that the features identified by our method generalize well to flow-based NLE. \n\nThus, we believe we have addressed the criticism of the reviewers. We would like to underscore the need for an efficient feature selection method for neural-likelihood-based simulation-based inference, which yields interpretable features for simulation-heavy fields such as computational neuroscience. Such a method is provided by FSLM, and we hope our arguments convince the reviewers of this fact. \n", " This paper develops an efficient feature selection method in simulation-based inference, where each evaluation of the likelihood is costly. Conventionally, to test the sensitivities of the posterior distribution to specific features, one would need to re-evaluate the posterior with different combinations of features many times. The key idea of the method is to estimate the likelihood as an intermediate quantity instead of directly estimating the posterior, and analytically marginalize the joint likelihood for each feature instead of re-evaluating from scratch each time. The marginalization is made possible by the use of mixture density networks with Gaussian properties where different parameters are easily partitioned. Using a Hodgkin-Huxley model as an example, the paper demonstrates how this method can be used for selecting features at a much smaller computational cost. This work seems to be a small but useful technical improvement to simulation-based inference (SBI). I am not an expert in the SBI method space in particular, so I cannot speak much on the originality or the novelty of the idea. But I have seen usages of SBI and surrogate likelihood/posterior approaches in neuroscience as well as in other domains (computational chemistry), and I have not seen an analytical marginalization approach like this before. Whenever the Gaussian mixture model (with the convenient partitioning properties that allow marginalization) is a good fit to the system, this method will provide an immediate practical advantage. 
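For reference, the partitioning property alluded to here is the standard Gaussian marginalization identity (stated for convenience; the notation is ours, not the paper's): if $x = (x_A, x_B)$ is jointly Gaussian with mean blocks $(\\mu_A, \\mu_B)$ and covariance blocks $\\Sigma_{AA}, \\Sigma_{AB}, \\Sigma_{BA}, \\Sigma_{BB}$, then the marginal over the retained block is simply $x_A \\sim \\mathcal{N}(\\mu_A, \\Sigma_{AA})$; for a mixture $p(x) = \\sum_k \\alpha_k \\mathcal{N}(x; \\mu_k, \\Sigma_k)$, the weights carry over unchanged, so $p(x_A) = \\sum_k \\alpha_k \\mathcal{N}(x_A; \\mu_{k,A}, \\Sigma_{k,AA})$.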
The paper is mostly clear and well written.\n i) It would be nice to have a better \"chance level\" curve for the random selection in Fig 4a, i.e., average over multiple random sequences of features.\n\nii) The speedup by the proposed method is evident, but 20 hours is still a long time if one wants to repeat. Can the authors comment on how/where the algorithm can use parallelization to reduce computation time?\n\niii) I had two questions, and they both boil down to understanding the computation time cost of considering more features:\n\n- There is still a human-curated step required in the procedure, namely the pre-selected feature list. Can the authors comment on how this new level of efficiency could change the constraints related to the choice of the pre-selected feature list? That is, can one now run through a very long list of candidate features, including all the silly things that one would not expect to be important a priori?\n- Regarding the HH model experiment: I agree that the greedy feature selection is a completely reasonable thing to do. But looking at Fig 4a made me wonder if this method also provides easy ways to explore combinations of features. (Specifically, from Fig 4a, I was wondering if the combination of APT, AHP, and APW were important, based on the drop in the blue curve at the addition of APT, and the drop in the black curve at the addition of APW.) How much additional cost would it be to investigate combinations of features? The authors have adequately addressed the limitations of their methods.", " The paper proposed a method to determine the \"informativeness\" of data features by comparing the width of posteriors with some data features marginalized out. The authors compare a naive solution where many models are trained to their proposed technique which takes advantage of the properties of Mixture Density Networks (MDN) as the architecture for Neural Likelihood Estimation (also known as synthetic likelihood) to trivially marginalize over data features without retraining.\n\nTheir method Feature Selection Through Likelihood Marginalization (FSLM):\n- allows for closed-form marginalization of data features, thereby enabling access to posteriors which are conditioned on arbitrarily marginalized data random variables without retraining.\n- comes with a greedy feature selection strategy to select useful features from a predefined set.\n- is compared to ground truth posteriors from a simple linear Gaussian model and against a non-linear neuronal model. ## Originality\n\n### Strengths\n- The closed-form marginalization is simple and easily seen as a result of using an MDN. I've not seen it used before and I find it interesting for their purposes.\n\n### Weaknesses\n- The paper does not reference the work of Benjamin Wandelt and coauthors which aims to transform simulation data into a compressed format which maximizes the information shared by the data and its compressed representation for Neural Likelihood Estimation. This is not strictly the same subject as Wandelt's work does not quantify the value of specific dimensions of input data; however, it is a highly relevant dimension reduction technique. Please consider citing all of the papers on the [PyDelfi](https://pydelfi.readthedocs.io/en/latest/intro.html) website (there are three).\n- There is other tangentially related work regarding marginalizing over parameters. 
In particular, [Truncated Marginal Neural Ratio Estimation](https://arxiv.org/abs/2107.01214), [Arbitrary Marginal Neural Ratio Estimation for Simulation-based Inference](https://arxiv.org/abs/2110.00449), and (from pydelfi) [Nuisance hardened data compression for fast likelihood-free inference](https://arxiv.org/abs/1903.01473). Perhaps it would be interesting to include them for the purposes of dimensionality reduction.\n\n## Quality\n### Strengths\n- The Gaussian linear model example is chosen well to show the relevance of independence between parameters and data and the corresponding effect on the estimated posterior.\n- The posterior visualization, tables which indicate computational cost and estimated KL divergence, and search result visualizations are all quite clear.\n- The limitations of using MDNs are made clear and the benefit for feature search is as well.\n\n### Weaknesses\n- Although the method seems nice, I still don't understand why the practitioner would not instead want to compute the mutual information. We do not know what sort of observations we will have in the future and the mutual information is likely more robust to future data (rather than overfitting feature selection to x_o.)\n\n## Clarity\n- Clarity was satisfactory in my opinion.\n\n\n## Significance\n### Strengths\n- They're certainly a nice feature of MDNs and NLE.\n\n### Weaknesses\n- MDNs seem to be of limited value in the times of normalizing flows, etc.\n- I also don't see why setting the feature importance based off a single x_o is really so interesting. Selected features should have some sort of predictive power for future data, right? If so, shouldn't we then integrate over all possible data, i.e. estimate the mutual information between different feature sets and the parameters?\n - Why is this method superior to a method which estimates the mutual information between parameters and data with some data features marginalized out?\n - I think this was also made clear.", " The authors propose a new method for determining informative features in simulation based inference. Recently proposed simulation based inference methods proceed by (1) generating parameter samples from the prior, (2) simulating to generate conditional samples of the data given the parameter, (3) fitting an approximation to the likelihood of data features under the simulator model consisting of a mixture model parameterized by neural networks and (4) sampling from the posterior over parameters given the data, using the conditional mixture model as an approximation to the true likelihood. The authors observe that to obtain a posterior conditional on a subset of features, one can use the property of multivariate normals that their marginals are analytically tractable, and replace the full approximate likelihood with the marginal approximate likelihood. The method itself is very straightforward, enough so that it seems like a trick any researcher in the field would be able to figure out, as it relies on well-known properties of multivariate normal distributions. However, the method is presented very clearly. Identifying informative features in simulation based inference is an important and poorly studied problem (to my knowledge); this paper offers a smart and clear problem statement and setup. Moreover, writing a paper about the method helps motivate the particular advantages of neural likelihood estimation in contrast to competing techniques (such as NPE). \n\nThe evaluations are clear and compelling, especially the biological application. 
Hodgkin Huxley models are notoriously hard to fit, and the connection between the features that electrophysiologists focus on and actual underlying ion channel mechanisms is notoriously difficult to understand. This paper thus offers a compelling advance for the computational neuroscience/electrophysiology field in particular.\n\n There are some serious typos and lack of clarity in Eqn. 6 - there's min_j, but j does not appear; lowercase m and n are undefined; and it's unclear what min^N_j means (min raised to the Nth power??).\n\nMore generally I'm confused by the motivation for the KL estimator. KL estimation from paired samples is notoriously difficult and unreliable in high dimensions. I think it would make more sense to use maximum mean discrepancy or related techniques, which are well-established for moderately high dimensional data, and which offer the further advantage of providing a witness function that can help determine where the two distributions differ. (see https://jmlr.csail.mit.edu/papers/v13/gretton12a.html and https://papers.nips.cc/paper/2015/hash/0fcbc61acd0479dc77e3cccc0f5ffca7-Abstract.html)\n\n\n\n The authors discuss some limitations. One additional limitation that I think is important is that the success of NLE in general depends crucially on the choice of prior; if the true parameters are far out in the tails of the prior, the likelihood approximation is likely to be very poor, particularly as it restricts the tails of the likelihood to be Gaussian (and will thus fail if the true likelihood has, for instance, very fat tails). I worry that the proposed method does not take into account uncertainty or systematic errors in the approximation quality.", " The paper introduces a method to efficiently estimate the importance of several features / outcome variables of a simulation model in a Bayesian setting. The method is based on mixture density networks and the fact that marginalization of Gaussian distributions is analytically tractable and relatively simple.\nThe authors use their method to demonstrate in 2 practical examples that the method reliably identifies feature importance and to demonstrate the method's superior speed. __Strengths__\n\n- Reasoning about the importance of _interpretable_ summary statistics is commonly (and unjustly) ignored in the era of neural networks. The authors therefore tackle an issue that is highly relevant for reliable and trustworthy inference. \n- Marginalisation using the MDN is indeed an efficient approach and using the so computed marginal distributions to compute the feature importance makes sense to me.\n- The method is easy to understand and implement\n- The writing is coherent and what is said is easy to follow (but the text does lack crucial information, see below)\n\n__Weaknesses__\n\n- The paper seems rushed: there is very little information on many relevant details. For example, what kind of density estimator was used for NLE? A flow? If so, what kind of flow? How many layers? I have many questions, see below. If any of those are answered in the text and I missed them, could the authors please let me know?\n- The paper contains only 2 examples, one of which is just a simple Gaussian. I understand the value of the Gaussian example, but the experiments section then only has one \"real\" model, which puts this section on the lighter side. 
(Perhaps the relatively long description of the HH model could have gone to the appendix in favour of another example.)\n- The literature review is a bit biased towards more recent developments and perhaps a bit unfair towards earlier contributions. For example:\n\n> likelihood-based Bayesian methods such as variational inference [8] or Markov Chain Monte Carlo [9] cannot be used.\n\nThat's a bit imprecise. Of course, MCMC can be used, in the form of ABC-MCMC [1] or pseudo-marginal methods [2]. The argument against those methods is more the computational cost and potentially slow convergence, not, however, that it's impossible.\n\n- Equation 11 in the appendix is confusing. The RHS has a variable $valid$ which the LHS does not have. The reference for proofs is too imprecise, I could not find the relevant part. The reference should point to the theorem that establishes the result. \n\n[1] Marjoram, P., Molitor, J., Plagnol, V., & Tavaré, S. (2003). Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 100(26), 15324-15328.\n\n[2] Andrieu, C., & Roberts, G. O. (2009). The pseudo-marginal approach for efficient Monte Carlo computations. The Annals of Statistics, 37(2), 697-725.\n - What kind of density estimator was used for NLE? Just an MDN? Why not a flow? Have alternatives different from the default been tried? This should be mentioned. (Could any of those choices reasonably lead to NLE working better than your method, e.g. a flow instead of an MDN.)\n- How is time for training compared? Is there a predefined number of iterations or are you using a stopping criterion? Or is this just to say that you train one MDN and marginalize instead of retraining an MDN on the subset? (I.e. if you used a flow for NLE it's not clear how to compare time.)\n- Are you modelling the MDN with a full covariance matrix? If so, what would you suggest for higher dimensional data spaces?\n The authors discuss some limitations in the discussion section. While there are some elements on the limitations of MDNs, I'd like to know more about the covariance matrix that has to be estimated and hence the scalability. (A reasonable argument to be made would be that scalability isn't that important since handcrafted summary statistics are usually of limited dimensionality, but I think this should be discussed.)" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "Gn9X5HqKNXY", "Jl23y_4yzIJ", "3zRixAq_SmF", "SD6PXx-wR8r", "-MmtLhBBrLw", "QLjQkDGsl16", "ZPTAVPrDxAA", "IvAHl64OJE", "nips_2022_AYQI3rlp9tW", "nips_2022_AYQI3rlp9tW", "nips_2022_AYQI3rlp9tW", "nips_2022_AYQI3rlp9tW", "nips_2022_AYQI3rlp9tW" ]
nips_2022_S8-duMv77W3
Sound and Complete Causal Identification with Latent Variables Given Local Background Knowledge
Great efforts have been devoted to causal discovery from observational data, and it is well known that introducing some background knowledge attained from experiments or human expertise can be very helpful. However, it remains unknown \emph{what causal relations are identifiable given background knowledge in the presence of latent confounders}. In this paper, we solve the problem with sound and complete orientation rules when the background knowledge is given in a \emph{local} form. Furthermore, based on the solution to the problem, this paper proposes a general active learning framework for causal discovery in the presence of latent confounders, with its effectiveness and efficiency validated by experiments.
Accept
This paper presents sound and complete orientation rules to incorporate local causal background knowledge along with algorithms implementing these rules. Reviewers were universally appreciative of the contributions and in favor of acceptance.
train
[ "z8PPK1ZoEYE", "JxQ6YfyrTNh", "7Rfn5OZbEnF", "vxI0KVJx6lV", "fdtEfdiTJ7Z", "orx6mbsHCn", "H1dvx5xfTYi", "XKI6qh_BGcG", "Zlx1MFXuf7H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for answering my questions in detail. I like this paper. I will maintain my recommendation of acceptance.", " Thank you for your clarification. All my concerns have been addressed.", " Thanks for the response regarding the comparison to Jaber et al. I will keep my current score.", " Thank you for your valuable comments! We will revise our paper accordingly. And we will also give more examples for the illustrations in Line 178-187 about the intuition of the steps of Alg. 1, and provide some concrete examples of the reason that the proposed orientation rules (Zhang’s rules with additional $R_4’,R_{11}$) are complete.\n\nQ1. About Line 156 the meaning of “algorithm can be achieved”: We wanted to express that in the second part of the proof, we prove that the proposed orientation rules can orient the same edges as Alg. 1. The ultimate objective of the whole proof is to prove that the orientation rules are sound and complete to orient a PAG with local BK, which is hard to prove directly. Hence, we introduce Alg. 1 as a bridge, and divide the whole proof process into two steps. In the first step, we prove that Alg. 1 is sound and complete to orient the PAG with local BK. In the second step, we prove that Alg. 1 orients the same edges as the proposed orientation rules. These two steps together proved the soundness and completeness of the orientation rules. We will make it clearer in the revised version.\n\nQ2. About connection to Jaber et al.: We will illustrate it in revised versions. \nFrom the viewpoint of *problem*: (i) The algorithms (such as Jaber et al.) that are sound and complete in incorporating interventions are not necessarily able to be leveraged for local BK. **The input of the two methods are different**. In addition to the PAG, the input of our method is the local marks regarding $X$ (only $X$ is considered for example), the input in their work is the distribution under intervention on $X$. The inter. distribution can imply the local marks, but not vice versa. Hence, local BK is a weaker requirement than inter. distributions. That implies, although the compared method is complete with inter. distribution, it is not necessarily applicable with local BK. For example, according to human experience regarding $X$, we know the marks at $X$, but human experience cannot give us the inter. distribution. In this case, Line 18 of Alg. 1 of Jaber et al. cannot be tested (this step is irrelevant to whether inter. target is known and whether we learn $\\mathcal{P}$ in advance). Hence for this kind of BK, Alg. 1 of Jaber et al. cannot be executed. In summary, the input of the two methods are not identical. (ii) Then even if we *only consider the local BK from inter. data*, if the inter. distribution is fully available such that Line 18 of Alg. 1 of Jaber et al. can be correctly tested, their method can obtain the desired graph. But we want to emphasize that an advantage of considering local BK is that this kind of input information does not need large amount of inter. data to estimate the inter. distribution (As pointed out by Reviewer Wrk3, the inter. data is often expensive.). The reason is that our method based on local BK only needs to learn the local marks at $X$ by some two-sample tests such as whether $P(Z|do(X))=P(Z)$ ($X$ and $Z$ are singleton variables). It is not hard to achieve. The samples required in the two-sample test to ensure low test error is far fewer than that in testing Line 18 of Alg. 1 of Jaber et al. 
because of the possibly high-dimensional $\\mathbf{W}$ and the intrinsic hardness of conditional independence tests.\n\nFrom the viewpoint of *techniques*: To prove completeness, both methods construct MAGs with each circle being an arrowhead and a tail. Jaber et al. constructs an MAG over $\\mathbf{V}\\cup \\mathbf{F}$ with obser. and inter. distributions, while we construct an MAG over $\\mathbf{V}$ with obser. distribution and local marks regarding some variables. Even if we put aside the difference of input, the construction of Jaber et al. is essentially constructing an MAG over $\\mathbf{V}\\cup \\mathbf{F}$ based on a special kind of local BK regarding $\\mathbf{F}$ where all the edges regarding $\\mathbf{F}$ are out of $\\mathbf{F}$ (i.e., the local BK dictates that the marks at $\\mathbf{F}$ are **tails**). There are also some similar constructions such as Lemma 7.6 of Maathuis et al. and Lemma 43 of Perković et al. The construction in our method is constructing an MAG over $\\mathbf{V}$ based on $\\mathcal{P}$ and local BK regarding some vertices where there could be **additional arrowheads** dictated by the local BK. The additional arrowheads at given variables pose a great challenge to the construction. This is the reason that our method introduces two complex steps - the first and second steps of Alg. 1 - in the construction, in contrast to previous construction methods [Maathuis et al., Perković et al., Jaber et al.] developed directly based on Thm. 2 of [31].\n\n- A generalized back-door criterion. Marloes H. Maathuis and Diego Colombo.\n- Complete Graphical Characterization and Construction of Adjustment Sets in Markov Equivalence Classes of Ancestral Graphs. Emilija Perković, Johannes Textor, Markus Kalisch and Marloes H. Maathuis.\n", " Thank you for your insightful questions and positive evaluation! We will revise the paper accordingly.\n\nQ1. About local BK: Defining the local BK over nodes is motivated by the fact that BK can be obtained through interventions, which are often performed on certain nodes instead of edges. When we intervene on some variable $X$, we can learn the marks at $X$ on the edge with $Z$ by testing whether $P(Z|do(X))$ equals $P(Z)$. \n\nFor general BKs, including BK in edge forms, the rules in this paper are sound, hence can also be exploited. But currently we are not sure if the rules we proposed are complete for general BK, although we have not found any counterexample to completeness.\n \nQ2. About latent: The latent variables in the experiments are chosen randomly. To the best of our knowledge, FCI does not need the assumption that all latent variables are mutually independent. Consider a DAG $A\\rightarrow B\\rightarrow C\\rightarrow D$. If $B$ and $C$, which are not mutually independent, are latent variables, then by FCI we learn $A\\circ\\hspace{-1.4mm}-\\hspace{-1.4mm}\\circ D$. It is still a valid PAG. And the MAG representing the DAG with observable variables is $A\\rightarrow D$.\n\nQ3. About correctness of MEC: In this paper, to focus on the orientation with incorporating BK, we assume the correctness of the Markov equivalence class, as related methods do [28,32]. If we further consider the cases that some mistakes may happen in $\\mathcal{P}$ *but assume local BK is always correct*, we can exploit a theoretical result from one of our recent works under review, which gives the necessary and sufficient conditions for the existence of MAGs consistent to $\\mathcal{P}$ and local BK. 
For example, with the local BK, if the necessary and sufficient conditions are not fulfilled, it implies that there must be some mistakes in $\\mathcal{P}$, because an MAG consistent to the correct $\\mathcal{P}$ and local BK should exist. In this case, we need to collect more observational data, or use other means, to re-learn a PAG. In general, we feel that it is hard to present correction rules for preventing the cascading error, because when there are conflicts, there are many kinds of mark corrections, or even skeleton corrections, which can eliminate the conflicts. From the theoretical viewpoint, we cannot determine where the mistakes happen (i.e., which correction is consistent to the true PAG) without further assumptions. In practice, we can possibly exploit some heuristic ideas, such as taking the correction that changes the fewest edges in $\\mathcal{P}$ while eliminating the conflicts as the correction that is consistent to the true PAG.\n\n\nQ4. About originality: The key contribution is the sound and complete orientation rules in the presence of latent confounders, which make it possible to conduct a complete active learning framework. The entropy criterion was applied in the causal sufficiency setting [20]. Our work is the first to extend the criterion to the causal insufficiency setting, and in light of the large space of MAGs compared to DAGs, we introduce rejection sampling for MAGs.\n\nQ5. About the accuracy difference: The difference is due to the type I and type II errors of the two-sample tests. When we intervene on $X$, for example, we learn the marks at $X$ on an edge between $X$ and $Z$ by testing whether $P(Z|do(X))$ is equal to $P(Z)$. Because the intervention variables can be different for different methods (MCMC and random), different strategies suffer different test errors in the real implementation, which causes the difference.\n", " Thank you for your careful reading and valuable questions! We will explain the steps of Alg. 1 with the examples in Fig. 1.\n\nQ1. About the case where the local BK is not correct: In general, wrong BK can introduce some mistakes. For example, consider a PAG $A\\circ\\hspace{-1.4mm}-\\hspace{-1.4mm}\\circ B\\circ\\hspace{-1.4mm}-\\hspace{-1.4mm}\\circ C$: if the local BK regarding $B$ tells us that there are arrowheads at $B$ on both the edge with $A$ and the edge with $C$, the obtained graph is evidently invalid, since additional v-structures are introduced. Although wrong BK may be ubiquitous in real-world problems, to make a step towards understanding the impact of BK in causality, assuming BK is correct is still a common practice in related works [28,32]. However, we want to point out that in some cases it is possible to identify the existence of mistakes in the local BK if we only assume the correctness of the PAG, which can help analysts realize that they should re-evaluate the correctness of the local BK before performing causal analysis. In one of our recent papers, currently under review, we gave necessary and sufficient conditions for the existence of MAGs consistent to $\\mathcal{P}$ and local BK (suppose it is $BK(X)$). If the local marks at $X$ dictated by the local BK do not fulfill the conditions, there cannot exist MAGs consistent to $\\mathcal{P}$ and the local BK, which implies that the local BK is wrong. \n\n\nQ2. About the MH algorithm: It is a very penetrating question! Two Markov equivalent MAGs can be transformed into each other in a limited number of transformations, which is shown by Thm. 3 of [37]. And due to Prop. 
2 and the fact, proved by Lemma 15.1, that the MAG obtained on Line 1 of Alg. 2 is an MAG consistent to the given BK and PAG, all sampled MAGs are consistent to the PAG. However, the sampled MAGs are not necessarily consistent to the given BK. Hence, in the real implementation of our method, after we sample $L$ MAGs, we have an additional step to judge whether each sampled MAG is consistent to the local BK (i.e., has the non-circle marks dictated by the local BK), and we only keep the $L’$ MAGs that are consistent and calculate the entropy with these $L’$ MAGs. We will make this clear in Alg. 2 in the revised version to avoid misunderstanding. \n\nAbout the complexity of Alg. 2: the running time of applying Prop. 2 mainly depends on the time spent on determining whether the third condition is met, which is $O(\\lvert V\\rvert\\lvert E\\rvert)$ (see footnote 20 of [31]). There are at most $\\lvert E\\rvert $ edges that can be transformed. Hence, the complexity of Line 3 of Alg. 2 is $O(\\lvert V\\rvert\\lvert E\\rvert^2)$. Since the complexity of Alg. 2 mainly depends on Line 2 to Line 6, the total complexity of Alg. 2 is $O(L\\lvert V\\rvert\\lvert E\\rvert^2)$.\n", " This paper presents sound and complete orientation rules to incorporate local causal background knowledge into PAGs and applies these rules to active learning. Here, the local causal background knowledge regarding a variable $X$ refers to the causal relations of $X$ with any adjacent variable. Note that this paper assumes no selection bias. This paper is clearly written and the contributions are novel. The authors prove sound and complete orientation rules to orient a PAG with local background knowledge. Although I did not check the proof, the sketch seems reasonable. I only have one suggestion: it would be better to explain the steps to construct the PAGs in the example presented in Figure 1 in more detail. I also have some questions for the authors.\n1. The authors assume the local BK is correct, meaning that the orientation rules can only be applied when the local BK is correct. When the local BK is not correct, will applying the orientation rules cause some mistakes (such as cycles or new colliders), or can we still get a PAG?\n2. Regarding the MH sampling, it seems that Proposition 2 only gives a method to construct an equivalent MAG from a given one. Can any two Markov equivalent MAGs be transformed into each other in a limited number of these transformations? Is it possible that a sampled MAG from $S({\\cal M}_{t-1})$ is not consistent with the given BK (line 3, Algorithm 2)? What is the computational complexity of Algorithm 2?\n Not applicable.", " Nonparametric causal discovery can identify the true DAG up to some uncertainty. The uncertainty can be eliminated only by incorporating background knowledge (BK). Under the causal sufficiency assumption, the Meek rules established a complete set of rules to orient the CPDAG to a DAG with BK. In the presence of latent variables, the authors provide the first sound and complete rules to orient the PAG. The orientation rules are based on a local form of BK. With these rules, the authors propose an active learning framework, aiming to orient a given PAG with the minimum number of interventions (BKs). Since it is hard to count the number of MAGs in some consistent space, the authors further equip the method with adaptive rejection Metropolis-Hastings sampling. Experimental results demonstrate the effectiveness of the proposed framework. + Pros:\n - **Clarity.** The paper is generally well-written and organized. 
The problem definition is clearly set, the preliminaries are described in a detailed and clean way, and the theoretical analyses are also well carried out.\n - **Technical quality of the proposed rules.** The proposed orientation rules are technically dense. I haven't read the rules (section 3) in detail, and could not ensure their correctness. But I tried them on some simple graph examples, and the rules appear to be correct to me.\n\n+ Cons: Overall this is good to me. Currently I could not give any specific weaknesses of this paper, but only some questions. See the questions section. 1. **local BK**: now BK is defined as _all marks_ on a variable X - which seems expensive - and usually BK is given as some specific directed edges. Could the authors please give more motivation on why this BK is defined over some node, instead of an edge? If some BK is given in the edge form, could it also be incorporated into this framework?\n2. **Latent**: by _with latent variables_, is it assumed that all latent variables are exogenous (or mutually independent), similar to the assumptions in FCI, so that the Markov equivalence class can be represented as a PAG? And thus, in experiments, L348 \"take three nodes as latent variables and the others as observed ones\" - are they taken randomly or with some criteria?\n3. **Correctness of BK**: L134: \"The correctness indicates that there exists an MAG consistent to P and the BK.\" So the authors also assume that the learned P at the first stage is correct. What if P is incorrect (which is often the case), and the given BK has a conflict with P - is there any correction rule to prevent cascading error?\n4. **About originality**: the active learning framework (maximal entropy criterion, rejection Metropolis sampling, etc., section 4) seems to be a classic paradigm in combinatorial search. Is this originally proposed by the authors, or very similar to some existing methods on orientation rules?\n5. **Accuracy difference in Table 1**: why is there an accuracy difference between MCMC and random? Since the orientation rules are deterministic, given the same PAG, the maximally oriented graph should be exactly the same, with only a difference in the number of interventions required. Where does the accuracy difference come from? What if the starting PAG is exactly the true PAG (asymptotic result, not by FCI)?\n6. L86: letters (V, E) should be boldface. The problem setup is quite clear, and the authors have adequately addressed the limitations. I only have some questions above, regarding the assumptions on latent variables, and the assumptions on correct P and BK.", " For a given PAG, the paper presents a sound and complete procedure to incorporate local background knowledge (BK). For a given node X, the local BK the paper considers is knowledge of the edge marks at X, i.e., the circle marks at X in the given PAG are converted to an arrowhead or a tail using the BK. Then the procedure orients further edges/nodes in the PAG. For a given set of nodes with BK, the procedure sequentially incorporates the BK for each node one at a time.\n\n Originality, quality, and significance:\nThe primary contribution of the work is to propose two new orientation rules that allow for sound and complete incorporation of BK for a given node V_i (in Alg. 1). 
I think the work is significant as it proposes a way to further narrow down the MEC beyond what can be learned using only observational data. And incorporating this specific kind of BK is also novel (although I think more comparison to related literature is needed --- please see my remarks in the \"Questions\" section).\n\nClarity:\nOverall, I think the paper is hard to read. I would have appreciated some more examples to illustrate how the algorithm works. It would have been nice if the proof sketches were explained with concrete examples of why the two new rules are sound and complete (especially for Thm. 2).\n\n Comparison to learning PAGs with interventional data:\nThere is some work on incorporating interventional data into PAGs. As an example, see [1], which provides a characterization as well as a learning algorithm for causal discovery under unknown interventions (with known interventions being a special case). This algorithm is also sound and complete. BK is not necessarily the same as soft interventions (which can reveal more information). But to me, they seem very closely related, as interventions are one way of obtaining BK, so it seems to me that algorithms that are sound and complete in incorporating interventions can also be leveraged for BK. Can the authors comment on how their work is related to this line of work?\n\nLine 156:\nWhat do you mean by \"algorithm can be achieved\"? What does it mean for an algorithm to be achieved?\n\n\n[1] Jaber, A., Kocaoglu, M., Shanmugam, K., & Bareinboim, E. (2020). Causal discovery from soft interventions with unknown targets: Characterization and learning. Advances in Neural Information Processing Systems, 33, 9551-9561. I have addressed this in the previous sections." ]
[ -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "fdtEfdiTJ7Z", "orx6mbsHCn", "vxI0KVJx6lV", "Zlx1MFXuf7H", "XKI6qh_BGcG", "H1dvx5xfTYi", "nips_2022_S8-duMv77W3", "nips_2022_S8-duMv77W3", "nips_2022_S8-duMv77W3" ]
nips_2022_aJ5xc1QB7EX
Deep Active Learning by Leveraging Training Dynamics
Active learning theories and methods have been extensively studied in classical statistical learning settings. However, deep active learning, i.e., active learning with deep learning models, is usually based on empirical criteria without solid theoretical justification, thus suffering from heavy doubts when some of those fail to provide benefits in applications. In this paper, by exploring the connection between the generalization performance and the training dynamics, we propose a theory-driven deep active learning method (dynamicAL) which selects samples to maximize training dynamics. In particular, we prove that the convergence speed of training and the generalization performance is positively correlated under the ultra-wide condition and show that maximizing the training dynamics leads to a better generalization performance. Furthermore, to scale up to large deep neural networks and data sets, we introduce two relaxations for the subset selection problem and reduce the time complexity from polynomial to constant. Empirical results show that dynamicAL not only outperforms the other baselines consistently but also scales well on large deep learning models. We hope our work inspires more attempts in bridging the theoretical findings of deep networks and practical impacts in deep active learning applications.
Accept
It is the consensus of the reviewers that this paper makes a worthwhile contribution to active deep learning. The author(s)' idea of optimizing for convergence speed is interesting and of potential significance. The meta-reviewer would recommend acceptance of the paper as a poster.
train
[ "8nb4XHsnjto", "gPwAjqQG8a", "hYl3yke91Vp", "3RI1AaWdKVn", "G6Wnh0LDsgSi", "1Sv0-h2jNvX", "oBPyMxXNte", "fO0rGtE_3i", "rBYyWEpHoqf", "mmTLrVy3a3ai", "Y0o1aRjqNRS", "5HTCrUQliqsl", "GcSOCZ0xlT", "I-HemXrWAES", "gPmbnRlrCV4", "axcumpLmIuE", "G_fn61eZFj-", "SzpOhpMbhPa" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the positive comments. We really appreciate all the discussions and suggestions!\n \n \n \nBest,\n \nAuthors.", " Thanks for the responses.", " Thanks for an invigorating discussion, this paper was a pleasure to review. I hope that others read it with as much enthusiasm as I did :)", " > But it can be challenging to bridge the acquisition functions of other active learning methods to alignment, which is related to the generalization bound (Theorem 2). Actually, the connection with the theoretical analysis is a distinct advantage of our method.\n\nSorry, I should have been more clear here. The experiment I was suggesting was to run two acquisition functions per budgeted time step; one based on calculating alignment, the other based on some heuristic acquisition function against which you compare. That way, you can report not only the difference in points that each acquisition function suggests would lead to the greatest expected improvement, but also you can learn more about alignment by looking at the difference in alignment distributions between your method and competing methods. In this way, you might learn more about how alignment contrasts with existing methods. I still think it would be a helpful guide to readers, but I won't insist on piling on more work for work's sake.\n\n> The paper [1] pointed out by the reviewer studies the influence of statistical bias commonly existing in active learning methods and finds the bias can be actively helpful when training with neural networks. We acknowledge that the point-wise contribution is also not strictly additive in our case and studying the influence of statistical bias for our method is interesting, but it is out of the scope of this work. We will continue the study of the statistical bias for the generalization performance of our method in our future work.\n\nThat's a fair distinction, and I appreciate that while important, you cannot fit all the work that's interesting into one paper. I hope you'll follow up on it in future work.", " Thanks for the reply. I will be maintaining my recommendation for acceptance (7).", " \nThanks for your reply!\n\nAs mentioned, the non-greedy version does not suffer from clustering. The greedy version is introduced for the fast approximation of the non-greedy one (as described in Section 3.2, Line 147). Therefore, a natural question is: *what is the quality of the approximation?* To study the quality and feasibility of the approximation, we introduce the term Approximation Ratio (Eq(14)). When the approximation ratio is close to 1, then it means that the approximated value is close to the original one. As shown in Figure.4, under 6 different settings, the approximation ratio is closed to 1, which indicates that for real-world datasets (in general, the non-degenerated data), the approximation is good enough. Then the greedy version doesn’t suffer from the clustering issue.\n\nOn the other hand, for the case mentioned in the initial post, the approximation ratio is closed to 1 when the times of duplication are much smaller than the query batch size. As we discussed in the initial reply, when the duplication ratio is larger than the query batch, the approximation will deviate from the original value, in which the clustering issue may happen.\n\nWe acknowledge that the studies of when the employed approximation is invalid and whether we can have a better approximation are valuable. 
However, this extreme case does not satisfy the non-degenerate data assumption made in Section 2, and thus it is out of the scope of this work. We would like to leave the exploration of it to future work.\n", " Thanks for addressing the points brought up in the initial review. I wanted to ask again about the clustering problem with the greedy approximation. I understand that the non-greedy version of your approach does not suffer from clustering, but I was wondering about the greedy version specifically (eq. 12).\n\nIf I understand correctly, the greedy acquisition function is:\n\n$Q_{greedy}^* = argmax_{Q \\subseteq U} \\sum_{\\{x_i, y_i\\} \\in Q} \\left( ||\\nabla_\\theta l(f_\\theta(x_i), y_i) ||^2_2 + \\nabla_\\theta l(f_\\theta(x_i), y_i) ^ T g_S\\right)$\n\nNote this is different than what you provided in your response since \n\n$ \\sum_{\\{x_i, y_i\\} \\in Q} ||\\nabla_\\theta l(f_\\theta(x_i), y_i) ||^2_2 \\neq || g_Q||^2_2 $\n\nso it looks like the greedy variant of the algorithm will target the top $|b|$ points which match the gradient of the full training set **individually**, as opposed to matching the gradient when combined together. This seems like it would be susceptible to clustering.", " 5. I would recommend trying this out on datasets with more severe class imbalances.\n* Thanks for your suggestion. In this work, we follow the previous works [6, 7, 8] and adopt the commonly used data sets for active learning. On those balanced data sets, we verify the correctness of the theoretical analysis and the effectiveness of the proposed method. The study of active learning with imbalanced data sets is important and interesting. The common setting of class imbalance is to train the model with class-imbalanced data and test on balanced data [10, 11]. However, the distribution discrepancy between training and test data will complicate the generalization analysis (Theorem 2). Note that generalization analysis under the distribution shift problem is an actively studied independent field [9] out of the scope of this paper. For this work, we want to keep the experiments close to the analysis and would like to leave the exploration of this setting to future work. The other setting is to train and test the model on data from the same imbalanced distribution. The data set Caltech101 we use in the experiments is in this setting, and we can observe a better performance improvement (Figure 10 in Appendix E.6). We would like to try dynamicAL on this imbalanced setting on other, more imbalanced data sets next.\n\n\n6. What current theories of self-supervised learning can do to inform this work, or what this work could do to transform a self-supervised learning method into a few-shot method that would perform better?\n* Several works pretrain the network with a self-supervised learning (SSL) method before applying the active learning method [2, 3]. A formal analysis with self-supervised learning theory [4, 5] is an interesting and important topic to explain the empirical success of using SSL to warm up for active learning. In [2], the proposed method, relying on pseudo-labeling, enjoys the benefit introduced by pretraining. For our work, the SSL warmup might empirically bring benefits to our method, although this is beyond the scope of the current method. \n\n7. Lines 217-219: In theorem 1, the factor $\\nu$ will change at $t \\rightarrow \\infty$, since it depends on the reciprocal of $n^2$. 
Is this accounted for in the proof?\n\n* Our theoretical result is based on the fact that the neural tangent kernel stays constant during gradient descent training. In particular, $\\Theta= V^{\\top} \\Lambda V$, where $V = \\{ v_{i} \\}_{i=1}^n$. Because the neural tangent kernel stays constant, $v_i$ remains unchanged during the gradient descent updates.\n\n\n\n[2] Gudovskiy, Denis, et al. \"Deep active learning for biased datasets via fisher kernel self-supervision.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\n\n[3] Bengar, Javad Zolfaghari, et al. \"Reducing label effort: Self-supervised meets active learning.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n[4] Arora, Sanjeev, et al. \"A theoretical analysis of contrastive unsupervised representation learning.\" arXiv preprint arXiv:1902.09229 (2019).\n\n[5] HaoChen, Jeff Z., et al. \"Provable guarantees for self-supervised deep learning with spectral contrastive loss.\" Advances in Neural Information Processing Systems 34 (2021): 5000-5011.\n\n[6] Ash, Jordan T., et al. \"Deep batch active learning by diverse, uncertain gradient lower bounds.\" arXiv preprint arXiv:1906.03671 (2019).\n\n[7] Liu, Zhuoming, et al. \"Influence selection for active learning.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n[8] Zhang, Beichen, et al. \"State-relabeling adversarial active learning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\n\n[9] Chuang, Ching-Yao, Antonio Torralba, and Stefanie Jegelka. \"Estimating generalization under distribution shifts via domain-invariant representations.\" arXiv preprint arXiv:2007.03511 (2020).\n\n[10] Aggarwal, Umang, Adrian Popescu, and Céline Hudelot. \"Active learning for imbalanced datasets.\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2020.\n\n[11] Bengar, Javad Zolfaghari, et al. \"Class-Balanced Active Learning for Image Classification.\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2022.", " 4. Is the experiment in section 4.3 calculating G using pseudo-labels or ground truth labels? It would be interesting to see the effect of changing pseudo-labels to ground-truth labels on the results in section 4.3.\n* In Figure 2, we compute $G(S \\cup \\overline{Q})$ in which $\\overline{Q}$ is the corresponding data set with ground-truth labels (line 134). In the Appendix, we further provide the relationship between $\\mathcal{A}(X \\| X_Q, Y \\| Y_Q)$ and $G(S \\cup \\widehat{Q})$, where $\\widehat{Q}$ includes the pseudo-labels. Comparing Figure 2 and Figure 6, we find that the positive relationship between $\\mathcal{A}$ and $G$ computed with ground-truth labels is slightly stronger than that with $G$ computed with pseudo-labels. The result is aligned with our expectations, because extra noise is introduced by the pseudo-labels. However, the Kendall $\\tau$ coefficient still achieves $0.46$ for $\\mathcal{A}$ and $G$ computed with pseudo-labels. The result justifies the use of $G(S \\cup \\widehat{Q})$ as the acquisition function to query samples, which is verified by our empirical results (Figure 3 and Appendix E.5). \n* We refer the reviewer to Appendix D.2 of the revision for a more detailed discussion and the result.\n\n\n\n5. 
It would be useful to see how well G correlates to the final generalization performance.\n* We provide the correlation between $G$ and the bound $B$ in Appendix D.3 of the revision. Figure 7 shows that as $G$ increases, $B$ decreases. This empirical observation is aligned with our expectation, because Theorem 2 indicates that the alignment $A$ is inversely proportional to $B$, and Figure 2 tells us that $G$ is proportional to $A$. We would like to refer the reviewer to Appendix D.3 of the revision for a more detailed discussion and the result.", " 3. Whether the proposed method is vulnerable to clustering. \n* Theoretically, our method is not vulnerable to the clustering issue. dynamicAL aims at selecting samples that maximally increase the training dynamics (Eq (13)). We can rewrite the original objective in the following form with the gradient of each sample:\n\n $Q^{\\star} = argmax_{Q \\subseteq U} \\| \\sum_{(x_u,y_u)\\in Q} \\nabla_{\\theta} \\ell(f(x_{u} ; \\theta), {y}_{u})\\|^{2} + 2 \\sum_{(x_u,y_u)\\in Q} \\sum_{(x, y) \\in S} \\nabla_{\\theta} \\ell(f(x_{u} ; \\theta), {y}_{u})^{\\top} \\nabla_{\\theta} \\ell(f(x ; \\theta), y)$\n\n The optimal solution will not change when adding a constant $C$ to the acquisition function,\n \n $Q^{\\star} = argmax_{Q \\subseteq U} \\| \\sum_{(x_u,y_u)\\in Q} \\nabla_{\\theta} \\ell(f(x_{u} ; \\theta), {y}_{u})\\|^{2} + 2 \\sum_{(x_u,y_u)\\in Q} \\sum_{(x, y) \\in S} \\nabla_{\\theta} \\ell(f(x_{u} ; \\theta), {y}_{u})^{\\top} \\nabla_{\\theta} \\ell(f(x ; \\theta), y) + C$\n \n Without loss of generality, let $C = \\| \\sum_{(x,y) \\in S} \\nabla_{\\theta} \\ell(f(x ; \\theta), y)\\|^{2}$; then we have\n \n $Q^{\\star} = argmax_{Q \\subseteq U} \\|\\mathbf{g}_{Q} + \\mathbf{g}_{S} \\|^{2}$,\n\n where $\\mathbf{g}_{Q} = \\sum_{(x_u,y_u)\\in Q} \\nabla_{\\theta} \\ell(f(x_{u} ; \\theta), {y}_{u})$ and $\\mathbf{g}_{S} = \\sum_{(x,y)\\in S} \\nabla_{\\theta} \\ell(f(x ; \\theta), {y})$. \n\n Assuming that outliers, which can cause extraordinarily large gradient norms, do not exist in the data set, we need to select a set from the candidate pool whose composed gradient, $\\mathbf{g}_Q$, has the same direction as $\\mathbf{g}_S$. Back to your example: unless there exists a sample whose gradient is coincidentally the same as $\\mathbf{g}_S$, the same image will not be selected 10 times.\n* Empirically, we use the approximated acquisition function (Eq (15)). The important difference is that $\\sum_{(x_u,y_u)\\in Q} \\|\\nabla_{\\theta} \\ell(f(x_{u} ; \\theta), {y}_{u})\\|^2$ is used to replace the first term in the original form. We verify the feasibility of the approximation over non-degenerate data sets. For degenerate data sets, when the ratio of duplication $r \\ll b$, where $b$ is the query batch size, the difference can be ignored, and dynamicAL will **not** suffer from the clustering issue. For example, if the data set is duplicated 10 times (as in your example), the selected samples will almost remain the same with $b \\in$ \\{250, 500, 1,000\\} as adopted in this work. We acknowledge that the approximation might cause an issue when the duplication ratio is much larger than the query batch, such as each image of the data set being duplicated 10,000 times, but this extreme case is out of the scope of this work. We explicitly give the non-degenerate assumption [7] in the revision to better clarify the scope of this work.\n\n[7] Allen-Zhu, Zeyuan, Yuanzhi Li, and Zhao Song. 
\"A convergence theory for deep learning via over-parameterization.\" International Conference on Machine Learning. PMLR, 2019.\n", " 1. To help the readers better understand a) provide more descriptions for the empirical NTK in line 100; b) recall that $\\mathcal{K}^{i j}$ is the empirical NTK in lines 122, 154.\n* Thank you for your constructive comments! As suggested, we clarify the meaning of the notation $\\mathcal{K}^{i j}$ in line 100 and recall the meaning of this notation in lines 122, 156.\n\n\n2. Figure 2 is nice, but it lacks some context. How good is the correlation between the training dynamics proxy and alignment? What would help readers understand this better would be a comparison of different acquisition functions and how they correlate with alignment. \n* Thanks for the suggestion. We provide more description and explanation for Figure 2 to help readers understand the correlation between the training dynamics and alignment. But it can be challenging to bridge the acquisition functions of other active learning methods to alignment, which is related to the generalization bound (Theorem 2). Actually, the connection with the theoretical analysis is a distinct advantage of our method.\n\n\n3. The subset approximation introduced in line 150 can bias the selection of points, since the point-wise contribution to $G(S)$ is not necessarily additive. See work by Farquahr et al.[1] for an example in the case of using uncertainty to acquire labels. \n* To analyze the feasibility of this approximation, we empirically measure the expectation of the Approximation Ratio (Eq (14)) on three data sets with three different neural networks in Figure 4 which show that the approximated result is close to the original value. Then, the samples selected based on the approximated value will not be largely different from the samples selected with the original one.\n* The paper [1] pointed out by the reviewer studies the influence of statistical bias commonly existing in active learning methods and finds the bias can be actively helpful when training with neural networks. We acknowledge that the point-wise contribution is also not strictly additive in our case and studying the influence of statistical bias for our method is interesting, but it is out of the scope of this work. We will continue the study of the statistical bias for the generalization performance of our method in our future work.\n\n\n4. (Lines 49-52) Ensure that TD can be computed or well-estimated quickly.\n* The training dynamics will not be directly computed in our method. Instead, we compute the change of training dynamics, as shown in Eq (12). Furthermore, to accelerate the computation of the acquisition function of Eq (12), we introduce the subset approximation which leads to the criterion employed to select samples Eq (15). We provide additional discussion of the computational requirement of Eq (15) in Appendix C of the revision.\n\n[1] Farquhar, Sebastian, Yarin Gal, and Tom Rainforth. \"On Statistical Bias In Active Learning: How and When to Fix It.\" International Conference on Learning Representations. 2020.", " 1. Can you compare the proposed method with other baseline methods for rounds > 10?\n* We provide the experiments with $b=500, r=15$ on Caltech101 data set with ResNet18 as the backbone in Appendix E.6. At the last round, we run out of all images in the pool. As shown in the Figure 10, our method consistently outperforms those baselines.\n\n2. 
Can you discuss the tightness of the bound in the relationships between convergence and generalization?\n* By looking at Eq (19), the lower bound is due to the HM-GM-AM-QM inequalities, which state the relationship between the harmonic mean, geometric mean, arithmetic mean, and quadratic mean. The tightness of the lower bound depends on the value distribution of $\\lambda_i(v_i^{\\top}Y)^2$. If the values $\\lambda_i(v_i^{\\top}Y)^2$ are close to each other, then the generalization performance is close to the lower bound. On the other hand, if the difference among the $\\lambda_i(v_i^{\\top}Y)^2$ is significant, then the upper bound should be closer to the generalization performance, because the upper bound has the factor $\\frac{\\lambda_{\\max}}{\\lambda_{\\min}}$ to account for the spread of the eigenvalue distribution.", " Thank you very much for the recognition of our work! We’d like to address your questions as follows. \n\n1. Derivation and algorithm based heavily on NTK theory, but practical-sized networks have been shown to deviate far from this regime (see [1, 2], for example). This paper does not address this issue. At least some discussion of it would be useful.\n* Thanks for your suggestion. We provide more discussion of practical-sized networks in Appendix F of the revision. Although [1, 2] mentioned that the NTK assumption is hard to strictly satisfy in some real-world models, we notice that some recent works have shown that the high-level conclusions derived from the NTK analysis are insightful and useful to guide the design of practical methods. Some of their applications can achieve SOTA. For example, \n * Park et al. [3] used the NTK to predict the generalization performance of architectures in the application of Neural Architecture Search (NAS). \n * Chen et al. [4] used the condition number of the NTK to predict a model’s trainability. \n * Chen et al. [5] also used the NTK to evaluate the trainability of several ImageNet models, such as ResNet. \n * Deshpande et al. [6] used the NTK for model selection in the fine-tuning of pre-trained models on a target task. \n\n In our work, the empirical results in Figure 3 and Appendix E also show that the high-level conclusion (Proposition 1) still holds.\n\n\n2. It appears that calculating Eq (15) would scale linearly with |S|, which seems quite undesirable.\n* The runtime stays almost the same as the size of the set $S$ increases. For the computation of Eq (15), \n\n $\\Delta(\\{(x_u, \\widehat{y}_u) \\}| S) = \\|\\nabla_{\\theta} \\ell (f({x}_u; \\theta), \\hat{{y}}_u)\\|^2 + 2 \\sum_{(x, y) \\in S} \\nabla_{\\theta}\\ell(f({x}_u; \\theta), \\hat{{y}}_u)^{\\top} \\nabla_{\\theta} \\ell (f({x}; \\theta), {{y}})$,\n\n the second term can be rewritten as \n\n $2 \\nabla_{\\theta}\\ell(f({x}_u; \\theta), \\hat{{y}}_u)^{\\top} \\sum_{(x, y) \\in S} \\nabla_{\\theta} \\ell (f({x}; \\theta), {{y}})$.\n\n The sum of gradients over the set $S$ needs to be computed only once and is reused for each sample $(x_u, \\hat{y}_u)$. Although the summation operation is linear in $|S|$, its computational overhead can be ignored compared with the computation of the gradient inner products over the unlabeled set. Furthermore, to help the reader better understand the computational requirements, we provide a discussion in Appendix C.2.\n\n[1] Lee, Jaehoon, et al. \"Finite versus infinite neural networks: an empirical study.\" Advances in Neural Information Processing Systems 33 (2020): 15156-15172.\n\n[2] Fort, Stanislav, et al. 
\"Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel.\" Advances in Neural Information Processing Systems 33 (2020): 5850-5861.\n\n[3] Daniel S Park, Jaehoon Lee, Daiyi Peng, Yuan Cao, and Jascha Sohl-Dickstein. Towards nngp guided neural architecture search. arXiv preprint arXiv:2011.06006, 2020.\n\n[4] Wuyang Chen, Xinyu Gong, and Zhangyang Wang. Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective. In International Conference on Learning Representations, 2021a.\n\n[5] Chen, Xiangning, Cho-Jui Hsieh, and Boqing Gong. \"When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations.\" arXiv preprint arXiv:2106.01548 (2021).\n\n[6] Deshpande, A., Achille, A., Ravichandran, A., Li, H., Zancato, L., Fowlkes, C., Bhotika, R., Soatto, S. and Perona, P., 2021. A linearized framework and a new benchmark for model selection for fine-tuning. arXiv preprint arXiv:2102.00084.\n", " Thank you for your constructive comments!\n\n1. Formal results similar to Theorems 1 and 2 in the active learning setting.\n* Because Lemma 1 does not require the training set to be i.i.d., we can directly apply Theorem 1 to the active learning setting.\n* As for Theorem 2 (generalization upper bound), we can establish a generalization upper bound for the active learning setting by plugging Eq (20) into Eq (18). And we show the result here \n\n $\\mathcal{L}_p \\le \\sqrt{\\frac{2 \\textrm{Tr}[Y^{\\top}\\Theta^{-1}(X,X)Y]}{n}}$\n\n $ +\\textrm{MMD}(S_0,S,\\mathcal{H}_{\\Theta}) $\n\n $ + O(\\sqrt{\\frac{\\log\\frac{n}{\\lambda_{0}\\delta}}{n}}) +O(\\sqrt{\\frac{C\\log(1/\\delta)}{n}})$. \n\n Then the leading term in the generalization bound will be:\n\n $\\mathcal{B}'= \\sqrt{\\frac{2 \\textrm{Tr}[Y^{\\top}\\Theta^{-1}(X,X)Y]}{n}}+\\textrm{MMD}(S_0,S,\\mathcal{H}_{\\Theta})$. \n\n As a result, Eq (19) can be extended to \n\n $\\sqrt{\\frac{2}{n}\\frac{\\textrm{Tr}^2[Y^{\\top}Y]}{\\mathcal{A}(X,Y)}} + \\textrm{MMD}(S_0,S,\\mathcal{H}_{\\Theta}) \\le \\mathcal{B}' \\le $\n\n $\\sqrt{\\frac{2}{n}\\frac{\\lambda_{\\max}}{\\lambda_{\\min}}\\frac{\\textrm{Tr}^2[Y^{\\top}Y]}{\\mathcal{A}(X,Y)}} + \\textrm{MMD}(S_0,S,\\mathcal{H}_{\\Theta})$.\n\n* In our work, we analyze the generalization bound first and then discuss the relationship between the generalization bound and the discrepancy induced by our active learning method, which keeps the analysis of generalization succinct and leads to an important conclusion (Proposition 1) for our method.\n\n\n2. Some notations can be simplified to make the formulas cleaner.\n* Thank you for the suggestion. We have simplified the notation and updated Eqs (12)(13)(14) in the revision.\n\n", " The paper proposes an active learning method for deep learning (named dynamicAL) that selects data points which maximize the training dynamics. The paper proves a relationship between the convergence speed of training and generalization and shows experimentally that dynamicAL can outperform other popular active learning baselines. Strengths:\n- The method proposed in the paper is novel and works well compared to other baselines.\n- The experiments are extensive.\n- The paper is well-written.\n\nWeaknesses:\n- The main theoretical results in Section 4.1 (Theorem 1 and 2) are only for iid data, which do not hold for the active learning setting. 
Although Section 4.2 gives some discussions for the active learning setting, the paper does not prove any formal results for this setting.\n- Some notations can be simplified to make the formulas cleaner; for example, in Eq 13, the subscript u for x, y, etc. can be dropped without affecting the meaning of the equation. The same is also true for other equations such as 12, 14, etc. - Can you prove formal results similar to Theorems 1 and 2 for the active learning setting in Sections 4.2 and 4.3? There is no potential negative societal impact.", " 1. The paper describes DynamicAL, an algorithm for active learning based on NTK-based analysis of training dynamics\n2. The algorithm is based on the notion that fast training results in better generalization. The authors describe \"alignment\" as a measure of convergence rate and show that this term affects the generalization bounds derived in the NTK regime.\n3. The authors then show that the proxy used in DynamicAL (\"training dynamics\") correlates with alignment and therefore could be used as a proxy\n4. The paper proposes a greedy variant of the proposed algorithm and shows that it well approximates the non-greedy variant, while being significantly faster\n5. The paper shows that the resulting algorithm outperforms existing baselines on a few models/datasets Strengths:\n1. Good presentation, well structured\n2. Well motivated; the main claims of the paper are well justified from a theoretical point of view, and verified empirically (section 4)\n3. The resulting algorithm is simple to implement and could be considered a generalization of existing methods (as mentioned in section 3.3)\n\nWeaknesses:\n1. Derivation and algorithm based heavily on NTK theory, but practical-sized networks have been shown to deviate far from this regime (see [1, 2], for example). This paper does not address this issue. At least some discussion of it would be useful\n2. Unclear computational requirements compared to other methods or runtime analysis (this part, while possible to interpret based on the algorithm description, could be stated more explicitly)\n3. Improvements over baselines seem rather marginal (although this is somewhat expected in a saturated field)\n\n[1] - https://arxiv.org/abs/2007.15801\n[2] - https://arxiv.org/abs/2010.15110 1. What is the runtime of the algorithm? It appears that calculating equation 15 would scale linearly with |S|, which seems quite undesirable\n\n2. Does the algorithm suffer from clustering due to the greedy approximation? As an example, suppose that every single item in the dataset is duplicated 10 times and the query size is 10. Will the algorithm not pick the exact same image 10 times? (see the distinction between BALD and BatchBALD, for example)\n\n3. Is the experiment in section 4.3 calculating G using pseudo-labels or ground truth labels? It would be interesting to see the effect of changing pseudo-labels to ground-truth labels on the results in section 4.3\n\n4. While section 4.3 shows that G (training dynamics) correlates with A (alignment), it would be useful to see how well G correlates to the final generalization performance, as this is the true objective of the active learning algorithm. See the questions and strength/weaknesses section", " In this paper, the authors address the problem of active learning in the setting where the learning model is over-parameterized with respect to the data. 
They note that theories of active learning are based upon assumptions that hold for hypothesis classes with lower parameterizations but may not hold in the over-parameterized setting, and propose to use newer tools from generalization theory to theoretically ground active learning for deep neural networks. They propose to use tools for studying the relationship between training dynamics and generalization performance (such as the NTK) as the basis for new deep active learning heuristics to select unlabeled data points in the pooled setting, eschewing traditional methods that select points that maximize either labeled data diversity or model uncertainty. They demonstrate that selecting data points which maximize the rate at which models train (the rate of descent of the training loss as a function of the number of parameter updates) is positively correlated with a lower generalization bound, thus establishing it as a bona fide heuristic for the acquisition function. \n Lines 49-52: Will definitely have to ensure that TD can be computed or well-estimated quickly, lest this be impractical\n\nLine 100: $K^{i,j}(x,x')$ is the inner product of the gradient of the $i$-th class probability and the gradient of the $j$-th class probability evaluated at $x$ and $x'$ respectively. I think? It's not totally clear, and having to flip back to the notation section much higher up is not ideal. \n\nLine 122: Similar criticism here. This section includes a lot of notation; it would help the reader here to better understand equation (8) to recall that $K^{ij}(X,X)$ is the empirical NTK.\n\nLine 150: This is a common issue in active learning. This can bias the selection of points, since the point-wise contribution to G(S) is not necessarily additive. See work by [Farquhar et al 2021](https://arxiv.org/abs/2101.11665) for an example in the case of using uncertainty to acquire labels. A test to see how vulnerable G(S) is to this phenomenon would be to run selection with $b = 1$, and then with $b$ increasing in various intervals, to see how frequently the single points acquired with $b = 1$ appear in the set of $b$ points acquired in a single batch selection.\n\nLine 154: I presume that $\\mathcal{K}^{ij}$ is the empirical NTK, but it would help the reader to specify what it represents, as it’s not super clear from the derivations in lines 149 - 155.\n\nLines 176-178: While using an IF-derived acquisition function makes some sense (in the sense that high-influence points will by identification provoke large-magnitude changes in the model parameters), it’s not clear that it’s a better or more valuable way to acquire points. \n\nLine 194: \"of\" missing here\n\nLines 217-219: In theorem 1, the factor $\\nu$ will change at $t \\rightarrow \\infty$, since it depends on the reciprocal of $n^2$. Is this accounted for in the proof?\n\nLine 260: I would reformulate this sentence here, since evaluating one experiment for two query rounds on one network and dataset does *not* do much to convince the reader that Lemma 2 holds. I think the authors should broaden the datasets they use to establish the validity across other datasets, especially with different class imbalances.\n\nLine 263: This figure is nice, but it lacks some context. How good is the correlation between the training dynamics proxy and alignment? What would help readers understand this better would be a comparison of different acquisition functions and how they correlate with alignment. 
This correlation argument would become much stronger.\n\nSection 5: There are two key points missing in the empirical analysis of dynamicAL versus other acquisition functions. \n\nThe first is an analysis of the degree of overlap between the points selected by dynamicAL and other methods; how many of the same points are chosen, and does the membership of the sets of acquired points converge over time, or do they diverge as successive rounds of acquisition take place?\n\nThe second is that in both Figures 3 and 5 we see only the means but not the variances of each set of retraining experiments. I suspect that there are two components contributing to the performance of each algorithm: the first is the selection of the points acquired, and the second is the smaller differences in training dynamics for each network due to initialization, the batch order in which the data is presented to the models, or even GPU noise. It would be more convincing to see error bars, so that we can gauge how much of an effect the selection of points has on the accuracy. Lines 284-285: I would recommend trying this out on datasets with more severe class imbalances. Each of CIFAR10, SVHN and Caltech 101 is rather well behaved and doesn’t exhibit any class imbalances, making them odd choices for active learning. Perhaps this is why there does not seem to be much of a drastic difference in the results of Figure 3?\n\nLine 162: The use of pseudo-labeling in this paper connects this work to recent work on self-supervised learning. I wonder if the authors have considered the implications of what current theories of self-supervised learning can do to inform this work, or what this work could do to transform a self-supervised learning method into a few-shot method that would perform better?\n Yes, they have.", " The authors prove that the convergence speed of training and generalization are positively correlated. Based on this observation, the authors propose a novel active learning algorithm (dynamicAL), which maximizes training dynamics. DynamicAL is theoretically motivated and generalizes existing methods such as uncertainty sampling methods and influence function methods.\n Originality:\n\nThe paper examines the training dynamics of the Neural Tangent Kernel, which is suitable for theoretical analysis. The authors prove that convergence speed and generalization are positively correlated.\nThe authors propose a new objective for an active learning algorithm: optimize the speed of training convergence.\n\nQuality:\nThe results are novel and interesting. However, the quality of the bound is not clear. For example, the authors introduce the notion of label alignment and show that label alignment is correlated with convergence and generalization. Based on this correlation, the authors conclude that convergence and generalization are positively correlated. While it is logical, I believe it oversimplifies the dependency between convergence and generalization, given that the bound is an approximation and the quality of the bound was not assessed.\nExperimental results can be improved by measuring performance for all rounds (> 10 rounds until the training set is exhausted). Currently, the authors evaluate and compare methods for less than 10 query rounds. It is not clear if the proposed method outperforms the baselines in all rounds.\nThe improvement is rather small (often less than 1%).\n\nClarity:\nIn general, the paper is easy to read. However, its flow and presentation can be improved. 
For example, the authors first present the method and later present a theoretical analysis of “Train faster → generalize better”. I suggest that some of the theoretical results be presented together with the method to improve the presentation quality. Other unnecessary lemmas should be moved to the appendix. Then, the authors can expand the theoretical section and the discussion of related work.\n\nSignificance:\nThe authors’ idea of optimizing for convergence speed is interesting and of potential significance, as it can be explored by a wider community in other fields. However, the authors should improve the presentation and clarity of the paper, while also updating the experimental results.\n Can you compare the proposed method with other baseline methods for rounds > 10?\n\nCan you discuss the tightness of the bound in the relationships between convergence and generalization?\n Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 2 ]
[ "hYl3yke91Vp", "I-HemXrWAES", "fO0rGtE_3i", "Y0o1aRjqNRS", "1Sv0-h2jNvX", "oBPyMxXNte", "rBYyWEpHoqf", "Y0o1aRjqNRS", "mmTLrVy3a3ai", "GcSOCZ0xlT", "G_fn61eZFj-", "SzpOhpMbhPa", "axcumpLmIuE", "gPmbnRlrCV4", "nips_2022_aJ5xc1QB7EX", "nips_2022_aJ5xc1QB7EX", "nips_2022_aJ5xc1QB7EX", "nips_2022_aJ5xc1QB7EX" ]
nips_2022_vdxOesWgbyN
Model Extraction Attacks on Split Federated Learning
Federated learning (FL) is a popular collaborative learning scheme involving multiple clients and a server. FL focuses on client's data privacy but exposes interfaces for Model Extraction (ME) attacks. As FL periodically collects and shares model parameters, a malicious client can download the latest model and thus steal model Intellectual Property (IP). Split Federated Learning (SFL), a recent variant of FL, splits the model into two, giving one part of the model (client-side model) to clients, and the remaining part (server-side model) to the server. While SFL was primarily designed to facilitate training on resource-constrained devices, it prevents some ME attacks by blocking prediction queries. In this work, we expose the vulnerability of SFL and show how ME attacks can be launched by malicious clients querying the gradient information from server-side. We propose five ME attacks that differ in the gradient usage in data crafting, generating, gradient matching and soft-label crafting as well as in the attacker data availability assumptions. We show that the proposed ME attacks work exceptionally well for SFL. For instance, when the server-side model has five layers, our proposed ME attack can achieve over 90% accuracy with less than 2% accuracy degradation with VGG-11 on CIFAR-10.
Reject
The paper studies the vulnerability of split federated learning to model extraction attacks. The paper provides five attacks and evaluates them experimentally. The authors also provided additional experimental results during the author rebuttal. While the topic and techniques are interesting, reviewers raise concerns about the novelty and the lack of experiments on standard FL datasets (e.g., LEAF) or with a large number of clients. While the authors addressed some of these concerns during the rebuttal, the paper can benefit from (a) explaining the novelty of the contributions, (b) clarifying the assumptions made in the paper, (c) explaining if the paper considers cross-device or cross-silo federated learning, and (d) adding more experiments on standard FL datasets and tasks.
train
[ "kKhqPi4bx66", "6rcDCgmbem", "RO6cLx8n77x", "u4yz63jlC0W", "2B7WGMIIwK-", "BdMlQCndw04", "eaWcCrnjC5e", "FHDXEJS0zMW", "6z0kI3ytJ1H", "hH_ooIzqUaM", "5McwCk4_5-i", "XkN21yG7xdY", "lczOKdJhtsa", "3KFwNH-Vni", "euJ40q_hPyf", "Qg-gVEdwo0", "Ov_cwfW0YsS", "XmMKYe6dfr" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' detailed response. I will keep my rating for this work.", " **Response to Q2**: We kindly disagree that showing the vulnerability of split federated learning is not a major finding. The model extraction attack (MEA) we are focusing on is a unique attack vector for SFL that does not exist for FL. In FL, all participating clients can get access to the entire model, thus performing MEA is meaningless as clients already have the full model. But in SFL, clients cannot get access to the entire model, but only the client-side model, which is a small part of the model (before the cut-layer). This is why SFL was typically seen as an “IP-protected” FL. In this paper, we point out the vulnerabilities of SFL and refute the claim of SFL being secure from MEA.\n\n**Response to Q3**: In this paper, we consider all three data assumptions, namely, no-data, out-of-distribution data and in-distribution data; Section 4 describes the attacks in each case. The no-data case is justified since **participants do not have to have training data**. Participants can be free-riders [1-2], that is, they claim to have abundant data but actually do not. Such a case exists since FL/SFL does not have an effective checking mechanism and everyone can participate. Our “no data” setting corresponds to the free-rider setting and can include cases where the client has no data at all or just randomly generated noise data.\n\nReference\n\n[1] Y Fraboni et al., Free-rider Attacks on Model Aggregation in Federated Learning, PMLR 2021\n\n[2] L Lyu et al., Collaborative fairness in federated learning, Federated Learning, Springer 2020 \n", " Hi,\n\nThank you authors for addressing my concerns. I keep the score. ", " I appreciate the author's responses and revisions. Although most of my questions have been addressed, I still have two concerns.\n\nFirst, the novelty of the paper is limited (Q2). Federated learning has been shown vulnerable in many recent works. Showing the vulnerability of split federated learning is not a major finding.\n\nSecond, the data assumption (no data or auxiliary data) is not practical (Q3). Since the attacker is assumed as a participant client, the attacker should have access to the in-distribution data.", " Dear reviewer, \n\nThank you for the detailed review.\n\nWe hope that our responses addressed all your concerns. We would be very happy to address\nfollowup questions if any.\n\nLooking forward to hearing back from you.\n\nBest,\nAuthors", " The original submission was lacking sufficient a discussion of societal impacts.\n\n The authors have since added a short section to the paper discussing societal impact. This section could be expanded to include a more in depth discussion of the ethical tensions in researching attack methods (which can both spur research no defenses, but also advances and publicizes attack methods). However, it does now meet the requirements. No remaining ethical issues.", " We thank all reviewers for their precious time and effort in putting in these reviews. We carefully considered the comments and revised our manuscript. In summary, we made the following changes in the revised version: \n\n1. We revised the related work (Section 2.1) to include earlier papers on “model-split” idea. \n\n2. We pointed out the similarity between Train-ME and the “model completion” method that was recently published in USENIX’ 22 in Section 4.2.\n\n3. 
We ran experiments corresponding to a naive baseline that directly uses the available training data to train the surrogate model, and updated Table 2 to include these results.\n\n4. We added a “Broader Impact” subsection (Section 8) which states the societal impact of this work.\n\n5. We included a diagram in Appendix A.2 to help better understand the process of all five proposed extraction attacks.\n\n6. We included further empirical results on FEMNIST (one of the LEAF datasets) in Appendix A.9.\n\n7. We included time-cost measurement results for all five attacks in Appendix A.10.\n", " **Q5**: The experiment only considers 10 clients. However, in real-world settings, the number of clients should be much larger. What’s the performance of the proposed attacks when a large number of clients participate in the SFL?\n\n**A5**: Sorry for causing confusion. The number of clients is actually different for the fine-tuning and training-from-scratch cases. We have revised the description to make it clearer. \n\nFor the fine-tuning case, we simulate 50 clients by using 1K data per client. For the training-from-scratch case, we have results for the 5-client and 10-client settings in Appendix A.4. Here we see that increasing the number of clients generally reduces the model extraction performance, since the number of queries per data sample is smaller. Unfortunately, we do not have simulation results for more than 10 clients. We will certainly investigate the scalability problem in our future work.\n", " **Q1**: The paper claims that the client-side attacker cannot use the existing model extraction attacks in split federated learning because the client cannot get the predictions from the server. However, in the inference phase, the clients can get the predictions. The proposed attacks can be further compared with the existing model extraction attacks through predictions in the inference phase.\n\n**A1**: Thanks for pointing out this concern on our threat model. Below we list several reasons to justify our assumption that clients cannot access the prediction:\n1) The SFL scheme in [Thapa, 2022], on which this work is based, does not provide clients with prediction results, and so we too assume the same.\n2) There exist many systems that do not expose a prediction API to customers. For example, Google keyboard uses an FL-trained model as part of the next-word prediction service, where the final suggested word is a combination of the FL-trained model with another model that is hidden from the customer (refer to [1] by Google). Thus users do not know the exact prediction of the FL model, which is built internally. Similarly, we believe that the prediction result of the SFL-learned classification engine should be wrapped up internally to improve companies’ service quality, instead of being exposed directly to customers.\n3) Should it be possible for the prediction to be accessed, the model extraction (ME) problem in SFL would become an easier problem (a sub-problem) compared to ME on prediction APIs, which has been thoroughly investigated in [2, 3]. As we demonstrate in Fig. 3, existing methods can succeed in ME with high accuracy and fidelity. For the server (SFL scheme and model owner), it makes sense not to expose the prediction. \n\nReferences\n[1] T Yang et al. - 2018 - Applied Federated Learning: Improving Google Keyboard Query Suggestions\n[2] F Tramer et al. - 2016 - Stealing Machine Learning Models via Prediction APIs\n[3] M Jagielski et al. - 2020 - High accuracy and high fidelity extraction of neural networks
\n\n**Q2**: The novelty of this paper is limited. Most proposed attacks leverage the existing model extraction attacks. Train-ME is the most effective attack. However, Train-ME is a very straightforward attack, which does not even need the gradient query. This may indicate that model extraction attacks in SFL are not a trivial problem.\n\n**A2**: The major contribution of this paper is exposing the vulnerability of SFL. We propose five attacks that can obtain a very accurate and high-fidelity surrogate model. Thus the claim that baseline SFL provides IP protection is no longer true! On a more positive note, we also show how resistance can be improved by leaving more layers on the server side, and by applying regularization techniques during training.\n\n**Q3**: The no-data assumption is confusing. If the attacker is a client, the attacker should at least have access to the client’s data.\n\n**A3**: Our “no data” assumption means the attacker has no data at all or only randomly generated noise data. We have elaborated on this in the introduction. For the case when the attacker has training data, we have discussed the corresponding attacks in Section 4.2. \n\n**Q4**: It would be great if the proposed attacks could be compared with some baselines. For example, since the attacker is one of the clients, what is the attack performance when the attacker uses the client’s data to train a model?\n\n**A4**: Thanks for the suggestion. We performed the naive baseline attack (which uses the client’s data to train a model) for the VGG-11 architecture on CIFAR-10, assuming that the attacker has only 2% of the training data (1K samples). The table below shows that the accuracy/fidelity performance is much lower compared to the model extraction attacks. \n\n| 1K data | Naive Baseline | Train-ME (N=2) | SoftTrain-ME (N=2) | Train-ME (N=8) | SoftTrain-ME (N=8) |\n|--------------|:---------------:|:---------------:|:------------------:|:---------------:|:------------------:|\n| Accuracy | 49.64 | 92.05 | 91.99 | 70.28 | 71.32 |\n| Fidelity | 50.62 | 99.29 | 99.10 | 71.79 | 72.45 |\n\nWe have included the simple baseline results in Table 2 for both the fine-tuning and training cases. We have also listed the differences between the naive baseline and the proposed attacks in Table 1.\n\n**Q5**: The experiment only considers 10 clients. However, in real-world settings, the number of clients should be much larger. What’s the performance of the proposed attacks when a large number of clients participate in the SFL?\n\n**A5**: Please see our response in part 2.", " **Q5**: The potential societal impact may need to be stated more explicitly.\n\n**A5**: Thanks for the suggestion. We strongly agree that pointing out the societal impact is very important for attack work. We added a “Broader Impact” subsection which states the societal impact of this work: “This work points out the vulnerability of Split Federated Learning (SFL) to model extraction attacks, and should prevent a naive adoption of SFL as a model IP protection method. 
We believe that the attacks presented here would initiate research in the development of defense schemes to mitigate such attacks, help design a more robust SFL, and possibly help in the design of neural network models that are inherently resilient to such attacks.”", " We merged the descriptions under weaknesses and questions for better readability:\n\n**Q1**: When evaluating the model extraction attacks with training data available, it is intuitive to compare the attacks with a simple baseline that uses the training data to train a model locally on the malicious client. I would expect the authors to perform a simple comparison with such a baseline. This is essential because if the attacker can locally train a better model, then there is no motivation for the attacker to perform the model extraction attacks. What is the accuracy of the model obtained via the aforementioned baseline, i.e., locally trained by the attacker with the available training data? Do the proposed attacks outperform it?\n\n**A1**: Thank you for the suggestion. We performed the naive baseline attack for the VGG-11 architecture on CIFAR-10 and CIFAR-100, assuming that the attacker has only 2% of the training data (1K samples). The table below shows that the accuracy/fidelity performance is much lower compared to the model extraction attacks. \n\n| 1K data | Naive Baseline | Train-ME (N=2) | SoftTrain-ME (N=2) | Train-ME (N=8) | SoftTrain-ME (N=8) |\n|--------------|:---------------:|:---------------:|:------------------:|:---------------:|:------------------:|\n| Accuracy | 49.64 | 92.05 | 91.99 | 70.28 | 71.32 |\n| Fidelity | 50.62 | 99.29 | 99.10 | 71.79 | 72.45 |\n\nWe included the simple baseline results in Table 2 for both the fine-tuning and training cases. We also listed the differences between the naive baseline and the proposed attacks in Table 1.\n\n**Q2**: There is no analysis on the complexity of the proposed attacks. It is interesting to see some analysis on the attack cost, e.g., O() notations with respect to the number of parameters in the server-side model.\n\n**A2**: Thank you for raising this concern. \nWe did not include a query complexity analysis because our proposed methods are not based on cryptanalytic model extraction (such as Carlini et al., Cryptanalytic Extraction of Neural Network Models). Our attacks are based on approximate model extraction, where we can better approximate the target model with more queries, but performance may vary from case to case. \n\nSo here we include time measurement results to provide a sense of how much time it takes to deploy the proposed attacks. We show the time for preparation and surrogate model training for all five attacks. For example, the preparation phase includes crafting inputs in Craft-ME or crafting soft labels in SoftTrain-ME. The measurement is done using the Python time library on a desktop PC equipped with an R7-5800X CPU and a single RTX-3090 GPU.\n\n| Time Cost (s) | Craft | GAN | GM | Train | SoftTrain |\n|--------------------------|:--------:|:-------:|:----------:|:-------:|:------------:|\n| Preparation | 317.8 | 44.5 | 30.7 | 4.7 | 18.9 |\n| Surrogate Training | 381.2 | 339.3 | 5523.1 | 313.4 | 949.5 |\n| Total | 699.0 | 383.8 | 5553.8 | 318.1 | 968.4 |\n\nMore details have been included in a new section in Appendix A.10 in the revised paper.\n\n**Q3**: There are some defenses in federated learning that leverage statistical analysis of the local gradients (known as Byzantine-robust FL methods). 
Can existing statistics-based Byzantine-robust FL methods defend against the attacks?\n\n**A3**: Four of our proposed attacks require poisoning the model (sending false inputs, or correct inputs with false labels), and Byzantine-robust FL methods are known to be robust to such poisoning effects. However, we are not sure whether these methods will be applicable to SFL and will be able to counter gradient-based ME. Thank you for providing this lead. We will investigate this in the near future.\n\n**Q4**: In Section 5.3, it is claimed that \"This indicates Resnet architecture is more resistant to ME attack\". It would be interesting to see some explanation on why this is the case. Is there any possible reason why ResNet shows better robustness to ME attacks?\n\n**A4**: Our intuitive understanding of why ResNet is more resistant than the VGG architecture is that its residual connections allow raw input to be passed to the later layers. In the VGG architecture, front layers tend to play an important role as they decide which features are fed to later layers. But in ResNet, later layers can receive both the extracted features and the raw inputs, thus, to some degree, reducing the importance of the front layers that are public to the attacker.\nWe would like to perform a deeper analysis in our future work, with the hope that we can derive special architectures which have a natural resistance against the proposed attacks. \n\n**Q5**: The potential societal impact may need to be stated more explicitly.\n\n**A5**: Please see part 2 of our response.\n", " **Q1**: Different attack approaches are explained in an abstract way. It is hard to understand the attack assumption, how to craft data, how the attacker works, etc.\n\n**A1**: We apologize that the attack model was not clear! The attack assumptions are discussed in detail in Section 3.1, and the methodologies for each of the proposed attacks are separately introduced in Section 4. To better explain the procedure of all five attacks, we have added a diagram in Appendix A.2 to help show the process of the extraction attacks. We will release our code publicly as well.\n\n**Q2**: In the Craft-ME method, both the surrogate model and the input x_r are randomly initialized, and the surrogate model generates x_r since the server-side model won't provide the loss with respect to x_r. Even though we can generate plausible (x_r, c) pairs via such a random surrogate model, how can these collected (x_r, c) instances be applied to train the surrogate model well?\n\n**A2**: In our assumption, clients can get access to $\nabla_{A} L$, and have access to the client-side model C. Given $A=C(x_r)$, clients can apply the chain rule ($\nabla_{x_r} L = \frac{\partial L}{\partial A} \frac{\partial A}{\partial x_r} = \nabla_{A} L \frac{\partial C(x_r)}{\partial x_r}$) to acquire $\nabla_{x_r} L$, which can be used to update $x_r$ to make the loss small. Once some pairs of ($x_r$, c) are generated, these small-loss instances are used as normal data to train the surrogate model using cross-entropy loss.
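To make this chain-rule step concrete, the following is a minimal PyTorch-style sketch of one update of $x_r$ (this is our own illustration, not the paper's released code; `client_model` and `server_backward` are hypothetical names, with the latter standing in for the gradient $\nabla_{A} L$ returned by the server in a normal SFL round):

```python
import torch

# Hypothetical sketch of the chain-rule update described above (illustration only).
# client_model: the client-side model C, fully known to the malicious client.
x_r = torch.randn(1, 3, 32, 32, requires_grad=True)  # randomly initialized crafted input
optimizer = torch.optim.SGD([x_r], lr=0.1)

A = client_model(x_r)        # smashed data A = C(x_r), sent to the server
grad_A = server_backward(A)  # assumed protocol step: server returns dL/dA
optimizer.zero_grad()
A.backward(grad_A)           # autograd applies the chain rule: dL/dx_r = dL/dA * dA/dx_r
optimizer.step()             # update x_r so that the loss becomes small
```

Repeating this loop until the loss is small yields the ($x_r$, c) pairs described above.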
\n\n**Q3**: Gradient matching model extraction (GM-ME): since the prediction query is disallowed, V is unknown in this setting. How to compute $\nabla_{x_i} L(V(C(x_i)), y_i)$ (see line 196, eq. 1)? Normally, the victim part will only compute gradients with respect to C's party weights.\n\n**A3**: Actually, even if V is not known, $\nabla_{x_i} L$ can still be accessed. Once the server calculates L, it performs backward propagation on V and sends $\nabla_{C(x_i)} L$ back to the client (the detailed process is illustrated in Algorithm 1, line 12). The client can then apply the chain rule (described in A2) using the client model C and $\nabla_{C(x_i)} L$ to acquire gradients on $x_i$.\n\n**Q4**: Training-based model extraction (Train-ME): how much training data is used to train such a surrogate model, compared with the data size in SFL settings?\n\n**A4**: Sorry for not clearly describing the data setting. We have added the necessary descriptions in the caption of Table 2. We use 1K training samples (randomly sampled) for the fine-tuning cases. For the training-from-scratch experiments, we use a 10-client SFL scheme where each client (including the malicious one) has 5K training samples.\n\n**Q5**: Gradient matching model extraction (GM-ME) relies on the gradient of x_i, which is infeasible in reality.\n\n**A5**: Please see the responses under **A2** and **A3**.\n\n**Q6**: Training-based model extraction (Train-ME)'s idea has been explored by previous work using model transferability; see [1]:\n\n[1] Fu, Chong, et al. \"Label inference attacks against vertical federated learning.\" 31st USENIX Security Symposium (USENIX Security 22), Boston, MA. 2022.\n\n**A6**: Thanks for pointing this out. At the time of submitting the paper in May, we were not aware of this USENIX conference paper. We have included it in the updated version and pointed out the similarity between our Train-ME and the “model completion” technique in [1].\n", " **Q1**: What do the authors think about their possible way of defending against such attacks through regularization? How good is it?\n\n**A1**: The current paper is on different attack methods and their evaluation. We plan to look at different defense techniques next. For starters, we evaluated a naïve method using L1 regularization selectively on the client-side model. The goal was to reduce the capability of the client-side model. Figure 4 (e) shows that L1 regularization can reduce the extracted fidelity by 5 to 10%, suggesting this can be a possible defense technique. There could be other promising defense methods based on regularization. We plan to look at them in the near future. \n\n**Q2**: Have the authors also looked at the LEAF datasets for the experiments? LEAF seems to be the standard dataset for FL papers, with natural client partitions and other FL constraints.\n\n**A2**: Thanks for bringing this up. We provide further empirical results on the official LEAF dataset FEMNIST (40K samples, 62 classes) for the five attacks in Appendix Table 8, using the same fine-tuning setting as in Table 2. The model is a VGG-11 model that achieves 74.62% validation accuracy.\n\n| FEMNIST | N | Craft | GAN | GM | Train | SoftTrain |\n|:------------:|:-:|:-----:|:-----:|:-----:|:-----:|:---------:|\n| Accuracy (%) | 2 | 53.20 | 10.59 | 56.67 | 70.32 | 75.70 |\n| | 3 | 43.57 | 7.19 | 22.53 | 68.47 | 74.93 |\n| | 4 | 43.04 | 5.27 | 9.87 | 68.80 | 74.42 |\n| | 5 | 40.50 | 4.02 | 3.78 | 67.70 | 74.46 |\n| Fidelity (%) | 2 | 59.28 | 11.39 | 69.70 | 82.56 | 83.87 |\n| | 3 | 48.10 | 7.35 | 25.14 | 77.52 | 81.24 |\n| | 4 | 46.58 | 5.20 | 10.10 | 75.61 | 77.39 |\n| | 5 | 42.11 | 3.80 | 3.68 | 71.97 | 76.30 |\n\n**Q3**: Some related work is missing. The authors mention that Split Learning is first introduced in reference \"[Gupta and Raskar, 2018]\" while the idea goes further back and there are already several papers in the field. 
For example (non-exhaustive list): [Kang, Yiping et al. 2017], [Yousefpour, Ashkan et al. 2019], [Liu, Peng et al. 2018] and [Surat Teerapittayanon et al. 2017]\n\n**A3**: Thank you for pointing out this important issue. While the model-split idea was proposed earlier, it was for distributed inference. This paper focuses on collaborative training, and so we had quoted Gupta and Raskar's paper as it explicitly demonstrated the use of the model-split idea for such cases. We agree with your concern and have included these works in Section 2.1 to acknowledge them as the inventors of the “model split” idea. We maintain that [Gupta and Raskar, 2018] was the first to use the model-split idea for DNN training applications. ", " The request for ethics review comes from one reviewer, who would like the authors to add one more paragraph to clarify the societal impacts of the work. Three other reviewers did not see any ethical problems.\n\nIn my view, this is a reasonable request, and I believe that the authors can easily respond with the requested paragraph. The authors' motivation for this work is clearly ethical (as well as technical, of course). They already have the knowledge to write this paragraph. The requested paragraph will help to explain the authors' ethical motivation to a broader audience.\n\nTo state this clearly: There are NO ethical problems or issues with this paper. There is only a suggestion to explain the agreed ethical issues for a broader audience. The request for ethical review was about making an agreed position clearer to a broader audience. Please write one more paragraph in the Discussion or Conclusion to help readers to understand the ethical implications of the work. I believe that the authors and reviewers agree about the ethical issues. The request is for a clearer explanation. This will be a service to the readers. ", " The authors expose the vulnerability of split federated learning and show how model extraction attacks can be launched by malicious clients querying the gradient information from the server side.\n\nThey propose five different variants of model extraction (ME) attacks on split federated learning. The attacks use different gradient schemes, including data crafting, data generating, gradient matching and soft-label crafting. They also make different assumptions for the data, such as no data, only auxiliary data (out-of-distribution data) and training data (in-distribution data). They interestingly find that in a 5-layer-in-server SFL, an ME attack can derive a surrogate model with over 90% accuracy and less than 2% accuracy degradation.\n\nFor the experiments they tried both fine-tuning the SFL models and training from scratch for different gradient consistency.\nThey showed that the ME attacks can succeed even without any data when fine-tuning. They also concluded that the ME attacks would succeed better with an increase in the number of layers on the server side. Strengths:\n- It’s an original work and perhaps the first one on possible attacks on SFL systems.\n- The quality of the work is good, and the writing is quite clear. \n\nWeaknesses:\n- Some related work is missing. The authors mention that Split Learning is first introduced in reference \"[Gupta and Raskar, 2018]\" while the idea goes further back and there are already several papers in the field. For example (non-exhaustive list):\n* Kang, Yiping, et al. 
\"Neurosurgeon: Collaborative intelligence between the cloud and mobile edge.\" ACM SIGARCH Computer Architecture News 45.1 (2017): 615-629.\n* Yousefpour, Ashkan, et al. \"Guardians of the deep fog: Failure-resilient DNN inference from edge to cloud.\" Proceedings of the First International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things. 2019.\n* Liu, Peng, Bozhao Qi, and Suman Banerjee. \"Edgeeye: An edge service framework for real-time intelligent video analytics.\" Proceedings of the 1st international workshop on edge systems, analytics and networking. 2018.\n* Surat Teerapittayanon, Bradley McDanel, and HT Kung. 2017. Distributed deep neural networks over the cloud, the edge and end devices. In Distributed Comput- ing Systems (ICDCS), 2017 IEEE 37th International Conference on. IEEE, 328–339. What do authors think about their possible way of defending such attacks through regularization? How good is that? \n\nHave the authors also looked the LEAF datasets for the experiments? LEAF seems to be the standard dataset for FL papers, with natural client partitions and other FL constraints. The defense mechanism is quite basic and it would be nice to see some strong defense mechanisms with more analyses and numerical experiments", " The authors perform five different model extraction attacks in settings of split federated learning (SFL). Such attacks exploit gradient information from the server-side to conduct model extraction attacks. Besides, this paper proposed an approach to obtain the necessary shadow data for constructing attackers. ** strengths \n 1. Give a comprehensive investigation of model extraction attacks in split federated learning settings, and provide rich experimental results. \n** weaknesses\n 1. Different attack approaches are explained in an abstract way. It is hard to understand the attack assumption, how to craft data, how the attacker works, etc. 1. Craft-ME method, both the surrogate model and input x_r are randomly initialized, and the surrogate model generates x_r since the server-side model won't provide loss respect with x_r. Even though we can generate plausibly (x_r, c) labels via such a random surrogate model, how can these collected (x_r, c) instances be applied to train the surrogate model well?\n 2. Gradient matching model extraction (GM-ME): since the prediction query is disallowed, V is unknown in this setting. how to compute \\nabla_{x_i} L(V(C(x_i)), y_i) (see line 196, eq. 1)? Normally, Victim part will only compute gradient with respect to C's party weights. \n 3. Training-based model extraction (Train-ME): how much training data are used to train such a surrogate model, compared with the data size in SFL settings. 1. Gradient matching model extraction (GM-ME) relies on the gradient of x_i, which is infeasible in reality. \n 2. Training-based model extraction (Train-ME)'s idea is done by previous work, using model transferability. see [1]\n\n[1] Fu, Chong, et al. \"Label inference attacks against vertical federated learning.\" 31st USENIX Security Symposium (USENIX Security 22), Boston, MA. 2022.", " Federated learning is a distributed learning paradigm that allows multiple clients and a server to collaboratively learn a machine learning model without data sharing. However, federated learning still suffers from model privacy concerns, since the server and the clients all have access to the full global model. 
Split federated learning is proposed as a solution to both communication reduction and privacy enhancement. In split federated learning, the machine learning model is split into two parts, one held by the server and the other one held by the clients. Since the clients have access to only part of the model, split federated learning was considered to be robust to client-side model extraction attacks. However, in this work, the authors find that this is a false sense of security. Specifically, a malicious client can perform model extraction attacks to learn a surrogate model that has similar accuracy/predictions compared to the target server-side model. ### Strengths\n- Five different model extraction attacks are proposed in this work. The authors consider different settings for the attack, e.g., whether the attacker (malicious client) has local training data. \n- The authors conduct extensive experiments to show the effectiveness of their model extraction attacks. It is impressive that the attacks do work in many cases, and it is interesting to see the impact of different parameters, e.g., the number of queries, the number of layers in the server-side model, and model architectures.\n- The paper is generally well-written and easy to follow. \n\n### Weaknesses\n- When evaluating the model extraction attacks with training data available, it is intuitive to compare the attacks with a simple baseline that uses the training data to train a model locally on the malicious client. I would expect the authors to perform a simple comparison with such a baseline. This is essential because if the attacker can locally train a better model, then there is no motivation for the attacker to perform the model extraction attacks. \n- There is no analysis on the complexity of the proposed attacks. It is interesting to see some analysis on the attack cost, e.g., $O()$ notations with respect to the number of parameters in the server-side model. \n- There are some defenses in federated learning that leverage statistical analysis of the local gradients (known as Byzantine-robust FL methods). I am curious whether these methods are sufficient to defend against the proposed attacks. \n- In Section 5.3, it is claimed that \"This indicates Resnet architecture is more resistant to ME attack\". It would be interesting to see some explanation on why this is the case. \n- The potential societal impact may need to be stated more explicitly.\n- Writing issues. On line 3 of Algorithm 1, I would say it is better to average model parameters (i.e., $W$) instead of models (i.e., $C$). In Figure 4, the captions of subfigures mismatch. - What is the accuracy of the model obtained via the aforementioned baseline, i.e., locally trained by the attacker with the available training data? Do the proposed attacks outperform it?\n- What is the complexity of the proposed attacks?\n- Can existing statistics-based Byzantine-robust FL methods defend against the attacks?\n- Is there any possible reason why ResNet shows better robustness to ME attack? The negative societal impact may need to be stated more explicitly, e.g., via adding a paragraph in the discussion.", " This paper investigates model extraction attacks on split federated learning. The paper assumes that the attacker cannot access the model predictions from the server. Hence, the paper leverages the gradients to extract the server-side model. Five model extraction attack methods are proposed for different data assumptions. Strengths:\n1. 
The paper investigates five model extraction attacks against SFL.\n\n2. The evaluation considers several interesting scenarios, such as non-iid data distributions, adversarial attacks, and attacks without knowing model architectures.\n\nWeaknesses:\n1. The paper claims that the client-side attacker cannot use the existing model extraction attacks in split federated learning because the client cannot get the predictions from the server. However, in the inference phase, the clients can get the predictions. The proposed attacks can be further compared with the existing model extraction attacks through predictions in the inference phase.\n\n2. The novelty of this paper is limited. Most proposed attacks leverage the existing model extraction attacks. Train-ME is the most effective attack. However, Train-ME is a very straightforward attack, which does not even need the gradient query. This may indicate that model extraction attacks in SFL are not a trivial problem.\n\n3. The no-data assumption is confusing. If the attacker is a client, the attacker should at least have access to the client’s data. \n\n4. It would be great if the proposed attacks could be compared with some baselines. For example, since the attacker is one of the clients, what is the attack performance when the attacker uses the client’s data to train a model?\n\n5. The experiment only considers 10 clients. However, in real-world settings, the number of clients should be much larger. What’s the performance of the proposed attacks when a large number of clients participate in the SFL?\n\n6. The proposed attacks do not perform well on complex datasets (e.g., ImageNet, CIFAR100) and complex models (e.g., ResNet). \n\n7. The scope of the paper is pretty narrow. The proposed attacks can only be applied to split federated learning. Please clarify the no-prediction assumption (Weakness 1) and the no-data assumption (Weakness 3). The authors addressed the limitations and potential negative societal impact.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 5 ]
[ "5McwCk4_5-i", "u4yz63jlC0W", "lczOKdJhtsa", "FHDXEJS0zMW", "XmMKYe6dfr", "nips_2022_vdxOesWgbyN", "nips_2022_vdxOesWgbyN", "XmMKYe6dfr", "XmMKYe6dfr", "Ov_cwfW0YsS", "Ov_cwfW0YsS", "Qg-gVEdwo0", "euJ40q_hPyf", "nips_2022_vdxOesWgbyN", "nips_2022_vdxOesWgbyN", "nips_2022_vdxOesWgbyN", "nips_2022_vdxOesWgbyN", "nips_2022_vdxOesWgbyN" ]
nips_2022_kUnHCGiILeU
CageNeRF: Cage-based Neural Radiance Field for Generalized 3D Deformation and Animation
While implicit representations have achieved high-fidelity results in 3D rendering, deforming and animating the implicit field remains challenging. Existing works typically leverage data-dependent models as deformation priors, such as SMPL for human body animation. However, this dependency on category-specific priors limits them from generalizing to other objects. To solve this problem, we propose a novel framework for deforming and animating the neural radiance field learned on \textit{arbitrary} objects. The key insight is that we introduce a cage-based representation as the deformation prior, which is category-agnostic. Specifically, the deformation is performed based on an enclosing polygon mesh with sparsely defined vertices, called a \textit{cage}, inside the rendering space, where each point is projected into a novel position based on the barycentric interpolation of the deformed cage vertices. In this way, we transform the cage into a generalized constraint, which is able to deform and animate arbitrary target objects while preserving geometry details. Based on extensive experiments, we demonstrate the effectiveness of our framework in the tasks of geometry editing, object animation and deformation transfer.
Accept
All reviewers consider the central idea of the paper to be novel and interesting, and agree that deformable NeRFs are a valuable research area. All reviewers agree that the initial paper was poorly presented, and that the revised version is considerably improved. While the authors' rebuttal regarding the lack of suitable datasets to evaluate deformable NeRFs is accepted, the qualitative presentation could be improved further; in particular, it is recommended to include video sequences of continuous deformation, which will more clearly show the artifacts of both the proposed and other methods.
train
[ "AvefCYkEgGJ", "jbY6bX2xetp", "9brrPgIBGB", "UqVBG66Xd", "lULOkqkxKBf", "xDQ51os4Dsn", "vWC8xXwWBj4", "odKun87dErh", "fdP5umkrtW", "k37IHb3akv", "XGM5ISTKcfV", "vLCwspF5Gi", "qVic8M5nxix", "8Q9Oavka_v2", "xQdx6IzOvfY", "fs7tUxaUoZ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the effort and time you contributed to reviewing our manuscript. We are glad that our response addressed your concerns.", " Thank you for these clarifications. I think that incorporating them somehow into another revised version would be helpful, as I was not confident in my understanding of test-time rendering.\n\nSince my concerns have been addressed and, given the already revised version, I trust the authors to improve their submission with the remaining questions I and other reviewers still had, I will increase my rating to weak accept. ", " We sincerely thank you for the review and comments, which have definitely improved the quality of this paper. We hope the following responses and results can answer your questions. We enjoyed the fruitful discussions with you, thanks again for the careful review and insightful comments.\n### Question A: The function and purpose of using explicit mesh.\nThe functionalities/purposes of using an explicit mesh are three-fold:\n- An explicit mesh enables the cage deformer module to automatically generate deformed cages in tasks like deformation transfer and pose reenactment, i.e., the capability of Neural Cages gets transferred to the radiance field.\n- The explicit mesh can be deformed into a novel shape using cage-based deformation. The deformed mesh is used in test-time rendering to act as a 3D mask with a vicinity threshold of $\\epsilon$. This 3D mask can effectively reduce the sample points down to 10% of the original number required in volume rendering.\n- **The insufficiency of cage-only deformation** An explicit mesh enables us to robustly establish the backward deformation, i.e., from the deformed space to canonical space. Cage-based deformation can express the forward deformation correctly, i.e., from the canonical space to the deformed space, but not always vice versa. In some cases, both Neural Cages and our cage deformer module will output a deformed cage with overlapping faces or collapsed vertices. Naively taking the MVC value calculated in such deformed cages will induce artifacts, since points that reside in such regions cannot be correctly projected into the canonical space. (This property can be interpreted as:CBD using a cage with a self-overlapping structure is similar to applying a matrix that is not full rank.) We provide backward deformation results on SMPL model using cage with and without self-overlapping structures via this anonymous [link](https://postimg.cc/MfjKYBsZ).\nAdditionally, if the cage vertices is manually edited without overlapping faces and collapesd vertices, we can generate MVC field directly based on this deformed cage, i.e., without the need of an explicit mesh, since this cage deformation is inversable from deformed space to canonical space.\n\n### Question B:\nThe test-time rendering process is exactly the same as you've described.\nWe sample points within the vincinity $\\epsilon$ of the **deformed mesh** in Step 3 of actual rendering.", " We sincerely thank you for providing a novel perspective to our research and valuable insights through your review and comments. We will incorporate these considerations into the manuscript to our best effort. We enjoyed the process of the author-reviewer discussion with you. We are happy that our response and updated manuscript have answered your concerns. Thanks again for the careful review and insightful comments.", " \n### Question 5: Experiment Design and Datasets\nDeformable radiance field is a pioneering research topic. 
", " We sincerely thank you for providing a novel perspective on our research and valuable insights through your review and comments. We will incorporate these considerations into the manuscript to the best of our ability. We enjoyed the process of the author-reviewer discussion with you. We are happy that our response and updated manuscript have answered your concerns. Thanks again for the careful review and insightful comments.", " \n### Question 5: Experiment Design and Datasets\nThe deformable radiance field is a pioneering research topic; standard benchmarks for such experiments are lacking.\nAs for the choice of our datasets and experiments:\n- The experiments \"Geometry Editing\" and \"Pose Reenactment\" are representative in the aspects of learning from NeRF-like datasets and deforming both general objects and humanoids.\n- The experiment \"Neural Animation\" is conducted to demonstrate the quality and accuracy of our deformable radiance field.\n- We conduct our experiment on the H36M dataset to demonstrate that our framework is capable of learning the canonical radiance field from the deformed space, i.e., multi-view images of a posing human, and of deforming real-world targets.\nOur framework is also generalizable to standard reconstruction datasets such as DTU. \n\n### Algorithm used in extracting the mesh:\nThe KNN method mentioned in our original manuscript is an early practice of extracting the mesh, implemented using the built-in function of the mesh processing library [Open3D](http://www.open3d.org/). The KNN method yields a similar result to the marching cubes algorithm. For simplicity, we choose the marching cubes algorithm in both our manuscript and the actual experiments.\n\n### Notation of $\mathcal{C}(\cdot)$\nThank you for your suggestion; we will follow your advice and update our equations accordingly.", " Thank you for your valuable review and comments, which have improved the quality of our paper. We provide these additional responses and results, which we hope can clarify your concerns. We enjoyed the effective discussion with you, regardless of whether this paper is accepted. \n### Question 1: Proofreading\nThank you for pointing out these mistakes and inconsistencies; we will address these carefully in our manuscript.\n### Question 2: \n- **Source of the template cage:** We use two template cages for our cage deformer module, similar to those used in Neural Cages. Specifically, we use the same 42-vertex template cage as our template for deformation transfer on objects in ShapeNet, i.e., the chair deformation. For the animated hulk and the humanoid robot in pose reenactment, we use the 126-vertex template cage provided in the source code of Neural Cages.\n- **Prediction of the canonical cage:** The canonical cage is predicted in the same way as in Neural Cages, using the template cages mentioned above. We feed the network with the extracted mesh from the canonical space as its input shape.\n- **Using a cage versus using the mesh directly:**\n 1. Controllable. Directly manipulating a mesh with a high vertex count is computationally expensive and difficult to control. A cage is a sparsely defined mesh, which is easy to edit and control.\n 2. General. Cage-based deformation is a category-agnostic method that can be applied to arbitrary objects. Current mesh-driving models such as SMPL require a predefined parametric relationship between the vertices and the skeleton.\n \n### Question 3:\n- **Acquisition of shell $\mathcal{G}_s$:** Shell $\mathcal{G}_s$ is generated by 1) calculating the vertex normals of $\mathcal{G}^{'}$, i.e., the geometry polygon mesh of the object in the deformed space, 2) translating every vertex along the calculated vertex normal by a small distance (0.5$\epsilon$ in our experiment), and 3) constructing the polygon mesh of shell $\mathcal{G}_s$ using the translated vertices and the same face definition as $\mathcal{G}^{'}$ (see the sketch after this list).\n- **The shape of $\mathcal{G}_s$:** The shape of the shell can be interpreted as an enlarged version of $\mathcal{G}^{'}$, i.e., the geometry of the object inside the deformed space. 
The sphere shape of $\mathcal{G}_s$ is to keep the illustration simple, since the deformed mesh $\mathcal{G}^{'}$ is a sphere itself in the case of Figure 4. \n- **Calculating the MVC coordinates:** $\phi_s$ is calculated the same way as $\phi_g$, since both $\mathcal{G}_s$ and $\mathcal{G}^{'}$ are polygon meshes. Specifically, we calculate the MVC coordinates of a grid point w.r.t. a polygon mesh inside our MVC field using the following steps: 1. find the nearest triangle on the polygon mesh to this grid point; 2. calculate the MVC coordinates of the triangle vertices; 3. use barycentric interpolation of these three vertices to determine the MVC of the grid point (see the sketch after this list). In our implementation, we use functions provided by the mesh processing libraries [Mesh](https://github.com/MPI-IS/mesh) to perform barycentric interpolation and [PyMesh2](https://pypi.org/project/pymesh2/) to calculate MVC coordinates. \n- **Difference between reducing $\epsilon$ and applying the shell:** Directly reducing $\epsilon$ would require the NeRF network to learn a more accurate representation and would require a more precise polygon mesh. Both requirements are difficult to meet, while maintaining a relatively larger $\epsilon$ achieves on-par performance without imposing higher demands on the network.
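To make the shell construction and the per-grid-point MVC computation above concrete, here is a rough sketch (our own illustration using trimesh/numpy rather than the libraries named above; `vertex_mvc`, the precomputed MVC of every mesh vertex w.r.t. the cage, is an assumed input):

```python
import numpy as np
import trimesh

def build_shell(mesh: trimesh.Trimesh, eps: float) -> trimesh.Trimesh:
    # Offset every vertex along its vertex normal by 0.5 * eps and
    # reuse the face definition of the deformed mesh G'.
    shifted = mesh.vertices + 0.5 * eps * mesh.vertex_normals
    return trimesh.Trimesh(vertices=shifted, faces=mesh.faces, process=False)

def grid_point_mvc(mesh, vertex_mvc, grid_pts):
    # vertex_mvc: (V, K) MVC of each mesh vertex w.r.t. the K cage vertices (precomputed).
    closest, _, tri_id = trimesh.proximity.closest_point(mesh, grid_pts)
    bary = trimesh.triangles.points_to_barycentric(mesh.triangles[tri_id], closest)
    tri = mesh.faces[tri_id]                               # (N, 3) vertex indices
    return np.einsum("nj,njk->nk", bary, vertex_mvc[tri])  # (N, K) MVC per grid point
```

A grid point with coordinates $\phi$ is then mapped by applying $\phi$ to the target positions of the same basis vertices.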
\n\n### Question 4: The Setting of Experiments\nThe \"Neural Animation\" experiments are not intended to show an application where both a mesh and a texture already exist, but to demonstrate that our method can generate animation results comparable to those of LBS-driven methods.\nThe humanoid robot used in the \"Pose Reenactment\" experiment only has a polygon mesh and is not animated or rigged. And as demonstrated in Figure 9, our deformed result can correctly take the same pose as the target.", " Thanks for the response! Many errors have been corrected, and the current presentation is much clearer. After reading the response and the revised manuscript, I still have some concerns, comments, and questions.\n\n1. The current version may still need another round of careful proofreading. After a quick scan of the revised manuscript, I already found many errors or inconsistencies:\n\n(a) Line 127, the reference of the MarchingCubes is wrong.\n\n(b) Line 128-134, the discussion of the cage requirement is missing. E.g., we need an optimized cage that geometrically aligns with and encloses this object.\n\n(c) Line 202: ``expanding''\n\n(d) Line 255-257: \"\\mathcal{G}_{n}\" should be \"\\mathcal{G}^'\"?\n\n2. In NeuralCage, they predict a cage for each shape using a neural network. In this work, there is no prediction, and where does the original template cage actually come from? Can we directly use \\mathcal{G} as a cage? What's a good cage? \n\n3. It's still unclear to me how the shell \\mathcal{G}_{s} is defined. Is \\mathcal{G}_{s} a sphere? Also, how is \\phi_{s} calculated? Why does introducing \\mathcal{G}_{s} differ from just reducing the vicinity threshold \\epsilon?\n\n4. As for me, the \"Synthesized Object Animation\" is still not a \"real application\" of CageNeRF. Why do we need cage deformation here? We already have animated meshes here as the target shape. Or why do we need NeRF here? The animated meshes are already textured. Similar feeling for \"Pose Reenactment\": we may already have textured meshes of each pose. Do we really need to deform the NeRF? \n\n5. Using one single shape for quantitative comparison is not convincing (high variance, easy to cherry-pick). I would suggest the authors include large-scale experiments.\n\nMinor comments following the original review:\n\n1.(d): How can we use the kNN to generate a watertight manifold mesh? The response to this point is still confusing. However, this part is replaced by MarchingCubes in the revised version. \n\n1.(k): I did not mean that cage-based deformation is not a critical component. Instead, I was saying that \\mathcal{C}(\\cdot) should also appear in the previous definition (Equation (7)).\n", " The revised version is indeed clearer and I appreciate the effort.\n\nI have two remaining requests for clarification:\n\n(A) Why is an explicit geometry (mesh) in addition to a canonical cage required? What does that mesh do? Why is the cage insufficient, i.e. why could we not just use the canonical cage as the (coarse) explicit mesh?\n\n(B) There still seems to be another missing piece in my understanding of the method. Specifically, I do not get what happens for test-time rendering, after I have obtained a deformed cage. Rendering deformable implicit scene representations starts with a ray in deformed space. That ray is then deformed into the canonical space. Is that the case here? I do not see how the final rendering operation actually happens. Does the deformation of the ray from deformed space into canonical space happen using the cage? If so, in what order are what operations applied? My best guess is this:\n\nPre-computation per deformation:\n1) Get deformed cage with the network\n2) Use it to deform the canonical mesh to deformed space, obtaining a deformed mesh\n3) Construct a second mesh (shell) around the deformed mesh\n4) For each voxel in the MVC voxel grid (which is to be constructed), compute the MVC w.r.t. the deformed mesh and the shell, and set the voxel MVC value to whichever of the two meshes is closer to the voxel\n\nActual rendering:\n1) Straight ray in deformed space\n2) Generate points on the ray\n3) Use epsilon filtering to throw away points not close to the deformed cage (or the deformed mesh?)\n4) For each remaining point, perform trilinear interpolation in the precomputed MVC voxel grid, to obtain the MVC at the continuous location of the point\n5) For each remaining point, use the interpolated MVC coordinates and the canonical cage as the basis to obtain the position in canonical space\n6) For each remaining point, get geometry and appearance at that canonical position (using the canonical position as input to the NeRF MLP)\n7) Volumetric accumulation of the points using standard NeRF rendering", " I thank the authors for their effort to address my concerns.\n\nIn my initial review, I raised some doubts and limitations that have been discussed in the rebuttal in a satisfactory way - I think these considerations should also be included in the main manuscript and are useful for readers. \n\nI do not have further points to discuss with the authors, and I look forward to other reviewers' opinions (especially regarding their concerns about the methodology and paper presentation, and their opinion on the limitations discussed here)!", " Dear reviewers:\nThe author-reviewer discussion period will end in a few days; we would appreciate it if you could spare your valuable time and have a brief discussion with us.\n", " ### Question 1: The presentation.\nThank you for your valuable suggestions. We have carefully studied all the comments and have reworked part of the methods section. Below we will provide point-by-point responses to all the questions and suggestions. 
Please also refer to our rebuttal revision for the updated manuscript. \n\n(a)\n- *Geometry* refers to the general concept of the object's shape.\n- *Geometry proxy* is a polygon mesh consisting of vertices $\mathbf{v}$ and faces $\mathbf{f}$. \nWe clarify them in the revision (see Line 125).\n\n(b) \nThe symbol $\mathcal{G}$ in our original manuscript refers to the geometry of the render target, which can be either a set of surface points (i.e., $\mathcal{G}_{can}$) or a polygon mesh (i.e., $\mathcal{G}_{p}$ and $\mathcal{G}_{n}$). In the revision, for simplicity, we only use $\mathcal{G}_{x}$ to represent a polygon mesh.\n\n(c)\nWe correct the notation of the mesh in the rebuttal revision (see Line 125). \n\n(d)\nThe nearest neighbor algorithm used is the FLANN KD-tree, which can generate a watertight manifold mesh (Muja et al., Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration).\n\n(e)\n- We sample points from the zero-level set of the implicit SDF, i.e., the surface of the object.\n- $\mathcal{G}_{p}$ is computed after the training of our neural renderer module. We can avoid reconstructing $\mathcal{G}_{p}$ if we convert the sampled points from $\mathcal{G}_{can}$ into a mesh at the beginning of our pipeline.\n\n(f)\n- $\mathbf{v}$ refers to a sampled vertex instead of a sampled point.\n- The max-min and other notations are revised in Equation 5 (i.e., Equation 4 in the revision).\n\n(g)\n$\mathbf{E}_C$ is the pretrained encoder, which extracts features from the given mesh. For simplicity, we change $\mathbf{E}_C$ to $\mathbf{E}$ in the revision.\n\n(h)\n'n' refers to 'novel', which means novel shape.\n\n(i)\nThe caption of Figure 4 is correct. We revise the notation accordingly in our updated manuscript (see Lines 210-212).\n\n(j)\n'g' in $\mathbf{p}_{n}^{g}$ refers to the deformed geometry. Besides, 's' in $\mathbf{p}_{n}^{s}$ refers to the shell. For clarity, we remove the subscript $n$ from both notations in our revision (see Lines 209-212).\n\n(k)\n$\mathcal{C}(\cdot)$ refers to the cage-based deformation, which is a critical component of our CageNeRF.\n\n### Question 2: In section 3.3, the network architecture, loss functions, and training details are missing.\nDue to the page limit, the network architecture, loss function, and training details are placed in the Supp. Mat. We will release our code upon acceptance.\n\n### Question 3: Motivations and discussions on addressing the aliasing issue.\n1. Why do we need a vicinity threshold $\epsilon$?\nThe vicinity threshold works in tandem with the deformed mesh $\mathcal{G}^{'}$ and serves as a 3D mask to filter out redundant sample points, speeding up the neural rendering process. \n2. If $\epsilon$ is very small, do we still have the aliasing issue?\nDecreasing this threshold will to some extent mitigate the aliasing effect. However, it will also induce other artifacts, since the estimated $\mathcal{G}^{'}$ does not precisely represent the actual geometry and the neural radiance field itself cannot achieve such high precision.\n3. How and why is it addressed by introducing a shell $\mathcal{G}_{s}$?\nPlease refer to Section 3.4 in our revised manuscript for a detailed discussion.\n\n### Question 4: About the setting of Synthesized Object Animation.\n1. Why do we still need the proposed CageNeRF method?\nThe setting of `Synthesized Object Animation' is to offer a qualitative comparison between the linear blend skinning (skeleton-driven) method and our framework under known deformation conditions. 
We focus on providing a novel paradigm for editing and animating the neural radiance field, since most concurrent work relies on data-specific priors to perform a similar task.\n\n2. How the method actually works (Line 256-258).\nWe animate our radiance field by using the deformed mesh from each frame to deform our cage and build the deformation field accordingly.\n\n### Question 5: For Pose Reenactment, do we have meshes of humanoid objects?\n\n1. Do we have meshes of humanoid objects?\nYes, we used the meshes of humanoid objects. \n\n2. Why do we need CageNeRF?\nNeural Cages only deform the mesh. By contrast, our CageNeRF deforms the radiance field, which contains both the geometry and the rendered texture.\n\n### Question 6: About Tables 1, 2, and 3.\n- While Tables 1 and 3 are calculated on a single object, every frame has its own unique pose and shape; our method is robust under various novel poses.\n- Yes, Table 2 is calculated on a single shape from the H36M dataset.\n- From Figures 5-7, we can see that CageNeRF has the ability to deform and animate the neural radiance field learned on arbitrary objects. Although there are some artifacts (e.g., subtle hole artifacts), it should be highlighted that we provide a model to deform general objects in the radiance field, which is promising for many applications.", " Thank you for your valuable suggestions. We have carefully studied all the comments and have reworked the whole methods part.\n### Question 1: Hole artifacts.\nThere are two reasons for the hole artifacts.\n1. **Insufficient MVC sampling resolution.** The MVC sampling resolution is insufficient in highly concave regions, such as the armpit of the hulk model, or in bumpy regions. These regions cause the sampled grid points to have MVC values that fall outside the object surface, which leads to a transparent color during volume rendering. We limit the sampling resolution (grid sampling voxel size) for the sake of memory consumption and computation overhead. Increasing the sampling resolution of the MVC grids will mitigate the artifacts.\n2. **Insufficient NeRF accuracy.** NeRF itself learns the geometry of the object from images and corresponding camera poses. While NeRF is capable of producing photo-realistic results, it does not have point-perfect geometry of the target; the error induced by the estimated geometry will also cause this hole artifact. However, since we design the two modules, i.e., the neural renderer and the cage deformer, in a decoupled manner, replacing the radiance field with a current SDF-based neural renderer would provide more consistent geometry, which can also help reduce this artifact.\n### Question 2: The requirement of explicit geometry.\n1. Does the requirement limit the arbitrary resolution power typical of neural representations?\nThe use of explicit geometry does not limit the resolution of neural rendering. Since the sampled points have their MVC calculated via barycentric interpolation inside the closest triangle on the polygon mesh, as long as the explicit geometry has reasonably dense and evenly distributed vertices, our framework is capable of rendering upsampled images.\n2. 
Do we need a second training for the representation?\nWe choose the radiance field as the structure of our neural renderer for demonstration purposes; since the two modules, i.e., the neural renderer and the cage deformer, are designed in a decoupled manner, it is feasible to replace the radiance field with an SDF and perform end-to-end training instead.\n### Question 3: Limitation w.r.t. complex movements.\nThere is a trade-off between achieving better results in complex conditions and the ability to generalize under various settings. This work is a proof of the concept that explicit deformation can be applied to implicit neural representations. We choose cage-based deformation for its simplicity in calculation. We believe our framework is also compatible with other explicit deformation schemes, which can be applied using a similar strategy and may provide extra capability in adapting to complex movements. \nAlso, just as you mentioned, the deformation is required to limit intersections; a typical failure case would be a humanoid deforming into a pose with crossed arms.\n", " ### Question 1: The method section skips over non-obvious steps.\nThanks for the valuable suggestion. We have rewritten the method section to make it more understandable for readers. Specifically, we have updated the notation, the symbols, and the descriptions. Please refer to our updated manuscript.\n\n### Question 2: Insufficient evidence for deformation transfer.\nThe cage deformer module, which performs the deformation transfer task, inherits the geometry deformation capability of Neural Cages, since it has a similar structural design and is pretrained under the same configurations. Additionally, instead of only deforming the geometry, our framework operates on the radiance field, which also carries the deformation over to the color and texture of the rendered target.\n- To further demonstrate the deformation effect, we conducted another experiment on the lego dataset, widely used in the field of neural rendering. Please refer to our updated manuscript for the results.\n- Pose reenactment is also achieved via deformation, with source and target strictly limited to humanoid objects. As shown in the results on both the lego dataset and the pose reenactment task, deformation transfer does go beyond the anisotropic bounding-box matching method. \n\n### Question 3: Average results across the entire sequence in Table 1.\nWe have added the results to our updated manuscript. We evaluate each frame by rendering 50 different views of the target and calculating the average metrics as the result for each frame. We evaluate the entire animation sequence (65 frames for the hulk dataset and 120 frames for the whale dataset) and record the average metrics in our updated manuscript.\n\n### Question 4: Why a comparison against face-specific NeRFs is not necessary?\nOur framework is more suitable for deforming objects with articulated deformation, such as human posing. It is not suitable for subtle changes on the surface due to the limitation of cage deformation.\n\n### Question 5: Speed of the method.\nOur framework achieves an average rendering speed of 2.071 seconds per image at a resolution of $800\times800$ on an Nvidia RTX 3090. 
The rendering speed remains roughly unchanged (2.074 seconds per image) when omitting the cage deformation $\mathcal{C}(\cdot)$, since this function only involves one additional matrix multiplication in our implementation, which has little to no impact on the overall performance.\n\n### Question 6: How does the grid sampling interact with volume rendering?\nDuring volume rendering, we sample points along each pixel ray inside the vicinity of our 3D mask. The sampled points can be at any arbitrary position inside this continuous space. To determine the actual MVC value of the sampled points, we find the nearest 8 grid points (which form an enclosing cell around each sampled point) in our MVC grid and calculate the MVC via trilinear interpolation.
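For concreteness, a rough PyTorch sketch of this lookup (our own illustration, not the released code; in 5-D, `grid_sample` with mode "bilinear" performs exactly this trilinear interpolation over the 8 enclosing grid points):

```python
import torch
import torch.nn.functional as F

# mvc_grid: precomputed MVC voxel grid, shape (1, K, D, H, W); K = size of the MVC basis.
# pts: ray sample points normalized to the grid's [-1, 1]^3 range, shape (N, 3).
def interpolate_mvc(mvc_grid, pts):
    loc = pts.view(1, -1, 1, 1, 3)              # (1, N, 1, 1, 3) sample locations
    mvc = F.grid_sample(mvc_grid, loc, mode="bilinear", align_corners=True)
    return mvc.view(mvc_grid.shape[1], -1).t()  # (N, K) interpolated MVC per point

# Each point is then mapped into the canonical space by applying its MVC to the
# canonical positions of the same basis vertices, before querying the NeRF MLP.
```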
Which one is correct?\n(j) In Equation (14), what does `g' in \\mathbf{p}_{n}^{g} refer to?\n(k) Do we really need \\mathcal{C}(\\cdot)? It is only mentioned once.\n\nIn general, sections 3.2 - 3.4 are very confusing and wordy. I tried hard but still failed to follow all the contents. I suggest the authors rewrite them more precisely, clearly, and compactly. A figure illustrating the relations between the symbols may also help.\n\n2. In section 3.3, the network architecture, loss functions, and training details are missing.\n\n3. I can roughly understand the reason for the aliasing issue. But why do we need a vicinity threshold \\epsilon? If \\epsilon is very small, do we still have the aliasing issue? Moreover, I have no idea of how and why it is addressed by introducing a shell \\mathcal{G}_{s}. Motivations and discussions are missing.\n\n4. The setting of ``Synthesized Object Animation'' is kind of confusing. If we already have a sequence of animated meshes, we can directly render the images. Why do we still need the proposed CageNeRF method? I am also confused about how the method actually works (Line 256-258).\n\n5. For Pose Reenactment, do we have meshes of humanoid objects? If so, can we directly apply NeuralCages? Why do we need CageNeRF?\n\n6. Both Table 1 and Table 3 are computed over a single shape? What about Table 2? The experiment results are not convincing or impressive.\n See weaknesses. The authors have mentioned some limitations in supplementary materials.", "The paper proposes a way to deform Neural Radiance Fields using a cage-based paradigm. It is based on the training of a cage deformer, optimized on the poses. Then, given a new pose, the method regresses the cage on an explicit geometry and computes a dense mean-value coordinate field to compute the new visualization. The method is tested on articulated (i.e., humanoids) and non-articulated (i.e., chairs) objects. == Strengths ==\n\n- The approach seems novel and addresses a complex problem. Providing novel tools to edit Neural Radiance Fields is a hot topic of potentially great interest for the computer graphics community\n- The method seems applicable in different contexts, with good results both for rigid objects and non-rigid objects\n- The method is well discussed in its elements, providing ablation and justifying the design choices.\n\n== Weaknesses ==\n- Looking at the video, there seem to be some hole artefacts, e.g., under the lifted arm of Hulk, in the pose reenactment experiment. Is it due to the stretching introduced by the paradigm, or is it because the NeRF itself does not represent some areas well?\n- The method still requires access to an explicit geometry, which somehow limits the arbitrary resolution power typical of neural representations. When an SDF is used, it requires a second training for the representation. \n- The shown animations are not particularly complicated; probably, the use of cages requires limiting the intersections, but I cannot tell whether the method is applicable also in more complex scenarios with more articulated movements.\n\n\n== minor fixes ==\n- Line 174: missing a comma after \"$o$ is the camera origin\"\n- Line 291: typo, \"Impact\"\n\n== After Rebuttal ==\n\nAfter the rebuttal and the discussion phase, I am prone to keep my initial rating and vote for acceptance. For the final version, I suggest including in the main manuscript the observations discussed in the reviewers-authors discussion, especially about the limitations and trade-offs of the method.
- Why do some of the results present hole artefacts?\n- Why is the method not tested on more complex movements? Is there any limitation to taking them into account? The paper fairly discusses limitations and societal impact, though on the former, a bit more elaboration on the kinds of animations admitted by the method would help.", "CageNeRF allows for deforming a neural radiance field of arbitrary surfaces/objects, unlike prior work that was restricted to specific categories like humans or faces by design. The deformation network is trained per object and requires training data for deformations. The method design uses an invertible/bi-directional deformation parametrization based on classic graphics knowledge, namely deformation cages. It is an extension of the prior work Yifan et al. (Neural Cages for Detail-Preserving 3D Deformations; CVPR'20) to implicit fields. This extension is non-trivial due to the volumetric and non-discretized nature of the neural radiance field. The method is evaluated on deformation transfer, reenactment, and animation. All these applications apply a deformation cage to a neural radiance field, with the deformations coming from the same sequence as the radiance field in the case of synthetic reenactment and from a different sequence/scene in all other cases. Strengths:\n* Editing the neural radiance field of arbitrary surfaces/objects is an important direction for the NeRF field.\n* CageNeRF is a fairly elegant parametrization and tackles e.g. aliasing issues well. \n* Only human reenactment allows for a comparison against prior work, namely AniNeRF for human-specific NeRF animation. CageNeRF performs worse but close enough in my opinion, considering that it is a general method.\n* The paper is written in good English.\n\nWeaknesses:\n* The method section skips over non-obvious steps in many places, see below in Questions.\n* Deformation transfer is demonstrated only qualitatively and only on the ShapeNet chair category for a handful of (random) instances. In addition, this leads to deformations that seem almost just like per-axis scalings. Other categories with more interesting deformations would show that what the method does goes beyond what an anisotropic matching of the object bounding boxes could do. I don't believe the deformation transfer results are sufficient evidence that deformation transfer actually works with the method.\n* The quantitative results in Table 1 are on 3-4 specific frame indices out of dozens of indices per sequence (on top of evaluating only two sequences). That leaves the door wide open for cherry-picking a few good indices. Averages (and standard deviations) across the entire sequences should be reported. \n* The deformed cage is determined by a neural network and hence might not generalize well to deformed target poses that lie far away from the training data.\n* An argument why a comparison against face-specific NeRFs is not necessary would be good.\n* What is the speed of the method? By what factor does the rendering slow down due to the deformations, i.e. relative to just directly using the canonical NeRF, skipping C in Eq. 15?\n* I don't understand how the grid sampling (L195) interacts with the volume rendering. The points p from the rendering are first along a straight ray. How are these then transformed via the cage into the canonical radiance field? The points p won't lie on a grid but anywhere in space, so how does the grid come into play?
I understand the thresholding with epsilon.\n\nMinor questions that are relevant for a revision but not the rebuttal:\n* Eq. 5 is not easy to parse. \"v\" is used for the vertices of the cage template, the proxy geometry, and as any individual vertex of the proxy geometry. I cannot tell which of these \"v\" the \"max\" is over. I believe it should be the vertices of the proxy geometry.\n* I also don't understand how this is turned into a neural-network optimization problem. What is the loss? If this information can be found in Neural Cage, I strongly recommend adding a Background section explaining the important parts of Neural Cage that are used by this submission.\n* How is the cage deformer applied in Sec. 3.2 (L137f)? Is C_t the \"Optimized Cage\", G_p the \"Novel Pose\", and C_can the \"Deformed Cage\"? If so, this is not at all obvious and needs to be stated explicitly.\n* In Sec. 3.3, what is the loss, during pretraining and for Sec. 3.2? \n* In Fig. 3, both the lower and upper branch use E_C to my understanding. It would be helpful to the reader to visualize both in the same way, e.g. use the \"E_C\" block also for the upper branch instead of the words \"Feature Extraction\".\n* In Sec. 3.3, please mention that a global PointNet++ encoder is used. It was not clear to me for a while whether the deformation might be regressed per-vertex (coordinate-based MLP) instead of for all vertices at once. The supplement also gives the unclear description \"to predict the vertex-wise offset for the input cage base\" (L85). \n* In L216ff, G_d, G_p and G_s are used incorrectly, I believe. G_d should be G_n at the very least.\n* Are the numbers in Table 1 from rendering into a single view or multiple views per frame index?\n\nTypos etc.:\n* L170: Not a full sentence.\n* L291: Impact\n\nLimitations are mentioned but I believe that it should also be explicitly mentioned that the cage deformer might not generalize well to target poses outside the training distribution since there is no evidence to the contrary in the submission.\n\nGeneral broader impact is discussed but no potential negative societal impacts are mentioned. Deformation transfer could have the typical deep fakes issue where the identity of one person is animated by motions of another." ]
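A minimal sketch of the trilinear MVC-grid lookup described in the CageNeRF rebuttal above (Question 6). All names (`interpolate_mvc`, `mvc_grid`, `bbox_min`/`bbox_max`) are hypothetical illustrations rather than identifiers from the paper; only the 8-corner interpolation and the final matrix multiplication follow the rebuttal's description.

```python
import numpy as np

def interpolate_mvc(points, mvc_grid, bbox_min, bbox_max):
    """Trilinear lookup of precomputed mean-value coordinates (MVC).

    points:   (N, 3) query positions sampled along pixel rays.
    mvc_grid: (R, R, R, V) MVC weights w.r.t. the V cage vertices,
              precomputed on a regular grid over [bbox_min, bbox_max].
    Returns:  (N, V) interpolated MVC weights.
    """
    R = mvc_grid.shape[0]
    # Continuous grid coordinates in [0, R - 1].
    u = (points - bbox_min) / (bbox_max - bbox_min) * (R - 1)
    u = np.clip(u, 0.0, R - 1 - 1e-6)
    i0 = np.floor(u).astype(int)   # lower corner of the enclosing cell
    f = u - i0                     # fractional offsets in [0, 1)
    out = np.zeros((points.shape[0], mvc_grid.shape[-1]))
    for dx in (0, 1):              # accumulate the 8 cell corners
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[:, 0] if dx else 1 - f[:, 0])
                     * (f[:, 1] if dy else 1 - f[:, 1])
                     * (f[:, 2] if dz else 1 - f[:, 2]))
                out += w[:, None] * mvc_grid[i0[:, 0] + dx,
                                             i0[:, 1] + dy,
                                             i0[:, 2] + dz]
    return out

# Deforming the sampled points then reduces to one matrix multiplication,
# consistent with the rebuttal's remark on the cost of C(.):
#   new_positions = interpolate_mvc(p, grid, lo, hi) @ deformed_cage_vertices
```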
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "jbY6bX2xetp", "9brrPgIBGB", "odKun87dErh", "fdP5umkrtW", "vWC8xXwWBj4", "vWC8xXwWBj4", "XGM5ISTKcfV", "qVic8M5nxix", "vLCwspF5Gi", "nips_2022_kUnHCGiILeU", "8Q9Oavka_v2", "xQdx6IzOvfY", "fs7tUxaUoZ", "nips_2022_kUnHCGiILeU", "nips_2022_kUnHCGiILeU", "nips_2022_kUnHCGiILeU" ]
nips_2022_k5uFiFLWv3X
Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples, which can produce erroneous predictions by injecting imperceptible perturbations. In this work, we study the transferability of adversarial examples, which is significant due to its threat to real-world applications where model architecture or parameters are usually unknown. Many existing works reveal that the adversarial examples are likely to overfit the surrogate model that they are generated from, limiting their transfer attack performance against different target models. To mitigate the overfitting of the surrogate model, we propose a novel attack method, dubbed reverse adversarial perturbation (RAP). Specifically, instead of minimizing the loss of a single adversarial point, we advocate seeking an adversarial example located in a region with uniformly low loss values, by injecting the worst-case perturbation (the reverse adversarial perturbation) for each step of the optimization procedure. The adversarial attack with RAP is formulated as a min-max bi-level optimization problem. By integrating RAP into the iterative process for attacks, our method can find more stable adversarial examples which are less sensitive to changes of the decision boundary, mitigating the overfitting of the surrogate model. Comprehensive experimental comparisons demonstrate that RAP can significantly boost adversarial transferability. Furthermore, RAP can be naturally combined with many existing black-box attack techniques to further boost the transferability. When attacking a real-world image recognition system, Google Cloud Vision API, we obtain a 22% performance improvement of targeted attacks over the compared method. Our codes are available at https://github.com/SCLBD/Transfer_attack_RAP.
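A schematic reading of the min-max objective sketched in this abstract, in our own notation: the perturbation budgets $\epsilon$, $\epsilon_n$ and the $\ell_\infty$ balls are assumptions on our part; the paper's exact formulation is its Eq. (2), referenced in the rebuttals below.

```latex
\min_{\|\mathbf{x}^{adv}-\mathbf{x}\|_\infty \le \epsilon}\;
\max_{\|\mathbf{n}^{rap}\|_\infty \le \epsilon_n}\;
\mathcal{L}\big(\mathcal{M}^{S}(\mathbf{x}^{adv}+\mathbf{n}^{rap}),\, y\big)
```

The inner maximization injects the worst-case (reverse) perturbation around the current iterate; the outer minimization then favors adversarial examples sitting in flat, uniformly low-loss regions of the surrogate model $\mathcal{M}^{S}$.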
Accept
This paper studies the transferability of adversarial examples. In general, the reviewers found the paper to be well motivated, and the proposed method to be simple and effective. Most initial concerns were about missing comparisons and ablations. All these concerns are well addressed in the rebuttal. As a result, all reviewers unanimously agree to accept this submission.
train
[ "s91Zj1Y0Xr", "5A7qwy6kHx", "9krxfYDt7Nf", "ruI-bVhcqt-", "2eElVfK7J3", "lPwHD3bqOe", "5EaHB0Tzovz", "muRmI2ZfGhu", "J-xmiYLuOvV", "RTQxoPRssgr", "ozB2ph99g3u", "X0OBkSOkfhR", "FuF-aXauftE", "I_xfzw-dSkV", "_fEXdVqJU_S0", "Ajc-IqaP1NI", "k4-7HWoaBM7", "mjPZHC9GJqS", "9AP9ijHZLT", "Gygif0cheP", "zxjtPtUSwq9", "r1O5naRNSGz", "Iceg-We4ALP" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer pUE8, \n\nConsidering the discussion stage is close to the end, we are looking forward to your further feedback about our latest response (posted one day ago, see below), whether your remaining concern has been addressed. We would like to discuss with you in more details. \nGreatly appreciate your help. \n\nSincerely,\n\nAuthors", " Dear Reviewers and Area Chairs:\n\nWe sincerely appreciate all of your precious time and constructive comments. \n\nWe are greatly encouraged by the positive comments of our work, including **the idea is novel, interesting and insightful** (by pUE8, Ra8U); **the proposed method is effective** (by VMp1, pUE8 and Ra8U); **the results are reasonably good** (by pUE8 and Ra8U); **the experiments are comprehensive** (by VMp1, HFVj and Ra8U); **can be integrated easily into existing gradient-based attacks** (by HFVj) and **is general to existing attacks** (by Ra8U); **conducting a case study with Google Cloud Vision API** (by VMp1), etc. \n\nWe are also glad to see that **our responses are helpful for addressing the reviewers' concerns**. All these concerns are very insightful and beneficial for us to improve the quality of this work. We will carefully revise our manuscript by adding all suggested experimental comparisons, adding more detailed explanations and fixing the typos, as promised in our first-round responses. \n\nFinally, we would like to emphasize again that the adversarial transferability plays an important role in adversarial machine learning and its intrinsic mechanism is still uncovered. We are confident that our efforts of **providing a novel perspective to explain the adversarial transferability** and **proposing an insightful, effective, and plug-and-play method for transfer adversarial attack** have shed some light on this interesting topic, and could inspire more research on this topic from the community. \n\nBest regards,\n\nAuthors", " > **Reviewer HFVj's further comments**: \"I went through your response, other reviews, and the paper again. My concerns are partially addressed as the recent competitors [b, BMVC2021] and [c, CVPR21] are not reported in the main paper and provided results for Res50 are not sufficient. Nonetheless, I strongly suggest adding the results of [b, BMVC2021] and [c, CVPR21] to the revised main paper and hope the authors include the complete set of experiments in the revised version. I increase my score to 5 taking into account the novelty and empirical evidence...\"\n\nDear Reviewer HFVj:\n\nWe sincerely appreciate the recognition of our novelty and empirical evidence. And, we are greatly encouraged that our additional results help to address your concerns. We will add the complete set of experimental comparisons to [b] and [c] in our revision, as follows:\n\n- In addition to the experimental results reported in our first-round responses (where we adopted ResNet-50 as the surrogate model and Inc-v3/Dense-121/VGG-16 as target models), we will complete the experiments by taking Inc-v3/Dense-121/VGG-16 as surrogate models, respectively. \n\nBesides, thanks for your constructive suggestion about LinBP. We will follow the suggestion to move LinBP to supplementary and add the comparisons to [b,c] in the main manuscript. We will also add the comparison to [a] once we got their source code.\n\nThanks again for your precious time and constructive comments, which are really beneficial for us to improve this work and highlight its contributions. 
\n\nBest regards,\n\nAuthors\n", " > **Reviewer Ra8U's further comment**: \"Thank the authors for the responses. I think the authors have addressed my main concerns and would maintain my score.\"\n\nDear Reviewer Ra8U:\n\nWe sincerely appreciate your further comment, and we are greatly encouraged that our responses have addressed your main concerns. All additional experiments and analysis presented in our first-round responses will be updated into the final version of our manuscript. \n\nBest regards,\n\nAuthors", "Dear Reviewer pUE8, \n\nWe greatly appreciate your further response and insightful comment. We are also encouraged by your positive comments on our motivation and results. \n\nIn terms of your remaining concern, \"it seems the proposed method is less helpful when the decision boundary is smooth, and current and future models tend to have smoother decision boundaries (e.g., using the SiLU activation function instead of ReLU)\", unfortunately there is no time left for us to verify it experimentally. However, we would like to share our thoughts according to the current results and our experiences:\n- **Is there a trend to use SiLU instead of ReLU in current and future models?** We are not experts in designing model architectures, thus we cannot be sure whether there is really a trend to replace ReLU by SiLU. However, according to the documentation of the SiLU implementation in PyTorch, SiLU was originally proposed in 2016 (https://arxiv.org/pdf/1606.08415.pdf). We can see that the standard ReLU is still widely used in mainstream model architectures, possibly due to its simplicity and low computational cost. In contrast, the sigmoid function in SiLU leads to a higher computational cost, which may restrict its usage in scenarios where computational resources are limited (e.g., deploying the deep model on a phone or drone). SiLU may bring some benefits to the model, but it cannot fully replace the standard ReLU, as they have different advantages. Besides, our finding in this work that a model with a smoother loss landscape is more resistant to transfer attacks provides another reason to choose SiLU, which is also one contribution of our work to the community. \n- **How smooth is the decision boundary if ReLU is replaced by SiLU? Is it really resistant enough to transfer attacks?** As stated above, unfortunately there is no time left for us to verify this experimentally.
However, we don't think there is a once-and-for-all solution that makes the model's loss landscape smooth enough to be resistant to query-based or transfer-based adversarial attacks. If there were, it would itself be a very good solution for improving adversarial robustness. A technique like SiLU may improve the overall smoothness, but it cannot guarantee sufficient smoothness in every local region; thus, there is still room for improving adversarial transferability using our RAP method. Besides, note that we have evaluated our method on attacks against adversarially trained (AT) models (see Section 4.5 and Line 276-286); adversarial training is recognized as one of the most effective approaches to improve adversarial robustness, possibly because it enhances the smoothness of the loss landscape (please refer to \"Semi-supervised robust training with generalized perturbed neighborhood\", Pattern Recognition 2022). Our RAP method still yields very significant improvements on transfer attacks against these AT models (over 10\\% improvement, see Line 285-286). We believe that our RAP method will be helpful even for attacking models with smoother loss landscapes. \n\n**In summary**, we think there is no need to worry that our RAP method may not be helpful for future models. We hope you can understand that, due to the limited time, our descriptions above may not be fully exact in every sentence, but we think the main points have been clearly demonstrated. We hope our quick responses help you re-evaluate the contribution of our work, including the insight, effectiveness, and plug-and-play property (please refer to the response to Q1.1 of Reviewer VMp1). We sincerely thank the reviewer again for the insightful comments and help; the discussion with you has been delightful and beneficial. We are glad to provide more detailed responses to any further concern, if there is still rebuttal time. \n\nSincerely,\nAuthors\n\n", " Hello Authors,\n\nI have gone through the paper, your response, and other reviewers' comments. I think most of my concerns are addressed. The only remaining concern is: it seems the proposed method is less helpful when the decision boundary is smooth, and current and future models tend to have smoother decision boundaries (e.g., using the SiLU activation function instead of ReLU). Therefore, I am worried about whether the proposed method can be used on future models.\n\nI have no other concerns -- the motivation of this paper is clear, and all results make sense to me. I am going to keep Borderline accept unless the authors have more comments.", " Dear Reviewers,\n\nThanks for spending time reviewing our work and providing valuable feedback. We have provided new experiments in Response 4.4 as well as clarifications in response to your questions and the mentioned weaknesses. Does our response address your questions? And do you have other comments?\n\nBest regards,\n\nAuthors", " Dear Reviewers,\n\nThanks for spending time reviewing our work and providing valuable feedback. We have provided new experiments in Response 3.2 as well as clarifications in response to your questions and the mentioned limitations. Does our response address your questions? And do you have other comments?\n\nBest regards,\n\nAuthors", " Dear Reviewers,\n\nThanks for spending time reviewing our work and providing valuable feedback. We have provided a new response to your questions and the mentioned limitations. Does our response address your questions?
And do you have other comments?\n\nBest regards,\n\nAuthors", " Dear Reviewers,\n\nThanks for spending time reviewing our work and providing valuable feedback. We have provided new experiments in Response 1.3 as well as clarifications in response to your questions and the mentioned weaknesses. Does our response address your questions? And do you have other comments?\n\nBest regards,\n\nAuthors", " The paper discusses improvements to adversarial attacks against machine learning models. It offers no discussion or recommendations regarding defending against these improvements and thus falls on the malicious side of the power balance noted in the NeurIPS ethical guidelines:\n\n \"On the methodology side, for example, a new adversarial attack might give unbalanced power to malicious entities; in this case, defenses and other mitigation strategies would be expected, as is standard in computer security.\" Issues have been acknowledged only in the supplemental document, not in the main document. There the authors offer the possibility that \"This work can potentially contribute to understanding of transferability of adversarial examples\" and further note \"the better transferability of adversarial examples calls the machine learning and security communities into action to create stronger defenses and robust models against black-box attacks\". These fall short of NeurIPS ethical guidelines regarding balance (see above). Most directly, the authors can discuss potential mitigations against their work. Experimentally, they can analyze a larger set of defenses and comment on which of them is most resilient to their proposed methods (and hopefully discuss why). Presently the paper experiments with only 2 defenses, as far as I can tell, which may offer only a limited basis for extrapolating mitigation recommendations.\n", " ### **Q4.4** 'RAP is general to existing attacks, ... , a different loss function [2], a model-specific attack [3].'\n\n**Response 4.4:**\nThanks for this useful suggestion. Following it, we conducted an evaluation combining our RAP with the mentioned works. We choose ResNet-50 as the source model and MI-TI-DI as the baseline method. For the mentioned work [2], we could not find the source code. Therefore, we conduct the experiments for [1] and [3]. \n\n**(1)** For combining variance tuning [1] with RAP, we calculate the variance of $x^{adv}+n^{rap}$ rather than the variance of $x^{adv}$ as in the original paper. The results are shown in the table below.\n\n**Experimental Results:** \nThe targeted attack results (%) are shown in the table below. We also follow the experimental settings in Section 4.1 of the main submission. \n|targeted attack ($M^S$: Res-50)\t| Inc-v3\t| Dense-121\t| VGG-16|\n| :----------: | :--------------------------: | :-----------------------------------------: | :------------------: |\n|MI-TI-DI | 10.9 | 74.9 | 62.8|\n|MI-TI-DI-VT [1] | 34.4 | 83.4 | 75.1|\n|MI-TI-DI-VT+RAP | 36.1 | 88.3 | 80.3|\n\nThe experimental results demonstrate that RAP can further boost the adversarial transferability of the VT method.\n\n**(2)** For combining the ghost network [3] with RAP, we conduct the experiments in our PyTorch code, following the original TensorFlow code provided by the authors. The main idea of the ghost network is to perturb the skip connections of a ResNet to generate ensemble networks. To achieve this goal, the authors multiply the skip connections by a random scalar r sampled from a uniform distribution.
We reimplement this procedure with the hyperparameter for r recommended in the original paper, on the PyTorch ResNet-50 model (we utilize the checkpoint from torchvision). \n\n**Experimental Results:**\n\nThe targeted attack results (%) are shown in the table below. We also follow the experimental settings in Section 4.1 of the main submission. Here, we use GN to denote the ghost network method. \n\n|targeted attack ($M^S$: Res-50)\t| Inc-v3\t| Dense-121\t| VGG-16|\n| :----------: | :--------------------------: | :-----------------------------------------: | :------------------: |\n|MI-TI-DI | 10.9 | 74.9 | 62.8|\n|MI-TI-DI-GN | 24.9 | 85.9 | 80.7|\n|MI-TI-DI-GN+RAP-LS | 49.7 | 89.6 | 87.7 |\n\nThe results show that the ghost network method can improve the transfer attack performance. Combined with RAP-LS, the adversarial transferability can be further improved, especially on the Inception-v3 model. \n\n\n----\n\n### **Q4.5** 'The biggest concern is the huge computation cost ... input transformations only adopt 10 iterations.'\n\n**Response 4.5:** Thanks for this comment. It is true that by introducing the inner maximization process, our method increases the time cost of the adversarial example generation process. \n\n1) We first analyze the computational cost of our method. In Algorithm 1 with global iteration number $K$, late-start iteration number $K_{LS}$, and inner iteration number $T$, our RAP-LS requires $K+(K-K_{LS})*T$ forward and backward calculations, while the original attack algorithm requires $K$ forward and backward calculations. The extra computational cost of RAP-LS is thus $(K-K_{LS})*T$ forward and backward calculations. In our experiments, $T$ is set to 8, $K_{LS}$ is set to 100, and $K$ is set to 400. As for the larger outer iteration number $K$, we follow the existing work [8] in using more iterations. As pointed out in [8], unreasonably restricting the attack optimization to a limited number of iterations currently leads to poor transferability. \n\n2) However, we would like to emphasize that the adversarial example generation process is conducted based on offline surrogate models. Compared with this offline time cost, the attacking performance is much more important for black-box attacks, which is also the main advantage of our method. Besides, the late-start strategy alleviates the time cost. Nevertheless, we will keep exploring more efficient transfer-based attack methods that also help to find flat local minima in our future work. \n\n[8]. On Success and Simplicity: A Second Look at Transferable Targeted Attacks, NeurIPS 2021. \n\n----\n\n### **Q4.6** Minor typos:\n\n**Response 4.6:** Thanks for pointing out these typos. We have corrected them and thoroughly proofread the revised manuscript. \n", " ### **Q4.1** 'Figure 1 is not clear. What do the x-axis and y-axis indicate? How do you obtain the curve?'\n**Response 4.1:**\nThanks for pointing these out. Figure 1 is a schematic diagram in 1D space. The x-axis denotes the value of the input $x$, and the y-axis denotes the value of the attack loss function $\\mathcal{L}$. We will add these annotations in the revision. \n\n----\n\n### **Q4.2** 'Why does the flat local region indicate better transferability? It might be interesting if you could provide some analysis.'\n\n**Response 4.2:** Thanks for this insightful suggestion. This is indeed an important question, and the connection between local flatness and transferability is the main motivation of our method.
In the submitted manuscript, we have actually analyzed the connection between the flatness of the loss landscape and adversarial transferability in **Section 1 (Line 38-43, Line 51-56)** and **Section 3.2 (Line 132-140)**. We also verified experimentally in **Section 3.3** whether RAP boosts the flatness around the adversarial example. Here we summarize the key points in the following.\n\n1) Previous works attribute poor attack transferability to the overfitting of adversarial examples to the surrogate model. In Figure 1, we use a schematic plot to interpret this. As shown in Figure 1 (b), when $x^{pgd}$ is located at a sharp local minimum, it is not stable and is sensitive to changes of $\\mathcal{M}^{S}$. When the model parameters change slightly, $x^{pgd}$ could result in a high attack loss against $\\mathcal{M}^{S'}$ and lead to a failed transfer attack. In contrast, a flat local minimum is less sensitive to changes in the decision boundary, thus leading to a more stable transfer attack. Following this, we advocate finding $x^{adv}$ located in a flat local region and propose to minimize the maximal loss value within a local neighborhood region around the adversarial example $x^{adv}$. \n \n2) The experimental results demonstrate that RAP can boost adversarial transferability. We also use loss landscape visualizations to verify whether RAP helps us find an $x^{adv}$ located in a flat local region. Figure 3 plots the visualizations. We can see that, compared to the baselines, the loss landscape of the $x^{adv}$ found by RAP is much flatter. \n\n----\n\n### **Q4.3** 'What does one iteration mean? Does it indicate one forward and back propagation?'\n**Response 4.3:** \nWe use Algorithm 1 of the main submission to clarify this point. One iteration of the inner loop (t = 1, ..., T) indicates one forward and one backward propagation. One iteration of the outer loop (k = 1, ..., K) first conducts the inner-loop process (T iterations) mentioned above; after obtaining $n^{rap}$, it conducts one forward calculation and backpropagation to update $x^{adv}$ (a minimal sketch of this loop structure is given after the reviews below). We will make this clearer in our next version.\n", " ### **Q3.3** 'calculation of the inner optimization increases the computational time of the attack wrt baseline methods.'\n**Response 3.3:**\nThanks for this comment. It is true that by introducing the inner maximization process, our method increases the time cost of the adversarial example generation process. \n\n1) We first analyze the computational cost of our method. In Algorithm 1 with global iteration number $K$, late-start iteration number $K_{LS}$, and inner iteration number $T$, our RAP-LS requires $K+(K-K_{LS})*T$ forward and backward calculations, while the original attack algorithm requires $K$ forward and backward calculations. The extra computational cost of RAP-LS is thus $(K-K_{LS})*T$ forward and backward calculations. In our experiments, $T$ is set to 8, $K_{LS}$ is set to 100, and $K$ is set to 400. As for the larger outer iteration number $K$, we follow the existing work [8] in using more iterations. As pointed out in [8], unreasonably restricting the attack optimization to a limited number of iterations currently leads to poor transferability. \n\n2) However, we would like to emphasize that the adversarial example generation process is conducted based on offline surrogate models.
Compared with this offline time cost, the attacking performance is much more important for black-box attacks, which is also the main advantage of our method. Besides, the late-start strategy alleviates the time cost. Nevertheless, we will keep exploring more efficient transfer-based attack methods that also help to find flat local minima in our future work. \n\n[8]. On Success and Simplicity: A Second Look at Transferable Targeted Attacks, NeurIPS 2021. \n\n----\n\n### **Q3.4** 'Adds more hyperparameters (S and mu) but the ablation studies show they are largely target-model agnostic'\n**Response 3.4:**\nOur method does not contain the parameters $S$ and $\\mu$ mentioned by the reviewer. It would be greatly appreciated if the reviewer could provide further comments to help us understand this point correctly. The ablation study of the parameters of RAP is shown in Section 4.6. \n", " ### **Q3.1** 'How is the performance of the attack wrt PGD-based adversarial training?'\n**Response 3.1:**\nThanks for this kind suggestion. Actually, we have tested transfer attacks on PGD-based adversarial training in Line 276-280 on Page 8, and the detailed results are shown in Section C and Tables 8-9 of the appendix due to space limitations. \n\nWe conducted experiments on two multi-step PGD adversarial training (AT) models from [4] (the $\\ell_{\\infty}$ ResNet-50 AT model with budget $4/255$ that ranks first in the RobustBench leaderboard [5], and the $\\ell_{2}$ ResNet-50 AT model with budget 0.5). We also tested our attacks on stronger defense models, namely the combination of PGD-AT models and NRP, an adversarial purification defense method. The results are shown in Tables 8 and 9 of the appendix. We can observe that our RAP further boosts the transferability of the baseline methods on PGD-AT models. \n\n[4]. https://github.com/microsoft/robust-models-transfer\n\n[5]. https://robustbench.github.io/\n\n----\n\n### **Q3.2** 'Recent works of improving gradient directions by looking ahead ... can be discussed to clearly highlight the contribution'\n\n**Response 3.2:**\nThanks for this kind suggestion. Our RAP and the works mentioned by the reviewer have a similar goal: to find a more stable gradient direction to escape sharp local minima. Different from RAP, these methods randomly sample points around the current $x^{adv}$ to calculate a more stable gradient. In contrast, to find $x^{adv}$ located in a flat local region, our RAP minimizes the maximal loss value within a local neighborhood region around the current $x^{adv}$. \nFollowing the reviewer's suggestion, we conducted an evaluation of our RAP and the mentioned works. We choose ResNet-50 as the source model and MI-TI-DI as the baseline method. [a] is a recent CVPR 2022 paper, and we could not find its source code. \nExperimental Results: \nThe untargeted and targeted attack results (%) are shown in the tables below. We also follow the experimental settings in Section 4.1 of the main submission.
\n|untargeted attack ($M^S$: Res-50)\t| Inc-v3\t| Dense-121\t| VGG-16|\n| :----------: | :--------------------------: | :-----------------------------------------: | :------------------: |\n|MI-TI-DI | 85.7 | 99.8 | 99.8|\n|MI-TI-DI-VT [b] | 95.8 | 100.0 | 100.0|\n|EMI-TI-DI [c] | 93.6 | 100.0 | 100.0|\n|MI-TI-DI+RAP-LS | 96.9 | 100.0 | 100.0|\n\n|targeted attack ($M^S$: Res-50)\t| Inc-v3\t| Dense-121\t| VGG-16|\n| :----------: | :--------------------------: | :-----------------------------------------: | :------------------: |\n|MI-TI-DI | 10.9 | 74.9 | 62.8|\n|MI-TI-DI-VT [b] | 34.4 | 83.4 | 75.1|\n|EMI-TI-DI [c] | 32.1 | 81.2 | 73.3|\n|MI-TI-DI+RAP-LS | 33.2 | 88.5 | 81.5|\n\nAs shown in the experimental results, our RAP-LS achieves better performance, especially for targeted attacks. Compared with VT [b], RAP-LS gets an increase of 3.4% for targeted attacks in terms of average success rate. This demonstrates the effectiveness of our method. We will add these results in the revision.\n\n", " ### **Q2.1** 'I found the proposed method is less effective when attacking strong defense models (such as Feature Denoising), according to Table 8. Is there any explanation?'\n\n**Response 2.1**:\nThanks for this insightful suggestion. We think this is mainly due to **the specially designed feature denoising block** (i.e., the non-local block) and **the different settings of the maximum perturbation size during adversarial training**, as follows. \n\n1) Feature Denoising [33] inserts several non-local blocks into the network to eliminate the adversarial noise at the feature level. According to [33], for an input feature map $F_{i}$, the non-local block computes a denoised output feature map $F_{o}$ by taking a weighted average of input features over all spatial locations. Through this, the non-local block models the global relationship between features in all spatial locations, which may smooth the learned decision boundary. Recall that our RAP boosts transferability by seeking a flat local minimum. The smoothness of the decision boundary could make it harder to escape from a given local minimum, especially for small attack perturbation sizes, which limits the performance improvement of RAP. \n\n2) In our experiment, for Feature Denoising, the maximum perturbation size during training is set to 16/255. In Table 8, the maximum perturbation size of the attack is also set to 16/255. An attack size of 16/255 may not be large enough for escaping from local minima of the Feature Denoising model trained with a 16/255 perturbation size. In contrast, the maximum perturbation size of AT-$\\infty$ during training is 4/255. To verify this, we conduct an ablation study increasing the maximum perturbation size to 20/255. Using the larger perturbation size of 20/255, the attacking performance against Feature Denoising is 48.3% for MI-TI-DI and 50.7% for MI-TI-DI+RAP-LS. The relative performance improvement of RAP-LS is 2.4%, which is much larger than the relative improvement of 0.3% in Table 8 with perturbation size 16/255; this may partially explain the phenomenon. \n\nWe will add the above analysis to the revised manuscript and appendix.
\n\n[33] Feature denoising for improving adversarial robustness, CVPR 2019.\n\n----\n\n### **Q2.2** 'It is not clear if the proposed method can attack strong defense models.'\n**Response 2.2:**\nWe have tested several strong defense models, as mentioned in **Line 276-280 of Page 8; the detailed results are shown in Section C and Tables 8-10 of the appendix due to space limitations**. To be specific, apart from the Feature Denoising defense method, we also tested two multi-step PGD adversarial training (AT) models [1] (the $\\ell_{\\infty}$ ResNet-50 AT model with budget $4/255$ that ranks first in the RobustBench leaderboard, and the $\\ell_{2}$ ResNet-50 AT model with budget 0.5), an adversarial purification defense method (NRP) [2], and one input transformation method (R\\&P) [3]. We also tested our attacks on the stronger defense given by the combination of AT models and NRP. We can observe that our RAP further boosts the transferability of the baseline methods on these strong defense models. We refer to the appendix for the detailed experimental results.\n\n[1] https://github.com/microsoft/robust-models-transfer\n\n[2] A self-supervised approach for adversarial robustness, CVPR 2020. \n\n[3] Mitigating adversarial effects through randomization, ICLR 2018.\n\n----\n\n\n### **Q2.3** 'It is not clear if the proposed method can be applied to generate other kinds of adversarial examples (such as patch attack)'\n\n**Response 2.3:**\nThanks for this constructive comment. As formulated in Eq. (2), the only change of our RAP method is inserting one reverse adversarial perturbation $\\mathbf{n}^{rap}$ onto the adversarial example $\\mathbf{x}^{adv}$, based on the original objective function for generating adversarial examples (*i.e.*, Eq. (1)). Thus, we currently see no technical obstacles to applying our RAP method to other kinds of adversarial examples. Taking the mentioned patch attack as an example, if we have not misunderstood its meaning, we can introduce a binary mask into Eq. (2), *i.e.*, replacing $\\mathbf{x}^{adv} + \\mathbf{n}^{rap}$ by $(\\mathbf{x}^{adv} + \\mathbf{n}^{rap}) \\cdot \\mathbf{m}$. Due to the limited rebuttal period, we have not implemented it. We will keep studying the combination of our RAP method with more types of adversarial examples to explore more advantages of RAP in our future work.\n", " ### **Q1.1** 'The proposed method is not new.'\n\n**Response 1.1:** Thanks for your comment. Although measuring one work's novelty is somewhat subjective, in the following we would like to summarize our main contributions to clarify our novelty.\n \n**(1) We provide a novel perspective to explain adversarial transferability:** adversarial examples located at flat local minima will be more transferable than those at sharp local minima. We also provide some intuitive analysis and experimental verification of this perspective (see more details in the response to Q1.2). In contrast, we find that in many existing transfer-based attack works, the motivation was just addressing the overfitting to surrogate models, borrowing \"the techniques from deep learning model training strategy, such as momentum term, data augmentation\" (**please refer to the first comment by Reviewer pUE8**), without deep analysis.\n\n**(2)** We propose an **insightful**, **effective**, and **plug-and-play** method.
**(a)** \"**Insightful**\" means that our method is inspired by the above perspective, aiming to find adversarial examples located at flat local minimum, which is intuitively illustrated (see Figure 1) and experimentally verified (see Figure 3). **(b)**\"**Effective**\" means that the proposed RAP method achieves superior attack performance, which is supported by extensive experiments. **(c)** \"**plug-and-play**\" means that our RAP method can be naturally combined with many existing transfer-based attack method, as RAP focused on replacing the random perturbation to attacked sample by a reverse adversarial perturbation, which has never been studied in exsiting methods. The experimental results also demonstrate that our RAP method could improve attack performance of all existing methods. \n\n----\n### **Q1.2** 'Though It seems that the theory of flat minimum is intuitive, is there any additional analysis besides the empirical experimental results that justify the theory? '\n\n**Response 1.2:** Thanks for this valuable comment. In addition to the experimental verification of the connection between the flat minimum and the adversarial transferability, we also provide the following analysis. \n\n1) As shown in Figure 1, we provided schematic plots to help understand the difference between sharp and flat local minima, as well as their different effects on transferability. As analyzed in Line 38-43, Line 51-56, Figure 1 intuitively illustrates that the flat local minima is more stable to the change of model parameters (i.e., loss landscape). \n\n2) In Section 3.3, we take loss landscape visualizations to verify that RAP can help to find adversarial example $x^{adv}$ located at the local flat region. As shown in Figure 3, compared to the baselines, the loss landscape of $x^{adv}$ founded by RAP is much flatter. \n\nWe will keep studying the theoretical connection between transferability and flatness of loss landscape in our future work.\n\n----\n\n### **Q1.3** 'Additionally, why simple EOT ... competitive baselines are provided in the paper.'\n\n\n**Response 1.3:** Thanks for this kind suggestion. We conducted the experiment of the suggested EOT baseline. We choose the ResNet-50 as the source model and MI-TI-DI as the baseline method. Instead of adding input transformation once like DI, we sample random transformation (resizing and padding) multiple times in each iteration. Then, we add them to $x^{adv}$ following the expectation of transformation (EOT). We set the number of sampling as $10$. Our RAP can be also naturally combined with EOT attack. Therefore, we also take experiments for this combination. \n\n**Experimental Results:** \nThe targeted attack results (%) are shown in the below table. We also follow the experimental settings in Section 4.1 of the main submission. \n|Attack ($M^S$: Res-50)\t| Inc-v3\t| Dense-121\t| VGG-16|\n| :----------: | :--------------------------: | :-----------------------------------------: | :------------------: |\n|MI-TI-DI | 10.9 | 74.9 | 62.8|\n|MI-TI-DI-EOT | 11.2 | 76.9| 66.9|\n|MI-TI-DI+RAP | 28.3| 78.2 | 72.9|\n|MI-TI-DI-EOT+RAP | 32.8| 86.1| 79.5|\n\nAs shown in the above table, **1)** the EOT attack gets a moderate increase on attack performance compared with the baseline MI-TI-DI attack, which demonstrates that EOT could improve adversarial transferability. **2)** Our RAP attack achieves better performance and surpasses the EOT attack by a large margin, especially for Inc-v3 and VGG-16 target models. 
**3)** Combining RAP with EOT can further boost the EOT attack performance. \n\nThese results demonstrate that RAP could achieve better adversarial transferability and help find better flat local minima. Besides, the combination of RAP and EOT achieves the best performance among them, which demonstrates that these two methods complement each other. We will add this part in the revision.\n", " In this paper, the authors propose a method (RAP) for generating adversarial examples that are transferable to other models. The authors identify that adversarial examples lying on a flat minimum will be more likely to transfer to other models. The authors then propose the reverse adversarial perturbation, guided by an additional inner maximization optimization process, to generate adversarial examples lying on a flat minimum. + The proposed method is straightforward and effective.\n+ The authors conduct a comprehensive evaluation of several models with various types of attacks.\n+ The authors conduct a case study with Google Cloud Vision API to validate the effectiveness of the proposed method.\n- The proposed method is not new.\n- The authors did not provide an analysis or theoretical proof for justifying the flat minimum hypothesis.\n- The evaluation does not have competitive baselines. Though it seems that the theory of flat minimum is intuitive, is there any additional analysis besides the empirical experimental results that justifies the theory? Additionally, why is the simple EOT method not enough for generating adversarial examples at a flat minimum? It seems to the reviewer that no competitive baselines are provided in the paper. The limited novelty of the paper. The theory of flat minimum is intuitive and not well justified by either experiments or theoretical proofs. Competitive baselines are not provided to show the effectiveness of the proposed method.", " This paper focuses on generating transferable adversarial examples. Observing that previous works overfit the surrogate model, the authors propose a novel method to address this issue. According to the experiments, the proposed method generates stronger transferable adversarial examples.\n + Strength\n1. The proposed method is interesting and insightful. Previous methods usually address the overfitting issue by borrowing techniques from deep learning model training strategies, such as the momentum term and data augmentation. In contrast, this paper proposes a method specifically designed for generating adversarial examples.\n2. The results are reasonably good. According to the results in the main paper, the proposed RAP and RAP-LS significantly improve the transferability of adversarial examples.\n- Weakness\nI found the proposed method is less effective when attacking strong defense models (such as Feature Denoising), according to Table 8. Is there any explanation?\n See weakness. If the concern in weakness can be well addressed, I am willing to increase the score. 1. It is not clear if the proposed method can attack strong defense models.\n2. It is not clear if the proposed method can be applied to generate other kinds of adversarial examples (such as patch attacks)", " The paper proposes to add a reverse adversarial perturbation at each iteration in gradient-based attacks to compute more accurate gradient directions and escape sharp local minima #### Strengths\n- Paper is clear and the idea is simple.
\n- Experiments are comprehensive\n- Can be integrated easily into existing gradient-based attacks\n- Figure 3 is very insightful\n\n\n#### Weakness\n- The paper builds on an earlier line of works that find a more accurate gradient direction. The novelty of the method is not high but, that said, the paper clearly provides detailed analyses from a loss landscape perspective and is validated through a battery of experiments in both targeted and untargeted settings.\n\n - How is the performance of the attack wrt PGD-based adversarial training?\n\n- Recent works on improving gradient directions by looking ahead [a] or by accumulating the gradients of multiple data points sampled around the current data point [b] can be discussed to clearly highlight the contributions\n\n[a] Jang, Donggon, Sanghyeok Son, and Dae-Shik Kim. \"Strengthening the Transferability of Adversarial Examples Using Advanced Looking Ahead and Self-CutMix.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[b] Wang, Xiaosen, et al. \"Boosting adversarial transferability through enhanced momentum.\" arXiv preprint arXiv:2103.10609 (2021).\n\n- The SOTA transferability technique variance-tuning-based FGSM [c], as well as [a], should be reported since they refine the gradient direction as in the proposed method. I appreciate the authors for including another line of methods such as LinBP, ILA, and TTP. \n\n[c] Wang, Xiaosen, and Kun He. \"Enhancing the transferability of adversarial attacks through variance tuning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\nI'm giving the score of 4 because of the missing evaluation and discussion of [a, b, c], which also find an optimal gradient direction at each step. I'm very much open to increasing the scores given the comprehensive experiments already reported in the paper. - calculation of the inner optimization increases the computational time of the attack wrt baseline methods.\n- Adds more hyperparameters (S and mu) but the ablation studies show they are largely target-model agnostic", " The authors claim that adversarial examples should be in a flat local region for better transferability. To boost the transferability, they propose a simple yet effective method named Reverse Adversarial Perturbation (RAP). RAP adds an inner optimization to help the attack escape sharp local minima, which is general to other attacks. Experimental results demonstrate the high effectiveness of RAP. # Strengths\n\n1. The idea is novel and interesting. An adversarial example in a flat local region could boost transferability, which seems to be reasonable.\n\n2. The method is simple yet effective. Adding an inner optimization could effectively boost the transferability, which is also general to existing attacks.\n\n3. Extensive evaluations on ImageNet demonstrate the effectiveness of the proposed method in most scenarios.\n\n# Weakness\n\n1. Figure 1 is not clear. What do the x-axis and y-axis indicate? How do you obtain the curve?\n\n2. Why does the flat local region indicate better transferability? It might be interesting if you could provide some analysis.\n\n3. What does one iteration mean? Does it indicate one forward and back propagation? \n\n4. RAP is general to existing attacks, not limited to input transformation based attacks. Hence, the authors should integrate RAP into various attacks, such as a momentum-based attack [1], a different loss function [2], and a model-specific attack [3].\n\n5.
The biggest concern is the huge computation cost due to the larger number of iterations and the inner optimization in Eq. (4). Existing attacks can exhibit superior attack performance with far fewer iterations. For instance, MI-FGSM [4], VMI-FGSM [1], EMI-FGSM [5], and the corresponding input transformations adopt only 10 iterations.\n\n6. Minor typos:\n\nLine 42 “a slightly change…” -> “a slight change…”\n\nLine 65 “help to escape…” -> “help escape…”\n\nLine 76 “can significant boost…” -> “can significantly boost…”\n\n\n[1] Wang et al. Enhancing the transferability of adversarial attacks through variance tuning. CVPR 2021.\n\n[2] Wu et al. Boosting the Transferability of Adversarial Samples via Attention. CVPR 2020.\n\n[3] Li et al. Learning Transferable Adversarial Examples via Ghost Networks. AAAI 2020.\n\n[4] Dong et al. Boosting Adversarial Attacks with Momentum. CVPR 2018.\n\n[5] Wang et al. Boosting Adversarial Transferability through Enhanced Momentum. BMVC 2021.\n See weakness, especially 4 and 5. N/A" ]
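A minimal sketch of the RAP-LS loop structure described in Responses 4.3, 4.5, and 3.3 above, for a targeted attack. Only $K=400$, $K_{LS}=100$, and $T=8$ come from the rebuttal; the step sizes, budgets, and function names are illustrative assumptions. As stated there, the sketch performs exactly $K+(K-K_{LS})\cdot T$ forward-backward passes.

```python
import torch

def rap_ls_attack(model, loss_fn, x, y_target,
                  eps=16/255, eps_n=16/255, alpha=2/255, alpha_n=2/255,
                  K=400, K_LS=100, T=8):
    """Sketch of RAP with late start (RAP-LS) for a targeted attack."""
    delta = torch.zeros_like(x)                      # x^adv = x + delta
    for k in range(K):
        n_rap = torch.zeros_like(x)
        if k >= K_LS:                                # late start: inner max only after K_LS rounds
            for _ in range(T):                       # inner loop: one fwd/bwd per step
                n_rap.requires_grad_(True)
                loss = loss_fn(model(x + delta + n_rap), y_target)
                g, = torch.autograd.grad(loss, n_rap)
                # Ascent on n^rap: seek the worst-case (reverse) perturbation.
                n_rap = (n_rap + alpha_n * g.sign()).clamp(-eps_n, eps_n).detach()
        delta.requires_grad_(True)                   # outer step: one more fwd/bwd
        loss = loss_fn(model(x + delta + n_rap), y_target)
        g, = torch.autograd.grad(loss, delta)
        # Descent on delta: minimize the worst-case targeted loss.
        delta = (delta - alpha * g.sign()).clamp(-eps, eps).detach()
        delta = ((x + delta).clamp(0, 1) - x).detach()  # keep x^adv a valid image
    return x + delta
```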
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "muRmI2ZfGhu", "nips_2022_k5uFiFLWv3X", "r1O5naRNSGz", "Iceg-We4ALP", "Ajc-IqaP1NI", "RTQxoPRssgr", "muRmI2ZfGhu", "ozB2ph99g3u", "Iceg-We4ALP", "r1O5naRNSGz", "zxjtPtUSwq9", "Gygif0cheP", "nips_2022_k5uFiFLWv3X", "_fEXdVqJU_S0", "Iceg-We4ALP", "k4-7HWoaBM7", "r1O5naRNSGz", "zxjtPtUSwq9", "Gygif0cheP", "nips_2022_k5uFiFLWv3X", "nips_2022_k5uFiFLWv3X", "nips_2022_k5uFiFLWv3X", "nips_2022_k5uFiFLWv3X" ]
nips_2022_OoN6TVb4Vkq
Contextual Bandits with Knapsacks for a Conversion Model
We consider contextual bandits with knapsacks, with an underlying structure between rewards generated and cost vectors suffered. We do so motivated by sales with commercial discounts. At each round, given the stochastic i.i.d.\ context $\mathbf{x}_t$ and the arm picked $a_t$ (corresponding, e.g., to a discount level), a customer conversion may be obtained, in which case a reward $r(a_t,\mathbf{x}_t)$ is gained and vector costs $\mathbf{c}(a_t,\mathbf{x}_t)$ are suffered (corresponding, e.g., to losses of earnings). Otherwise, in the absence of a conversion, the reward and costs are null. The reward and costs achieved are thus coupled through the binary variable measuring conversion or the absence thereof. This underlying structure between rewards and costs is different from the linear structures considered by Agrawal and Devanur [2016] (but we show that the techniques introduced in the present article may also be applied to the case of these linear structures). The adaptive policies exhibited in this article solve at each round a linear program based on upper-confidence estimates of the probabilities of conversion given $a$ and $\mathbf{x}$. This kind of policy is most natural and achieves a regret bound of the typical order $(\mathrm{OPT}/B) \smash{\sqrt{T}}$, where $B$ is the total budget allowed, $\mathrm{OPT}$ is the optimal expected reward achievable by a static policy, and $T$ is the number of rounds.
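A schematic version of the per-round linear program described in this abstract, in our own notation: $\pi(\cdot\mid\mathbf{x})$ is a randomized static policy over arms and $\widehat{p}_t$ the estimated conversion probability. Whether upper- or lower-confidence estimates enter the cost constraint, and the exact normalization by $B/T$, are our assumptions; the precise program is stated in the paper.

```latex
\max_{\pi(\cdot\mid\mathbf{x})}\;
\mathbb{E}_{\mathbf{x}}\bigg[\sum_{a} \pi(a\mid\mathbf{x})\,
  \widehat{p}^{\,\mathrm{UCB}}_t(a,\mathbf{x})\, r(a,\mathbf{x})\bigg]
\quad \text{s.t.} \quad
\mathbb{E}_{\mathbf{x}}\bigg[\sum_{a} \pi(a\mid\mathbf{x})\,
  \widehat{p}_t(a,\mathbf{x})\, \mathbf{c}(a,\mathbf{x})\bigg] \le \frac{B}{T}\,\mathbf{1}
```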
Accept
As acknowledged in the reviews (and I concur), this is a well written paper that introduces a relevant and interesting contextual-bandits model and gives solid technical results. The paper certainly has its limitations, but overall the technical contributions are novel and well executed. The authors have effectively addressed the questions and concerns brought up in the reviews. I recommend acceptance.
train
[ "xnuTCS-3X1x", "iQMK1OSTzu", "5OCBuiBulfu0", "5_-3IwR-2a4", "CD_52oxB-_", "wxDpAZmonR", "PWDuWGz1rDi", "jctrIYyz5aF" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response and they have addressed most of my questions on the technical results. In terms of future directions, I think the main limitations of the current work (e.g., finiteness of contextual space, lack of specific lower bound results, etc.) remain to be further looked at. A more clearly stated lower bound result, in particular, can be helpful for evaluating the performance of the proposed algorithm. Nevertheless, I think the current work proposed an interesting model along with solid technical results. Given its contributions and limitations, I have updated my score to a 6. ", " My main concerns are addressed in the rebuttal. Please include the discussion on technical novelty, open questions regarding the lower bounds, experimental details regarding the benchmarks and new regret plots in the revised version. ", " We thank the reviewer for the careful reading and agree with the overall evaluation written. Interestingly, this overall evaluation (in terms of strengths and weaknesses) and the questions raised are extremely in line with what the other reviewers write! \n\nWe would like to answer to the weaknesses underlined and to the questions asked.\n\nWeakness 1 (The paper can do a better job of highlighting the technical novelty in the analysis. In the main paper, attention is paid to describing steps that bear similarities with the earlier works. Please elaborate if any non-trivial new technical machinery was introduced in the regret analysis apart from the KKT analysis): Yes, we will do. The reviewer is right in spotting the KKT part (and the associated direct primal formulation of a strategy) as the main technical novelty in the analysis. To understand that this is the crux of our technical contributions, one must read Section E.3 in appendix, and admittedly, this should have been given more visibility, which we will do. In Appendix E.3, we relate our new approach to the approach by Agrawal and Devanur (2016). They had to rely on a parameter Z (to be learned), while we replace Z by dual optimal variables. Also (though we did not even mention it in the main body of the article) we think that we provide a cleaner and more transparent treatment of the use of the no-op action, which was often not specifically discussed (if not neglected) in earlier CBwK works. This weakness of not giving enough visibility to the new contributions was raised by the three reviewers, and we will edit the main body to reflect the true nature of our technical contributions, and in particular, move material from Section E.3 into the main body. \n\nWeakness 2 (A discussion on lower bounds and tightness of the results will help readers understand the merits of the proposed work better): We agree. For now, we mostly note (see comments after Theorem 2) that the orders of magnitude of our upper bound match the ones of previous bounds (obtained in different settings). The references cited actually only hint at some vague arguments tending to indicate that these orders of magnitude should be sharp enough, but none of them provides a clear and well-stated theorem with a lower bound. We actually had put the fact of stating a clean lower bound (together with other research perspectives) on our to-do list for a subsequent article. The present article, while being 47-page-long with all appendices, is but a milestone on the long and difficult road of the CBwK theory... \n\nQuestion 1 (Why negative regret appears in simulations?): Yes, we will comment on this. 
The expectation of the regret is non-negative, so the average regret computed on independent runs must be non-negative. However, due to computational complexity and time-management issues, we were only able to compute 10 runs. We provided the confidence intervals on the associated expectation of the regret, and all contain 0 (and larger values), even when the empirical average over 10 runs is negative. We agree that this fact should have been commented on and underlined. We will do so. \n\nQuestion 2 (When justifying comparing with the optimal static policy in regret, the work of Agrawal and Devanur, 2016, is mentioned. However, the setup here is not the same due to the conversion model. How is it ensured that the relation between the optimal static policy and the best feasible adaptive policy remains the same as in this prior work under the conversion model?): We are unsure to which part of the article this comment refers. We therefore provide two answers.\n\nQuestion 2--Answer 1, based on the text between lines 126-129 in Section 2.1: The text is misleading indeed. We actually meant and should have written that the argument of Agrawal and Devanur (2016, Appendix B) can be adapted in a straightforward manner to prove this fact. Their argument is short (10-line-long) and easy to adapt. But it's true that, given we provided lengthy and self-contained appendices, we could well have added a complete proof of this fact! We will do so. \n\nQuestion 2--Answer 2, why consider the policy by Agrawal and Devanur (2016) as a competitor in the simulations of Appendix F: In lines 1124-1129 in Appendix F, we agree that this policy is disadvantaged. This is even more true as it models rewards and costs independently, while they are coupled (and often simultaneously null). We nonetheless used this linear-modeling policy because a typical justification for linear approximations is that they are simple and usually work well. Also, we wanted to have some competitor to our policy (to which other policy could we compare it, given that we introduce a new setting?), and this one was the only one easy to implement in practice---unlike the policies by Badanidiyuru et al. (2014) and subsequent works, which rely on considering finitely many benchmark policies. We will detail these issues more in the revision. ", " We thank the reviewer for the careful reading and agree with the overall evaluation written. Interestingly, this overall evaluation (in terms of strengths and weaknesses) and the questions raised are extremely in line with what the other reviewers write! \n\nWe would like to respond to the weaknesses underlined and to the questions asked.\n\nWeakness 1, first part (The finiteness of the context set does appear limited): Yes. But as we underline in Section 2.2, all earlier works on contextual bandits with knapsacks [CBwK] relied on heavy assumptions: either there was no constraint on the context set but regret was with respect to a finite number of pre-specified benchmark policies; or the cost and reward functions r and c were linearly linked to the contexts, i.e., the problem was parametric. In our case, while the conversion function P is parametric (but we do not require finiteness to learn its parameter \\theta_\\star in Phase 1), we need some restriction or structural assumption on the functions r and c, and we picked finite contexts as such a restriction (so as to be able to carry out our Phase 2). The simulations of Appendix F (see in particular the end of Section F.2) discuss these issues further. 
This could certainly be improved one day: CBwK is an (extremely) difficult and largely unexplored problem, and admittedly, our contribution is only a milestone in developing a richer theory. \n\nWeakness 1, second part ([...] required only for Phase 2 of the algorithm and [...] theoretical results when one discretizes a continuous context set X for Phase 2?): They remain as stated in the main body of the article, though we did not underline this enough. We will do so in the revision (e.g., in Appendix F.2). More precisely, the error for learning \\theta_\\star (as an outcome of Step 1 of the proof, see the equations of line 229) is then 'carried over' in the subsequent steps of the proof of Theorem 1 in a transparent way, as it is encompassed in the \\epsilon terms. Finiteness of X plays a role in these subsequent steps only to be able to write and solve the convex optimization problems. So, P, as mentioned above, may well depend on non-finite contexts, but we have r and c take finitely many values by imposing that we cluster contexts into finitely many groups. Somehow, in practice, this only means that r and c are 'rougher' and 'more random' than we would like. \n\nWeakness 2 (Some numerical studies should be included in the main body of the paper. [...]): We agree... except that we did not find a way to provide a numerical study in 1 page. Appendix F, reporting numerical results, is 6-page-long: 1 page for the graphs but 5 pages to describe the setting (what contexts are, the functions P, c and r, etc.). Any hint/suggestion on this front would be much appreciated!\n\nWeakness 3 (Including the lower bound results in the paper could be helpful, even though the authors referred to other works. Currently it is unclear how good the regret upper bounds are): We agree that specific lower bounds should be proved. We mostly note (see comments after Theorem 2) that the orders of magnitude of our upper bound match the ones of previous bounds (obtained in different settings). The references cited actually only hint at some vague arguments tending to indicate that these orders of magnitude should be sharp enough, but none of them provides a clear and well-stated theorem with a lower bound. We had actually put this issue (together with other research perspectives) on our to-do list for a subsequent article. The present article, while being 47-page-long with all appendices, is but a milestone on the long and difficult road of CBwK theory. \n\nQuestion 1: Yes, we suspect that the bound (7) could be slightly improved, but not in terms of the orders of magnitude in t (which is the critical point in the bound). It's merely that we apply T|X| times a simple Hoeffding-Azuma argument and that some more powerful empirical-process technique might lead to a slightly optimized bound, e.g., without a \\sqrt{\\ln T} term. But all this would only slightly reduce the b_T term in the bound of Theorem 2 and would not change the overall order of magnitude of the bound. We will ask some experts in empirical processes. We underlined this possibility of improvement only out of honesty and to be transparent that we did not have time to ask an expert, but it is really a detail (at least in our eyes). \n\nQuestion 2: Yes, and all earlier references on logistic bandits (Filippi et al. 2010, Faury et al. 2020) suffer from the same (deep) issue. However, we mention at the end of Section 3 that these references and our own simulations show that in practice, this projection step is virtually never required. 
We therefore did not bump into any problem when carrying out our numerical study, as the outcome of MLE was already in the required set \\Theta. Admittedly, we took a \\Theta large enough in our simulations. ", " We thank the reviewer for the careful reading and agree with the overall evaluation written. Interestingly, this overall evaluation (in terms of strengths and weaknesses) and the questions raised are extremely in line with what the other reviewers write! \n\nWe would like to respond to the weaknesses underlined and to the questions asked.\n\nWeakness 1 (The proposed adaptive policy requires a finite context set): Yes. But as we underline in Section 2.2, all earlier works on contextual bandits with knapsacks [CBwK] relied on heavy assumptions: either there was no constraint on the context set but regret was with respect to a finite number of pre-specified benchmark policies; or the cost and reward functions r and c were linearly linked to the contexts, i.e., the problem was parametric. In our case, while the conversion function P is parametric (but we do not require finiteness to learn its parameter \\theta_\\star in Phase 1), we need some restriction or structural assumption on the functions r and c, and we picked finite contexts as such a restriction (so as to be able to carry out our Phase 2). The simulations of Appendix F (see in particular the end of Section F.2) discuss these issues further. This could certainly be improved one day: CBwK is an (extremely) difficult and largely unexplored problem, and admittedly, our contribution is only a milestone in developing a richer theory. \n\nWeakness 2 (The computational complexity of the proposed adaptive policy is high [...]): For Phase 2, see the answer above. For Phase 1: yes, and all earlier references on logistic bandits (Filippi et al. 2010, Faury et al. 2020) suffer from the same (deep) issue. However, we mention at the end of Section 3 that these references and our own simulations show that in practice, this projection step is virtually never required. \n\nWeakness 3, first part (Theoretical analysis techniques are standard): Yes, all techniques taken in isolation are standard. The true technical contribution, which we admittedly did not underline enough in the main body of the article, is that the algorithm directly uses a primal formulation. This was achieved thanks to the second step of the proof of Theorem 2, which looks so natural but follows from careful writing of all KKT conditions. That this is the crux of our technical contributions may be best understood by reading Section E.3 in the appendix, where we relate our new approach to the approach by Agrawal and Devanur (2016). They had to rely on a parameter Z (to be learned), while we replace Z by dual optimal variables. Also (though we did not even mention it in the main body of the article) we think that we provide a cleaner and more transparent treatment of the use of the no-op action, which was often not specifically discussed (if not neglected) in earlier CBwK works. All in all, given that this weakness was raised by the three reviewers, we will edit the main body to reflect the true nature of our technical contributions, and in particular, move material from Section E.3 into the main body.\n\nWeakness 3, second part (Some theoretical regret bounds are not sharp): May we respectfully ask for more details on this? 
Yes, we suspect that the bound (7) could be slightly improved, but not in terms of the orders of magnitude in t (which is the critical point in the bound). It's merely that we apply T|X| times a simple Hoeffding-Azuma argument and that some more powerful empirical-process technique might lead to a slightly optimized bound, e.g., without a \\sqrt{\\ln T} term. But all this would only slightly reduce the b_T term in the bound of Theorem 2 and would not change the overall order of magnitude of the bound. We will ask some experts in empirical processes. We underlined this possibility of improvement only out of honesty and to be transparent that we did not have time to ask an expert, but it is really a detail (at least in our eyes). \n\nQuestion 1 (For the projection step in Phase 1 of the proposed adaptive policy, could the authors elaborate [...]?): We provide the formulation of the projection in Equation (4). Fortunately, it did not need to be solved in our simulations, as the outcome of MLE was already in the required set \\Theta; see the answer to Weakness 2. Admittedly, we took a \\Theta large enough in our simulations. \n\nQuestion 2 (The authors stated in quite a few places that their regret bounds can likely be improved): To the best of our knowledge, we only mention this for bound (7)---see the beginning and end of Section 5. See our answer above to Weakness 3, second part. \n\nQuestion 3 (The proposed adaptive policy and its analysis are based on or inspired by integrating several existing techniques. [...] Could the authors highlight the key differences [...]?): Yes, see the answer above to Weakness 3, first part. The main difference and contribution is w.r.t. Agrawal and Devanur (2016), who somehow used a dual formulation to be able to rely on online convex optimization techniques.", " This paper studied the problem of contextual bandits with knapsack constraints, where, in each round, given the stochastic i.i.d. context and the selected arm, a conversion may be obtained. The reward and costs are coupled through the binary variable measuring conversion. The authors proposed a new adaptive policy that solves a linear program based on upper-confidence estimates of the probabilities of conversion in each round, and they show that the proposed policy achieves a regret bound of order $(\\mathrm{OPT}/B)\\sqrt{T}$, where $B$ is the total budget allowed, $\\mathrm{OPT}$ is the optimal expected reward, and $T$ is the number of rounds. The authors also showed that their proposed techniques can also be applied to the linear structures considered in [Agrawal and Devanur 2016]. Strengths:\n\n+ This paper studied a new MAB model that integrates contextual stochastic bandits and knapsack constraints, which is well motivated in practice.\n+ The authors proposed an adaptive policy that achieves $O(\\sqrt{T})$ regret for the proposed contextual MAB model with knapsack constraints.\n+ The proposed adaptive policy can also be applied to traditional linear contextual bandits with knapsacks.\n\nWeaknesses:\n\n- The proposed adaptive policy requires a finite context set.\n- The computational complexity of the proposed adaptive policy is high: Phase 1 of the proposed policy requires a non-trivial projection step and Phase 2 assumes a finite context set. \n- Theoretical analysis techniques are standard and some theoretical regret bounds are not sharp. 
- For the projection step in Phase 1 of the proposed adaptive policy, could the authors elaborate on how the projection problem should be formulated and solved?\n\n- The authors stated in quite a few places that their regret bounds can likely be improved. Could they provide a tighter analysis?\n\n- The proposed adaptive policy and its analysis are based on or inspired by integrating several existing techniques (e.g., Logistic-UCB1 by [Faury et al. 2020], [Agrawal and Devanur 2016]). The key analysis steps are standard and similar to these existing works. Could the authors highlight the key differences, challenges, novelty, and contributions in their proofs? This paper is theoretical and does not raise negative societal impact concerns.", " In this paper, the authors studied the contextual bandits with knapsack problem, where the rewards and costs are coupled through a conversion term. The authors proposed an adaptive policy for the proposed setting, and showed that it achieves a regret bound of order $\\tilde{O}(\\frac{\\text{OPT}}{B} \\sqrt{T})$, regardless of whether the context distribution is known or unknown. Finally, the authors also briefly discussed how their approach also applies to the linear contextual bandits with knapsack setting studied in [Agrawal and Devanur 2016].\n Strengths:\n1. The authors considered a novel setting that couples reward and cost using a conversion term, which corresponds to whether the customer accepts the offer. This is a realistic setting (though it would be ideal if the authors could move some of the discussion on real-world examples to the main body of the paper). \n2. The authors proposed an adaptive policy for their setting and provided detailed discussion about each stage. They also provided theoretical results for the settings of both known and unknown context distributions and the proof appears sound. \n\nWeaknesses:\n1. The finiteness of the context set does appear limited. The authors mentioned that this is required only for Phase 2 of the algorithm, and I wonder what the theoretical results are when one discretizes a continuous context set $\\mathcal{X}$ for Phase 2. \n2. Some numerical studies should be included in the main body of the paper. In particular, an illustration of the regret bound can be helpful in showing the dependency of the algorithm on $T$.\n3. Including the lower bound results in the paper could be helpful, even though the authors referred to other works. Currently it is unclear how good the regret upper bounds are. \n 1. The authors mentioned that the regret bound in Theorem 2 can be further improved if the uniform deviation can be improved. Could you provide more details related to this statement? Why is the bound in equation (7) likely to be improved?\n2. The authors state that the projection step in Phase 1 could be an issue. I wonder how this issue is dealt with. Do you do something different in numerical studies? Can you make certain changes to this step to improve the theoretical complexity? \n None. ", " This paper proposes a new contextual bandit model with knapsacks. In this new model, the decision-maker sequentially offers actions to arriving users after observing their contexts. Rewards and costs are only realized if the users accept the offer (a conversion happens). Conversion probabilities follow a logistic model that depends on action-context features and an unknown parameter vector. 
The goal is to design an adaptive policy that performs almost as well as the best static policy that knows the context distribution and conversion probabilities. The authors propose the Logistic-UCB1 algorithm to solve this problem. At each round, the proposed algorithm computes the maximum likelihood estimate of the unknown parameter vector based on the history. Then, it computes action selection probabilities by solving an optimization problem with expected constraints. This is a linear program that is computationally manageable for finite context spaces. Sublinear regret upper bounds are shown both when the learner knows the context distribution and when the learner uses an empirical estimate of the context distribution. Supplemental material provides a motivating example from the banking sector, where the actions correspond to discounts on offered loans. They also investigate how to extend the current work to linear contextual bandits with knapsacks and come up with an algorithm with better budget requirements and parameter tuning capabilities, albeit requiring the finiteness of contexts. Strengths:\n\nThis paper is well written. It proposes a new structural model called conversion for contextual bandits with knapsacks, inspired by work on logistic bandits. The model has interesting real-world applications. The regret metric and the proposed algorithm are intuitive. I think the main paper does a good job positioning the current work within the literature on contextual bandits with knapsacks. Comparison with existing works is sufficient. \n\n\nWeaknesses: \n\nThe paper can do a better job of highlighting the technical novelty in the analysis. In the main paper, attention is paid to describing steps that bear similarities with the earlier works. Please elaborate if any non-trivial new technical machinery was introduced in the regret analysis apart from the KKT analysis. \n\nA discussion on lower bounds and tightness of the results will help readers understand the merits of the proposed work better. Why does negative regret appear in simulations? \n\nWhen justifying comparing with the optimal static policy in regret, the work of Agrawal and Devanur (2016) is mentioned. However, the setup here is not the same due to the conversion model. How is it ensured that the relation between the optimal static policy and the best feasible adaptive policy remains the same as in this prior work under the conversion model? This work does not have any societal impact. Limitations are well discussed. " ]
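To make the planning step described in the last review concrete, below is a minimal Python sketch of the per-round linear program for finite contexts, under our own naming and data-layout assumptions. It is not the authors' code: Phase 1 (the logistic MLE with projection) and the exact confidence widths are omitted, and the per-round budget normalization B/T is an assumption on our part.

```python
import numpy as np
from scipy.optimize import linprog

def plan_round(q, p_ucb, r, c, budget_per_round):
    """One planning step of an LP-based CBwK policy (illustrative only).

    q:      (X,) empirical context distribution
    p_ucb:  (X, A) upper-confidence estimates of conversion probabilities
    r:      (X, A) rewards on conversion
    c:      (X, A, D) cost vectors on conversion
    Returns pi: (X, A) action-selection probabilities per context.
    """
    X, A = p_ucb.shape
    D = c.shape[2]
    # Objective: maximize expected optimistic reward (linprog minimizes).
    obj = -(q[:, None] * p_ucb * r).reshape(X * A)
    # Budget constraints: expected cost per round <= B/T, componentwise.
    A_ub = (q[:, None, None] * p_ucb[:, :, None] * c).reshape(X * A, D).T
    b_ub = np.full(D, budget_per_round)
    # Each context's action probabilities must sum to one.
    A_eq = np.kron(np.eye(X), np.ones((1, A)))
    b_eq = np.ones(X)
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, 1))
    return res.x.reshape(X, A)  # res.status should be checked in practice
```

The returned matrix gives, for each context, the probabilities with which the policy would draw an arm at the current round.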
[ -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, 4, 2, 3 ]
[ "5_-3IwR-2a4", "5OCBuiBulfu0", "jctrIYyz5aF", "PWDuWGz1rDi", "wxDpAZmonR", "nips_2022_OoN6TVb4Vkq", "nips_2022_OoN6TVb4Vkq", "nips_2022_OoN6TVb4Vkq" ]
nips_2022_buXZ7nIqiwE
Using natural language and program abstractions to instill human inductive biases in machines
Strong inductive biases give humans the ability to quickly learn to perform a variety of tasks. Although meta-learning is a method to endow neural networks with useful inductive biases, agents trained by meta-learning may sometimes acquire very different strategies from humans. We show that co-training these agents on predicting representations from natural language task descriptions and programs induced to generate such tasks guides them toward more human-like inductive biases. Human-generated language descriptions and program induction models that add new learned primitives both contain abstract concepts that can compress description length. Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.
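A minimal sketch of the co-training idea in the abstract above: a recurrent meta-RL agent whose trunk additionally predicts a fixed task-representation embedding (for instance from a frozen language model over the task description). The 120-unit LSTM matches the rebuttal further below; the 768-dimensional embedding, layer names, and unit loss weighting are assumptions of ours, not the paper's specification.

```python
import torch
import torch.nn as nn

class CoTrainedAgent(nn.Module):
    """Illustrative co-training head: the policy trunk also predicts a
    task-representation embedding (e.g., from a frozen language model)."""
    def __init__(self, obs_dim, n_actions, hidden=120, embed_dim=768):
        super().__init__()
        self.trunk = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, n_actions)
        self.value = nn.Linear(hidden, 1)
        self.aux_head = nn.Linear(hidden, embed_dim)  # embedding prediction

    def forward(self, obs_seq):
        h, _ = self.trunk(obs_seq)                 # (B, T, hidden)
        return self.policy(h), self.value(h), self.aux_head(h[:, -1])

def co_training_loss(rl_loss, pred_embed, target_embed, aux_weight=1.0):
    # Total objective = usual RL loss + auxiliary representation prediction.
    aux = nn.functional.mse_loss(pred_embed, target_embed)
    return rl_loss + aux_weight * aux
```

Swapping the target embedding between human language, synthetic language, or program representations is what distinguishes the conditions compared in the reviews below.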
Accept
The submission explores differences in human and machine inductive biases. Using the task of generating a pattern in a grid, the authors first show that models trained on human inputs generalize better to machine-generated inputs than to other human inputs, suggesting that models lack the correct inductive bias. However, by exploiting representations of natural language descriptions or programs during training, models can be given a human-like inductive bias. The reviewers agree that this is creative, thought-provoking work backed up by thorough experiments, and the paper is particularly well written. The main area for improvement in future work would be showing that these results would generalize to other, more complex domains.
train
[ "bk-L9PoPZkc", "revRBERMQc-", "wejsH1UHEWA", "sKkadxqyNvZ", "kV3Yg4HxA-y", "tdgMn-r5YWB", "5RnUbSW8zA5", "SBCSSU8gshG", "A5IcF6hPDSk", "8k3BbpTDx28", "UAarKPlDOJS", "9D2RU7P-fQa" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks a lot for the detailed responses, all my points are adequately addressed. I agree with the authors that the weaknesses I mentioned in my initial review are better seen as avenues for future work. Very interesting to learn about this meta-learning approach for RL, I will read more about that. I will keep my rating at strong accept, I'm excited to see this paper accepted and keep an eye on future work.", " We are very grateful to all the reviewers for providing the very helpful and constructive feedback. There were a lot of great points raised about exploring potential avenues for scaling this approach up, exploring the role of abstraction in other kinds of environments, etc. We have included as many main points as we could within the page limit (9 pages) of the submission, but are excited to incorporate more details/points from these discussions in the camera ready version (which gives us 10 pages) should this work be accepted, or in future iterations of this work.", " > The nearest neighbor heuristic also encodes a human-like prior in some sense, and since many results are reported relative to this heuristic, I wonder if this could be stacking the cards against the agent.\n\nWe agree with the reviewer that the nearest neighbor heuristic has a human-like prior (spatial proximity). However, it is not necessarily the primary human-like prior we are interested in studying (abstraction). This z-score metric was introduced in Kumar et al. 2021 ICLR to control for this proximity prior so that the prior of interest can be studied and we follow suit in this work. \n \nWe think it’s less likely that this could be stacking the cards against the agent, because both humans and all agents are evaluated on all task distributions under the same metric for comparison. We also believe that, since the agents all use a conv layer, it would naturally contain this bias of spatial proximity (Kumar et al. 2021 ICLR has a discussion on these points in the section titled “Bias towards spatial proximity.”).\n\n> 275: \"improves performance uniformly\": if I'm reading the results correctly, autoencoder seems to decrease the performance of the RL baseline on the machine-generated boards\n\nYes, we thank the reviewer for catching this error and we have amended the claim accordingly in the new submission. \n\n> 302-303: I may have misunderstood, but the library learning isn't grounded in language in the current work, is it (i.e. the paper doesn't do a LAPS-like, Wong et al. approach) -- it's just the RL agent that is grounded?\n\nYes, the reviewer is right. We used program induction (including library learning) as a separate training signal for the RL agent (independent of also using language as a training signal for the agent). We apologize that this was ambiguous and have made this more explicit in the new submission. \n\n> Mu et al. does use human-generated language for the Birds dataset. This would also be a good place to cite Andreas et al., Learning with Latent Language\n\nWe thank the reviewer for catching this and have made the appropriate citations accordingly in the new submission. \n\n> 312: this is describing the results in Fig 5B, right? there are really only 2 bits in the comparison (relative to the RL agent with no representation prediction, GSP goes up; control goes down), and so this wasn't totally convincing to me\nWe have softened this claim in the new submission. 
\n\n> line 51: is a meta-learning\n\n> 59: define the auxiliary representation here\n\n> 93-95: this seems intuitive, but it could help to give some stats to make it concrete -- what does it mean to respect most of the stat properties?\n\n> 240: it wasn't clear until seeing Fig 6 that \"task attribute\" can include representations of the language or program.\n\n> 250-252: I confess that having only these citations bugged me a bit, given the long tradition of work on language grounding before 2018.\n\nWe appreciate all these suggestions. They have been incorporated into the revised document. \n\n> Fig 5 caption: \"across modalities\" was a bit confusing to me: it's the correlation matrices that are being compared across modalities, right, not the individual representations? (i.e. language embeddings are never compared to program embeddings)\n\nYes, this is correct in that we are comparing the correlation matrices and not the specific representations. We have made this clearer in the revised paper. \n\nWorks cited:\n\n* Kumar, S., Dasgupta, I., Cohen, J., Daw, N., & Griffiths, T. (2021). Meta-Learning of Structured Task Distributions in Humans and Machines. In International Conference on Learning Representations.\n\n\n\n", " > Q1) 123- I was pretty unclear on how meta-learning is used in the agent training. Is a meta-learning algorithm used (it doesn't seem to be; PPO seems to be applied in each individual task), or is it a meta-learning architecture? If the second, does the LSTM model the temporal nature of individual tasks (recurrence is across timesteps for individual agent actions) or the temporal nature of learning across tasks? \n\nWe apologize for the paper not clearly stating the exact connection with meta-learning. Indeed, we are using a meta-learning architecture, and the LSTM is modeling the temporal nature of individual tasks. In the supplement, we have included this explanation as well as relevant references that detail this connection in the revised paper (reproduced below):\n\n“Meta-learning can be considered as a bi-level optimization problem where there is an outer-loop of learning in which the model learns a useful inductive bias across different tasks of the same task distribution and an inner-loop of learning which takes that inductive bias and rapidly learns or adapts within a specific task (Hospedales et al. 2020). In recurrent-based meta-reinforcement learning architectures like ours, in which tasks are fed sequentially to the model, the outer loop is explicitly implemented as a reinforcement learning algorithm that updates the weights across tasks and the inner loop is implicitly implemented as a separate reinforcement learning algorithm in the activation dynamics of the recurrent network (Wang et al. 2016; Botvinick et al. 2019) that employs fast adaptation within a specific task. See (Ortega et al. 2019) for a formal explanation of this adaptation. In our case, the LSTM weights are updated in the outer loop across different grids to give the LSTM a useful prior learned from patterns or abstractions seen across different grid tasks. The activation dynamics within the LSTM utilizes this prior, along with the history of observations within the episode, to implement a fast algorithm to solve the current grid task for the inner loop.”\n\n> Q2) What are 120 LSTM units? 
Does this mean an LSTM with hidden dimension 120, or 120 different LSTM cells, or 120 unrolled time steps (or something else)?\n\nWe apologize for being vague here. We meant that we used an LSTM with 120 different *cells.* We have clarified this point in the revised document.\n\n> The paper shows similar effects from natural language abstractions & learned program abstractions, but it would be interesting to also investigate whether there is a direct correspondence between these modalities, e.g. using alignment methods similar to those in Andreas et al. 2017, Translating Neuralese or Andreas 2019, Measuring compositionality in representation learning [I feel I should disclaim here that I am not an author of these works :) ].\n\nWe agree that this is an important future direction that we would be very interested in pursuing. In addition to the works mentioned by the reviewer, Wong et al. 2021 (ICML) uses a similar joint language and program idea as an objective and Wong et al., 2022 (CogSci) uses it as an evaluation metric.\n\n> One additional point of uncertainty for me was what typifies the \"machine prior\". It's not at all clear to me that an agent should do better on the control distribution than on the human distribution, particularly given what seem to be differences in the architecture used to produce the distribution and the one used in the RL agent. It would strengthen the paper to explore this more. \n\nWe agree that it is important to investigate what typifies the machine prior. We believe a systematic exploration of different combinations of architectures that produce the distribution and use the distribution during training would allow us to answer this question. We are excited to explore this in future work. \n\nWorks cited:\n\n* Wong, C., Ellis, K. M., Tenenbaum, J., & Andreas, J. (2021, July). Leveraging language to learn program abstractions and search heuristics. In International Conference on Machine Learning (pp. 11193-11204). PMLR.\n\n* Wong, C., McCarthy, W. P., Grand, G., Friedman, Y., Tenenbaum, J. B., Andreas, J., ... & Fan, J. E. (2022). Identifying concept libraries from language about object structure. arXiv preprint arXiv:2205.05666.\n\n\n\n\n\n", " We thank the reviewer for their positive comments and all their constructive feedback. Below is a response to the questions/concerns raised. \n\n> W1) The use of language in the paper is not quite as well-developed as the use of program abstractions. There's a stark contrast between the nature of the synthetic language and human-written language -- not just in the level of abstraction, but also in being human-written versus synthetic. The synthetic language is likely far out-of-distribution for the RoBERTa model, and the language representations (if I understood correctly) weren't induced for any of the tasks in this paper (the RoBERTa model was frozen).\n\nWe agree the synthetic language may be more out-of-distribution for the RoBERTa model than the human-written language, and incorporating finetuning is an important next step. For the purpose of our experiments, however, the synthetic distributions being OOD doesn’t fully account for the effects we found. In particular, we note that the performance boost for human-written language is selective in that it only helps for human-generated boards, while for machine-generated boards, the synthetic language performs better than the human language. This suggests that the synthetic language still provides a useful signal for the model. 
We would not necessarily expect such selectivity in performance boost if the only difference between synthetic and human language was that one is more in distribution for the large language model. \n\n> I would have appreciated a finer-grained analysis that compared different levels of abstraction in the human-written language. But I don't think this is a crucial weakness.\n\nThis is a great point. In future work we would like to more systematically dig into the different levels of abstraction in the human-written language, ideally using a human experiment where we rate descriptions based on description length vs. reconstructability (i.e. can someone reconstruct the board given the description?). This experiment would provide a more convincing way of quantifying the level of abstraction (and its usefulness) in the descriptions than simply eliciting them with different prompts. It would then be interesting to show what level of abstraction helps the agent learn human inductive biases the most (i.e. both too little and too much abstraction may not be so helpful). \n\n> W2) I worry a bit that the findings here are bound to depend on the program abstractions and the language elicited. I don't think this is a crucial weakness because the paper was pretty careful here (primitives and synthetic language, while similar in design, are natural for this domain; and the language is human-elicited and program abstractions are learned from just a few low-level primitives) -- but I'd be more convinced if the main findings were also shown to hold in another domain. But again I don't think this is a crucial weakness, there's enough here that that could be followup work.\n\nWe agree that our task is tailored to visualizing abstract human inductive biases. Now that we've validated our approach using a careful experimental setup in which such abstractions are easily visualized, we can set out to scale up this approach in future work on larger, more real-world and natural domains. For larger environments, this may involve automating the language description generation with the use of vision-language models or using graphics program DSLs to synthesize photorealistic images with structure that can apply across many domains (see our response to the first point for a larger discussion on this). Additionally, human abstractions in some domains will help more than others. Some domains, such as mathematical reasoning (as the reviewer brought up), planning, or puzzle solving even more obviously necessitate abstractions. From the literature in cognitive science (e.g. Lake et al. 2017), we believe these domains correlate with tasks that humans are good at and typical neural network agents are not so good at. It will be exciting future work to use our approach to test this correlation by evaluating on which other tasks co-training with language and program abstractions can improve neural agents' abilities. Doing so will be an exciting opportunity both for studying human intelligence, by getting a better context for what the role of abstraction is in everyday cognition, and for improving machine intelligence, by expanding the capabilities of current neural agents. \n\nWorks cited:\n\n* Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. 
Behavioral and brain sciences, 40.", " > This is a very minor point, and maybe more of an opinion than a weakness, but in the discussion you claim that your work suggests \"not all language is created equal: human-generated language leads to more human-like performance than synthetic language descriptions\", but this seems like more of an artefact of the particular type of synthetic language you chose, namely one without any abstractions, simply listing the tiles. Probably a synthetically generated description that would take into account shapes like lines (which are easily automatically detected) would work as well as natural language. In any case, I do agree that natural language is more suited to this, I just think the comparison shouldn't be synthetic versus natural but rather learned/natural abstractions (language/library learning) versus hardcoded synthetic abstractions (that will cover less of the abstractions in the data).\n\nThis is a great point and we very much agree. The synthetic descriptions we used are devoid of any abstractions (we made them as literal as possible) and it is very plausible that synthetically generated descriptions with embedded abstractions (such as lines, shapes, etc.) would better instill human inductive biases. We refrained from creating synthetic descriptions with specific abstractions ourselves since it would be an arbitrary choice, but we note that human descriptions do indeed contain these abstractions, and DreamCoder libraries also pick up on these abstractions. We have thus reworded this observation in the discussion to focus on the 'abstract language vs. literal language' axis rather than the 'natural vs. synthetic' axis.\n\n> I don't see under what definition this method uses meta-learning. As far as I know [2], meta-learning always has some meta-phase (i.e. doing some learning over the task distribution) to learn a meta representation (e.g. initialisation of parameters like in MAML) and an inner phase on a task sampled from the task distribution. This method does have a task distribution, but there's no notion of meta-learning as far as I can see. I could have an outdated idea of what constitutes meta-learning nowadays, so adding it to the questions :)\n\nWe apologize for the paper not clearly stating the exact connection with meta-learning. In the supplement, we have included this explanation as well as relevant references that detail the connection in the revised paper (reproduced below):\n\n“Meta-learning can be considered as a bi-level optimization problem where there is an outer-loop of learning in which the model learns a useful inductive bias across different tasks of the same task distribution and an inner-loop of learning which takes that inductive bias and rapidly learns or adapts within a specific task (Hospedales et al. 2020). In recurrent-based meta-reinforcement learning architectures like ours, in which tasks are fed sequentially to the model, the outer loop is explicitly implemented as a reinforcement learning algorithm that updates the weights across tasks and the inner loop is implicitly implemented as a separate reinforcement learning algorithm in the activation dynamics of the recurrent network (Wang et al. 2016; Botvinick et al. 2019) that employs fast adaptation within a specific task. See (Ortega et al. 2019) for a formal explanation of this adaptation. 
In our case, the LSTM weights are updated in the outer loop across different grids to give the LSTM a useful prior learned from patterns or abstractions seen across different grid tasks. The activation dynamics within the LSTM utilizes this prior, along with the history of observations within the episode, to implement a fast algorithm to solve the current grid task for the inner loop.”\n\n> Nit 1: Can you add how many runs you did to get the performance in variance, and add in the bar chart figures that the bars are variance over different runs / different humans?\n\n> Nit 2: I can't parse line 249-250\nWe thank the reviewer for pointing out these points of ambiguity. We have clarified them in the revised submission. \n\n> Nit 3: I personally found library learning in the abstract slightly confusing because I didn't know the term. Generally, I think the abstract could use some work on clarity.\n\nWe apologize for the more confusing nature of the abstract and have refined it to be clearer. ", " We thank the reviewer for their positive comments and constructive feedback. Below is a response to the questions/concerns raised. \n\n> It is mentioned in the discussion, but the weakness of this method is that either a DSL needs to be created or natural language descriptions of each task in the dataset. The former isn't as expensive but might be difficult for more realistic tasks, and the latter is expensive. It would be nice to see some more discussion on the feasibility of scaling this approach to real-world tasks. There are many tasks for which humans are thought to use program-like representations (e.g. mathematics, counting, addition, etc., see [1] below). Also, perhaps vision-language models could be used to generate the language descriptions to get around the need for crowdsourcing them.\n\nWe agree with the reviewer that this remains the most important next step for our work. We are excited to brainstorm potential steps to make the path to scaling clearer. The primary bottlenecks of our approach are the need to (1) collect language descriptions and/or (2) run program induction on the environment's states in order to include them as co-training signals. We have added discussion of these avenues to our limitations paragraph in the discussion section.\n\nOn the language side, we agree that a potential remedy would be moving to weakly- or semi-supervised settings, finding the most relevant or 'hardest' abstractions in the environment, collecting a small number of language descriptions, and, as the reviewer astutely suggests, fine-tuning vision-language models to augment these descriptions. Indeed, recent work (Yan et al. 2022 arXiv) has suggested that such models can be trained to do visual grounding with a small amount of captioned data and mostly uncaptioned data, for example. This approach may also require humans to provide independent ratings of description quality, which would be less expensive than producing de novo descriptions.\n\nOn the program side, we agree that the need to define ad hoc base DSLs for each domain is an important limitation of any approach relying on program abstractions. An important direction for this area is to develop a 'canonical' basis for program representations that may apply across many domains. For example, a graphics engine may provide one such general-purpose DSL to synthesize state abstractions for larger and more realistic visual environments. 
This kind of base DSL may be seen as an extension to the generic 2D vector graphics used by Ellis et al., 2018, to induce graphics programs from hand-drawn images.\n\n> Related; the tile-revealing task is a task very much tailored to literally visualising human-like inductive biases. Basically, in this task the solution is a sort of sum of the abstractions (e.g. U-like shape plus some red tile somewhere). Would these results transfer to problems where the solution is less obviously constructed from the abstractions?\n\nWe agree that our task is tailored to visualizing abstract human inductive biases. Indeed, our work intentionally set out to explore a potential solution to instilling inductive biases in a sound experimental setup where abstractions can be easily visualized and interpreted. Although some abstractions we found may be specific to this domain, the literature in cognitive science (e.g. Lake et al. 2017) provides good reason to expect that the effectiveness of our approach in this more artificial domain could translate to abstractions in real-world domains that are natural for humans, which will be even better suited for language descriptions. Some such domains, such as mathematical reasoning (as the reviewer brought up), planning, or puzzle solving, even more obviously necessitate abstractions. It will be exciting future work to use our approach to empirically validate this conjecture by evaluating on which other tasks co-training with language and program abstractions improves neural agents' abilities. \n\nWorks cited:\n\n* Yan, Chen, Federico Carnevale, Petko Georgiev, Adam Santoro, Aurelia Guy, Alistair Muldal, Chia-Chun Hung, Josh Abramson, Timothy Lillicrap, and Gregory Wayne. \"Intra-agent speech permits zero-shot task acquisition.\" arXiv preprint arXiv:2206.03139 (2022).\n\n* Ellis, K., Ritchie, D., Solar-Lezama, A., & Tenenbaum, J. (2018). Learning to infer graphics programs from hand-drawn images. Advances in neural information processing systems, 31.\n\n* Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and brain sciences, 40.\n", " > Is it possible that the fact that DreamCoder has a DNN inside of it presents a confound that might result in its libraries and procedures being too similar to what an agent would learn directly anyway?\n\nThis is a good question. Even though DreamCoder uses a DNN architecture, its architectural modifications, training objective, and data distribution presumably play a critical role in developing representations that can be distinguished from a more 'vanilla' DNN that doesn't necessarily have those three qualities. The DreamCoder DNN's architecture is built to specifically predict a probability distribution over the current routines in the library (this includes high-level routines added to the DSL during library learning). The DreamCoder DNN is trained to predict the most likely programs for a given grid by balancing a program's description length and its score (its training objective). In addition, the data it trains on comes both from the GSP task dataset and from grids that are randomly sampled from the current DSL/grammar (“dreams”: its training data). Ellis et al. (2021) show that this training phase (“dreaming”) has a large effect on enriching the representations of the DreamCoder DNN (see Figs S2 and S3 in the DreamCoder paper). 
We conjecture that this combination of architectural priors, more structured training objective, and structured training data allows the program-based representation to serve as a distinct and effective cotraining target that can better instill human inductive biases in more tabula rasa neural agents. We provided a control experiment in the revised manuscript in the supplementary material (Figure 8) where we co-trained the RL agent with a more standard 'vanilla' DNN. The results show that the selective performance improvement on human-generated tasks vs machine-generated tasks is unique to cotraining representations from the DreamCoder DNN as opposed to a standard DNN. Future work can confirm this by reproducing and extending the experiments of Ellis et al. 2021 in examining and dissecting the representations of the DreamCoder recognition model for our domain. We can also follow this up with more control experiments examining different kinds of DNNs with a variety of architectures, training objectives, and training data and comparing/contrasting how they function as co-training signals for RL agents. \n\nWorks cited:\n\n* Ellis, K., Wong, C., Nye, M., Sablé-Meyer, M., Morales, L., Hewitt, L., ... & Tenenbaum, J. B. (2021, June). Dreamcoder: Bootstrapping inductive program synthesis with wake-sleep library learning. In Proceedings of the 42nd acm sigplan international conference on programming language design and implementation (pp. 835-850).", " We thank the reviewer for their positive comments and constructive feedback. Below we respond in detail to specific questions and concerns that were raised. \n\n> I wish there was some sort of clearer path to \"scaling this up\" to the next step -- a more complex agent, a larger DNN, a richer task.\n\nWe agree with the reviewer that scaling to richer tasks remains the most important next step for our work. Below we include some ideas about the feasibility of scaling. The primary bottlenecks of our approach are the need to (1) collect language descriptions and/or (2) run program induction on the environment's states in order to include them as co-training signals. We focus on decision-making environments with visual observations, as this is the setting for our work.\n\nOn the language side, the main bottleneck is that collecting human descriptions with sufficient coverage can become prohibitively expensive as the state space grows larger. One potential remedy would be to employ active learning methods to focus language elicitation on the most relevant abstractions in the environment and to explore data augmentation approaches to make these descriptions go further. For example, a vision-language model could be fine-tuned on the human descriptions to generate sufficiently high-quality descriptions for new states. There is some new work showing that even a small amount of language can be used for models to perform visual grounding on large environments (Yan et al. 2022 arXiv). This approach to scaling language may require scoring descriptions for their 'quality', which we could also conceivably accomplish in another human experiment (scoring descriptions is less time-consuming for human participants than producing descriptions). \n\nOn the program side, the main bottleneck is that library learning is not yet computationally tractable for larger environments. However, since submitting our work, several algorithmic improvements have already emerged that make this direction appear promising (e.g. 
Bowers et al., 2022, which introduces a top-down synthesis approach yielding 2-3 orders of magnitude improvements in performance). \n\nWe have included a brief discussion of these directions in our limitations paragraph in the discussion section.\n\n> Ultimately, the paper only provides an /indirect/, behavioral measure of whether or not the internal representations of the DNN are human-like. It would be intriguing to see if any sort of /direct/ measure is possible (maybe by analyzing weights directly?).\n\nWe agree that it would be interesting to more directly assess the similarity of internal representations of the agent with the internal representations of humans. The first step toward comparing internal representations is a rigorous evaluation of *external* behavior. But, as the reviewer hints, human-like external behavior is a necessary but not sufficient condition for human-like internal representations. Claims on alignment of internal representations are challenging to make using only choice observations. \n\nThe “gold standard” for measuring alignment between the agent's internal representations and those of humans uses neuroimaging data. For example, representational similarity analysis and/or encoding/decoding models (Kriegeskorte, 2009; Kriegeskorte & Douglas, 2019) provide a direct measure of the representational similarity of models and humans. Another, more inexpensive, measure that can give a window into humans' internal representations is eye-tracking data, though this may come with its own challenges as well. Another idea could be to decode the agent's language descriptions, and use alignment techniques such as those in Andreas et al. 2017; Andreas 2019; Wong et al. 2022 CogSci to see if agent-generated language aligns with program representations or human-generated language. We believe the current work lays a foundation for more systematic probing of internals.\n\nWorks cited:\n\n* Kriegeskorte, N. (2009). Relating population-code representations between man, monkey, and computational models. Frontiers in Neuroscience, 35.\n\n* Kriegeskorte, N., & Douglas, P. K. (2019). Interpreting encoding and decoding models. Current opinion in neurobiology, 55, 167-179.\n\n* Andreas, J., Dragan, A., & Klein, D. (2017). Translating neuralese. arXiv preprint arXiv:1704.06960.\n\n* Andreas, Jacob. \"Measuring compositionality in representation learning.\" arXiv preprint arXiv:1902.07181 (2019).\n\n* Wong, C., McCarthy, W. P., Grand, G., Friedman, Y., Tenenbaum, J. B., Andreas, J., ... & Fan, J. E. (2022). Identifying concept libraries from language about object structure. arXiv preprint arXiv:2205.05666.\n\n* Bowers, M., Olausson, T.X., Wong, C., Grand G., Tenenbaum, J.B., Ellis, K., & Solar-Lezama, A. (2022). Top-Down Synthesis For Library Learning. Proc. ACM Program. Lang. 1, CONF, Article 1\n\n", " This paper explores the intersection of abstraction, induction, language, and behavior. The goal of the paper is to instill human-like inductive biases into neural networks, in an attempt to improve generalization and performance. This is accomplished by using auxiliary tasks in an RL framework that attempt to elicit structured internal representations that match human representations, as measured by language-based representations of tasks. The paper presents careful experimentation that involves a variety of tests of human and synthetic descriptions, and shows that asking the agent to match human language generally induces more human-like inductive biases. Wow. 
I really, really enjoyed this paper. It was thoughtful, extraordinarily well-written, and intellectually stimulating.\n\n+ The idea of inducing human-like abstractions in general DNNs is important and potentially impactful\n\n+ The paper is one of the most clearly written papers I've read in a long time\n\n+ The experimental setup is complex, but clean, with each step naturally building on previous steps\n\n+ The paper is intellectually enlarging, proposing new methodologies and tasks to make progress against a hard problem\n\n- I wish there was some sort of clearer path to \"scaling this up\" to the next step -- a more complex agent, a larger DNN, a richer task.\n\n- Ultimately, the paper only provides an /indirect/, behavioral measure of whether or not the internal representations of the DNN are human-like. It would be intriguing to see if any sort of /direct/ measure is possible (maybe by analyzing weights directly?).\n\n- Is it possible that the fact that DreamCoder has a DNN inside of it presents a confound that might result in its libraries and procedures being too similar to what an agent would learn directly anyway? This paper suitably addresses both limitations and potential ethical concerns (which are minor).", " The aim of this paper is twofold: (1) showing humans and machines use different inductive biases, and (2) showing that machines can learn to generalise more like humans do by distilling human inductive biases into them during training.\n\nAs a proxy for human-like inductive bias, the authors use the setup from [11] with two distributions of tile-revealing tasks: one reflecting human inductive biases and one so-called \"metamer\" task, which is similar to the human-biased task in its statistics but is generated differently. The tasks reflecting human inductive biases (called GSP boards) are 2D grids with shapes sampled directly from humans. The metamer task uses the exact same sampling procedure, but instead of a human in the loop, there is a machine in the loop (called machine-generated boards). The task the agent is evaluated on is revealing tiles on a 2D grid to uncover a certain shape of red tiles, while revealing as few white tiles as possible. For the human-like tasks, imagine U-like, I-like, or pyramid-like shapes on a board; the machine-generated boards are similar except that the shapes are not symmetric and smooth. Humans perform better on the GSP boards than on the machine-generated boards. \n\nTowards aim (1), showing humans and machines use different inductive biases, the authors train an RL agent with PPO on a set of GSP boards (i.e. red shapes on a 2D grid), and test on a held-out set of GSP boards as well as on machine-generated boards. Even though the agent has been trained on GSP boards, the agent performs better on the machine-generated boards than on the GSP boards. The authors hypothesise that this is because humans and machines use different biases to solve the task: even though the machine is trained on the human-generated boards reflecting the human inductive biases, the machine-generated boards have similar statistics to the GSP boards, and the agent performs better on those.\n\nTowards aim (2) the authors train the baseline model again but with an auxiliary term added to the loss. The encoder of the baseline model is, besides learning the policy, trained to reconstruct a representation of the grid. 
Four different representations are experimented with: (1) synthetic language describing the board (red tile on row x1 col y1, red tile on row x2 col y2, etc.), (2) crowdsourced natural language describing the board (u-like shape in the bottom left, etc.), (3) a program constructed of a bunch of primitives from a DSL learned with DreamCoder (primitives like move, pen-up, pen-down, fork, left, right, so programs end up following the shape), (4) a program learned with DreamCoder + library learning (meaning certain re-usable learned programs are added to the DSL). The authors show that the description lengths of natural language descriptions and programs with library learning are correlated, and that some abstractions are even similar (i.e. humans say U-shape, and DreamCoder adds a U-shaped program to the DSL). Additionally, they train the encoder to simply reconstruct the flattened grid, called the autoencoder objective.\nResults show that the autoencoder objective, the synthetic language objective, and the primitive program objective (DreamCoder without library learning) do not distill human-like inductive biases, but natural language descriptions and programs with re-used components do. This indicates that the level of abstraction of the task embeddings is important for distilling human inductive bias. Human-generated language is important and works better than synthetically generated language. Strengths:\n- This paper is very carefully and clearly written, with clear figures supplementing understanding. \n- The experimental setup is sound and convinces the reader of the main points.\n- The result that reproduces Kumar et al. in showing that humans have different inductive biases is an interesting and (to me) surprising result, and even though it's not a novel result, it's still important in RL to replicate results from other papers with different algorithms (PPO vs A2C). Different training algorithms can give very different results in RL, and the fact that this result is stable under PPO and A2C strengthens it.\n- Interesting main findings; natural language descriptions and programs learned with DreamCoder + library learning are correlated in certain features, and even share abstractions, and these representations are also the ones that make the model generalise more like humans do. \n- I'm very excited about the potential of using the abstractions that are naturally present in human language for better generalisation; this is a very important research direction and I would like to see more in this area, and this paper is a great step in that direction.\n\nWeaknesses:\n- It is mentioned in the discussion, but the weakness of this method is that either a DSL needs to be created or natural language descriptions of each task in the dataset need to be collected. The former isn't as expensive but might be difficult for more realistic tasks, and the latter is expensive. It would be nice to see some more discussion on the feasibility of scaling this approach to real-world tasks. There are many tasks for which humans are thought to use program-like representations (e.g. mathematics, counting, addition, etc., see [1] below). Also, perhaps vision-language models could be used to generate the language descriptions to get around the need for crowdsourcing them.\n- Related: the tile-revealing task is a task very much tailored to literally visualising human-like inductive biases. Basically, in this task the solution is a sort of sum of the abstractions (e.g. U-like shape plus some red tile somewhere). 
Would these results transfer to problems where the solution is less obviously constructed from the abstractions?\n- This is a very minor point, and maybe more of an opinion than a weakness, but in the discussion you claim that your work suggests \"not all language is created equal: human-generated language leads to more human-like performance than synthetic language descriptions\", but this seems like more of an artefact of the particular type of synthetic language you chose, namely one without any abstractions, simply listing the tiles. Probably a synthetically generated description that would take into account shapes like lines (which are easily automatically detected) would work as well as natural language. In any case, I do agree that natural language is more suited to this, I just think the comparison shouldn't be synthetic versus natural but rather learned/natural abstractions (language/library learning) versus hardcoded synthetic abstractions (that will cover less of the abstractions in the data).\n\n[1] The Child as Hacker, Joshua S. Rule, Joshua B. Tenenbaum, and Steven T. Piantadosi\n\n- I don't see under what definition this method uses meta-learning. As far as I know [2], meta-learning always has some meta-phase (i.e. doing some learning over the task distribution) to learn a meta representation (e.g. initialisation of parameters like in MAML) and an inner phase on a task sampled from the task distribution. This method does have a task distribution, but there's no notion of meta-learning as far as I can see. I could have an outdated idea of what constitutes meta-learning nowadays, so adding it to the questions :)\n\n- Nit 1: Can you add how many runs you did to get the performance variance, and add in the bar chart figures that the bars are variance over different runs / different humans?\n\n- Nit 2: I can't parse lines 249-250\n\n- Nit 3: I personally found library learning in the abstract slightly confusing because I didn't know the term. Generally, I think the abstract could use some work on clarity.\n\n[2] Meta-Learning in Neural Networks: A Survey; Timothy Hospedales, Antreas Antoniou, Paul Micaelli, Amos Storkey\n\nThe two main limitations are properly addressed by the authors in the discussion, but I would like to see some more speculation on the direction of the future work and the scalability of this method. Partly copied from the strengths & weaknesses part above, my suggestions are: \n\n- Would like to see more discussion on scaling this approach to real-world tasks. There are many tasks for which humans are thought to use program-like representations (e.g. mathematics, counting, addition, etc., see [1] below). 
\n\n- Perhaps vision-language models could be used to generate the language descriptions to get around the need for crowdsourcing them.\n\n- I wonder whether these results transfer to problems where the solution is less obviously constructed from the abstractions; perhaps you could discuss this.", " This paper presents an experimental setup and set of results that analyze (1) whether RL agents exhibit similar behavior to people in guessing which squares on a board are colored, in two different distributions of boards: human-elicited, and machine-produced, and (2) whether more human-like behavior (good performance on human-elicited; poor on machine-produced) is obtained by the RL agent when co-training it using an auxiliary loss that predicts one of 4 types of representation: human-written board descriptions, low-level synthetic language board descriptions, program abstractions induced by the DreamCoder algorithm, or program primitives (which span 2 modalities and 2 rough levels of abstraction). The paper's main finding is that regularizing RL agent training with either human-generated language or induced program abstractions makes the agent's behavior correspond more closely to human behavior. *Strengths*\n\nS1) I found the paper thought-provoking -- it's likely to stimulate future work on program induction and grounding language to abstraction. I'm likely to recommend this paper to others working in these areas, as well as to point to it as a nice example of a paper that combines cognitive science investigation with solid ML techniques.\n\nS2) The experiments and analysis were overall very well-designed and carefully controlled (but see below for some minor suggestions) and thorough, anticipating and heading off several concerns that I had. \n\nS3) I thought the choice of setting was a strength -- the grid world environment seems rich enough to produce sophisticated natural language and programs at varying levels of abstraction, while still being simple enough to afford controlled experiments, in particular collecting the GSP boards.\n\nS4) The paper was extremely well-written and clear. I enjoyed reading it.\n\n\n*Weaknesses*\n\nW1) The use of language in the paper is not quite as well-developed as the use of program abstractions. There's a stark contrast between the nature of the synthetic language and human-written language -- not just in the level of abstraction, but also in being human-written versus synthetic. The synthetic language is likely far out-of-distribution for the RoBERTa model, and the language representations (if I understood correctly) weren't induced for any of the tasks in this paper (the RoBERTa model was frozen). I would have appreciated a finer-grained analysis that compared different levels of abstraction in the human-written language. But I don't think this is a crucial weakness.\n\n*Edit after response:* In the response, the authors point out differential effects of synthetic language on human vs model boards, which I agree give evidence that improvements from human language aren't just due to being OOD for RoBERTa. So I'm even less concerned about this now.\n\nW2) I worry a bit that the findings here are bound to depend on the program abstractions and the language elicited. 
I don't think this is a crucial weakness because the paper was pretty careful here (primitives and synthetic language, while similar in design, are natural for this domain; and the language is human-elicited and program abstractions are learned from just a few low-level primitives) -- but I'd be more convinced if the main findings were also shown to hold in another domain. But again I don't think this is a crucial weakness; there's enough here that could be follow-up work.\n\n*Edit after response:* I agree with the authors' points that these would be exciting for future work but are out-of-scope for the submission.\n\nI've updated my score to an 8 (from a 7).\n Q1) 123- I was pretty unclear on how meta-learning is used in the agent training. Is a meta-learning algorithm used (it doesn't seem to be; PPO seems to be applied in each individual task), or is it a meta-learning architecture? If the second, does the LSTM model the temporal nature of individual tasks (recurrence is across timesteps for individual agent actions) or the temporal nature of learning across tasks?\n\nQ2) What are 120 LSTM units? Does this mean an LSTM with hidden dimension 120, or 120 different LSTM cells, or 120 unrolled time steps (or something else)?\n\n*Suggestions*\n\nThe paper shows similar effects from natural language abstractions & learned program abstractions, but it would be interesting to also investigate whether there is a direct correspondence between these modalities, e.g. using alignment methods similar to those in Andreas et al. 2017, Translating Neuralese or Andreas 2019, Measuring compositionality in representation learning [I feel I should disclaim here that I am not an author of these works :) ].\n\nClaims in the results descriptions / discussion that I'd suggest revising/rewording:\n- 275: \"improves performance uniformly\": if I'm reading the results correctly, autoencoder seems to decrease the performance of the RL baseline on the machine-generated boards\n- 302-303: I may have misunderstood, but the library learning isn't grounded in language in the current work, is it (i.e. the paper doesn't do a LAPS-like, Wong et al. approach) -- it's just the RL agent that is grounded?\n- Mu et al. does use human-generated language for the Birds dataset. This would also be a good place to cite Andreas et al., Learning with Latent Language\n- 312: this is describing the results in Fig 5B, right? there are really only 2 bits in the comparison (relative to the RL agent with no representation prediction, GSP goes up; control goes down), and so this wasn't totally convincing to me\n\n\n*Minor suggestions*\n- line 51: is *a* meta-learning\n- 59: define the auxiliary representation here\n- 93-95: this seems intuitive, but it could help to give some stats to make it concrete -- what does it mean to respect most of the stat properties?\n- 240: it wasn't clear until seeing Fig 6 that \"task attribute\" can include representations of the language or program.\n- 250-252: I confess that having only these citations bugged me a bit, given the long tradition of work on language grounding before 2018. \n- Fig 5 caption: \"across modalities\" was a bit confusing to me: it's the correlation matrices that are being compared across modalities, right, not the individual representations? (i.e. language embeddings are never compared to program embeddings)\n\nThe discussion of the limitations at the end of the paper was overall thorough and clear. 
The main things I would suggest addressing are in the weaknesses section above.\n\nOne additional point of uncertainty for me was what typifies the \"machine prior\". It's not at all clear to me that an agent should do better on the control distribution than on the human distribution, particularly given what seem to be differences in the architecture used to produce the distribution and the one used in the RL agent. It would strengthen the paper to explore this more. The nearest neighbor heuristic also encodes a human-like prior in some sense, and since many results are reported relative to this heuristic, I wonder if this could be stacking the cards against the agent. \n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "5RnUbSW8zA5", "nips_2022_buXZ7nIqiwE", "9D2RU7P-fQa", "9D2RU7P-fQa", "9D2RU7P-fQa", "UAarKPlDOJS", "UAarKPlDOJS", "8k3BbpTDx28", "8k3BbpTDx28", "nips_2022_buXZ7nIqiwE", "nips_2022_buXZ7nIqiwE", "nips_2022_buXZ7nIqiwE" ]
nips_2022_5vVSA_cdRqe
FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning
Vertical federated learning (VFL) is a privacy-preserving machine learning paradigm that can learn models from features distributed on different platforms in a privacy-preserving way. Since in real-world applications the data may contain bias on fairness-sensitive features (e.g., gender), VFL models may inherit bias from training data and become unfair for some user groups. However, existing fair machine learning methods usually rely on the centralized storage of fairness-sensitive features to achieve model fairness, which are usually inapplicable in federated scenarios. In this paper, we propose a fair vertical federated learning framework (FairVFL), which can improve the fairness of VFL models. The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way. Specifically, each platform with fairness-insensitive features first learns local data representations from local features. Then, these local representations are uploaded to a server and aggregated into a unified representation for the target task. In order to learn a fair unified representation, we send it to each platform storing fairness-sensitive features and apply adversarial learning to remove bias from the unified representation inherited from the biased data. Moreover, for protecting user privacy, we further propose a contrastive adversarial learning method to remove private information from the unified representation in server before sending it to the platforms keeping fairness-sensitive features. Experiments on three real-world datasets validate that our method can effectively improve model fairness with user privacy well-protected.
Accept
This paper presents a fair vertical federated learning framework (FairVFL), by learning a set of unified fair representations of data/features distributed across decentralized platforms, i.e., these representations do not reflect sensitive attributes such as age/gender. This is accomplished by having platforms learn local data representations from fairness-insensitive features, which are then aggregated at a central server; this is then sent (after a contrastive adversarial learning method removes private information) to platforms with fairness-sensitive features to remove bias by using adversarial learning. All reviewers found the method sound and the experimental results convincing -- two real-world datasets were used to show that the proposed method can effectively improve model fairness while preserving user privacy. Some concerns were raised over the communication and computational overhead of the proposed method, but none of the reviewers considered them serious enough to merit rejection. One note of caution on the Accept recommendation: this is a paper that did not receive a review with confidence level above 3. Usually there is at least one review at the level of 4.
train
[ "9ZjHiPrXNGF", "GxPGJhyOd7", "EeqUSAG8noQ", "MOEnQvexrAG", "HQ-DeicBLuu", "wFp48j0qsY0", "lLXATqNtUL" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' response on my questions. They more or less answer my questions. However, as I am not an expert in fairness-aware ML, I maintain my score at 6 at the present stage. ", " Thank the reviewer for the insightful comments and constructive suggestions. Our detailed responses to your comments are as follows.\n\n**Q1:** Lacking a baseline. \n\n**A1:** The simple combination of existing fair ML methods [1,2] and the VFL framework can improve the model fairness of VFL tasks. However, these methods usually need to compute adversarial gradients from local sensitive attributes and may arouse privacy concerns, since data representations encoding private information need to be shared across sensitive attribute platforms. To improve the fairness of VFL models and meanwhile not hurt privacy, we apply adversarial learning in VFL for fair model training, and propose a contrastive adversarial learning method to protect user privacy (Section 3.3). Thus, the ablation of our FairVFL method (i.e., removing the contrastive adversarial learning method) can be viewed as the baseline method mentioned by the reviewer. The simple baseline can achieve comparable fairness with FairVFL and other centralized fair ML methods, however, it may lead to privacy leakage of privacy-sensitive attributes (Fig. 3(b)). We will discuss this issue in our revised paper.\n\n**Q2:** Definition of fairness.\n\n**A2:** Since the adversarial learning can help achieve counterfactual fairness in centralized machine learning [1], we apply it to improve the fairness of VFL models. However, as pointed out by the reviewer, it is difficult to directly verify the counterfactual fairness of different methods. Thus, following previous works [1,2], we evaluate the independence of fairness-sensitive features and data representations to verify the fairness of different methods. We will add these discussions to our revised paper.\n\n**Q3:** Should discuss limitations and tradeoffs on efficiency and stability.\n\n**A3:** Compared with the standard VFL framework, FairVFL only increases the computation cost of the server and platforms with fairness-sensitive attributes. The server and platforms need to extra compute the gradients on the attribute discriminators and contrastive discriminators, respectively. The computation complexities of FairVFL and standard VFL are $\\mathcal{O}(d^3+D^3)$ and $\\mathcal{O}(D^3)$, where $d$ and $D$ denote the parameter sizes of the discriminators and the main task model. In the widely used experimental settings, the discriminator is usually implemented by a small FFNN network, whose parameters are usually much less than the main task model. Thus, the computation efficiency of FairVFL is comparable with standard VFL methods. Besides, we agree that adversarial learning is sometimes unstable, which may affect fair model training. Fortunately, this problem has been studied by many previous works such as WGAN [3]. We can incorporate these works to improve the stability of FairVFL.\n\n**Q4:** In Figure 4(b), it seems that in some cases the fairness-F1 scores can go below random. When gender-F1<0.5 (e.g. 0.45), can the attacker simply flip the prediction and get gender-F1>0.5 (e.g. 0.55)? What does it imply on fairness?\n\n**A4:** A below-random fairness F1 score indicates that the model encodes the opposite biased information, which is also harmful to model fairness. 
This is because we evaluate the influence of the hyper-parameter $\gamma$ under a fixed hyper-parameter $\lambda$, where $\lambda$ and $\gamma$ control the importance of adversarial learning and contrastive adversarial learning methods respectively. When $\gamma$ is set to 0, the weight of the adversarial loss may be too large, and the adversarial gradients may make the model encode some opposite biased information. When we set $\gamma$ to 0.25, the relative weight of adversarial learning is reduced, and the model fairness F1 score becomes close to the random guess.\n\n**Q5:** The authors state that "Intuitively, highly relevant data samples are likely to share the same fairness-sensitive feature" in line 178. I wonder after the model is trained well, and **s** does not contain fairness-related information, does this assumption still hold? If not, how will it influence the model training?\n\n**A5:** The assumption may not hold when the biased information is reduced from the unified data representations **$s$**. Fortunately, this issue will not damage the fair model training in FairVFL. This is because this assumption is used in contrastive adversarial learning to generate a protected representation that only encodes the biased information in **$s$** and reduces other private user information. The protected representation is further used to learn adversarial gradients to improve model fairness. When the assumption does not hold, the biased information is also reduced from the protected representation, which can make the adversarial learning converge.\n\n[1] Towards Personalized Fairness based on Causal Notion. SIGIR 2021.\n\n[2] Learning Fair Representations for Recommendation: A Graph-based Perspective. WWW 2021.\n\n[3] Wasserstein Generative Adversarial Networks. ICML 2017.\n", " Thank the reviewer for the insightful comments and constructive suggestions. Our detailed responses to your comments are as follows.\n\n**Q1:** Multiple times of communications might be a problem in federated learning.\n\n**A1:** In a single training round, FairVFL requires two extra communication rounds between the fairness-sensitive platforms and the server, which may increase communication costs. Fortunately, the extra communication costs are usually small. This is because in the extra communication rounds FairVFL only needs to exchange protected representations and their gradients, and the communication cost is $\mathcal{O}(4BME)$, where $M$ is the number of fairness-sensitive platforms, $E$ denotes the dimensionality of protected representations, and $B$ denotes the batch size. Since $M$, $E$ and $B$ can be small in many settings, the corresponding cost is usually acceptable for data platforms. For example, in our experiments, $M$, $E$, and $B$ are set to 2, 64, and 32 respectively, and the extra communication cost of training the model on a single batch is only 64kB.\n\n**Q2:** The local adversarial training might also slow down the overall training and increase the computational burden for both servers and platforms.\n\n**A2:** Compared with the standard VFL framework, FairVFL only increases the computation cost of the server and platforms with fairness-sensitive attributes. The server and platforms additionally need to compute the gradients on the attribute discriminators and contrastive discriminators, respectively. 
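To make this concrete, the following minimal PyTorch sketch illustrates the kind of extra gradient-reversal step involved (this is an illustrative sketch only: the module names, layer sizes and random inputs are our own stand-ins, not our actual implementation):\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass GradReverse(torch.autograd.Function):\n    @staticmethod\n    def forward(ctx, x):\n        return x.view_as(x)\n    @staticmethod\n    def backward(ctx, grad):\n        # the reversed gradient pushes the representation to hide the attribute\n        return -grad\n\nB, E = 32, 64  # batch size and representation dimensionality, matching our experimental settings\ns = torch.randn(B, E, requires_grad=True)  # stand-in for the protected representation\ndisc = nn.Sequential(nn.Linear(E, 32), nn.ReLU(), nn.Linear(32, 2))  # small FFNN discriminator\nloss = F.cross_entropy(disc(GradReverse.apply(s)), torch.randint(0, 2, (B,)))\nloss.backward()  # one extra backward pass through the small discriminator\n```\n\n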
The computation complexities of FairVFL and standard VFL are $\mathcal{O}(d^3+D^3)$ and $\mathcal{O}(D^3)$, where $d$ and $D$ denote the parameter sizes of the discriminators and the main task model. In the widely used experimental settings, the discriminator is usually implemented by a small FFNN network, whose parameters are usually far fewer than those of the main task model. Thus, the computation efficiency of FairVFL is comparable with standard VFL methods.\n\n**Q3:** Two hyperparameters, $\gamma$ and $\lambda$, seem to be very important. It might be hard for the server to choose the correct values for a new task. How consistent are $\gamma$ and $\lambda$ across datasets? \n\n**A3:** The optimal setting of $\gamma$ (around 0.25) is consistent across the two experimental datasets. Besides, the optimal setting of $\lambda$ for the two datasets is different (10 for ADULT dataset and 0.25 for NEWS dataset). Thus, the server may need to take some effort to search for the optimal setting of $\lambda$ on different datasets. In our future work, we plan to explore replacing the hyper-parameters with adaptive loss weights to avoid hyper-parameter tuning. We will discuss this limitation in the revised paper.\n", " Thank the reviewer for the insightful comments and constructive suggestions. Our detailed responses to your comments are as follows.\n\n**Q1:** It would be better if the experiments could be conducted on different types of datasets, e.g. the image dataset CelebA.\n\n**A1:** In our work, we used two datasets with different modalities (i.e., tabular data and textual data) for evaluation. We agree with the reviewer that adding experiments on a new image dataset (CelebA) can better verify the effectiveness and generality of our work. Thus, we have followed the reviewer's suggestion and compared different methods on CelebA. The detailed experimental settings are summarized as follows. To simulate the VFL setting, we use the raw images and a part of the raw attributes as the input features. The raw images are stored in a data platform and other input attributes are stored in another platform. The target task is to classify whether the input sample has the attribute “smiling”. We also treat gender (the “male” attribute) as the target fairness-sensitive attribute. The evaluation methods of model accuracy and fairness are the same as the experiments in the paper. Moreover, the model for the target task is the ResNet-10. We repeat each experiment ten times and report the average results in the following table. The results show that FairVFL can still achieve a fairness-performance trade-off comparable to that of centralized fair machine learning methods. We will add these results to the revised paper.\n\n| Method | Task-Accuracy | Task-F1 | Fairness-F1 |\n|--------|-----------------|-----------------|-----------------|\n| CenTrain | 0.9180$\pm$0.0013 | 0.9180$\pm$0.0014 | 0.8706$\pm$0.0069 |\n| FairGo | 0.9096$\pm$0.0086 | 0.9095$\pm$0.0088 | 0.5159$\pm$0.0570 |\n| FairSM | 0.9032$\pm$0.0230 | 0.9027$\pm$0.0242 | 0.5259$\pm$0.0807 |\n| FairRec | 0.9070$\pm$0.0094 | 0.9067$\pm$0.0097 | 0.5241$\pm$0.0743 |\n| VFL | 0.9192$\pm$0.0020 | 0.9174$\pm$0.0021 | 0.8727$\pm$0.0056 |\n| FairVFL | 0.9045$\pm$0.0130 | 0.9042$\pm$0.0135 | 0.5143$\pm$0.0778 |\n\n\n**Q2:** However, much research shows that the exchange of information may disclose the information on the original data, e.g. [1,2]. 
I am wondering if the proposed training approach can defend this attack.\n\n**A2:** We agree with the reviewer that the exchange of information (i.e., representations and gradients) across different platforms may arouse privacy concerns. In our method, we propose a contrastive adversarial learning method to protect private information in data representations, which can compare data representations with the same attribute and remove the attribute-irrelevant information in them (More details are in Section 3.3). Experimental results in Fig.3(b) also verify that our method can effectively protect privacy without degrading model fairness and accuracy. To further protect private information in gradients, we can apply differential privacy techniques to perturb exchanged gradients as in standard VFL methods [1].\n\n[1] Privacy Preserving Vertical Federated Learning for Tree-based Models. VLDB 2020.\n", " This paper proposed an approach to improve the fairness of VFL models without the requirement of storing the fairness-sensitive features in a central server. To improve the fairness of VFL models, an additional adversarial loss is designed to remove the bias of the unified feature representation. The experimental results show the effectiveness of the proposed fairness algorithms for VFL. Strengths:\n1.) The paper is well-written and easy to understand the concepts and algorithms.\n2). The research question is timely and the fairness-related research in the VFL setting is less explored.\n3). The experimental results seem promising.\n\nWeaknesses:\n1). It would be better if the experiments could be conducted on different types of datasets, e.g. the image dataset CelebA. \n\n[1] Batch Label Inference and Replacement Attacks in Black-Boxed Vertical Federated Learning\n[2] Deep Leakage from Gradients\n\n1). The authors claim that their approach can protect the exchange of information between the local model, aggregation models and task-specific models. The protection is based on removing the useful information from the embedding. However, much research shows that the exchange of information may disclose the information on the original data, e.g. [1,2]. I am wondering if the proposed training approach can defend these attacks. The variety of the datasets could be improved.", " This paper proposes a novel vertical federated learning framework (FairVFL) to achieve fair VFL models while keeping information private during communication. In order to encode a fair unified representation for the task platform, the proposed method employs adversarial learning at the fairness-insensitive platform side to prevent the fairness-sensitive bias. Moreover, instead of directly sending the unified representation to the fairness-insensitive platforms, the server sends a protected representation encoded by a mapper trained by a novel contrastive learning method. Strengths:\n- The paper is well-written and organized. The method is clearly explained and well-motivated.\n- The results are very convincing. The proposed method can have a slightly higher performance on fairness with a reasonable amount of drop in main task performance, compared to the methods under centralized training. Also, plenty of experiments on different models show the method is model-agnostic.\n- The ablation study on adversarial learning and contrastive adversarial learning helps shed light on the power of the two methods in the framework.\n\nWeaknesses:\n- Multiple times of communications might be a problem in federated learning.\n- Also, the local adversarial training might also slow down the overall training and increase the computational burden for both servers and platforms.\n- Two hyperparameters, $\lambda$ and $\gamma$, seem to be very important. It might be hard for the server to choose the correct values for a new task.\n - How consistent are $\lambda$ and $\gamma$ across datasets? Yes, the authors mention the limitations in the appendix. As the authors mention, the method requires more rounds of communications than the baseline VFL. Meanwhile, it also requires more computations than the baseline.", " This paper proposes a method to learn fair representations in the VFL setting. By fairness, the authors mean that the representations generated by FairVFL should not reflect sensitive attributes, such as age/gender. The authors achieve this goal via contrastive adversarial learning on the sensitive attributes. The idea of this adversarial training is that the discriminator aims to learn sensitive attributes, while the feature learner aims to fool the discriminator. Moreover, in order to conceal private information, the adversarial learning also aims to conceal all other information except for the sensitive attribute. Experiments on two real-world datasets show the effectiveness of FairVFL. **I am not very familiar with the fairness issue in machine learning. Please correct me if I have misunderstandings regarding fair ML.**\n\nStrengths: \n1. **Well-motivated problem.** Trustworthy machine learning is a good topic to explore. The authors combine trustworthy machine learning with VFL, which is a good attempt. \n2. Evaluations that verify the privacy protection (Fig. 3b) are a plus. They evaluate how well the proposed method protects attribute privacy, which is a focus in FL. \n3. **Presentation is OK**. Despite the very complex gradient flow (in my opinion), the authors manage to explain the overall workflow to me. \n\nWeaknesses: \n1. **Lacking a baseline**. The authors take plain VFL as the only baseline in the feature-partition scenario. Although there are other fair ML methods compared (e.g. FairGO, FairSM), they cannot be applied in the VFL setting (as far as I can infer from the paper). Therefore, an intuitive question to ask is: are there simpler VFL+fairness baselines that can be applied to the problem, such as slightly modifying FairSM or FairGO? If there is such a baseline, the improvements made by the authors' designs would be clearer. \n\n2. **Definition of fairness**. The authors state in Sec. 3.1, line 114 that, "the counterfactual fairness requires that the model prediction for a sample should always hold if we only change fairness-sensitive attributes". However, in the evaluation, the authors use the metric "Gender F1", namely how well an attacker can guess the sensitive attribute, to measure fairness. I am a little confused about the metric. Specifically, as I can tell from Section 3.2, the unified representation $\mathbf{s}$ is computed from $\mathbf{s}_i^l$, which are all computed from fairness-insensitive features. In other words, sensitive attributes are not involved in the forward computation for the task $t$. 
It seems that in this sense, we have no chance to verify the 'counterfactual fairness', as we cannot even change the fairness-sensitive attributes. Thus, how the metrics (gender F1) are related to the notion of 'counterfactual fairness' should be better elaborated. Maybe the Gender F1 is more related to notions like conditional independence as far as I know. \n\n3. **Should discuss limitations and tradeoffs on efficiency and stability**. The overall workflow of FairVFL is rather complex (even though the authors manage to explain it to me). As far as I am concerned, there are many points in the workflow of FairVFL that contain computation dependencies and may lead to stragglers (e.g. the iterative adversarial learning of $A$, $D$), which may significantly compromise the efficiency. Thus, I would like to see an efficiency comparison between naive VFL and FairVFL to see the tradeoffs between fairness and efficiency. Also, adversarial training tends to be very unstable. The authors should discuss how to mitigate the instability and generate consistent results. \n Q1-3: See Weaknesses 1 2 3. \n\nQ4: In Figure 4(b), it seems that in some cases the fairness-F1 scores can go below random. For example, in Figure 4(b) left, gender-F1 (binary classification) can be <0.5. This seems counter-intuitive, as the best fairness should be achieved at gender-F1=0.5.\n\nI thus have a question: when gender-F1<0.5 (e.g. 0.45), can the attacker simply flip the prediction and get gender-F1>0.5 (e.g. 0.55)? What does it imply about fairness? \n\nQ5: The authors state that \"Intuitively, highly relevant data samples are likely to share the same fairness-sensitive feature\" in line 178. I wonder after the model is trained well, and $\mathbf{s}$ does not contain fairness-related information, does this assumption still hold? If not, how will it influence the model training? I do not see any ethical concerns. " ]
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 3, 3, 3 ]
[ "GxPGJhyOd7", "lLXATqNtUL", "wFp48j0qsY0", "HQ-DeicBLuu", "nips_2022_5vVSA_cdRqe", "nips_2022_5vVSA_cdRqe", "nips_2022_5vVSA_cdRqe" ]
nips_2022_U-RsnLYHcKa
Wasserstein Logistic Regression with Mixed Features
Recent work has leveraged the popular distributionally robust optimization paradigm to combat overfitting in classical logistic regression. While the resulting classification scheme displays a promising performance in numerical experiments, it is inherently limited to numerical features. In this paper, we show that distributionally robust logistic regression with mixed (\emph{i.e.}, numerical and categorical) features, despite amounting to an optimization problem of exponential size, admits a polynomial-time solution scheme. We subsequently develop a practically efficient cutting plane approach that solves the problem as a sequence of polynomial-time solvable exponential conic programs. Our method retains many of the desirable theoretical features of previous works, but---in contrast to the literature---it does not admit an equivalent representation as a regularized logistic regression, that is, it represents a genuinely novel variant of the logistic regression problem. We show that our method outperforms both the unregularized and the regularized logistic regression on categorical as well as mixed-feature benchmark instances.
Accept
The focus of the submission is distributionally robust logistic regression when the discrepancy used in the ambiguity set is the Wasserstein distance and the features are mixed (i.e., they can contain both numerical and categorical variables). After showing that the resulting optimization problem (1) with the log-loss function can be reformulated as a finite-dimensional exponential conic program (Theorem 1), the authors (i) prove that (1) can be solved in polynomial time (Theorem 2), (ii) show that it does not admit a regularized logistic regression form (Theorem 3), in contrast to the case of purely numerical features, and (iii) propose a column-and-constraint solver (Theorems 4-5). The practical efficiency of the proposed method is illustrated on 14 UCI benchmarks. Logistic regression (LR) is among the most popular tools in machine learning and statistics. Handling mixed features for LR in the distributionally robust case is a relevant problem. As evaluated by the reviewers, the submission represents solid work combining important theoretical and empirical insights.
test
[ "gaZawnAkM-t", "0I006FwGddZ", "G1bU6X2sm7q", "Am69inPjr3s", "splbxmpvlJX", "DMPYhQsxSTX", "92GuBodTaD4", "ED2llEMsWzr", "bZhsJ1qhoQn", "JcJbj_bNpSO_", "KxjyQTKIcnj", "oPb6MRFSWrP", "yty4aBSyHb", "oXUNFLadop1", "wBh4ptSzkjx", "BzljoklYltS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed explanations. I have no more questions and would tend to raise my rating from 5 to 6 (weak accept).", " We thank you again for your comments! We agree with you on the statement of Theorem 2 and will clarify that in our next revision of the paper.", " I thank the authors for their response. I find the remark on $\\phi$-divergence interesting, and an additional motivation for the studied framework. Indeed you are right about the loss. Still my point was to replace \"for generic loss\" by \"there exists a loss\", as the current statement may suggest that the problem is NP-hard for _any loss_.\n\nI have read the other reviews and responses by the authors. Overall my stance on the paper did not change, and I still think that the paper's pros outweigh its limitations. Hence I keep my score.", " We would like to thank the reviewers for their constructive reviews. Their comments led us significantly improve our paper in terms of both theory and computation. We have replied to each reviewer and addressed every comment and question. \n\nIn the revised manuscript, we highlight the changes and new material with blue-colored text. \n\nThe revised version of our submission as well as the supplementary material will be uploaded by 3 August 2022, 9 pm UTC.\n\nWe will be active and responsive on OpenReview during the Author-Reviewer Discussion period, and we are looking forward to discussing our work further.\n\nThank you for your time.", " ### Answer to Major Question \\# 3\nThank you; we are glad to hear that you find this result interesting! In the revised version of the manuscript we try to give some further insight into why this result holds; we hope you find this discussion valuable. \n\n### Answer to Major Question \\# 4\nThis is an interesting point. One could in principle replace the Wasserstein ball with a $\\phi$-divergence ball (such as the KL-divergence) and study the resulting DRO problem. Generally speaking, Wasserstein and $\\phi$-divergences are complementary technologies and there is not \"better\" or \"worse\" approach; so either formulation would lead to a DRO problem that we would consider worthy studying. That said, a $\\phi$-divergence ball centered at an empirical distribution will only contain those distributions that are absolutely continuous with respect to the reference distribution $\\widehat{\\mathbb{P}}_N$, and hence the worst-case distribution cannot place positive probability mass on feature combinations that have not been seen in the data. This is undesirable in our context: Imagine we have a problem with $K$ binary features and $N$ empirical samples. There are $2^K$ ways to choose those binary feature values, and typically $N \\ll 2^K$. So we would exclude most of the possible feature combinations from appearing in the worst-case distribution if we were to replace the Wasserstein ball with a KL-divergence ball, say. This problem also exists in the continuous case, but it is particularly prevalent in the categorical and mixed-feature cases. We also refer to the paper [i] which shows via an example that if we use $\\phi$-divergence, we do not get a meaningful ambiguity ball around the empirical distribution. 
That is, in order to include the real data-generating distribution, we have no choice but to also consider very weird pathological distributions.\n\n#### References\n[i] - Distributionally Robust Stochastic Optimization with Wasserstein Distance, arXiv:1604.02199, 2016.\n\n### Answer to Minor Comments\nThank you for your careful reading of the manuscript! In response to your comments:\n\n1. It should indeed be less than or equal to 1, as opposed to equal to 1, as we are using a one-hot encoding where the last category is dropped.\n2. For the same reason, we believe $\mathbb{C}(2) = \mathbb{B}$ is indeed correct.\n3. Thank you for this suggestion; we have implemented this as you suggested!\n4. Thank you; we agree that the difference between the italicized and roman 'd's is difficult to spot. We now alert the reader to this difference in the revised version of the notation section.\n5. Thank you! We agree that formulation (6) is easier to parse. We have nevertheless kept formulation (4) as our column-and-constraint generation scheme is built around it. That said, we have now added a remark after the theorem which explains how formulation (4) emerges from the more readily accessible formulation (6) that can be found in the appendix. We hope that this aids the exposition.\n6. Thank you; we have implemented your suggestion!", " ### Answer to Weakness \# 1\nWe apologize for not making sufficiently clear our contributions in the initial paper draft. We agree that model (4) derived in Theorem 1 is closely related to the model of [28], and that this derivation is not a major contribution of our paper. Instead, the key contributions of the present paper are *(i)* a complexity analysis of this problem, which shows that the problem becomes NP-hard in general when categorical features are present ([28] only studies the polynomial-time solvable case of continuous-only features) yet remains polynomial-time solvable in many special cases of broad interest; *(ii)* a proof that in contrast to every other Wasserstein machine learning paper known to us, model (4) does not reduce to a regularized problem variant (as you pointed out in your comment as well); and *(iii)* an efficient column-and-constraint solution scheme that, as we found out after the submission of the manuscript, is very general and extends to several other machine learning problems (including linear and quantile regression, support vector machines, decision and regression trees as well as the kernel trick). Please note that we are not aware of any other Wasserstein machine learning approach that can explicitly handle categorical features. We have sharpened our presentation of the contributions in the revised manuscript and hope that they more accurately reflect our additions to the literature. \n\n### Answer to Weakness \# 2\nWe sincerely thank you for challenging us to more clearly present the advantages of handling categorical features explicitly, that is, the value that our method adds above and beyond that of [28]. The revision addresses this point in two ways.\n\nFirstly, we add more details regarding the worst-case distribution of Figure 1 to a dedicated section in the appendix. We elaborate on this further in our response to your major question 1 below.\n\nSecondly, we have significantly improved the presentation of our numerical results. 
We have now repeated the numerical results for the pure-categorical features setting with 100 training/test splits, and we have included *(i)* the approach of [28] that treats categorical features as continuous ones and *(ii)* the robust Wasserstein profile inference approach of Blanchet et al. (2019). Our new results show that we outperform all competitors in a statistically significant way on many of the datasets. (Note that some datasets were taken out as per the request of another reviewer.) Please note that the results for the mixed-feature case are still running, and they will be included in the camera-ready version. Since those results were already better than those of the pure-categorical case in the first draft of the paper, however, we have no doubt that our method will perform well there, too. \n\nWe would like to highlight that, thanks to the reviews we received, we increased the number of training/test splits from 20 to 100 and reported mean errors. Due to these changes, we are now strictly outperforming benchmark models more often. Out of the 14 categorical datasets, our models are achieving the best error in 12 datasets and in 9 of them our improvements are statistically significant over all other benchmark models. The details of the t-tests applied to the out-of-sample errors for measuring statistical significance can be found in the updated Appendices.\n\n### Answer to Major Question \# 1\nThank you for this question, and we apologize for the omission in the first draft of the paper! We do not believe that replacing the absolute value with a square difference would change our findings in any significant way. Instead, the key issue is that the approach of [28] cannot accommodate a support for categorical features. The revised version of the manuscript now points out that the only previously known tractable reformulations for Wasserstein DRO problems including support are for piecewise affine loss functions, see Theorem 14 in Section 3.2 of [i]. For more general continuous loss functions (as the log-loss employed in our paper), no tractable reformulation with support constraints appears to be known. Hence, the support of the model from [28] cannot be restricted to $[-1, 1]$, and as a result the worst-case distribution places strictly positive probability on pathological realizations. We further discuss this issue in a new appendix where we also derive the worst-case distribution of [28] in closed form.\n\n#### References\n[i] - Regularization via Mass Transportation, JMLR 20:1--68, 2019.\n\n### Answer to Major Question \# 2\nPlease note that our counterexample in the proof of Theorem 2 is *not* a linear loss function as it contains an indicator function. It attains the value $\boldsymbol{c}^\top \boldsymbol{z}$ if $\boldsymbol{F} \boldsymbol{z} \leq \boldsymbol{g}$ and $- \mathrm{M}$ otherwise. In fact, linear loss functions could be handled in polynomial time. \n", " ### Answer to Question / Major Comment 1\nWe sincerely thank you for challenging us to improve the presentation of our numerical results. We have now repeated the numerical results for the pure-categorical features setting with 100 training/test splits, and we have included *(i)* the approach of [28] that treats categorical features as continuous ones and *(ii)* the robust Wasserstein profile inference approach of Blanchet et al. (2019). 
Our new results show that we outperform all competitors in a statistically significant way on many of the datasets. (Note that some datasets were taken out as per your major comment 5.) Please note that the results for the mixed-feature case are still running, and they will be included in the camera-ready version. Since those results were already better than those of the pure-categorical case in the first draft of the paper, however, we have no doubt that our method will perform well there, too. Moreover, as you requested, we now also include the computation times for all methods on all instances. We agree that there is a trade-off between performance and computation times, and the specific circumstances will determine which method is most appropriate. That said, we believe that our revised numerical results show that our proposed approach lies on the "efficient performance-runtime frontier."\n\n### Answer to Question / Major Comment 2\nThank you for this clarification question! We believe we did not state that distributionally robust optimization alleviates the issue of outliers, but that it can alleviate the issues of overfitting, erroneous feature and label values as well as distribution shifts. These issues have been investigated in some detail in the related literature, and due to a lack of space and time, the revised manuscript now refers the interested readers to the publications [i, ii] for overfitting, the reference [ii] for label uncertainty and the references [iii, iv] for distribution shifts, respectively. We hope that this satisfactorily addresses your comment.\n\n#### References\n[i] - Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations, Mathematical Programming 171:115--166, 2018.\n\n[ii] - Wasserstein Distributionally Robust Optimization: Theory and Applications in Machine Learning, INFORMS TutORials in Operations Research, 2019.\n\n[iii] - Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts, PMLR 139:10162--10172, 2021.\n\n[iv] - Robust Generalization despite Distribution Shift via Minimum Discriminating Information, NeurIPS 2021.\n\n### Answer to Question / Major Comment 3\nThank you for this suggestion! As we elaborated in our response to your first major comment, we now compare our method against both of these approaches as well in the revised manuscript. \n\n### Answer to Question / Major Comment 4\nThank you. In the initial draft, we considered the medians as we only conducted 20 training-test splits. Since we now conduct 100 training-test splits, we used the mean. Due to these changes, we are now strictly outperforming benchmark models more often. Out of the 14 categorical datasets, our models are achieving the best error in 12 datasets and in 9 of them our improvements are statistically significant over all other benchmark models. The details of the t-tests applied to the out-of-sample errors for measuring statistical significance can be found in the updated Appendices.\n\n### Answer to Question / Major Comment 5\nThank you. We agree, and we have removed those datasets from the paper. \n\n### Answer to Question / Major Comment 6\nThis is a very sharp comment. Our performance guarantees in Appendix A scale with the dimension of the feature space as we seek a high confidence of the unknown true distribution being contained in our ambiguity set $\mathfrak{B}_\epsilon (\widehat{\mathbb{P}}_N)$. 
A dimension-independent performance guarantee can be obtained along the lines of [i] if one instead only seeks a high confidence of the unknown true model $\boldsymbol{\beta}^\star$ being contained in the union of argmax's corresponding to the individual distributions $\mathbb{P}$ contained in $\mathfrak{B}_\epsilon (\widehat{\mathbb{P}}_N)$. We added a discussion of this point, together with a pointer to the review paper you have provided, in Appendix A. Please note that these performance guarantees are in some sense orthogonal to the developments of our paper, and for this reason, we only mention them for completeness' sake in the appendix (rather than the main paper).\n\nOn the empirical side, we observe that our mixed-feature Wasserstein DRO approach appears to work well across the UCI data sets that we considered, which include data sets with low-dimensional as well as higher-dimensional feature spaces.\n\n#### References\n[i] - Robust Wasserstein Profile Inference and Applications to Machine Learning, Journal of Applied Probability 56(3):830--857, 2019.", " ### Answer to Weakness \# 1\nWe apologize for not making sufficiently clear our contributions in the initial paper draft. We agree that model (4) derived in Theorem 1 is a more or less straightforward extension of [28]. Instead, the key contributions of the present paper are *(i)* a complexity analysis of this problem, which shows that the problem becomes NP-hard in general when categorical features are present ([28] only studies the polynomial-time solvable case of continuous-only features); *(ii)* a proof that in contrast to every other Wasserstein machine learning paper known to us, model (4) does not reduce to a regularized problem variant; and *(iii)* an efficient column-and-constraint solution scheme. Please note that we are not aware of any other Wasserstein machine learning approach that can explicitly handle categorical features. We have sharpened our presentation of the contributions in the revised manuscript and hope that they more accurately reflect our additions to the literature.\n\n### Answer to the Question\nThank you for this question, and we apologize for the omission in the first draft of the paper! The revised version of the manuscript now points out that the only previously known tractable reformulations for Wasserstein DRO problems including support are for piecewise affine loss functions, see Theorem 14 in Section 3.2 of [i]. 
For more general continuous loss functions (as the log-loss employed in our paper), no tractable reformulation with support constraints appears to be known. Hence, the support of the model from [28] cannot be restricted to $[-1, 1]$ as suggested.\n\n#### References\n[i] - Regularization via Mass Transportation, JMLR 20:1--68, 2019.\n\n### Answer to the Typo\nThank you very much for pointing out the typo; this is now corrected in the revised manuscript. \n\n### Answer to the Limitation\nWe apologize if we misunderstood the new guidelines regarding the limitations; we had discussed them in the attached checklist but did not include them in the main paper. The revised numerical results now clearly point out the limitation that our approach does not always outperform the standard logistic regression. We cannot think of any other major limitations, as we are solving a variant of a well-established problem whose advantages and shortcomings are by now relatively well understood. We hope that this satisfactorily addresses your comment.", " ### Answer to Weakness \\# 1\nWe apologize for not making sufficiently clear our contributions in the initial paper draft. We agree that model (4) derived in Theorem 1 is a more or less straightforward extension of [28]. Instead, the key contributions of the present paper are *(i)* a complexity analysis of this problem, which shows that the problem becomes NP-hard in general when categorical features are present ([28] only studies the polynomial-time solvable case of continuous-only features); *(ii)* a proof that in contrast to every other Wasserstein machine learning paper known to us, model (4) does not reduce to a regularized problem variant; and *(iii)* an efficient column-and-constraint solution scheme. Please note that we are not aware of any other Wasserstein machine learning approach that can explicitly handle categorical features. We have sharpened our presentation of the contributions in the revised manuscript and hope that they more accurately reflect our additions to the literature.\n\n### Answer to Weakness \\# 2\nThank you for raising this important point! In fact, we found out after the submission of the manuscript that our column-and-constraint generation scheme is very general and extends to several other machine learning problems, including linear and quantile regression, support vector machines, decision and regression trees as well as the kernel trick. This is in some sense reminiscent of the generalization of [28] to other machine learning problems in [i]. While the size limitations of the NeurIPS proceedings do not allow us to explore this generalization in the present paper, we have added this point in the conclusions of the revised manuscript.\n\n#### References\n[i] - Regularization via Mass Transportation, JMLR 20:1--68, 2019.\n\n### Answer to Weakness \\# 3.1 (Reliance on Off-the-Shelf Solvers)\nWhile our solution approach does indeed utilize the MOSEK solver, we would like to point out that the software is free for academic use and that there are several powerful open-source solvers available to solve exponential cone programs, such as Ipopt and CVXOpt. Our approach does not rely on any particular feature of MOSEK, and our source code (which will be made open-source upon acceptance, and is currently withheld only to maintain the anonymity of the review process) can readily be adapted to work with these open-source solvers. Also, we see our present paper as a first step in the direction of developing custom algorithms for this problem. 
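As a side note on the solver-agnosticity discussed above: the log-loss is exponential-cone representable, which is what lets any exponential-cone-capable solver be swapped in. Below is a minimal, hypothetical sketch (not the authors' withheld implementation) using CVXPY, whose `logistic` atom compiles to exponential-cone constraints; the toy data and variable names are assumptions for illustration only:

```python
import cvxpy as cp
import numpy as np

# Toy data, assumed for illustration only; labels y in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = rng.choice([-1.0, 1.0], size=50)

beta, b0 = cp.Variable(5), cp.Variable()
# cp.logistic(u) = log(1 + exp(u)); CVXPY reformulates it with
# exponential cones, so MOSEK, ECOS or SCS are interchangeable here.
loss = cp.sum(cp.logistic(-cp.multiply(y, X @ beta + b0))) / 50
cp.Problem(cp.Minimize(loss)).solve(solver=cp.ECOS)
print(beta.value, b0.value)
```

Any exponential-cone-capable solver can be selected via the `solver=` argument without touching the model itself, which matches the portability claim made in the response.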
This is similar to the original optimization paper [23] on Wasserstein problems, which has relied on standard solvers and which has subsequently spurred developments of tailored first-order solution methods (e.g. [i-iii]). The introduction to Section 4 and the conclusion of the revised manuscript now clarify those points; thank you!\n\n#### References\n[i] - Stochastic Optimization for Regularized Wasserstein Estimators, PMLR 119:602-612, 2020.\n\n[ii] - First-Order Methods for Wasserstein Distributionally Robust MDP, PMLR 139:2010--2019, 2021.\n\n[iii] - Stochastic Gradient Descent in Wasserstein Space, arXiv:2201.04232, 2022.\n\n### Answer to Weakness \\# 3.2 (Lack of Substantial Improvements)\nWe sincerely thank you for challenging us to improve the presentation of our numerical results. We have now repeated the numerical results for the pure-categorical features setting with 100 training/test splits, and we have included *(i)* the approach of [28] that treats categorical features as continuous ones and *(ii)* the robust Wasserstein profile inference approach of Blanchet et al. (2019). Our new results show that we outperform all competitors in a statistically significant way on many of the datasets. (Note that some datasets were taken out as per the request of another reviewer.) Please note that the results for the mixed-feature case are still running, and they will be included in the camera-ready version. Since those results were already better than those of the pure-categorical case in the first draft of the paper, however, we have no doubt that our method will perform well there, too.\n\nWe would like to highlight that, thanks to the reviews we received, we increased the number of training/test splits from 20 to 100 and reported mean errors. Due to these changes, we are now strictly outperforming benchmark models more often. Out of the 14 categorical datasets, our models are achieving the best error in 12 datasets and in 9 of them our improvements are statistically significant over all other benchmark models. The details of the t-tests applied to the out-of-sample errors for measuring statistical significance can be found in the updated Appendices.\n\n### Answer to Limitations\nPlease refer to our earlier responses.", " ### Answer to Weakness \\# 1\nWe apologize for not making sufficiently clear our contributions in the initial paper draft. We agree that model (4) derived in Theorem 1 is a more or less straightforward extension of [28]. Instead, the key contributions of the present paper are *(i)* a complexity analysis of this problem, which shows that the problem becomes NP-hard in general when categorical features are present ([28] only studies the polynomial-time solvable case of continuous-only features) yet remains polynomial-time solvable in many special cases of broad interest; *(ii)* a proof that in contrast to every other Wasserstein machine learning paper known to us, model (4) does not reduce to a regularized problem variant; and *(iii)* an efficient column-and-constraint solution scheme that, as we found out after the submission of the manuscript, is very general and extends to several other machine learning problems (including linear and quantile regression, support vector machines, decision and regression trees as well as the kernel trick). Please note that we are not aware of any other Wasserstein machine learning approach that can explicitly handle categorical features. 
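For readers unfamiliar with the column-and-constraint generation pattern referenced in contribution *(iii)* above, the following generic schematic shows the delayed-generation loop; `solve_master` and `find_violated` are placeholder callables standing in for the restricted master problem and the separation oracle, not the paper's actual subproblems:

```python
def column_and_constraint_generation(solve_master, find_violated, max_iter=1000):
    """Generic delayed column-and-constraint generation loop (schematic).

    solve_master(active): solves the restricted master problem over the
        currently active columns/constraints, returns a candidate solution.
    find_violated(sol): returns a most-violated column/constraint for the
        candidate, or None if it is feasible for the full problem.
    """
    active, sol = [], None
    for _ in range(max_iter):
        sol = solve_master(active)
        violated = find_violated(sol)
        if violated is None:
            return sol  # no violation: sol solves the full problem
        active.append(violated)  # enlarge the master problem and re-solve
    return sol
```

Because only finitely many columns/constraints exist, such a loop terminates after finitely many iterations, which is the flavor of guarantee the reviews attribute to the paper's scheme.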
And the absence of a regularized problem variant, which appears to exist for all other formulations known to us, confirms that our formulation is indeed fundamentally different. We have sharpened our presentation of the contributions in the revised manuscript and hope that they more accurately reflect our additions to the literature.\n\n### Answer to Weaknesses \# 2 \& \# 3\nWe sincerely thank you for challenging us to improve the presentation of our numerical results. We have now repeated the numerical results for the pure-categorical features setting with 100 training/test splits, and we have included *(i)* the approach of [28] that treats categorical features as continuous ones and *(ii)* the robust Wasserstein profile inference approach of Blanchet et al. (2019). Our new results show that we outperform all competitors in a statistically significant way on many of the datasets. (Note that some datasets were taken out as per the request of another reviewer.) Please note that the results for the mixed-feature case are still running, and they will be included in the camera-ready version. Since those results were already better than those of the pure-categorical case in the first draft of the paper, however, we have no doubt that our method will perform well there, too. \n\nWe would like to highlight that, thanks to the reviews we received, we increased the number of training/test splits from 20 to 100 and reported mean errors. Due to these changes, we are now strictly outperforming benchmark models more often. Out of the 14 categorical datasets, our models are achieving the best error in 12 datasets, and in 9 of them our improvements are statistically significant over all other benchmark models. The details of the t-tests applied to the out-of-sample errors for measuring statistical significance can be found in the updated Appendices.", " This paper proposes a distributionally robust logistic regression with mixed-type features. \nAn optimization method is considered. Some properties related to complexity analysis were given. \nA number of experiments were performed on classical datasets. \n Strengths: \n1.\tIt considers the expansion of recent distributionally robust logistic regression (LR) with continuous features to that with mixed features. \n2.\tThe complexity analysis is given. In particular, it proves the non-existence of a regularized version for the formulation derived from the Wasserstein distance. \n\nWeaknesses: \n1.\tIt appears to be an incremental derivation from existing distributionally robust logistic regression (LR) with continuous features. \n2.\tIn terms of experimental results, the proposed formulation does not appear much better than the classical LR in terms of accuracy. Actually, it is not clear whether there are any significant advantages when compared to LR (please see the next comment). Without significant improvement, the new formulation is more like a conceptual exercise. \n\n3.\tThe comparative evaluation in lines 282-287 is not so convincing. Currently, only the number of best performances is counted: \n\n“The table shows that for the unregularized model, both of our distributionally robust logistic regression achieve the lowest classification error in 78% of the instances, whereas the classical logistic regression achieves the lowest classification error in 61% of the instances. For the regularized model, the results remain at 78% (our model) vs. 61% (classical logistic regression). 
Overall, the non-robust models failed to achieve the lowest classification error in three of the instances, whereas the robust models missed the lowest classification error in only one of the instances.”\n\nThis is analogous to \"bean counting\". \nHowever, the variance is not considered.\nIt would be desirable to show variance information. Maybe some statistical test can be used to judge whether there is any significant difference between LR and the DROs, or between the regularized r-LR and the regularized r-DROs. \n\n The presentation is clear. The discussions are pretty clear. \nThe strengths and weaknesses can be found in the above comments. \nNo questions are raised here. \n\n Yes", " This paper considers the mixed-feature distributionally robust logistic regression problem, which extends the setting in [1] to mixed features (i.e., the feature space can be divided into numerical (real number) and categorical (integer) parts). The main contribution of this paper lies in deriving an equivalent dual reformulation of this new model and further developing tractable methods to address it. More specifically, by fully exploiting the problem-specific structure arising from the log-loss, the authors fully utilize its exponential conic representation and provide a Column-and-Constraint Solution Scheme to solve it. A finite-step convergence guarantee has also been established. Extensive experiments have been shown to demonstrate the effectiveness of the model. \n\n[1]: Shafieezadeh-Abadeh, Soroosh et al. “Distributionally Robust Logistic Regression.” NIPS (2015).\n **Strengths**: \n\nThe mixed features considered in the paper are less explored in distributionally robust optimization. This paper takes a first step towards filling this gap. The paper is easy to follow and well-organized. Comprehensive experiments have been given to illustrate the extension. Notably, Theorem 3 seems very interesting to me. They point out the absence of a reformulation as a regularized problem, while this property holds for many DRO problems.\n\n**Weaknesses**:\n\nThere are three major concerns: \n\n1. The novelty of both modeling and techniques is moderate. The proposed model is a natural extension based on [1]. The dual reformulation is also relatively standard if we adopt the method provided in [1]. \n\n2. All methodologies developed in this paper highly depend on the log-loss and a specific cost function (see Theorem 2 for details), which are rather restrictive. In my opinion, the impact of this paper on broad ML tasks is limited. \n\n3. On the computational side, although the paper gives a polynomial-time solvable method to address the resulting dual reformulation, it still relies on certain off-the-shelf solvers, which will potentially suffer from scalability issues on medium- or large-scale datasets. More importantly, for the experimental results, the proposed mixed-feature model does not gain substantial improvements over other baselines (i.e., vanilla logistic regression or distributionally robust LR) in terms of test accuracy (i.e., see Tables 1 and 2), but instead costs too much computational effort. \n\nFor the above reasons, this paper is strictly below the NeurIPS acceptance level. \n No The computational paradigm and theoretical results (i.e., polynomial-time solvability) are restricted to the logistic regression case. ", " This paper studies the distributionally robust logistic regression problem with mixed (numerical and categorical) features. 
The problem is proved to be solvable by a polynomial-time solution scheme. In particular, this paper develops a column-and-constraint approach to solve the problem. This paper also shows that the distributionally robust logistic regression problem is not equivalent to regularized logistic regression, as it is in previous works with numerical features only. When categorical features are present, this paper proves that the problem is not equivalent to regularized problems for any regularizer. Strengths\n\n1. This paper studies distributionally robust logistic regression with mixed features, while previous works mainly focus on numerical features only or simply treat the categorical features as numerical ones. \n2. This paper proposes a column-and-constraint approach to solve the problem efficiently.\n3. This paper shows that the problem with mixed features is not equivalent to regularized problems, as is commonly the case for distributionally robust logistic regression with numerical features.\n4. Numerical results on multiple real datasets are provided.\n \nWeaknesses\n1. The main advantage and motivation of not dealing with the categorical features directly are only shown in the example of a worst-case distribution in Figure 1, which might be insufficient; the results could be stronger if the authors could provide certain theoretical comparisons. \n I have a question regarding the example shown in Figure 1. By treating the categorical features as continuous ones, the authors claimed that the worst-case distribution places non-zero probability on an extreme scenario 15,312. I am wondering whether such scenarios would be avoided when we restrict the range of the continuous features, like within [-1,1]. Also, it would be better if the authors could comment on some theoretical comparisons between the performance of the mixed-feature model and the continuous-feature model in Sec. 2.2.\nTypos:\n1. In algorithm 2, delta \\in \\{0,1,2,...\\}, a typo in the first comma. It seems that the authors do not mention limitations of the proposed method.", " To overcome the non-robustness in classical logistic regression, the authors consider distributionally robust logistic regression by introducing an ambiguity set defined by the popular Wasserstein distance.\nIn contrast to the previous literature, the proposed method focuses on mixed features. Despite the optimization problem itself being exponential-size for generic losses, the authors show that it can be solved in polynomial time using the column-and-constraint generation scheme.\nTheoretically, they provide finite-sample and asymptotic performance guarantees for the proposed method. \nExperiments on synthetic and real-world datasets show the accuracy and efficiency of the proposed method in both unregularized and regularized versions. In general, the paper is well-written and easy to follow, the motivation to consider categorical variables is clearly stated, and the theoretical results also seem to be correct.\nAs far as I have read, the work provides novel discussions and results which are not captured by the cited literature. \nHowever, I have some concerns about the numerical experiments and also some suggested corrections/additions, which are presented below. **Major Comments**\n\n* My major concerns are about the experiments in Section 4. \n1. Sections 4.2--4.3: I do not find that the proposed DRO (resp. r-DRO) performs much better than LR (resp. r-LR), while DRO (resp. r-DRO) might require a longer runtime than LR (resp. r-LR). 
The authors should provide comparison results on both performance and time to show the accuracy-time tradeoff.\n2. As the authors discussed in Section 1, the motivation to consider the robust version of LR is that classical LR is prone to overfitting, especially in the face of outliers or distribution shifts. However, this kind of setting has not been considered in the experiments. The authors should provide simulations or real data analysis on these specific settings to show the advantages of DRO (resp. r-DRO) over classical LR (resp. r-LR).\n3. Sections 4.2--4.3: Only classical logistic regression (with/without regularization) is compared with the proposed method. I suggest the authors provide more comparisons with existing distributionally robust regression methods (e.g., \"Distributionally robust logistic regression\" and \"Robust Wasserstein profile inference and applications to machine learning\") w.r.t. both computational time and classification error.\n4. The authors take the medians over 20 random training set-test set splits as the final results in Tables 1--2. What if the results are taken as means (rather than medians)? Does the proposed method still outperform the competitors?\n5. For some of the UCI datasets in Table 1 (e.g., lenses with $N=24$ and balloons with $N=16$), the sample size $N$ is too small to provide a convincing classification error on the test set.\n\n* Wasserstein distance estimation usually suffers from the curse of dimensionality, see \"Projection-based techniques for high-dimensional optimal transport problems\". In particular, the convergence rate of the empirical Wasserstein distance may be arbitrarily slow as the dimension increases. I wonder how the number of variables affects the performance of the proposed method. Some discussion and empirical results are needed. (A small numerical illustration of this effect is sketched after the last review below.)\n\n**Minor Comments**\n\n1. The definition of $\\mathbb{B}$ seems to be inconsistent in line 16 and line 73.\n2. In problem (4), the notations need to be defined and explained in detail.\n3. In Section 3, please add some references for the column-and-constraint generation scheme.\n4. In Section 4.1, the comparison between the left panel and the middle panel of Fig. 2 is unfair because their scales are not consistent. Please provide the log-scale runtime versus increasing $m$. ", " This paper addresses distributionally robust logistic regression with mixed features (i.e., continuous and categorical). Formally, the logistic regression criterion is minimized for any distribution in a Wasserstein neighborhood of the empirical distribution, see Eq. (1). The categorical nature of the features considered yields a more complex equivalent problem than in previous works [28], see Theorem 1. The authors then show the importance of considering categorical features as non-continuous (Section 2.2), and that their problem cannot be rephrased as a regularized logistic regression (Theorem 3). The equivalent problem derived in Thm 1 is however NP-hard to solve for a generic loss function (Theorem 2), but the authors devise an algorithm to solve it with the log loss (Theorem 4). Experiments are also provided (Section 4). Strengths:\n- the paper is clear and well written\n- the authors provide a sound and thorough study of the problem\n\nWeaknesses:\n- I am not extremely familiar with the literature, but this work might be considered to be closely related to reference [28], which already studies distributionally robust logistic regression, but with continuous features only. 
Though, I feel that the categorical features yield problems of a significantly different nature. Theorem 3 is a perfect illustration of this fact.\n- I am not 100% convinced by the proof reasoning which shows that categorical features cannot be treated as continuous ones, see my questions below.\n\nOverall I like the paper and tend towards acceptance. My only concern regards its significance, with respect to [28] in particular, although I feel that the problems derived are sufficiently different in nature to justify a novel publication. I am open to discussion with the authors and other reviewers familiar with the literature on this point. Major:\n- regarding the justification for not considering categorical values as continuous: why consider the absolute value and not the square difference (possibly scaled by 1/4 if necessary)? Does it make any difference? How come $z=15,312$ is considered? This distribution cannot be in the ball centered at $\\hat{P}_N$, or am I missing something here?\n- Instead of saying *generic loss*, I would specify that a linear loss is used in Theorem 2\n- I find Theorem 3 very interesting, and would give a brief description of the proof, as it holds for **any** loss, which I believe is a strong result\n- what happens if the discrepancy between measures used to create $\\mathcal{B}_\\epsilon$ is not the Wasserstein distance?\n\nMinor:\n- l.91: should $\\sum_j z_j$ be equal to 1, or less than or equal to 1?\n- l.92: given the proposed definition, $\\mathbb{C}(2)$ should be {$(0,0), (0, 1), (1, 0)$}, no?\n- display 97-98: have the dimensions aligned with the $\\beta$, i.e., put $\\beta_0$ at the end, or use $\\mathbb{R}^{1 + n + k}$; this could help\n- the notation $d$ for the distance in the Wasserstein definition (2) might be confusing with $d\\xi$\n- I found formulation (6) from the appendix more readable and understandable than Eq. (4), although I understand the desire for a convex formulation. Maybe adding a remark to explain how to make the softplus constraints convex can be a solution\n- Fig. 1: add \"robust\" in the legend for the methods concerned (left); $(z, y) = $ on the $x$ axis, no? This is a theoretical work with no foreseeable negative societal impact." ]
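Regarding the curse-of-dimensionality remark in the Major Comments above: the following small, hypothetical illustration (assuming the POT package; the helper name and sampling setup are not from the reviews) computes the empirical 1-Wasserstein distance between two samples of the *same* Gaussian, so the reported value is pure estimation error and grows with the dimension:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def empirical_w1(d, n=200, seed=0):
    # W1 between two independent samples of the same standard Gaussian;
    # the population distance is 0, so the value below is estimation
    # error, which grows with dimension d at fixed sample size n.
    rng = np.random.default_rng(seed)
    xs = rng.normal(size=(n, d))
    xt = rng.normal(size=(n, d))
    cost = ot.dist(xs, xt, metric="euclidean")
    weights = np.full(n, 1.0 / n)
    return ot.emd2(weights, weights, cost)

for d in (1, 2, 10, 50):
    print(f"d={d:3d}  empirical W1 ~ {empirical_w1(d):.3f}")
```

This is exactly the slow-convergence phenomenon the reviewer asks the authors to discuss in the context of choosing the ambiguity radius.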
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3, 3 ]
[ "ED2llEMsWzr", "G1bU6X2sm7q", "BzljoklYltS", "nips_2022_U-RsnLYHcKa", "BzljoklYltS", "BzljoklYltS", "wBh4ptSzkjx", "wBh4ptSzkjx", "oXUNFLadop1", "yty4aBSyHb", "oPb6MRFSWrP", "nips_2022_U-RsnLYHcKa", "nips_2022_U-RsnLYHcKa", "nips_2022_U-RsnLYHcKa", "nips_2022_U-RsnLYHcKa", "nips_2022_U-RsnLYHcKa" ]
nips_2022_RW-OOBU11xl
Forecasting Human Trajectory from Scene History
Predicting the future trajectory of a person remains a challenging problem, due to randomness and subjectivity. However, the moving patterns of human in constrained scenario typically conform to a limited number of regularities to a certain extent, because of the scenario restrictions (\eg, floor plan, roads and obstacles) and person-person or person-object interactivity. Thus, an individual person in this scenario should follow one of the regularities as well. In other words, a person's subsequent trajectory has likely been traveled by others. Based on this hypothesis, we propose to forecast a person's future trajectory by learning from the implicit scene regularities. We call the regularities, inherently derived from the past dynamics of the people and the environment in the scene, \emph{scene history}. We categorize scene history information into two types: historical group trajectories and individual-surroundings interaction. To exploit these information for trajectory prediction, we propose a novel framework Scene History Excavating Network (SHENet), where the scene history is leveraged in a simple yet effective approach. In particular, we design two components, the group trajectory bank module to extract representative group trajectories as the candidate for future path, and the cross-modal interaction module to model the interaction between individual past trajectory and its surroundings for trajectory refinement, respectively. In addition, to mitigate the uncertainty in the evaluation, caused by the aforementioned randomness and subjectivity, we propose to include smoothness into evaluation metrics. We conduct extensive evaluations to validate the efficacy of proposed framework on ETH, UCY, as well as a new, challenging benchmark dataset PAV, demonstrating superior performance compared to state-of-the-art methods.
Accept
Initially, the paper received mixed reviews (3456). The major concerns raised by the reviews were: 1. What is the contribution of the PAV dataset? (XkVC) 2. There should be experiments on existing datasets, e.g. SDD or inDD. (XkVC, tAHR) 3. Is it fair to use curve smoothing on the GT during evaluation? To be fair, other works should be trained with CS loss too. What is the reason for CS, does it affect generalization of the model? No ablation for CS loss (XkVC, ZK4Z) 4. reported results of some SOTA works on ETH/UCY are worse than the published results. Why not use the original reported results? (XkVC, bcbq) 5. what are the settings of the hyperparameters? Is it dataset specific? (XkVC) 6. ablation study on the clustering methods used for constructing the group trajectory bank. (XkVC) 7. The bank is fixed after training -- how to guarantee the learned bank can be used in a new environment, new scene (tAHR, ZK4Z) 8. show results on ETH/UCY w/o using video data (tAHR) 9. can the proposed method work well from bird's-eye view or when assumed information is missing? (tAHR) 10. evaluation metric used on ETH/UCY is not clear. stochastic vs deterministic? (bcbq, ZK4Z) 11. novelty is not significant (ZK4Z) 12. Missing details: scene-trajectory alignment, how to set K, how to set the no. of semantic classes, range of coordinates for cosine similarity (ZK4Z) 13. No ablation on memory vs complexity (ZK4Z) 14. No ablation on the memory bank size and initialization (ZK4Z) The authors wrote a response to address these concerns. All reviewers were satisfied with the response. Overall, the reviewers found the paper interesting, although the approach is somewhat incremental and more experiments could be added (e.g., on SDD and inDD, as well as ablation studies). Nonetheless, the final ratings increased to 5666. The AC agrees with the reviewers and thus recommends accept. The authors should further revise the paper according to the reviews and discussion.
train
[ "XABytwwoccs", "-hG3ueuQ68s", "TNakIlFF1p5", "U8h6nkOI3TW", "uVzYlOdv6kz", "gBtB2gGMQ5R", "ObeRUwc_b3F", "ss2Lv1jmOlv", "UPoQsN8ve8E", "wFnrE9EUdJ-", "vkHT3qcLrc0", "Nd0oyqhrMbI", "N7VNW5XXS1K" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for uploading the revised the paper. I've checked the manuscript and increased my rating. I highly encourage authors to include several more lateset works in this year CVPR, since there are some highly relevant papers in the proceedings. i.e. [1][2][3] etc.\n\nAlso, there are some typos in the manuscript, please double revise.\n\n[1] Adaptive Trajectory Prediction via Transferable GNN. CVPR 2022 \n[2] How Many Observations are Enough? Knowledge Distillation for Trajectory Forecasting. CVPR 2022 \n[3] HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction. CVPR 2022", " Thank you very much for the feedback. Due to the page limit, we add the changes in both main text and supplementary materials. The brief summary of these changes is listed here:\n\n1. The missing related works are added in the manuscript (Line 96-102).\n2. Add some details about the processed data of ETH/UCY in the supplementary material (Line 36-38).\n3. Add the analysis about the initial bank size $K$ in the supplementary material (Line 63-64).\n4. Add the sampling protocol in the table caption (Tab.2 in our manuscript, Tab.1-3 in supplementary material).\n5. Add the analysis of bank complexity in the supplementary material (Line 64-68).\n6. Add some experiments and analysis according to the reviewers’ suggestions in the supplementary material (Section B.2 and B.3).\n", " Thanks for the response, most of my concerns are well-addressed and I am planning to increase my rating.\n\nBut before I increase the score, I would like to see the revised paper uploaded. I will increase the score once all the things are included in the revised paper.", " Thank you very much for your kind comments and feedback. \n\nQue1: The results conducted on the SDD dataset.\n\nAns1: We experimented on the SDD dataset and gave a preliminary result below. Now, we have added this content in the supplementary material (Tab. 2).\n\n|Method|MANTRA|PECNet|YNet|MemoNet(CVPR'22)[1]|SHENet|\n|--|--|--|--|--|--|\n|ADE|8.96 |9.96 |7.85|8.56|9.01|\n|FDE |17.76 | 15.88 |11.85|12.66|13.24|\n\nQue2: Can the proposed method work well?\n\nAns2: As the results mentioned in Ans1, our model performs a little bit worse than the results of YNet and MemoNet. Nevertheless, our method performs better than previous baselines (such as PECNet, MANTRA). Consequently, our method can achieve reasonable performance in bird-eye-view scenario.\n\nQue3: Can you show the performance on ETH/UCY without using the video data?\n\nAns3: We conduct the experiments on ETH/UCY according to your suggestions. The results are listed below, where the best-of-20 is adopted for evaluation. Since MANTRA didn’t conduct experiments on ETH/UCY, we use the results of MANTRA reported in the work[1].\n\n|Method|ETH|HOTEL|UNIV|ZARA1|ZARA2|AVG|\n|--|--|--|--|--|--|--|\n|Social-STGCNN|0.64/1.11|0.49/0.85 |0.44/0.79|0.34/0.53|0.30/0.48 |0.44/0.75|\n|MANTRA|0.48/0.88|0.17/0.33|0.37/0.81|0.27/0.58|0.30/0.67|0.32/0.65|\n|YNet|0.28/0.33|0.10/0.14|0.24/0.41|0.17/0.27|0.13/0.22|0.18/0.27|\n|MemoNet[1]|0.40/0.61|0.11/0.17|0.24/0.43|0.18/0.32|0.14/0.24|0.21/0.35|\n|SHENet|0.37/0.58|0.17/0.28|0.26/0.43|0.21/0.34|0.18/0.30|0.24/0.39|\n\nFrom the table, we can note that our method achieves the comparable performance without using the video data.\n\n[1] Remember Intentions: Retrospective-Memory-based Trajectory Prediction. CVPR 2022\n", " Thanks very much for your response! There are still two concerns about the experiments on the large-scale dataset such as SDD. 
\n\n1) I have looked through the revised paper and did not find the results on the SDD dataset. Can you kindly tell me the location or show them in the response?\n\n2) As shown in XkVC A2-3, the history search requires lots of information, including the person's state and appearance, background information, and person-background/person-person interactions. Does the model fail when some of this information is missing? In many applications, we only have bird's-eye-view information. Can the proposed method work well? \n\n3) Can you show the performance on ETH/UCY without using the video data? It will help to explore the boundary of the method.\n", " Thank you very much for your detailed review.\n\nQ1-1: Memory networks. A1-1: Our proposed method is not inspired by, or similar in architecture to, memory networks. The only similarity is the underlying idea of storing historical trajectories. [1] and [2] were officially published at CVPR in June 2022, which is after the submission deadline of NeurIPS 2022; thus we did not discuss these two papers. We will add them to the related work in the revision. [1] is totally different from our method: 1) The overall framework is different. [1] first predicts the destination, then fills in the trajectory to the destination. In contrast, our method directly estimates the future trajectory and leverages the individual-surroundings interaction to refine the trajectory. 2) The historical trajectories are stored in a different manner. In [1], each individual trajectory is embedded through a memory network, whereas we explicitly utilize group trajectories. In [2], the historical trajectories are not stored for search. In addition, the proposed \"Memory Replay\" aims to improve the temporal consistency of the predicted trajectory. [3] is an extension of the included reference [4]. To ensure the completeness of the references, we will cite it in the revision.\n\nQ1-2: Clustering is not new. A1-2: The \"group\" mentioned in [5-8] is not the same as that in our work in terms of concept and motivation. In [5], a group-based model is designed to explore the interaction among pedestrians, while in our work the “group” means a set of representative trajectories. The work [6] predicts the group trajectory of people; ours focuses on individual person trajectory prediction utilizing historical group trajectories. [7] clusters the trajectories into a fixed number of categories for trajectory classification, but the clustering algorithm in our method is used to generate representative trajectories. [8] uses clustering to reduce the complexity of trajectories to alleviate the difficulty in training an LSTM for trajectory classification. In contrast, we cluster trajectories to reduce the trajectory search space.\n\nQ1-3: Similarity function. A1-3: The cosine similarity is commonly used in machine learning to measure distance, and it was not invented in [4]. Thus, we do not note that it is adopted from [4].\n\nQ1-4: Transformer structures. A1-4: The novelty and contributions of our work lie in the insightful motivation and the underlying principle of the network design rather than in the adoption of the transformer structure for trajectory prediction. In addition, we request the reviewer to specify the transformer-based works and point out the similarities. It is not fair to say our work is not novel just because the transformer structure is used.\n\nQ2-1: How to align the scene with the trajectory? A2-1: We use the preprocessed data provided by YNet (Ref. [18] in the paper). 
It has converted the data from world coordinates into image pixel space.\n\nQ2-2: The reason/insight for setting this variable to 32. A2-2: We note that the prediction results are not sensitive to the initial number of clusters, especially when the initial number of clusters is between 24 and 36. The results are listed below.\n\n|K|24|28|32|36|\n|--|--|--|--|--|\n|ADE|19.28|19.23|18.89|19.17|\n|FDE |45.48|45.59|45.13|45.62|\n\nQ2-3: Why 150 semantic classes? A2-3: The main reasons are two-fold: 1) The category of each scenario is unknown beforehand; 2) Our cross-modal interaction module can reweight the semantic features for human motion (Line 164-172).\n\nQ2-4: Deterministic or stochastic? A2-4: Explained in Reviewer bcbq A1.\n\nQ3-1: Unseen trajectory? A3-1: Explained in Reviewer tAHR A2.\n\nQ3-2: The coordinates and similarity. A3-2: As mentioned in A2-1, we use the preprocessed data from YNet and the world coordinates are converted into pixels. Thus, we can directly use the cosine similarity in the image space.\n\nQ3-3: Meaning/insights of curve smoothing (CS)? A3-3: Due to the randomness and subjectivity, a human trajectory visually appears as an irregular curve. The irregularity makes it hard for the model to learn the past-future trajectory pair. We propose the CS loss to alleviate this in model learning.\n\nQ4-1: Experiments to support CS loss. A4-1: As mentioned in A3-3, we conduct the experiments using different methods with/without the CS loss on the complex PAV dataset (Tab. 3 in the paper and Tab. 1 in the SM). The qualitative results are in Fig. 7 of the paper and Fig. 3 in the SM.\n\nQ4-2: Complexity. A4-2: The time complexities of searching and updating are O(N) and O(1), respectively. Their space complexity is O(N). The number of group trajectories is N <= 1000. The time complexity of the clustering process is O($β^2 + MKβ$), and the space complexity is O($β^2+Mβ$). β is the number of trajectories for clustering, K is the number of clusters, and M is the number of iterations for the clustering methods.\n\nQ4-3: The bank size, initialization, and update. A4-3: As mentioned in A4-2, the bank size is up to 1000. The initialization and update processes are described in Alg. 1.", " Thank you very much for your kind comments and feedback. We will explain your concerns point by point.\n\nQ1: What is the sampling protocol when calculating ADE/FDE?\n\nA1: We use the standard protocol (best-of-20). In order to make comparisons as fair as possible, several methods are used and listed in Tab. 2, including both stochastic methods (Social-STGCNN, AgentFormer, YNet) and deterministic methods (SS-LSTM, MANTRA, SHENet [Ours]). Although stochastic methods may have slightly unstable performance because of the sampling process, our compared methods adopt different sampling strategies to reduce the uncertainty. We explain the predicted results of these methods as follows. a) Social-STGCNN is a typical probabilistic method that models a probability distribution. During inference, it samples 20 trajectories from the probability distribution; it is easier to sample trajectories with high probability. b) AgentFormer uses a trajectory sampler to generate 20 plausible latent variables. During inference, it samples latent variables from the trajectory sampler and uses them to produce predictions (this process can also be viewed as searching for the most probable 20 trajectories). c) YNet aims to find the most probable 20 trajectories with the argsoftmax function to approximate the goal’s and waypoints’ most likely position. 
In general, many stochastic methods use probabilistic strategies to find plausible trajectories, while MANTRA and ours generate plausible trajectories based on the similarity between the observed trajectory and elements in the memory or group trajectory bank. Specifically, MANTRA searches for the top-20 hidden states that are most similar to the observed trajectory features, and then the top-20 hidden states are used to produce predictions. Our SHENet obtains the top-20 group trajectories that are similar to the observed trajectory from the bank, and uses an interaction module to refine the searched trajectories. The contrast between stochastic methods using probability and deterministic methods using similarity is worth exploring. (A schematic of this best-of-20 computation is sketched a little further below.)\n\nQ2: Why are the results different from the original paper?\n\nA2: Please see Reviewer XkVC A3.\n", " We greatly appreciate the positive comments and valuable suggestions. We will explain your concerns point by point.\n\nQ1: It is suggested to add the experiments on the SDD dataset.\n\nA1: As explained in Reviewer XkVC A2-1 and Reviewer XkVC A2-3, the SDD dataset is not suitable for the assumption of our method. However, we will conduct experiments on SDD and include them in the revision.\n\nQ2: How to guarantee the learned bank with training scene can be used for the new environment?\n\nA2: Good question. Currently, our method is designed for human trajectory prediction in a constrained environment. This means we assume that the scene is fixed or similar. If the new environment is totally different from our training scene, the trained bank may degrade dramatically. This is also a common problem with many search-based methods (e.g., MANTRA [1]). However, compared to individual-based methods, our group-based method is more generalizable (e.g., visualization results in Fig. 6).\n\n[1] Mantra: Memory Augmented Networks for Multiple Trajectory Prediction. CVPR 2020\n", " Thank you very much for your detailed review and feedback. We will address the major concerns below.\n\nQ1: What are the typical values of θ and β? Are the θ and β specific to a dataset?\n\nA1: Distance threshold θ is used to determine the update of the trajectory bank $Z_{bank}$. The typical value of θ is set according to the trajectory length. When the ground truth trajectory is longer in terms of pixels, the absolute value of the prediction error is usually larger. However, their relative errors are comparable. Thus, θ is set to 75% of the training error when the error converges. In our experiments, we set θ = 25 in PETS, and θ = 6 in ADL. The \"75% of the training error\" is obtained from the experimental results, which are listed in the following table.\n\n|$\\theta$| 25\\%|50\\%|75\\%|100\\%|\n|--|--|--|--|--|\n|ADE|22.82|20.48|18.89|20.78|\n|FDE |58.28|51.67|45.13|54.52|\n\nTrajectory threshold β specifies the threshold for the number of newly added trajectories in the bank $Z_{bank}$ when updating (Alg. 1, Line 21-23). In our experiments, we set β = 200 for both PAV and ETH/UCY.\n\nQ2-1: What is the exact contribution of the PAV dataset?\n\nA2-1: The contributions of PAV are 1) compared with ETH/UCY, the captured scenarios and trajectories are more complex and realistic; 2) the clear spatial information of the surroundings, including people's state and appearance (missing in SDD/inDD) and background layout, is helpful to explore the individual-surroundings interaction for human trajectory prediction. Therefore, the data can better facilitate trajectory prediction research. 
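A minimal sketch of the best-of-K ADE/FDE evaluation protocol referenced in A1 above, assuming numpy arrays; the helper name and shapes are hypothetical, not the authors' code:

```python
import numpy as np

def best_of_k_ade_fde(preds, gt):
    """preds: (K, T, 2) candidate future trajectories; gt: (T, 2) ground truth.

    ADE averages per-step displacement over the horizon, FDE takes the
    final-step displacement; best-of-K keeps the minimum over candidates
    (with K = 20 in the standard protocol discussed above).
    """
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) per-step errors
    ade = dists.mean(axis=1).min()                     # best average error
    fde = dists[:, -1].min()                           # best final error
    return ade, fde
```

The same helper applies whether the K candidates come from probabilistic sampling (Social-STGCNN, AgentFormer, YNet) or from similarity-based retrieval (MANTRA, SHENet), which is why the response calls the protocol a fair common ground.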
\n\nQ2-2: What are the criteria for the chosen subset of the MOT15 videos?\n\nA2-2: Human trajectory prediction is challenging due to randomness and subjectivity. Based on our analysis, and to make the task more practical, we restrict the prediction task to a constrained scenario, that is, the chosen video sequences should be captured with a static camera.\n\nQ2-3: What is the reason for introducing PAV instead of experimenting on SDD and inDD?\n\nA2-3: According to the analysis in our paper, given a person’s past trajectory in a scene, the future path can be inferred by refining the searched historical trajectories of the scene using individual-surroundings information. The individual-surroundings information consists of the person's state and appearance, background information, and person-background/person-person interactions. The person's state and appearance are not available in SDD/inDD because of the bird's-eye-view and long-distance capture. In contrast, PAV can provide all this information. Therefore, we do not use SDD/inDD but introduce the PAV dataset.\n\nQ3: Why are the original results not reported in the comparison?\n\nA3: As mentioned in A2-3, we need the individual-surroundings information, which requires images. ETH/UCY contains trajectories from five scenes: ETH, HOTEL, UNIV, ZARA1 and ZARA2, but the video for UNIV is not available. Thus, we use fewer sequences than the original experiments (mentioned in the paper, Line 196). For a fair comparison, we reproduce the results of all methods on this dataset.", " The authors tackle the problem of human trajectory forecasting with the help of a novel architecture. The architecture has two components: the group trajectory bank module to extract representative group trajectories as candidates for the future path, and the cross-modal interaction module to model the interaction between an individual's past trajectory and its surroundings for trajectory refinement. The authors construct a new challenging dataset, PAV, for long-term prediction. The article demonstrates the superiority of the proposed model on ETH/UCY and the new PAV dataset over state-of-the-art baseline methods. A new set of metrics, curve-smoothed ADE/FDE, has also been introduced here. Strengths:\n- The work is of moderate originality, with a fair presentation and writing quality. The proposed ideas of group memory and cross-modality fusion are interesting.\n- The proposed model seems able to improve prediction performance with respect to the reported state-of-the-art results. Nevertheless, there is some lack of clarification in the experiments and evaluation part, especially about the reported SOTA results and how they were obtained.\n\nWeaknesses:\n- The exact contribution of the PAV dataset is not clear; what are the criteria for the chosen subset of the MOT15 videos? What is the reason for using this dataset instead of existing, widely used datasets, such as NuScenes, Oxford RobotCar, and Cityscapes?\n- It makes sense to enhance the loss function with curve smoothing during training; however, is it fair to use curve smoothing only on the ground truth during evaluation, especially for the state-of-the-art works that did not apply the CS loss? Actually, it is not mentioned whether the SOTA works were retrained with the CS loss or not, and maybe it would be more convincing to include an evaluation with the default metrics as well as the new metrics.\n- The reported performance of some SOTA works on the ETH/UCY datasets is much worse than the results of their original papers. 
Why are the original results not reported in the comparison?\n- The paper does not mention anywhere the typical values of the trajectory threshold beta and the distance threshold theta, used as hyper-parameters in the construction of the group trajectory bank. It is also not stated anywhere whether the hyper-parameter values are specific to a dataset. \n- A comparative study could be performed with different types of clustering methods used for constructing the group trajectory bank.\n\nMinor comments:\n- One minor mistake in row 108: t should start from -T_{pas+1}, instead of -T_{pas}. As also previously mentioned:\n- What are the typical values of the trajectory threshold, beta, and distance threshold, theta, used as hyper-parameters in the construction of the group trajectory bank? Are the hyper-parameter values specific to a dataset?\n- What is the exact contribution of the PAV dataset? What are the criteria for the chosen subset of the MOT15 videos? What is the reason for introducing this dataset instead of experimenting on existing, widely used datasets, such as the Stanford Drone Dataset (SDD) and the Intersection Drone Dataset (inDD)?\n- The reported performance of some SOTA works on the ETH/UCY datasets is much worse than the results of their original papers. Why are the original results not reported in the comparison? The authors did not discuss any limitations or potential negative social impact of the work.", " This paper proposes to use the \"scene history\" for human trajectory prediction. The \"scene history\" consists of two parts: historical group trajectories and individual-surroundings interaction. This paper proposes SHENet to jointly use these two \"scene history\" clues.\n\nExtensive experiments are conducted on ETH, UCY, and a new, challenging benchmark dataset PAV.\n [Strength1] The proposed method, which uses the \"scene history\" (especially historical group trajectories), is well-motivated. \n\n[Strength2] The basic idea makes sense. Historical group trajectories and individual-surroundings interaction are helpful for understanding human behavior. \n\n[Strength3] This paper provides a new challenging dataset.\n\n[Strength4] The proposed method achieves good performance.\n\n[Weakness1] It is suggested to add the experiments on the SDD dataset, which is also an important benchmark for human trajectory prediction.\n\n[Weakness2] For most real-world applications, we cannot obtain the scene in which the model will be used. In the proposed method, the bank is fixed for inference after training. How can one guarantee that the bank learned on the training scene can be used in a new environment? \n\n[Weakness3] Reproduction is difficult since no source code is attached in the supplementary.\n\nSee [Weakness1] and [Weakness2]. No clear limitations and potential negative societal impact.", " This paper focuses on the task of human trajectory prediction. The authors stress the importance of scene history, including historical group trajectories and individual-surroundings interaction, and design a clustering & update procedure to maintain a group trajectory bank, which is used to generate trajectory prediction proposals. A cross-modal transformer fuses the features of the observed trajectory and the scene information, which are then used to predict the offset from the trajectory proposals. The authors conduct extensive experiments on the ETH, UCY, and PAV datasets. Strengths:\n1. The paper is well-organized and easy to follow. The authors state their motivation quite clearly.\n2. The novelty and originality are good. 
The group trajectory bank module is innovative.\n3. The ablation study is comprehensive and convincing. The results show that the main module, the group trajectory bank, improves the performance by a non-trivial margin.\n\nWeaknesses:\n1. In the experiment section on the ETH/UCY dataset, the authors didn't state the setting clearly: what is the sampling protocol when calculating ADE/FDE? The standard protocol is best-of-20. Also, as the architecture itself is not stochastic, it would be better if deterministic-prediction ADE/FDE were added.\n2. The performance of previous work in the table is different from the results in the original paper. For example, Y-net achieves 0.18/0.27 according to the original paper, but in Table 2 it becomes 0.24/0.39. Is it due to a different experimental setting or a different implementation? None. About the ethical impact: acquiring human trajectory datasets may violate privacy under some circumstances. The authors may consider adding the potential negative societal impact.", " In this paper, a method named Scene History Excavating Network (SHENet) is proposed for pedestrian trajectory prediction. The history information is divided into two categories: historical group trajectories and individual-surroundings interaction. Specifically, a group bank module and an interaction module are designed to explore these two types of information. In addition, the curve smoothing strategy is utilized to alleviate the individual bias. The proposed method can achieve SOTA performance on ETH/UCY as well as a new PAV dataset. Strengths:\n- Clearly written and well-organized.\n- Promising performance.\n- A new complex trajectory dataset.\n\n\nWeaknesses: (See Questions for detailed comments)\n- Lack of novelty.\n- Missing technical details.\n- Insufficient experiments. Concerns:\n1. Memory networks are widely used in many relevant tasks, e.g., trajectory prediction [1-3], question answering, anomaly detection, etc.; a subsection of related work should be included. Only one paper [4] is mentioned to tell the differences. The approach of trajectory clustering (the group idea) is also not new here [5-8]. The similarity function (Eq. (1)) is the same as in [4]; it should be noted that it is adopted from other papers. In addition, the Transformer structures, including cross-attention and self-attention, are also widely used in the trajectory prediction task. Thus, in my opinion, the novelty is not significant.\n\n2. Some key details are missing. \n(a). Based on my knowledge, the original videos of ETH/UCY are not aligned with the pedestrian coordinates, so how is the scene aligned with the trajectory? Or is the fusion directly performed at the feature level? \n(b). In Line 4 of Algorithm 1, K is initialized as 32 (Line 211); what is the reason/insight for setting this variable to 32? It's a really important hyper-parameter since it represents the initial cluster number. I couldn't see any analysis or experiments about this variable. \n(c). In Line 155 of Sec. 3.3, the Swin Transformer is applied for extracting scene semantic features, but the number of semantic classes is 150. I assume it is pre-trained here, but why use 150 in the trajectory prediction task? I don't think it is reasonable to directly use 150 semantic classes here. Details and explanations are missing here. \n(d). As for the results on ETH/UCY, some of these baselines are stochastic; the results are usually the best-of-20. I couldn't see any stochastic properties in the proposed method, so are the results deterministic or stochastic?\n\n3. 
Other questions. \n(a). The bank is fixed during inference; how does the method deal with a novel/unseen trajectory (pattern) at test time? \n(b). In Eq. (1), the cosine similarity is used, but the range of coordinates is not mentioned; usually we normalize the coordinates of ETH/UCY into $[-1, 1]$, which makes the range of $s_{i}$ equal to $[-1, 1]$. When $s_{i}=-1$, these two trajectories are totally opposite; how should this kind of similarity be interpreted? Is it similar or different? I think more explanations should be included here (see the sketch after this review). \n(c). What are the meaning/insights of introducing curve smoothing (CS)? If using the CS loss yields better performance, does it mean the trajectories in this dataset are more \"stable\"? Will using the CS loss affect the generalization ability of the model itself, making only smooth predictions? Is that prior knowledge? \n\n4. Insufficient experiments. \n(a). No experiment to support the necessity of using the CS loss. \n(b). When a memory mechanism is used, I think an analysis of complexity/cost is necessary. \n(c). The study of Size($Z_{bank}$) is missing; how the memory bank is initialized and how it changes over time are necessary to support the whole method.\n\n\nMinors: \n1. In Line 2, Line 6 and Line 14 of Algorithm 1, the $/*$ markers may need adjustment.\n2. In Line 7 of Algorithm 1, I believe Size($Z_{bank}$) is K.\n3. In References, the names of conferences or journals, e.g., IEEE vs. IEEE/CVF in [1], are not consistent.\n\n[1] Remember Intentions: Retrospective-Memory-based Trajectory Prediction. CVPR 2022 \n[2] Graph-Based Spatial Transformer With Memory Replay for Multi-Future Pedestrian Trajectory Prediction. CVPR 2022 \n[3] Multiple Trajectory Prediction of Moving Agents with Memory Augmented Networks. IEEE TPAMI 2020 \n[4] Mantra: Memory Augmented Networks for Multiple Trajectory Prediction. CVPR 2020 \n[5] Recursive Social Behavior Graph for Trajectory Prediction. CVPR 2020 \n[6] Group LSTM: Group Trajectory Prediction in Crowded Scenarios. ECCV Workshop 2018 \n[7] Three Steps to Multimodal Trajectory Prediction: Modality Clustering, Classification and Synthesis. ICCV 2021 \n[8] PoPPL: Pedestrian Trajectory Prediction by LSTM With Automatic Route Class Clustering. IEEE TNNLS 2020 No limitations are described in the main paper or the supplementary material." ]
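To make concern 3(b) above concrete, here is a plain cosine similarity between two flattened trajectories. This is a hypothetical sketch, not necessarily identical to the paper's Eq. (1); the function name and shapes are assumptions:

```python
import numpy as np

def traj_cosine_similarity(a, b, eps=1e-8):
    # a, b: (T, 2) trajectories with coordinates normalized to [-1, 1].
    # The score lies in [-1, 1]; s = -1 means the flattened trajectories
    # point in exactly opposite directions -- the ambiguous case the
    # reviewer raises, since "opposite" is neither near nor unrelated.
    a, b = a.reshape(-1), b.reshape(-1)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

Whether such negative scores should rank a candidate as dissimilar (or be clipped, or be computed on displacement vectors instead of absolute positions) is exactly the design choice the review asks the authors to explain.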
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5 ]
[ "-hG3ueuQ68s", "TNakIlFF1p5", "gBtB2gGMQ5R", "uVzYlOdv6kz", "ss2Lv1jmOlv", "N7VNW5XXS1K", "Nd0oyqhrMbI", "vkHT3qcLrc0", "wFnrE9EUdJ-", "nips_2022_RW-OOBU11xl", "nips_2022_RW-OOBU11xl", "nips_2022_RW-OOBU11xl", "nips_2022_RW-OOBU11xl" ]
nips_2022_-QHUWgkh1OY
DOGE-Train: Discrete Optimization on GPU with End-to-end Training
We present a fast, scalable, data-driven approach for solving linear relaxations of 0-1 integer linear programs using a graph neural network. Our solver is based on the Lagrange-decomposition-based algorithm of Abbas et al. (2022). We make the algorithm differentiable and perform backpropagation through the dual update scheme for end-to-end training of its algorithmic parameters. This allows us to preserve the algorithm's theoretical properties, including feasibility and guaranteed non-decrease in the lower bound. Since the method of Abbas et al. (2022) can get stuck in suboptimal fixed points, we provide additional freedom to our graph neural network to predict non-parametric update steps for escaping such points while maintaining dual feasibility. For training the graph neural network we use an unsupervised loss and perform experiments on large-scale real-world datasets. We train on smaller problems and test on larger ones, showing strong generalization performance with a graph neural network comprising only around $10k$ parameters. Our solver achieves significantly faster performance and better dual objectives than the non-learned version of Abbas et al. (2022). In comparison to commercial solvers, our learned solver achieves close-to-optimal objective values of LP relaxations and is faster by up to an order of magnitude on very large problems from structured prediction and on selected combinatorial optimization problems. Our code will be made available upon acceptance.
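As rough intuition for the "backpropagation through the dual update scheme" described in the abstract, the following PyTorch-style schematic unrolls a generic parameterized dual ascent and trains on an unsupervised lower-bound loss. This is an assumed illustration, not the paper's architecture: the paper's monotonicity guarantee comes from its specific Lagrange-decomposition update rule, not from generic ascent, and `dual_obj`/`dual_grad` are placeholder callables:

```python
import torch
import torch.nn.functional as F

def unrolled_dual_training(lmbda, step_params, dual_obj, dual_grad, n_steps=20):
    # Schematic only: differentiable unrolling of dual updates whose
    # step sizes are learnable (in the paper they come from a GNN).
    for t in range(n_steps):
        step = F.softplus(step_params[t])      # keep each step size positive
        lmbda = lmbda + step * dual_grad(lmbda)
    return -dual_obj(lmbda)                    # unsupervised loss: maximize bound

# Usage sketch: loss = unrolled_dual_training(...); loss.backward()
# propagates gradients through all n_steps updates into step_params.
```

The unsupervised character matches the abstract: no optimal solutions are needed as labels, only the dual objective itself.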
Reject
The presented paper introduces the DOGE-Train method that targets discrete optimization problems. It allows finding solutions for discrete problems utilizing GPUs. This is achieved by pre-training on smaller-size instances and then hoping it would also generalize to larger instances that come from a similar family of problems. Overall, the idea is related to FastDOG but has some improvements. Despite showing promising results, there are still a few concerns raised by reviewer "ehgi". Also, I am not sure if NeurIPS would be the best fit for this paper.
train
[ "gty32GFx0n", "rXhOq7PlT_1", "YJhD4RuNZh6", "NPu1OPxn15J", "-B52eYXbLe3", "lkjA46s77zv", "8zdVg9hMc9u", "VP_0kjHvbbm", "h0guQuH13o", "Zqqt02Ca_Dt", "lJgbzZrZwi", "9tL_NmDe8t", "geuQAPKcV4B", "Fj9QQKnjdGm", "j_Omy9SN7--S", "bbZtrYMvzVW", "IFpEq1n5Bj", "UGe4d01mfCp" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for detailed feedback. We will add more context and discussion about specialized solvers in the final version.\n\n_Regards,_\n\n_Paper 3276 authors_", " Thanks for answering questions! Now, the proposed approach becomes technically more clear and the presentation is more consistent. The proper context and specialisation of the proposed approach to solver construction is important and I like the presented results during discussion with reviewer ehgi. Please, add the respective comments in the manuscript about the expected workflow and results from running your solver compare with some specialised or commercial competitors.", " We thank the reviewer for suggesting these changes, we will add all of the above points in the final version. We agree that currently it is not clear what part of history the LSTM maintains to speed-up convergence. This can be a good direction for future investigation. \n\n_Regards,_\n\n_Paper 3276 authors_ ", " We thank the reviewer for their response. \n\n## Final accuracy comparison with Gurobi:\nFinal accuracy is only one metric but other metrics are also useful and practical. Concretely, even though given unlimited time Gurobi outperforms DOGE, reasonably close values to the optimum are also considered good for practical purpose. This is because the inputs to the ILP are also not known with complete certainty and thus the practitioners require good enough solutions but in less time. For example quoting the very recent graph matching [benchmark](https://vislearn.github.io/gmbench/results/) _\"we consider solutions within a 0.1% range to known optimum as optimal\"_. With respect to this metric and the metric of relative dual gap integral [7] we outperform Gurobi even though it is a commercial solver with much resources at disposal. \n\n## Comparison with specialized solvers:\n1. Note that specialized solvers (and FastDOG) mostly struggle with solving the LP relaxation and thus their lower bounds are worse. On the primal task these solvers are actually better than Gurobi if one needs to compute good (but not necessarily optimal) solutions in a limited amount of time.\n\n2. We report comparison of DOGE with best known specialized solvers as follows:\n\n### _Cell tracking_:\n\n| Solver type | $E (\\times 10^8)$ | Runtime(seconds) | \n|---|---|---|\n| Specialized solver [24] | -3.866 | 1673 |\n| DOGE | **-3.854** | 1380 |\n| DOGE-M | **-3.854** | **730** |\nThus the specialized solver [24] does not get a lower bound $E$ close to ours and seems to be stuck (by inspecting the solver log).\n\n### _Graph matching_:\nFor graph matching dataset we obtain results of specialized solvers from very recent work [GMB] which is to be presented at ECCV'22. We find **the best specialized solver for each instance** and report the results below:\n\n| Solver type | $E$ |\n|---|---|\n| Best spec. solver after 10 seconds | -48442 |\n| Best spec. solver after 100 seconds | -48441 |\n| Best spec. solver after 300 seconds | -48441 |\n| DOGE after 17 seconds | **-48439** |\n| DOGE-M after 21 seconds | **-48436** | \n\nThus we actually achieve better lower bounds than the best specialized solver and still with less runtime. \n\n## Practicality is doubtful:\nAs shown above for both cell tracking and graph matching problems there is much interest in solving these problems (and particularly the LP relaxations) faster than Gurobi [24, GMB]. This has motivated the need for specialized solver development separately for each dataset. 
We forego this need for spending a large amount of human effort and develop a general purpose solver which can be trained for each problem type in much less time than developing a specialized solver. \n\n[GMB] - A Comparative Study of Graph Matching Algorithms in Computer Vision. Haller et al., ECCV 2022. \n\n## Refocus on complex datasets with repetitive problems:\n1. To compete w.r.t. this metric one can easily inflate the testing set by acquiring more data. \n2. This is the reason why we separate training time and testing time. This already demonstrates to potential users with repetitive problems that once a model is trained (paying a one-time cost) it allows fast inference. \n\n## CT dataset:\nGiven an instance with $n$ dual variables, the GNN still predicts parameters for the non-parametric update $\theta \in R^n$. The remaining two parameters $\alpha$ and $\omega$ are fixed to the values used in FastDOG. Note that for the CT dataset the parameter space is still large because $n = 28 \times 10^6$ (ref. Table 1), which is even larger than the number of our GNN's trainable parameters ($10 ^ 4$). \n\n## DOGE can not become a commercial optimizer:\nWe cannot and do not strive for this, as we do not have access to that amount of resources (personnel, funding, hardware). Our solver is a limited academic effort. Still, we show that there are cases where our solver can be beneficial as compared to the commercial and specialized solvers. \n\n_Regards,_\n\n_Paper 3276 authors_", " Thanks for the comments about the aforementioned weaknesses! \nThe manuscript can be improved by adding more discussion related to these comments. In particular, the motivation for the features used for the GNN can be briefly added, and the distribution of runtime over the GNN operations and optimizer steps would make clear the overhead from the additional operations and the gain in reducing optimizer-only runtime. \nIn addition, the usage of history to accelerate methods is a very fruitful idea, but it is unclear how the GNN transforms the history and why such a transformation speeds up the convergence. So, the idea is clear, but it requires additional investigation.", " Dear authors, thank you for the response!\n\n**1** \n\n\"Training our solver is still less time consuming than developing an application specific solver\"\nAccording to the experiments, we can use Gurobi on most datasets and it will outperform DOGE in a comparable time.\n\n\"On the other hand training our solver is still less time consuming than developing an application specific solver. Note that FastDOG was already competitive with dataset-specific solvers, especially in early stages of optimization, but got stuck in suboptimal fixed points.\"\nI don't see experiments comparing DOGE with application-specific solvers. According to the experiments from the FastDOG paper, can't you also say that Gurobi is competitive with application-specific solvers?\n\n**3**\n\" We can add one more experiment in the final version where we give the same time budget to Gurobi as in our training + testing.\"\nThank you, I'd like to see such an experiment with all considered algorithms.\n\n\n**To sum up**\nAfter discussion with the authors, I still have the same concerns:\n- Train-test separation\n I am still not convinced that training the network and optimizing are separate processes in optimization. Even though the authors provided an example of a repetitive task, it covers only a small class of problems. 
Moreover, DOGE outperforms the commercial solver Gurobi only on one dataset in terms of accuracy.\n- Practicality\n In my opinion, due to the large training time (4 to 48 hours), DOGE cannot become a commercial optimizer. On Cell Tracking, Graph Matching, and Independent Set, Gurobi achieves better accuracy in less time.\n- Theoretical Contribution\n One theoretical result claims that you can backpropagate, the other is taken from FastDOG. I agree with the authors that \" ...many state of the art optimization algorithms trade theoretical guarantees for empirical convergence\". So the theoretical results can be seen neither as strengths nor as weaknesses.\n\n\nSo, in the end, we have a dataset-specific solver that shows some improvements only on one dataset. \n- The main idea is to use a NN to learn hyperparameters, which is very interesting. However, Gurobi outperforms it in terms of accuracy on 3 datasets out of 4. \n- Improvements in the speed of convergence are because the neural network gives a priori knowledge about the dataset. If we do not separate training and optimization, DOGE is much more time-consuming on 3 datasets out of 4.\n- Practicality is doubtful due to the large training time, and the generalization ability is unclear. The solver is dataset-specific. \n- For the CT dataset, parameters are fixed. How are they tuned? Fixing hyperparameters in an approach where they should be learned creates a negative impression.\n- On the QAPLib dataset DOGE outperforms other approaches. The authors claim that given the same amount of time DOGE outperforms Gurobi.\n- DOGE outperforms FastDOG in terms of accuracy. However, DOGE needs more time due to the training process. \n\nMaybe the authors need to refocus the paper on complex datasets like QAPLib (maybe with repetitive problems), where the superiority of DOGE will be clear. \n\n\n", " Thanks for the response. Even if our method is trivial and incremental (which we believe it is not), we would kindly refer the reviewer to inspect our results in Table 3. We always get better objective values than FastDOG, with a **several times better** value of the relative dual gap integral $g_I$ [7]. Specifically:\n\n_Cell tracking:_ Improvement by $3 \times$\n\n_Graph matching:_ Improvement by $45 \times$\n\n_Independent set:_ Improvement by $70 \times$\n\n_QAPLib:_ Improvement by $26 \times$\n\nNote that FastDOG will not be able to achieve better objective values than ours **even if it is given unlimited time** because it can get stuck in fixed points. ", " I sincerely thank the authors for their response.\nWhile I appreciate the empirical effectiveness of DOGE over FastDOG, I still stand by my concerns on the improvement being incremental, especially given the cost of training (see discussion with reviewer ehgi).", " \n### 1. To solve the new optimization problem neural network has to be trained again\n\na. It is still important to have dataset-specific solvers as they can exploit the problem structure. For practitioners it is common to solve problems of the same type over and over again. For example, take a microbiologist who would like to track cells in a video whenever new data is acquired. To speed up optimization, the default course of action is to first develop application-specific solvers, taking considerable time and human effort. On the other hand **training our solver is still less time consuming than developing an application specific solver**. 
Note that FastDOG was already competitive with dataset-specific solvers, especially in early stages of optimization, but got stuck in suboptimal fixed points. Our method alleviates this issue while also achieving faster convergence. \n\nb. Nonetheless, the problem of generality in machine learning is ubiquitous and it would be good to have an application-agnostic trained solver. However, **our contribution is not in learning a 'general' AI agent nor in zero-shot learning**. We use a standard GNN with only one message passing layer and a standard gradient-based NN optimizer. In the domain of learning to optimize we would also point to Sec. G of the ICML'22 spotlight paper [L2C], where the authors also comment on the lack of generalization of their method. \n\n[L2C] - Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning. Paulus et al. ICML 2022.\n\n### 2. You give the optimizer some \"prior\" knowledge ... No surprise that the proposed approach improves results\n\nEven if we agree with this statement it does not imply that:\n\na. Our work will not benefit the research community and practitioners.\n\nb. The work is not novel. \n\n### 3. I still don't think that it is fair to separate training from the optimization process\nFor the sake of discussion, even if we were to perform this comparison, our method might still have some advantages:\n\n**a. 'Fair' comparison with Gurobi:**\nOur method is competitive with Gurobi even if we take this type of fairness into consideration for large datasets. For example, consider our largest dataset QAPLib, which takes the most training time (2 days). Evaluating our solver on all test instances takes around 8 hours. Thus our solver spends 2.3 days in total (training + inference) on the QAPLib dataset. We ran the commercial solver Gurobi with a time limit of 1 hour for each test instance and it terminated in a total of 31.8 hours (1.3 days) with **much worse lower bounds**. Specifically, as in Table 3, the average objective value for Gurobi was $0.9 \times 10^6$ whereas for our method it was $14.5 \times 10^6$. We can add one more experiment in the final version where we give the same time budget to Gurobi as in our training + testing. \n\nLastly, it might be possible to speed up our training process further by solving all ILPs in a training batch in parallel and by caching strategies. We currently solve each ILP sequentially; parallelizing this would require considerable engineering effort, which is beyond the scope of this work.\n\n**b. 'Fair' comparison with FastDOG:**\nSince FastDOG can get stuck in sub-optimal fixed points far away from the optimum, it will still not be able to achieve the dual objectives even if it is given an unlimited amount of time. \n\nRegards,\n\nPaper 3276 authors", " Dear authors, thank you for the response!\n\nStrengths. Well, isn't everything you mentioned a part of \"graph NN for learning parameters\"?\n\n1-3. Can the neural network be pretrained once for different optimization problems? Can a neural network pretrained on one dataset show some improvements on another? It seems that to solve a new optimization problem the neural network has to be trained again, which takes from 4 to 48 hours. \n\nAs far as I understand, by training the neural network separately you give the optimizer some \"prior\" knowledge about the dataset and optimization problem. So there is no surprise that the proposed approach improves results. 
\n\nI still don't think that it is fair to separate training from the optimization process without showing that a NN trained only once can behave well on different optimization problems. \n\n\n\n", " Thanks for the detailed reading and review. We break down our reply in two parts due to lack of space. A minor clarification: we do not have a separate model for predicting the non-parametric update. Instead, a GNN predicts all three vectors $\alpha$, $\omega$ and $\theta$ simultaneously as in Line 4 of Alg. 3.\n\n### Weakness 1: Hand-crafted GNN features\nThe current state of Alg. 1 is completely determined by the current set of dual variables $\lambda$, which can easily be encoded as edge features of the GNN. We additionally encode some hand-crafted features like subgradients or min-marginals, allowing the GNN to construct well-performing algorithms like subgradient ascent or min-marginal averaging. Therefore they were a natural choice for input into our GNN. The same holds true for keeping a history of intermediate states for DOGE-M. Similar history information is used in accelerated gradient methods [NE83] and heavy ball methods [PO64] to accelerate first-order methods. Also other features were quite natural, including the ILP description, which was used in [35]. In general we expect our solver to become better once other informative features are provided, for which the optimization literature might be a good source.\n\n[PO64] - \"Some methods of speeding up the convergence of iteration methods\", B. Polyak, USSR Computational Mathematics and Mathematical Physics, 1964.\n\n[NE83] - \"A method of solving a convex programming problem with convergence rate $O(1/k^2)$\", Y. Nesterov, Soviet Mathematics Doklady, 1983\n\n### Weakness 2: Faster than Gurobi, insights\nActually our GNN is pretty lightweight, containing less than $10k$ parameters. The GNN predicts parameters for the solver only once after several iterations of the solver (see Table 1, column $T$, subcolumn $test$), thereby reducing the computational burden. Second, backpropagation and training are not done during test time, making our approach fast at inference time. Third, our approach can utilize the massive parallelism offered by GPUs, in contrast to Gurobi and other solvers which can only run on CPU.\n", " \nWe address the raised questions as follows:\n\n### 1. Simplex or interior-point methods are not GPU friendly?\nFor the parallel simplex method we refer to the work of [HU18] (especially Sec. 6) where the authors were only able to obtain a $1.5x$ speed-up over open-source solvers and no speed-up over commercial solvers. For the interior-point method, a major hurdle towards parallelization is the stage of matrix decomposition, which is executed sequentially. Lastly and more importantly, for our work we require a differentiable solver, but both interior-point and simplex methods contain non-differentiable steps. \n\n[HU18] - Huangfu, Qi, and JA Julian Hall. \"Parallelizing the dual revised simplex method.\" Mathematical Programming Computation 10.1 (2018): 119-142.\n\n### 2. Motivation for forward and backward iterations in algorithm 1?\nThese iterations mimic the variable update direction of FastDOG (line 8, Alg. 1). Intuitively, the goal is to allow back-and-forth exchange among the dual variables. Note that each variable block can be run in parallel, allowing an efficient GPU implementation. \n\n### 3. More details about computing subgradients in eq 23 from Appendix.\nThe subgradient is computed by applying Danskin's theorem. 
We will add more details in the final version.\n\n### 4. Alg 2 solves two optimization problems (for $\beta \in \{0, 1\}$).\nActually, in Alg. 1 and 2 we present a naive, implementation-agnostic way of computing min-marginals and their gradients. \nFor the implementation we actually use the efficient BDD-based scheme as given in Section 5.1 of FastDOG, avoiding the computational overhead of solving optimization problems. In the BDD-based implementation the solutions of these two subproblems come at negligible additional cost.\n\n### 5. Description of datasets that are used in generating Figure 2.\nFor each dataset the plot is generated by averaging over all of its test instances, as also done in FastDOG.\n\n### 6. What solver do the authors use inside the proposed framework to solve problems in line 10, Alg 1 and in line 4, Alg 2?\nWe use the efficient schemes for computing these terms following Section 5.1 in FastDOG. We will make it clear in the final version. Please also see our answer to question 4 above.\n\n### 7. Please explicitly describe how to transform standard constraints like $Ax \leq b$ to the form in problem (BP)?\nWe use the scheme of [31] which is also used by FastDOG. We will elaborate on it in the final manuscript.\n\n### 8. Please explicitly state and describe the decrease of which quantities can explain the observed speed-up.\nThere are two points of comparison for the speed-up, which we break down as follows:\n\n1. **Speed-up over FastDOG:** To perform a fixed number of iterations our implementation actually takes around $1.2x$ more time than FastDOG. This is mainly because we have a separate value of $\omega$ and $\alpha$ for each dual variable instead of only a scalar for FastDOG. Since the GNN can predict good values for these parameters and the non-parametric update (eq. 3), we are able to achieve a much larger improvement in dual objective per iteration as compared to FastDOG. We will add an iteration-level comparison in the final version.\n\n2. **Speed-up over Gurobi:** We build on FastDOG, which performs parallel computation and already outperforms Gurobi at early stages of optimization. However, FastDOG can get stuck in suboptimal fixed points far away from the optimum. Our learned scheme circumvents this issue, allowing us to surpass Gurobi also at the later stages of optimization. Also note that FastDOG's iteration complexity is linear in the size of the BDD-problem decomposition, hence one iteration is comparatively cheap. Moreover, it is easily parallelizable, explaining FastDOG's and our speedup as compared to Gurobi.\n\n### 9. I assume that in line 123 $\omega$ should be replaced with $\omega_{ij}$ according to line 10 in Alg. 1.\nIn FastDOG, the value of $\omega$ was actually equal for all dual variables and so the authors treated it as a scalar. We will resolve this ambiguity in the final manuscript. ", " ### Incremental improvement:\nWe think that our work is indeed novel in the following key aspects: \n- We show how to backpropagate through FastDOG (Alg. 2), with additional derivations in the supplementary material. This allows us to obtain the first fully differentiable solver for convex relaxations of combinatorial problems and demonstrate that machine learning can have an impact in this area of optimization solver development. 
\n- Our approach is unsupervised and self-sufficient, in contrast to methods that rely on ground truth supervision [35], imitation learning [L2C] or reinforcement learning approaches [43, 55] that do not need supervision but heavily rely on classical algorithms for core parts of their computation.\n- The improvement of DOGE over FastDOG is substantial, since typically closing the dual gap, which DOGE largely does, is substantially harder than reaching an acceptable, but suboptimal dual solution, which FastDOG delivers. In optimization, solving the last few percent of a problem typically takes the majority of the time (for example see Figure 2b, 2c in [43]).\n\n[L2C] - Paulus, Max B., et al. \"Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning.\" ICML 2022.\n\n### Omitted results\nWe will provide these results in a final version of the paper. We would like to note that our training does not lead to a solver that is worse than FastDOG, though.\n", " ### Strengths:\nIn addition to the strengths pointed out by the reviewer we would like to add the following:\n\n1. Our test instances are significantly larger than the training ones (ref. Table 1). For example we test on $20\times$ larger instances in the QAPLib dataset. This is already a difficult task as also stated in Sec. 6.3 of the survey [6]. \n2. We do not need ground truth solutions for training due to our unsupervised loss. \n3. We show how to backprop through FastDOG for end-to-end training. \n4. Please also see our response to reviewer TgLG.\n\n### Weakness 1: Theoretical guarantees:\nWhile it is true that the non-parametric update step does not provide convergence guarantees, we provide the GNN with subgradients and min-marginals as features, from which it is easily possible to construct a converging algorithm. Also, many state of the art optimization algorithms trade theoretical guarantees for empirical convergence, e.g. heuristic step sizes in subgradient schemes [SG] or aggressive combinatorial moves in max-flow [BK].\n\n[SG] - Bazaraa, Mokhtar and Sherali, Hanif: \"On the choice of step size in subgradient optimization\" in European Journal of Operational Research 1981\n\n[BK] - Boykov, Yuri and Kolmogorov, Vladimir: \"An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision\" in TPAMI 2004\n\n### Weakness 2: Questions regarding experiments: \n1. **How the neural network is trained? During the optimization process, or separately?** The neural network is trained with the optimization problem embedded inside it. For updating the neural network weights we backprop through this optimization problem by Alg. 2. For testing, the neural network weights are frozen, so backprop is not required at test time. \n2. **For me it is unclear, why in Table 1 training time is in hours and significantly large compared to Figure 2.** In Table 1 we report our training setting. Figure 2 contains our main results on test instances computed using an already trained GNN.\n3. **If training and performing optimization are separate processes, training time should be included in Figure 2.**\nWe follow the conventional way to report results of machine learning methods for combinatorial optimization and measure test time performance, while training time etc. is regarded as a secondary measure. See for example the recent competition of [ML4CO] and the well-cited paper in the area [35]. 
The reason is that training is done only once and then the model can be used subsequently without incurring the training time overhead. Note also that our training scheme is already much faster than other approaches using machine learning for combinatorial optimization. For example, the work for learning to solve travelling salesman problems [54] requires 4 days for training and more for ground truth generation. \n4. **Complexity of one iteration in Table 2** \nFor the ablation study in Table 2 the per-iteration runtime of all learned approaches is almost the same ($762ms$). The reason is that the GNN is invoked only after several iterations of the solver (see column $T$, $test$ in Table 1) and so using a GNN or not (w/o GNN) does not change the runtime by much. Also the non-parametric update is performed only once whenever the GNN is invoked and thus does not impact the total runtime much. The runtime of one iteration of DOGE is empirically $1.2x$ the cost of one FastDOG iteration (due to variable $\alpha$ and $\omega$ parameters). We provide the number of iterations for each test instance in the appendix. Note that FastDOG's iteration complexity (and ours) is linear in the size of the BDD problem decomposition. \n5. **Fix parameters in Alg. 1 for CT dataset**\nFor the CT dataset we only train on two training samples, which might be too few for generalization if we predict all parameters ($\alpha$, $\omega$, $\theta$). Note that we do still predict the parameters $\theta$ for the non-parametric update (eq. 2), which allows us to outperform FastDOG by a large margin. \n\n[ML4CO] - https://www.ecole.ai/2021/ml4co-competition/\n\n### Weakness 3: Comparison to Gurobi\nWhile Gurobi reaches slightly better final objectives on most datasets, we are actually very close to the optimum already and the differences are on the order of $10^{-3}$ w.r.t. the relative dual gap (ref. Figure 2). On the other hand our any-time performance is orders of magnitude better, which means that when the algorithm is stopped early our results would be significantly better. Also the time as well as objective axes are logarithmic, hence the time differences between Gurobi and our approach are actually significant as well. In addition, beating Gurobi, which is a closed-source commercial solver developed by a large team of dedicated researchers, is typically not achieved by academic papers. Lastly, we quote from a recent related survey paper [10] (pg. 14): \"Optimality is desired, but quickly finding a good solution is the priority\".\n\n### Weakness 4: Code is not attached\nPlease see the overall comment. \n", " We provide an anonymous link to our implementation here: https://anonymous.4open.science/r/DOGE-Train-C089/. An efficient CUDA implementation of Alg. 1 and its backpropagation (Alg. 2) can be found at [bdd_cuda_learned_mma.cu](https://anonymous.4open.science/r/DOGE-Train-C089/external/BDD/BDD-main/src/bdd_cuda_learned_mma.cu), and the GNN model is defined at [model.py](https://anonymous.4open.science/r/DOGE-Train-C089/model/model.py). We will also publicly provide all code and pretrained models upon acceptance of the paper.", " The authors propose an extension of the FastDOG work of A. Abbas and P. Swoboda, 2022. The new algorithm includes a graph network, which allows learning parameters for the non-parametric update and the parametric step of FastDOG, 2022. Strengths:\n1. 
The resulting algorithm includes a non-parametric step, but there are no theoretical guarantees for the non-parametric step. \n2. The experiments part seems a bit unclear to me. Questions are listed in the corresponding section.\n3. Experiments show that on 3 out of 4 datasets Gurobi shows better results, and on two of them Gurobi does it in approximately the same time. \n4. The code is not attached. 1. How the neural network is trained? During the optimization process, or separately? \n2. For me it is unclear, why in Table 1 training time is in hours and significantly large compared to Figure 2.\n3. If training and performing optimization are separate processes, training time should be included in Figure 2. \n4. For Table 2 it would also be interesting to see a comparison of the number of iterations to understand the complexity of one iteration.\n5. Line 246. As for me, it is unfair to fix parameters in Alg. 1 for the CT dataset. It creates a false impression of the proposed approach. -", " The authors present DOGE, a learning-based method to solve relaxations to integer programs that exploits GPU parallelism. \nSpecifically, DOGE builds on a method based on Lagrangian decomposition, FastDOG, and proposes to improve its update step either by optimizing over its parameters, or by learning the update direction via a GNN.\nDOGE is shown to improve on FastDOG on 4 benchmarks. The paper is well written and easy to follow. While the results appear to be convincing on the selected benchmarks, DOGE is an incremental improvement on FastDOG, which introduced the GPU-friendly dual formulation. I believe the originality of DOGE to be limited as it only proposes to learn the update steps of FastDOG, while keeping the same formulation. Given the extent of the improvement over FastDOG on the selected benchmarks, and the lack thereof in the experiments mentioned in lines 278-285, I am not sure that the cost associated with training pays off in this context. - Could the authors include the omitted negative results mentioned in lines 278-285? The authors adequately and honestly address the limitations of their work.", " This paper proposes a novel framework to speed up solving ILP problems through tuning parameters of the Parallel Deferred Min-Marginal Averaging algorithm. This tuning is based on the gradients of the loss function w.r.t. these parameters, so the ILP solver is represented as a neural network that can be optimized with a gradient-based optimization method. The loss function here is the explicit dual objective function. The parameter updates preserve feasibility and the non-decrease of the lower bound. Moreover, to prevent getting stuck in a fixed point, a non-parametric update step is also proposed and a separate model to generate it is trained. The presented method is evaluated on four classes of combinatorial problems, shows faster convergence than the Gurobi solver and establishes a smaller relative duality gap. Strengths:\n1) extensive computational experiments for evaluation of the proposed method\n2) the novel technique for differentiating optimization methods w.r.t. 
its parameters\n3) recipes for how to escape from fixed points\n4) the unsupervised loss function eliminates any data preparation stage\n\nWeaknesses:\n1) the hand-crafted features for the GNN require some approach to encode the current state of Alg 1 in an automatic way, or some theory that explains the sufficiency of the presented set of features\n2) it remains unclear why such a complicated procedure that computes gradients, updates the parameters of the optimizer in two ways (parametric and non-parametric) and requires additional memory to store the parameters of neural networks is still faster than state-of-the-art solvers like Gurobi. Please comment on the following remarks and questions:\n1) In line 34, the authors claim that simplex or interior-point methods are not GPU friendly; however, they consist of series of linear algebra operations that can be sped up on GPU architectures, e.g. matrix multiplication in dense layers of neural networks is faster on GPU than on CPU. Please provide a more detailed explanation of this point. \n2) What is the motivation for forward and backward iterations over blocks $B_1,\ldots, B_u$ in algorithm 1?\n3) Please provide more details about computing subgradients in eq 23 from the Appendix. Since this is the single non-trivial operation in differentiating the considered method (the other operations are just summation and subtraction), its detailed description is important for understanding the correctness of the proposed approach.\n4) Alg 2 solves two optimization problems (for $\beta = 0$ and $\beta = 1$), so the difference from the naive implementation is not clear. Please provide more details or explicitly present the naive form and highlight the difference.\n5) Please provide a detailed description of the datasets that are used in generating Figure 2. Do these plots correspond to a picked instance of the corresponding problem class? Or is some average over all considered instances taken? \n6) What solver do the authors use inside the proposed framework to solve problems in line 10, Alg 1 and in line 4, Alg 2?\n7) Please explicitly describe how to transform standard constraints like $Ax \leq b$ to the form in problem (BP)?\n8) Please explicitly state and describe the decrease of which quantities can explain the observed speed-up. Are they the overall number of iterations, acceleration of solving intermediate optimization problems with GPU, or anything else? Such a comparison with other state-of-the-art methods helps to place the proposed framework in the proper context and improve understanding of the reasons for the observed gain. If such a comparison is not possible for Gurobi, since the authors do not have access to its detailed statistics, it is possible for FastDOG, which is a non-differentiable version of the proposed approach with fixed parameters $\omega_{ij}$ and $\alpha_{ij}$.\n9) I assume that in line 123 $\omega$ should be replaced with $\omega_{ij}$ according to line 10 in Alg. 1. The authors explicitly state the problems where the proposed framework does not show improvement and discuss the possible reasons for that." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "rXhOq7PlT_1", "9tL_NmDe8t", "-B52eYXbLe3", "lkjA46s77zv", "lJgbzZrZwi", "h0guQuH13o", "VP_0kjHvbbm", "geuQAPKcV4B", "Zqqt02Ca_Dt", "Fj9QQKnjdGm", "UGe4d01mfCp", "UGe4d01mfCp", "IFpEq1n5Bj", "bbZtrYMvzVW", "nips_2022_-QHUWgkh1OY", "nips_2022_-QHUWgkh1OY", "nips_2022_-QHUWgkh1OY", "nips_2022_-QHUWgkh1OY" ]
nips_2022_Mn4IkuWamy
The Nature of Temporal Difference Errors in Multi-step Distributional Reinforcement Learning
We study the multi-step off-policy learning approach to distributional RL. Despite the apparent similarity between value-based RL and distributional RL, our study reveals intriguing and fundamental differences between the two cases in the multi-step setting. We identify a novel notion of path-dependent distributional TD error, which is indispensable for principled multi-step distributional RL. The distinction from the value-based case bears important implications on concepts such as backward-view algorithms. Our work provides the first theoretical guarantees on multi-step off-policy distributional RL algorithms, including results that apply to the small number of existing approaches to multi-step distributional RL. In addition, we derive a novel algorithm, Quantile Regression-Retrace, which leads to a deep RL agent QR-DQN-Retrace that shows empirical improvements over QR-DQN on the Atari-57 benchmark. Collectively, we shed light on how unique challenges in multi-step distributional RL can be addressed both in theory and practice.
Accept
The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. Although the reviewers generally expressed positive views on the proposed method, they also pointed out many possible limitations of this paper. On the one hand, one reviewer acknowledged that the authors provided the first theoretical guarantees on multi-step off-policy distributional RL algorithms via a novel algorithm. They argued, however, that the methodological novelty may not be too significant compared to the baseline QR-DQN, and that the experiments could be improved. After reading the authors' rebuttal, this reviewer—although still with an overall positive impression of the quality of this work—said that they do not believe the paper fully shows the particularity of applying the multi-step idea to DRL when compared to some value-based methods. They also believe that the empirical section of the paper was not strong enough. Another reviewer agreed that this is novel and significant work, as it introduces a non-trivial extension of one-step distributional RL to the multi-step off-policy setting. They argued that this paper provides a theoretical basis for the continued study of multi-step paradigms in off-policy reinforcement learning, which is important/significant. This reviewer, however, pointed out that the experimental results were not sufficiently convincing to support the claims that employing the multi-step distributional RL algorithm provides a significant empirical advantage. This reviewer carefully analyzed the authors' rebuttal and appreciated that they clarified the reviewer's original points of confusion. Overall, however, this reviewer agreed with others that the empirical results are not overly convincing. Furthermore, one of the reviewers argued that the notion of path-dependent distributional TD error is novel. They pointed out, as the main limitation of this paper, that investigating the multi-step distributional RL setting "may not be [particularly interesting since] some theoretical results on one-step distributional RL can be straightforward to extend into the multi-step version". They also argued that the unbiased QR-loss was not clearly motivated since the bias for RL may also be useful for exploration. After reading the authors' rebuttal, this particular reviewer updated their score since the authors provided more evidence to strengthen their theoretical and empirical claims. Overall, all reviewers were positively impressed with the quality of this work but brought up many points of contention regarding ways in which the paper could/should still be improved. They encourage the authors to update their work based on their constructive criticisms and, in particular, in a way that tackles the points of contention mentioned in the original reviews and their post-rebuttal comments.
train
[ "rSUPz7CiQ6v", "NwxvIV9wXrK", "Ru3MpVeFVF", "KDTEFhHoGZF", "1l0MdPmRM7b", "x4nwqyXnz2Q", "WVLJ6jtFaKW", "L1Lm3Lv9P3c", "WA9rPtbXc_A", "p2FKaE61Rns", "NXA8RZbdfnZ", "8iMurE2pyT_", "VOQQNC02muM" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your reply. We are glad that the explanation above has clarified things. We will make sure to include such discussions and incorporate your suggestions in our revision.", " Thank you very much for the informative response, your argument is now clear to me! Could you please add a small note about this somewhere between line 175 and 180? Perhaps a footnote on 175 with part of your response above would suffice. Example below using your response:\n\n\"Note: the transformation of the distribution is non-linear as a function of $G_{0:t}$, although the corresponding transformation of random variables is affine in $G_{0:t}$.\"\n\nTo make this important distinction even more explicit, you could reproduce the exact example in your response above in an appendix in the paper.\n\nThanks again!", " We thank the reviewer for their response. We believe what the reviewer is pointing out is the difference between the transformation of the distribution itself, which is what our statement of non-linearity refers to, and the way this transformation applies to random variables.\n\nTo confirm, the transformation of the distribution is non-linear as a function of $G_{0:t}$, although the corresponding transformation of random variables is affine in $G_{0:t}$. Importantly, it is the transformation of distributions that determines the algorithm behaviour, and this is reason that path-dependent TD errors are required, and why the operator becomes non-contractive if these are not used.\n\nIn more detail, if we have a probability distribution $\\nu$ (say the standard normal distribution, $N(0, 1)$), then the transformation $ (b_{g,1} )N(0, 1) $ is a new normal distribution, now with mean parameter $g$: $N(g, 1)$. This transformation can also be interpretted through a random variable distributed according to $\\nu$. If $Z \\sim \\nu$, then $g + Z$ is distributed according to $(b_{g,1}) \\nu$. It is true that the transformation of the random variable is **affine** in $g$ (a notion closely related to linearity), in the sense that for any $\\lambda\\in[0,1]$,\n$$( \\lambda g_1 + (1-\\lambda) g_2 ) + Z$$\nis the same as\n$$\\lambda (g_1 + Z) + (1 - \\lambda) (g_2 + Z).$$\nHowever, the transformation on the distributions themselves is not linear, nor affine, in $g$. This follows because\n$$(b_{\\lambda g_1 + (1-\\lambda) g_2, 1}) N(0, 1) = N(\\lambda g_1 + (1 - \\lambda) g_2, 1),$$\nbut\n$$(\\lambda b_{g_1, 1}) N(0, 1) + ((1-\\lambda) b_{g_2, 1}) N(0, 1) = \\lambda N(g_1, 1) + (1 - \\lambda) N(g_2, 1),$$\nwhich is a mixture of Gaussians.\n\nHope this clarifies, please let us know if you have other questions.\n", " Thank you for the response. I am still not convinced of the source of non-linearity in the pushforward operation. You mention that \n\n$(b_{G_{0:t-1}, \\gamma^t})\\eta$ is not an additive function of $G_{0:t-1}$, although I believe expanding this term with the definition of the pushforward operator in the paper (line no 76 in the revised edition) yields $G_{0:t-1} + \\gamma^t\\eta$, which is an additive function of $G_{0:t-1}$. Am I missing something?", " Apologies to the authors for responding so late.\n\nI thank the authors for clarifying my questions. For Q1, 2, 3, and 5, after reading the response and appendix, I think most of my concerns have been clarified, although this further explanation should be placed in the main paper properly. 
There is also a typo in Lemma H.2 about the inequality order, which needs to be fixed.\n\nFor Q4, I still feel the unbiased QR loss is not well motivated, although it is non-trivial. As the authors said, the biased loss may still be empirically useful for better exploration. Also, I find the proposed unbiased algorithm is not very empirically significant over the biased version, at least for a lot of games, after checking the results of all Atari games that the authors provided.\n\nIn summary, I am increasing my score to 5 cautiously, as I mainly appreciate that the authors have clarified many of my concerns and the current paper version is well justified. Meanwhile, I do feel that this is still a borderline paper because there are still a couple of limitations to this work. Firstly, I am still not fully convinced about the key role and motivation of the path-dependency operator in the distributional RL setting. Secondly, the improvement of the unbiased QR-loss is not significant enough. Hope these suggestions could be helpful.", " Thank you very much for all reviewers' valuable inputs to our paper.\n\n### **Improvements of Retrace over uncorrected n-step** \n\nA few reviewers mention that distributional Retrace does not appear to significantly improve over uncorrected n-step in the deep RL experiments. We want to mention that the strong performance of uncorrected n-step methods is a well-established empirical observation in value-based learning in general (n=4 in [1]; n=5-10 in [2,3]; n=10 for Atari 100k in [4]), and is not restricted to distributional RL. This might be attributed to the fact that agents like C51 and QR-DQN use an epsilon-greedy policy that closely tracks the greedy target policy (e.g., over time epsilon decreases), making the bias of uncorrected methods more benign when the bootstrapping horizon $n$ is small [8].\n\nHowever, this should not undermine the importance of multi-step operators whose fixed point is unbiased. The improvements of Retrace over uncorrected n-step become much more significant when the lack of off-policy corrections has a greater impact on the update target. Such situations include when the multi-step algorithm uses a very long bootstrapping horizon (e.g., using a large $n$) or when the behavior policy does not closely track the target policy (e.g., offline RL [5] or the tandem RL setting [6]).\n\nIn **Appendix I**, we carry out ablation studies on the above cases. The overall performance of uncorrected n-step clearly degrades as the bootstrapping horizon n increases, whereas distributional Retrace is robust to the choice of n. Distributional Retrace also obtains a significant advantage over uncorrected n-step when the behavior policy is uniform, in which case proper off-policy corrections clearly help improve performance.\n\n### **Summary of main changes in the uploaded revisions** \n\nWe made a few major changes to the appendix:\n\n**Appendix H** includes a more in-depth discussion on the critical differences between path-dependent and path-independent multi-step distributional TD errors and their operators. We provide a formal proof of the fact that multi-step operators defined by path-independent multi-step distributional TD errors are **non-contractive**. This is meant to supplement the numerical experiment in the main paper.\n\n**Appendix I** includes an additional comparison between distributional Retrace and uncorrected $n$-step. 
In cases where the lack of off-policy corrections is detrimental to performance, Retrace outperforms uncorrected $n$-step more significantly. This also implies that in general Retrace is much more robust to hyper-parameters such as bootstrap horizon $n$ and replay ratio [7].\n\n**Appendix J** includes per-game scores and additional game score reports for the experiments in Fig 4.\n\n### **References** \n\n[1] Hessel et al., Rainbow: combining improvements for deep reinforcement learning, AAAI 2018.\n\n[2] Kapturowski et al., Recurrent experience replay in distributed reinforcement learning, ICLR 2018.\n\n[3] Rowland et al., Adaptive trade-off in off-policy learning, AISTATS 2020.\n\n[4] Schwarzer et al., Data efficient reinforcement learning with self-predictive representations, NeurIPS 2020.\n\n[5] Levine et al., Tutorial on offline reinforcement learning, 2019.\n\n[6] Ostrovski et al., The difficulty of passive learning in deep reinforcement learning, NeurIPS 2021.\n\n[7] Fedus et al., Revisiting Fundamentals of Experience Replay, ICML 2020.\n\n[8] Kozuno et al., Revisiting Peng's Q($\lambda$) for modern reinforcement learning, ICML 2021.", " \n### **Question 4 on \"motivation of unbiased QR loss\"** \n\nTo begin with, we want to note that extending an unbiased QR gradient estimate from the one-step case to the multi-step case is non-trivial. In showing Lemma 5.2, we have exploited a certain property of the QR loss function, namely its affineness in the target distribution. Such a result would not be generally available for other distributional loss functions.\n\nThe uncorrected operator [26] does not apply off-policy corrections and hence differs significantly from the distributional Retrace operator we introduced in the paper. In the tabular setting (Fig 3 (d)) we can verify that using an uncorrected operator leads to a biased fixed point, which is undesirable. In Atari games, the bias has been empirically shown to be more benign, potentially because the behavior policy is sampled from a replay buffer which keeps getting updated with more on-policy data.\n\nThough it is true that the uncorrected operator can obtain most of the improvements over the one-step baseline, in the median performance of C51 and mean performance of QR-DQN (Fig 4), distributional Retrace still obtains some statistically significant improvements over the uncorrected operator. This hints at a favorable effect of carrying out off-policy corrections with such algorithms. Note that such improvements are not as visible in the value-based setting from prior literature.\n\n### **Question 5 on \"results across all atari games\"** \n\nSee **Appendix J** for per-game scores for the experiments in Fig 4.\n\n### **Limitations of the paper** \n\nThanks for mentioning this point. We have included some discussion in the conclusion section, in the form of potential future work and extensions of the current paper. We will make sure to discuss limitations of the work more explicitly in the main paper. \n\nTo be concrete, one open question is whether our method can be extended to more naive multi-step algorithms such as Q($\lambda$). For example Q($\lambda$) [1] is a commonly-used off-policy learning algorithm but it does not satisfy the trace condition $c_t\leq\rho_t$ and therefore the theoretical statements in our work do not generally apply. It would be of interest to study the property of an equivalent \"distributional Q($\lambda$)\" algorithm.\n\n[1] Harutyunyan et al., Q($\lambda$) with off-policy corrections. 
ALT 2016.\n\n", " Thank you very much for your valuable inputs to our paper. We address your detailed comments below.\n\n### **Question 1 on \"path-dependent distributional TD error\"** \n\nWe include an in-depth discussion on the differences between path-dependent multi-step distributional TD errors and the path-independent alternatives in Appendix H. We provide an explanation on why the path-dependency cannot be removed and is fundamental to defining contractive operators. We also provide detailed motivations for path-independent TD errors, which are in fact more direct analogies to the value-based TD errors. We briefly explain why the path-dependency cannot be removed below.\n\nThe cumulative sum of rewards $G_{0:t-1}$ cannot be canceled out in the definition of the multi-step distributional TD error. To see this more clearly, we can rewrite the multi-step distributional TD error as \n\n$$\tilde{\Delta}_{0:t}^\pi = ( b_t ) \Delta_t^\pi$$\n\nwhere $b_t \coloneqq b_{G_{0:t-1},\gamma^t}$. In other words, the multi-step distributional TD error can be understood as applying a push-forward operation $b_t$ defined by $G_{0:t-1}$ to the one-step distributional TD error $\Delta_t^\pi$ at time $t$. Though the one-step distributional TD error depends on $R_t$ only, the push-forward in general depends on $G_{0:t-1}$ and does not cancel out $G_{0:t-1}$ as a common term. \n\n### **Question 2 on \"rigorous proof of non-convergent behavior\"**\n\nThanks for pointing this out. In Appendix H, we now provide a rigorous proof to show alternative path-independent operators (defined in line 192-193, line 203-206) are non-contractive in the space of signed measures with total mass 1, under the Cramer distance $L_p$ with $p=1$. Here, we give a brief sketch.\n\nThe high-level idea is to construct a numerical counterexample on the same toy problem in the main paper: we can find two distributions $\eta_1$ and $\eta_2$ such that\n\n$$L_p( R \eta_1, R\eta_2) > L_p(\eta_1,\eta_2).$$ \n\nNote that the proof technique of finding a numerical counterexample is a common practice in RL theory, especially when proving that certain algorithms **do not** work. See, e.g., [1], which shows the Boltzmann operator is numerically unstable on a toy problem, and [2], which shows that off-policy linear TD without corrections is unstable on a toy problem.\n\n[1] Asadi et al., An alternative softmax operator for reinforcement learning, ICML 2017.\n[2] Sutton et al., An emphatic approach to the problem of off-policy temporal difference learning, JMLR 2016.\n\n### **Question 3 on \"advantage of path-dependent distributional TD error\"** \n\nThanks for your question on this. See **Appendix H** for a more comprehensive discussion.\n\nOne major significance of the path-dependent distributional TD error is that path-dependency is indispensable in defining convergent multi-step off-policy operators. By drawing naive analogies to the value-based setting, one can plausibly come up with path-independent operators which are non-convergent (line 185 - line 206). 
As one of the examples we introduced in the paper (**Appendix C**), seeing that the value-based operator takes the form\n\n$$RQ(x,a) = Q(x,a) + \mathbb{E}[\sum_{t=0}^\infty c_1...c_t \gamma^t \delta_t^\pi] $$\n\nwhich can be seen as a weighted sum of one-step value-based TD errors $\delta_t^\pi$, we can similarly weigh the one-step distributional TD errors $\Delta_t^\pi$ and construct a multi-step operator of the following form\n\n$$\bar{\mathcal{R}}\eta(x,a) = \eta(x,a) + \mathbb{E}[\sum_{t=0}^\infty c_1...c_t \gamma^t \Delta_t^\pi] $$\n\nAs we showed numerically, such an operator is non-contractive (we now also have a formal theorem in **Appendix H** which states that the operator is non-contractive). To build a theoretically grounded operator, we need to carefully probe the connection between n-step predictions and TD errors in the distributional RL setting and exploit the fact that properly defined distributional TD errors need to be path-dependent. \n\nWe will make sure to include more motivations and explanations in the revision.\n", " Thank you very much for your valuable inputs to our paper. We address your detailed comments below.\n\n### **Question 1 on \"in which way the push-forward operations are nonlinear\"** \n\nThanks for your question, we will be sure to clarify more in the revision. \n\nTo be clear, the push-forward operation is affine in the input distributions, in the sense that (we omit the hashtag # because it does not format well in the text box)\n$$\left(b_{r,\gamma}\right) (\alpha \nu_1 + (1-\alpha) \nu_2) = \alpha \left(b_{r,\gamma}\right) \nu_1 + (1-\alpha) \left(b_{r,\gamma} \right) \nu_2.$$\nHowever, the push-forward operation is neither affine nor linear in the argument $r$ above. This is what we call the \"nonlinearity\" of the push-forward operations. We provide a bit more detail below.\n\nMore concretely, the nonlinearity of the push-forward operations is mainly in contrast with the value-based counterparts. In Line 173-174, note that the value-based n-step predictions take the form $$G_{0:t-1} + \gamma^t Q_t.$$\n\nThe prediction takes an \"additive\" form of $G_{0:t-1}$ and the bootstrapped target $Q_t$. The linearity of the addition operation ensures that we can cancel out $G_{0:t-1}$ as a common term from both of the n-step predictions. The distributional n-step predictions are computed as follows\n\n$$\left(b_{G_{0:t-1},\gamma^t}\right) \eta_t.$$\n\nThe n-step prediction does not take, e.g., an additive form as in the value-based setting (i.e., the above expression is not an additive function of $G_{0:t-1}$). This is the main source of nonlinearity we refer to in the paper. As a result, we do not expect in general $G_{0:t-1}$ to cancel out as a common term from the difference between two n-step predictions.\n\n### **Question 2 on \"particular use cases or real-world consequences\"** \n\nOur proposal is a fairly generic multi-step distributional RL algorithm which can be applied to any application where value-based learning or RL in general can be applied. The main difference might be that our approach could lead to potentially better asymptotic performance or entail faster learning. Recent real-world applications include [1], where they apply QR-DQN to train the agent.\n\n[1] Bellemare et al., Autonomous navigation of stratospheric balloons using RL, Nature 2020.\n\n### **Limitations of the paper** \n\nThanks for mentioning this point. 
We have made some discussions in the conclusion section, in the form of potential future work and extensions of the current paper. We will make sure to discuss limitations of the work more explicitly in the main paper. \n\nTo be concrete, one open question is whether our method can be extended to more naive multi-step algorithms such as Q($\lambda$). For example, Q($\lambda$) [1] is a commonly used off-policy learning algorithm but it does not satisfy the trace condition $c_t\leq\rho_t$ and therefore the theoretical statements in our work do not generally apply. It would be of interest to study the property of an equivalent \"distributional Q($\lambda$)\" algorithm.\n\n[1] Harutyunyan et al., Q($\lambda$) with off-policy corrections. ALT 2016.\n", " Thank you very much for your valuable inputs to our paper. We address your detailed comments below.\n\n### **Question 1 on \"difference between multi-step value-based and DRL algorithms\"**\n\nWhile both multi-step learning and distributional RL improve value-based RL algorithms, their mechanisms of improvement are quite orthogonal to each other. In this work, since our focus is to apply principled multi-step learning techniques to distributional RL, we find it a bit distracting to add a comparison between multi-step value-based and DRL algorithms.\n\nNevertheless, we can offer some comparison using results from prior literature. From the Rainbow paper [1], in Fig. 1, a comparable multi-step value-based algorithm achieves median performance at around 120%. In our work, multi-step distributional RL algorithms can generally achieve median performance at around 140%.\n\n[1] Hessel et al., Rainbow: combining improvements in deep RL. AAAI 2018.\n\n### **Question 2 on \"improvement of Retrace over uncorrected seems to be not significant\"** \n\nIn the median performance of C51 and mean performance of QR-DQN (Fig 4), distributional Retrace still obtains some statistically significant improvements over the uncorrected operator. This hints at a favorable effect of carrying out off-policy corrections with such algorithms. Note that such improvements are not as visible in the value-based setting from prior literature.\n\nIn addition, in **Appendix I**, we carry out ablation studies that compare distributional Retrace and uncorrected n-step. In situations where off-policy corrections are essential to performance (such as when the bootstrapping horizon n is large, or when the data is very off-policy), distributional Retrace is clearly more robust and achieves significant improvements over uncorrected n-step methods.\n\n### **Question 3 on \"applying the idea to IQN\"** \n\nWe focus on C51 and QR-DQN, mainly because these two algorithms directly stem from theoretically-grounded tabular learning algorithms such as categorical TD learning and quantile TD learning. Since IQN and FQF are empirical extensions of C51 and QR-DQN, we leave such further empirical evaluations to future work.\n\n### **Question 4 on \"crossing quantile issues\"** \n\nThe multi-step method we introduced does not address the crossing issues out-of-the-box. However, non-crossing quantile regression could certainly be combined with our approach.\n", " This paper provides theoretical guarantees on multi-step off-policy distributional RL algorithms and derives a DRL algorithm, QR-DQN-Retrace, based on QR-DQN, which shows empirical improvements over QR-DQN on Atari. There are two main contributions of this paper:\n1. 
This paper systematically discusses the similarities and differences between multi-step value-based RL and multi-step distributional RL. They also provide theoretical guarantees on multi-step off-policy distributional RL algorithms.\n2. This paper derives a novel algorithm, Quantile Regression-Retrace, which leads to a deep RL agent, QR-DQN-Retrace, that shows empirical improvements over QR-DQN on the Atari-57 benchmark. \n\nThere are some weaknesses of this paper.\n1. Although there are some theoretical contributions, the methodological novelty is limited from the perspective of algorithm architecture compared to the baseline QR-DQN.\n2. The empirical part is not that solid, and more comparisons between multi-step value-based algorithms and multi-step DRL algorithms would be helpful. 1. Is it possible to demonstrate the difference between multi-step value-based algorithms and multi-step DRL algorithms in an empirical way?\n2. As Figure 4 shows, the improvement of \"Retrace\" over \"Uncorrected\" does not seem significant.\n3. How about applying the Retrace idea to other DRL algorithms such as IQN and FQF?\n4. Can this method ensure the validity of the learned distributions, e.g., avoid the crossing issues?\nZhou, Fan, Jianing Wang, and Xingdong Feng. \"Non-crossing quantile regression for distributional reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 15909-15919. Although there are some theoretical contributions, the algorithm design is not novel enough, and the empirical part needs to be improved to support the idea of the paper. ", " The authors study a multi-step off-policy learning approach to the distributional RL setting. The essential contributions of the paper may be seen as the theoretical and mathematical frameworks necessary to extend one-step distributional RL to an importance-sampled multi-step algorithm. This extension enables off-policy corrections for distributional RL in the same way that the Retrace algorithm enables off-policy corrections for value-based learning. Through a careful analysis, the authors elucidate the fact that such an extension of a Retrace-style off-policy update to the distributional RL case is non-trivial. This fact is primarily owed to the nonlinearity and repeated application of the pushforward operator used to produce the Bellman recurrence relation on the return distribution. The recurrence relation thus establishes a path-dependent relationship between the return distribution and the distributional TD error. This is in direct contrast to the path-independent relationship present in value-based RL methods.\n\nThe authors go on to substantiate the proposed methods theoretically and empirically. They evaluate their methods on Atari control problems and compare them against one-step distributional algorithms and uncorrected multi-step distributional algorithms. For each experiment, the authors run their algorithms using two different parametric distribution representations: categorical and quantile regressions. The proposed methods generally outperform the compared methods in either the mean or median score.
Strengths\n\n-The work is, to the best of my knowledge, novel and significant, as it introduces a non-trivial extension of one-step distributional RL to the multi-step off-policy setting.\n\n-The work provides a theoretical basis for the continued study of multi-step paradigms in off-policy reinforcement learning.\n\n-The authors make clear that the extension provided in the paper deviates from the value-based multi-step paradigm. \n\n-The paper provides adequate clarity of the introduced concepts and is precise with its mathematical notation.\n\n\n\nWeaknesses\n\n-The experimental results do not convincingly show that employing the multi-step distributional RL algorithm provides a significant empirical advantage.\n\n-Minor grammatical mistake in line 357.\n -Could you please detail exactly in which way the pushforward operations are non-linear? I believe it to be non-obvious, considering that they are constructed using an affine measurable map, and both the composition and inverse of an affine map will be affine.\n\n-Are there particular use cases or real-world consequences of the proposed approach that are worth mentioning? For example, are there specific applications that are now enabled by the work?\n -The authors state in the checklist that they discuss the limitations of their work, although I could not find an explicit statement in the paper. This does not diminish the contributions of the paper.", " This paper investigates multi-step off-policy distributional RL algorithms. Firstly, the multi-step distributional TD error is defined and the contraction property is also provided. Building on this result, the authors point out the fundamental difference between multi-step distributional RL and classic RL by proposing a new notion called the path-dependent distributional TD error. Finally, a new unbiased version of the multi-step distributional RL algorithm is proposed, including quantile-regression-Retrace and QR-DQN-Retrace, which is slightly better than QR-DQN and the naïve biased version of the multi-step distributional RL algorithm. ### Originality\n\nTheoretically, the paper provides the first theoretical guarantees on multi-step off-policy distributional RL algorithms (see Propositions 3.2 and 3.3). To the best of my knowledge, the notion of path-dependent distributional TD error is novel. However, combining multi-step distributional RL with quantile regression is straightforward, although a relatively different unbiased version in Line 280 is derived.\n\n\n### Quality \n\nThe writing quality is good. One minor typo in Line 239: “Speficially” should be “specifically”.\n\nThe technical contribution can be further improved. Generally, investigating multi-step distributional RL may not be particularly intriguing, as some theoretical results in one-step distributional RL can be straightforward to extend to the multi-step version. As such, the theoretical result may not be surprising to me, and the design of the QR-based Retrace algorithm may not be novel enough.\n\n\n### Clarity \n\nThe paper is well organized and easy to follow. The key confusing point is the comparison between the path-dependent distributional TD error and the value-based TD error, although I understand the authors did make some efforts to clarify it. Please refer to the Questions section for my detailed concerns.\n\n### Significance \n\nOne potential limitation of this paper is that the proposed algorithm may not achieve significant improvement compared with the baseline “Uncorrected”, as suggested in Figure 4.
It is easy to expect that multi-step distributional RL can be significantly better than one-step, and thus it is not surprising that “Retrace” and “Uncorrected” are superior to “one-step”. However, as far as I can tell, the contribution in the algorithm part is the proposal of an unbiased version of the QR loss (Line 280 and Lemma 5.2), but I cannot recognize the improvement of the unbiased version over its biased version (“Uncorrected”).\n 1. Path-dependent distributional TD error. From Lines 174 to 180, I feel that $G_{0:t-1}$ can still be canceled out in the definition of the distributional TD error so that only $R_t$ is left. I may miss some points, but could the authors provide more explanation about that? A more detailed derivation could help readers better understand the discrepancy.\n\n2. To demonstrate the key role of the path-dependent distributional TD error, the authors only show numerically that the path-independent operator can be non-convergent. My suggestion is that a more rigorous proof of the non-convergent behavior would be more convincing.\n\n3. The advantage of the path-dependent distributional TD error needs to be further explored. As the multi-step distributional TD may heavily depend on past rewards, can it be beneficial for the optimization of RL algorithms? Or can it lead to an instability issue? A deeper explanation is welcome in the future version.\n\n4. What is the motivation for the proposed unbiased QR loss in Line 280, and what is the advantage over the naïve biased version in [26]? It seems that the empirical improvement is not significant.\n\n5. The experimental results across all Atari games could also be provided rather than only the mean and median of average rewards. For instance, the authors could present a histogram to illustrate the improvement ratio of the proposed algorithm across all Atari games.\n In the checklist, the authors said they described the limitations of their work. Maybe I missed them, but I have not found any discussion about them in the main text. This paper indeed makes some contribution to multi-step off-policy distributional RL as claimed in the introduction, but perhaps the main limitation lies in that some results (both theoretical and empirical) are not inspiring or surprising enough, and I have not learned much after carefully reading this paper. For example, Proposition 3.2 may be a straightforward result based on previous results in the one-step case. Moreover, the underlying impact of the path-independent distributional TD error could be more deeply investigated. The empirical improvement of the proposed unbiased Retrace algorithm seems to be only slightly better than, or even worse than, the uncorrected version, as illustrated in Figure 6. \n\n\nIn summary, I faithfully thank the painstaking efforts the authors have made in order to make a solid contribution in the multi-step off-policy distributional RL setting, but the limitations may not be addressed easily. My suggestion is that the path-dependence notion can be further probed and the algorithmic novelty can be improved to achieve better performance. I would be happy to see a better version of this paper in the future." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "NwxvIV9wXrK", "Ru3MpVeFVF", "KDTEFhHoGZF", "WA9rPtbXc_A", "WVLJ6jtFaKW", "nips_2022_Mn4IkuWamy", "L1Lm3Lv9P3c", "VOQQNC02muM", "8iMurE2pyT_", "NXA8RZbdfnZ", "nips_2022_Mn4IkuWamy", "nips_2022_Mn4IkuWamy", "nips_2022_Mn4IkuWamy" ]
nips_2022_VwRFJi9crEH
Personalized Subgraph Federated Learning
In real-world scenarios, subgraphs of a larger global graph may be distributed across multiple devices or institutions, and only locally accessible due to privacy restrictions, although there may be links between them. Recently proposed subgraph Federated Learning (FL) methods deal with those missing links across private local subgraphs while distributively training Graph Neural Networks (GNNs) on them. However, they have overlooked the inevitable heterogeneity among subgraphs, caused by subgraphs comprising different parts of a global graph. For example, a subgraph may belong to one of the communities within the larger global graph. A naive subgraph FL in such a case will collapse incompatible knowledge from local GNN models trained on heterogeneous graph distributions. To overcome such a limitation, we introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNN models rather than learning a single global GNN model, and propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it. A crucial challenge in personalized subgraph FL is that the server does not know which subgraph each client has. FED-PUB thus utilizes functional embeddings of the local GNNs using random graphs as inputs to compute similarities between them, and uses them to perform weighted averaging for server-side aggregation. Further, it learns a personalized sparse mask at each client to select and update only the subgraph-relevant subset of the aggregated parameters. We validate FED-PUB for its subgraph FL performance on six datasets, considering both non-overlapping and overlapping subgraphs, on which ours largely outperforms relevant baselines.
Reject
The author(s) present a new subgraph federated learning approach to learn a single GNN model that computes embeddings based on the relationship between local graphs. This approach goes beyond the previous approaches that consider the local subgraphs separately. The paper is interesting and presents some novel ideas, but it should address two fundamental critiques by the reviewers before acceptance. In particular, the authors should: - clarify the novelty of their approach (especially considering the missing discussion of previous work); - improve the experimental section. The current results are interesting but not fully convincing, and the experimental section could be improved significantly by including additional settings suggested by the reviewers. Overall, the paper is interesting, but it is not ready for publication at this point.
train
[ "sNZimWDO6Ow", "1NmnBT7ogae", "V1XMplRDyb", "jY9lPdlQAXi", "r7eAGfLwrbX", "yjPxe342La", "GxIxs3n97uw", "T3POAOwTao", "ozIULvzpAm4", "UI9L93T2Iu", "8VNyOq7wiI", "rsFvzra8iQ", "qsDyC7OnlQt", "KvJ-WBKbQIs", "EWy8Dg55lyv", "VsRS6Oi-6jr", "NN85sabUXrQ", "L-07kJX7cjq", "F3OXYEl1lv", "hrsyitfFeVs", "CedWT0qAIju", "OD67D-UdzmJ", "WVkWCuv_8ca", "Zjti7Vui6j_", "JmrCYpN6QI_", "Rl1DIiqIs-bK", "9PTl-swNbEL", "U1G8sqKOKTPj", "5jKdm5NEJG", "auZDhY46SWU", "SCVMP7IrQm6", "i3TQduj1KP" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your helpful comments, as well as your time and effort in reviewing our paper. We are happy to hear that your concerns are mostly resolved. \n\n**NOTE:** I know you must be very busy, but could you please reflect your updated score in your original comment? You promised to increase your score, but it seems the updated score is not reflected in the system yet.\n\n---\n\nRegarding your last comment that our paper can be better if more novel and well-motivated techniques can be applied here, throughout constructive discussions with reviewers, we are confident that our work is sufficiently novel and well-motivated. In particular, we motivate our novel personalized subgraph FL problem with two important challenges: missing edge and network community problems, and we tackle our novel task with personalized weight aggregation, which is particularly helpful for overlapping scenarios, and with personalized weight mask, which is particularly helpful for non-overlapping scenarios. Thus, the task that we formalize is important for subgraph FL, and the ingredients that we propose are also necessary for our formalized task. In this vein, we would like to politely argue that our current contributions are well-motivated and valuable.\n\n---\n\nOnce again, thank you for reading our responses and replying to us.", " I want to thank the authors for the detailed response. My concerns have almost been resolved.\n\nThe idea is interesting. I will raise my score to borderline accept. I am also aware that the paper can be better if more novel and well-motivated techniques can be applied here.", " Dear Reviewer we3U,\n\nWe believe that we have satisfactorily addressed all your two concerns: we clarify incompatible knowledge collapse issues and provide additional experimental results on the random graph partitioning you requested, in the initial response comments below. While the interactive discussion period between authors and reviewers has passed, we would like to gently remind you about our responses as well as the revision of our paper. \n\nWe believe that incorporating your constructive comments and our discussion on them into our paper has significantly strengthened our work, and we would like to politely request you to go over our responses and then reflects our clarifications and revisions into your rating if they are satisfactory. We thank you again for your time and efforts in reviewing our paper.\n\nBest regards, Authors", " We sincerely appreciate your thoughtful reviews and comments, and also your reply to our initial responses. We are happy to hear that your comments/curiosities are satisfied with our responses. \n\nAs suggested, we have faithfully included all the constructive discussions that we had with reviewers in our main paper and the supplementary file. Moreover, following your concrete suggestion, we highlight the difference in graph partitionings between ours (i.e., METIS) and FedSage [43] (i.e., Louvain) in Section 5.1 of the main paper and Section C.2 of the supplementary file. Note that, due to the page limits, we had to move some of the details in the supplementary file. However, if one page is further allowed after acceptance of this paper, we are sure to describe such differences in more concrete ways in the main paper. \n\nWe hope our revision further satisfies you. Also, we hope you take this into consideration for your rating, as you promise. 
Once again, thank you so much for your constructive comments, and thank you for engaging with us.", " We sincerely appreciate all your constructive and helpful comments, and also your reply to our initial responses. We are happy to hear that, through our initial responses with additional experiments, some of your questions have been addressed.\n\n---\n\nHowever, you are still concerned about the minor differences between our proposed FED-PUB and existing works, and here we would like to faithfully address this concern once again. We believe the key difference in our work stems from our motivations. Specifically, we handle two critical challenges in subgraph FL: the missing edge and network community problems, by formalizing a new personalized subgraph FL problem, which is clearly different from existing works. Our architecture is also oriented to tackle this personalized subgraph FL problem, which is divided into two sub-problems: overlapping and non-overlapping subgraphs. In particular, to promote knowledge sharing between the overlapping subgraphs, we propose to share weights largely between the interconnected subgraphs (Figure 6 and Figure 7). On the other hand, to deal with the heterogeneity issues when subgraphs are completely disjoint, we propose to train personalized weight masks that are not shared across different clients (Figure 7 and Figure 8). Thus, in terms of the main novelty, our work should be evaluated more on its contributions to discovering challenges and solving them in subgraph FL domains, and we would like to politely claim that, considering all these perspectives, our contributions are valuable and well-motivated.\n\n---\n\nAlso, the one remaining concern you have is that the proposed weight masking mechanism lacks strong motivation; however, we believe the weight masking scheme is a necessary design choice for the subgraph FL problem. In particular, as described in the previous responses (responses to Question 2.1 and Question 2.2), the heterogeneity issue of subgraphs is severe when subgraphs are completely disjoint. This issue is not easily solvable via personalized weight aggregation, since there are no good subgraphs that provide helpful weights to the highly heterogeneous subgraph. To tackle this issue, personalized weight masking should be used, since this scheme can filter out irrelevant information transmitted from the other heterogeneous subgraphs, while allowing the model to maintain the locally helpful information in its parameters. Note that the severity of heterogeneity for non-overlapping subgraphs was additionally provided in Question 2.2, and we already provided the advantage of using weight masking schemes in Figure 7 (Ablation studies).", " We sincerely appreciate all your comments and questions, as well as your reply to our initial responses. Also, thank you for acknowledging our detailed responses. We are happy to hear that your concerns about random graphs and similarity computations have been resolved.\n\n---\n\nHowever, we would like to clarify your remaining concern about the label similarity. Note that the goal of federated learning is to leave the local data on the client side while learning the model by aggregating locally computed updates, to preserve the privacy of the user's data. Then, in terms of privacy preservation, **it is more practical and safer not to share any type of local data, including features, labels, and their distributions**, compared to sharing some of them.
This privacy issue on sharing label distributions becomes more severe in heterogeneous settings with Non-IID label distributions (i.e., non-overlapping subgraphs). For example, let's assume all nodes in client A have the label 0, while all nodes in another client B have the label 1. Then, the server can know which clients have which node labels, which is not optimal in federated learning, especially when we are concerned about user privacy. Therefore, in our work, we consider more rigorous federated learning scenarios, where any type of local data related to the client's data instances is not sharable and should only be stored on the client side. \n\nAs you pointed out, one might use privacy-preserving methods, such as differential privacy techniques, and they might make federated learning systems safer even when we share local information (e.g., local data distributions). However, investigating how we can preserve the privacy of label distributions with differential privacy methods is beyond the scope of our work, and we leave it as future work.\n\n---\n\nLast but not least, regarding novel and well-motivated techniques, we are confident that we motivate our personalized subgraph FL problems with two important challenges: the missing edge and network community problems, and we tackle our novel task with personalized weight aggregation, which is particularly helpful for overlapping scenarios, and with a personalized weight mask, which is particularly helpful for non-overlapping scenarios. Thus, we would like to politely claim that our current contributions are well-motivated and valuable. ", " Overall, my curiosity is satisfied with the authors' response. However, the authors have not updated the paper yet. I am happy to increase my score in case the paper gets revised. Along with the other changes that you plan for the revision, I would suggest including the reasoning for the METIS algorithm also in the main text of the manuscript. I consider this design choice very important and highlighting differences to [43] valuable for the research community.", " I want to thank the authors for the detailed response. My concerns on random graphs and similarity computation have been resolved. \n\nHowever, as shown in the answer to **Question 3.2 (a performance comparison with other similarity methods: label similarity, gradient similarity, model similarity, will also be very helpful)**, the label similarity achieves the same or better performance than the proposed method, since the proposed method tries to approximate the label similarity. The label distribution might not be privacy-sensitive (it is not directly related to user information, and traditional FL algorithms use it for similarity calculation), and any remaining concerns could be resolved by privacy-preserving methods.\n\nThe idea of using random graphs for functional embedding is interesting. I will keep my score, and I also think the paper can be better if more novel and well-motivated techniques can be applied here.\n\n", " I've read all the responses. Some of my questions are addressed by these explanations and additional experiments. \n\nHowever, even though the authors have pointed out the minor differences between the proposed work and existing methods, I still hold my view that the novelty of the proposed method is limited. Although there are some differences, the core mechanism of the proposed method is still similar to some classic FL methods.
Another concern is about the weight masking mechanism, which I still think lacks a strong motivation and seems inconsistent with the learning task. \n\nConsidering that the authors' responses have addressed some of my concerns, I would like to change my rating from 4 to 5. I think the paper can be better if more novel and well-motivated techniques can be applied here.", " Dear Reviewer gnhR,\n\nWe sincerely appreciate your positive comments: our research problem is novel and challenging, which can inspire future studies, and the experiments conducted are extensive, which proves the effectiveness of our methods.\n\nHowever, you questioned the novelty of our work by providing relevant works, and also suggested that we clarify the advantage of weight masks, the heterogeneity issue on real-world datasets, and the claim on missing edge problems. To this end, during the rebuttal period, we have made every effort to faithfully address all of them, which are given in detail in the responses below. In short, we have concretely discussed how ours differs from the suggested works, i.e., we formalize a new problem in subgraph FL based on the two challenges of missing edges and network communities and then propose to identify the interconnected subgraphs with functional embeddings, which constitute our main novelty. Furthermore, we have provided sufficient evidence with experimental results that weight masking is significantly helpful to our subgraph FL problem, heterogeneity issues are severe in real-world datasets, and missing edge problems are indeed implicitly handled by our FED-PUB.\n\nIn the end, we are confident that we have fully addressed all of your comments/concerns in the initial responses. Therefore, since the discussion phase will close soon, could you please go over our complete responses? Please let us know if you have anything else that we should address.\n\nBest regards, Authors", " Dear Reviewer cN7m,\n\nWe sincerely appreciate your positive comments that the personalized weight aggregation based on functional embeddings is interesting, the sparse weight masking looks helpful, and we provide our source code. \n\nOn the other hand, you pointed out two weaknesses: the advantage of using random graphs for functional embeddings is unclear, and we only provide the source code for our FED-PUB and baseline FedAvg models. In the responses below, we have clearly validated with additional experiments that, instead of using node features stored in clients, randomly initializing graphs as the inputs for functional embeddings is more helpful, since they might not introduce any bias into the model’s functional space. Also, regarding random graphs, there is an additional advantage that we can preserve the privacy of FL participants’ local data. Lastly, we would like to politely say again that providing the codes for FedAvg and FED-PUB is a strength of our work, not a weakness. \n\nIn the end, we are confident that we have fully addressed all of your comments/concerns in the initial responses. Therefore, since the discussion phase will close soon, could you please go over our complete responses?
Please let us know if you have anything else that we should address.\n\nBest regards, Authors\n", " Dear Reviewer we3U,\n\nWe sincerely appreciate your positive comments that the personalized weight aggregation with functional embeddings for identifying similar subgraphs is novel, the proposed method can handle the missing edge and community detection challenges, and our FED-PUB achieves superior performance over relevant baselines in more communication-efficient and privacy-preserving ways.\n\nHowever, you pointed out two weaknesses: the incompatible knowledge issue between subgraphs from different communities is unclear, and the random partitioning setting should be tested to further validate our method. To this end, in the responses below, we have clearly shown with additional experimental results that the incompatible knowledge issue is severe in subgraph FL tasks. Also, we have provided the results on the suggested random graph partitioning setting, where our FED-PUB significantly outperforms the other baselines by a large margin.\n\nIn the end, we are confident that we have fully addressed all of your comments/concerns in the initial responses. Therefore, since the discussion phase will close soon, could you please go over our complete responses? Please let us know if you have anything else that we should address.\n\nBest regards, Authors", " Dear Reviewer rAFu,\n\nWe sincerely appreciate your positive comments that our work is well motivated by graph-theoretical concepts of communities, the proposed methods are appropriate to handle the suggested subgraph FL problems in privacy-preserving ways, the benchmark experiments on overlapping and non-overlapping subgraphs are valuable, and the experimental evaluations support our claims well, with high clarity in writing. \n\nIn the responses below, we have made every effort to address all your concerns/comments. To mention a few, we have provided the additional experimental results under the suggested experimental settings of [43] and the additional dataset statistics you requested. Also, we have clarified our statement on network homophily and network communities with their theoretical connections, and experimentally validated the necessity of our sparse weight masks.\n\nAs the discussion phase will close soon, could you please go over our complete responses? While we strongly believe that we have sufficiently addressed all your comments in the initial responses, please let us know if you have anything else that we should address.\n\nBest regards, Authors\n", " We sincerely thank you for your constructive and helpful comments. We appreciate your positive comments that our research problem is interesting and novel yet challenging, and that the experiments are extensive, which proves the effectiveness of our method. We initially address all your concerns below:\n\n---\n\n**Question 1:** Despite some interesting designs in the proposed model, the core idea is not novel enough and similar to some existing works, which either use shared data to minimize the distance between neural networks of different sizes [*1] or incorporate auxiliary data to assist the federated training procedure with knowledge distillation [*2, *3, *4].\n\n**Answer:** Thank you for pointing out relevant works.
However, our main novelty comes from proposing the **novel personalized subgraph FL task** with an undiscovered challenge of **community structures**, along with the **missing edge problem**, for **subgraph FL domains**, and we tackle those unique challenges in subgraph FL with our personalized weight aggregation and local weight masking schemes. From this perspective, we strongly believe that our work is sufficiently novel, although some of its components may be reminiscent of the ones for other FL settings. Moreover, even from the methodological perspective, our work significantly differs from the suggested works, which we describe one by one as follows:\n\n1. [*1] This recent work aims to aggregate neural networks of different shapes, for which the authors propose to minimize the distances between the model representations obtained by forwarding the same input, since aggregating neural network weights of different sizes is not straightforward in FL. However, compared to this work, we aim to tackle our novel personalized subgraph FL problem where there particularly exist community structures, for which we propose to perform personalized weight aggregation between clients based on their models’ functional similarities obtained from the same randomly generated graph. Therefore, our work not only targets a completely different problem -- aggregating different-sized neural models vs. capturing community structures for personalized subgraph FL -- but also uses the obtained representations differently: minimizing the differences between the obtained representations as a training objective vs. identifying similar subgraphs to largely share weights among them. Furthermore, please note that we already acknowledge existing works on functional embeddings in Lines 203-210. However, our work differs in how we utilize functional embeddings for tackling the unique challenges in subgraph FL tasks.\n\n2. [*2, *3, *4] Those works utilize knowledge distillation in FL, in which the local model outputs are used as auxiliary data to train the other server/client models. However, they are largely different from ours: we obtain the models’ functional embeddings to largely share weights between similar subgraphs, and our work does not aim to generate/predict auxiliary labels for additional training of the neural models in a knowledge distillation process.\n\nThus, to summarize our main novelty, we study the novel problem of personalized subgraph FL, where different subgraphs with similar properties are likely to belong to the same community while edges are missing between them, and we tackle this challenging problem with both the personalized weight aggregation scheme based on models’ functional embeddings and the local weight masking scheme with its sparsification. We believe that both are sufficiently novel and effective solutions for the novel problem of personalized subgraph FL. \n", " We sincerely thank you for your constructive and helpful comments. We appreciate your positive comments that the personalized weight aggregation based on functional embeddings is interesting, the personalized sparse masks look helpful, and we provide the code for our FED-PUB. We initially address all your concerns below:\n\n---\n\n**Question 1.1:** The advantages of using the outputs of random graphs are not convincing to me.
\n\n**Answer:** There are obvious advantages of using the outputs of random graphs (i.e., functional embeddings), summarized as follows: \n* As described in Lines 198-202 with Figure 3, compared to the parameter and gradient similarities, which are measured in high-dimensional spaces that are neither efficient nor meaningful due to the curse of dimensionality [5], our functional similarities work in low-dimensional spaces with both high efficacy and efficiency.\n* As shown in Figure 6, our functional embeddings obtained from the outputs of random graphs accurately capture the similarity (community) structures between subgraphs.\n\n---\n\n**Question 1.2:** The random graph based on SBM has too much randomness, especially with random node features drawn from a normal distribution.\n\n**Answer:** The random graph based on SBM is generated only once at initialization (i.e., it is not randomly regenerated at every iteration), and randomly initializing node features from a normal distribution is more beneficial than using pre-defined node features.\n\nSpecifically, as described in B.3 Implementation Details of the supplementary file, to calculate each client model’s functional embedding, we use the same random graph for all clients, by first initializing the random graph once on the server and then distributing it to all clients. Also, using the randomly initialized features is more helpful when capturing the functional space, since randomness does not introduce any bias into that space, while node features initialized from existing nodes and their label distributions might have a bias toward particular subspaces. To experimentally validate this statement, we compare various schemes used for calculating the functional embeddings: 1) SBM denotes the random graph generated from the SBM model like ours; 2) ER denotes the random graph generated from the Erdos-Renyi model; 3) One denotes the random graph having only one node; 4) Feature denotes the graph where nodes are initialized by the node features in the client. We then measure the performances of those four schemes by calculating the correlation coefficients between label distributions and generated similarities of subgraphs (i.e., a high value means that the similarities from the functional embeddings closely match those from the label distributions) on the Cora dataset of non-overlapping and overlapping node scenarios with 20 clients, which are reported in the table below. \n\n| Graph | Overlapping | Non-Overlapping |\n| --- | --- | --- |\n| SBM | 0.937 | 0.810 |\n| ER | 0.920 | 0.712 |\n| One | 0.822 | 0.656 |\n| Feature | 0.897 | 0.632 |\n\nAs shown in the table, compared to the One scheme that uses only one node for calculating the functional embeddings, the SBM and ER schemes that use larger numbers of randomly initialized nodes can accurately capture the similarities between subgraphs. This result confirms that a sufficient amount of randomness is required to identify the model’s functional space. Also, compared to the Feature scheme that uses existing node representations for calculating the functional embeddings, the SBM and ER random models show superiority in capturing similarities among subgraphs, which verifies that randomness might help obtain accurate functional embeddings of the models without bias.
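To make this concrete, the following minimal sketch shows how such functional embeddings could be computed and compared. It is an illustration only, not the actual FED-PUB implementation: the one-layer GCN-style forward pass, the graph sizes, and the feature dimension are assumptions for demonstration purposes.

```python
import networkx as nx
import torch
import torch.nn.functional as F

# The server generates ONE random SBM graph with random normal features and
# shares it with all clients, so every client model is probed on the same input.
torch.manual_seed(0)
g = nx.stochastic_block_model(sizes=[50, 50], p=[[0.1, 0.01], [0.01, 0.1]], seed=0)
A = torch.tensor(nx.to_numpy_array(g), dtype=torch.float32)
A_hat = A + torch.eye(A.size(0))                 # add self-loops
d_inv = torch.diag(A_hat.sum(1).pow(-0.5))
A_norm = d_inv @ A_hat @ d_inv                   # symmetric normalization
X = torch.randn(A.size(0), 16)                   # random normal node features

def functional_embedding(weight: torch.Tensor) -> torch.Tensor:
    """One-layer GCN-style forward pass on the shared random graph,
    averaged over nodes, giving a low-dimensional functional embedding."""
    h = torch.relu(A_norm @ X @ weight)          # [num_nodes, hidden]
    return h.mean(dim=0)                         # [hidden]

# Toy stand-ins for two clients' trained GNN weights.
w_i, w_j = torch.randn(16, 32), torch.randn(16, 32)
h_i, h_j = functional_embedding(w_i), functional_embedding(w_j)
sim = F.cosine_similarity(h_i, h_j, dim=0)       # cf. S(i, j) in Eq. (3)
print(f"functional similarity: {sim.item():.3f}")
```

Because every client is probed with the same server-generated graph, the resulting embeddings are directly comparable, and no client-side node features or labels ever leave the client.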
\n\n---\n\n**Question 2:** The code only provides FedAvg and FED-PUB.\n\n**Answer:** We would like to politely note that this point should not count as a weakness, since we not only provide our FED-PUB code in the supplementary material, but we also faithfully follow the available codes of existing models provided in their papers. To mention a few, for subgraph FL works, we use the following authors’ codes:\n* FedGNN: https://github.com/wuch15/FedPerGNN\n* FedSage: https://github.com/zkhku/fedsage\n", " We sincerely thank you for your constructive and helpful comments. We appreciate your positive comments that our functional embedding method to identify similar subgraphs is novel, our FED-PUB can tackle the missing edge problem, and ours outperforms existing subgraph benchmarks in a more communication-efficient and privacy-preserving way. We initially address all your concerns below:\n\n---\n\n**Question 1.1:** This work tackles the heterogeneity of different subgraphs by mitigating the incompatible knowledge collapse issue, which occurs when using only one global model to fit all the data. However, the explanation for this incompatibility is not adequate.\n\n**Answer:** The incompatibility issue is sufficiently explained in our motivation figures and main results, and here we explain again why the incompatibility issue arises in subgraph FL. First, as illustrated in Figure 1 (B) and Figure 1 right, an incompatible knowledge collapse clearly happens for the model that aggregates the knowledge from different communities at different clients into one. Moreover, the main results in Table 1 and Table 2 support this claim. Specifically, in those two tables, FedAvg often underperforms the local model without parameter aggregation, especially when we increase the number of clients, since FedAvg collapses incompatible knowledge from different subgraphs (communities) into one, while the local models preserve the local knowledge in their parameters. However, since our FED-PUB can further share compatible and meaningful knowledge among subgraphs in the same community with personalized weight aggregation, while filtering out knowledge irrelevant to the local task with local weight masking, ours significantly outperforms those baselines in such a challenging setting of incompatibility between different subgraphs.\n\n---\n\n**Question 1.2:** Maybe direct model training on the whole dataset is needed as the oracle case to show such incompatibility.\n\n**Answer:** Thank you for your suggestion. Please note that direct training on the whole dataset results in an unfair comparison due to missing edges: this oracle setting can observe missing edges between subgraphs during training, while all the other methods cannot observe them. \n\nHowever, as suggested, we have conducted an additional experiment in which we train a model on the connected global graph and then evaluate it on disjoint subgraphs over all clients, on the Cora dataset of both Non-overlapping and Overlapping node scenarios with varying client numbers. As shown in the table below, the Oracle model outperforms all the other methods, while our FED-PUB achieves the closest performance to the Oracle model.
\n\n| Model | NonOverlapping-5 | NonOverlapping-20 | Overlapping-10 | Overlapping-50 |\n| --- | --- | --- | --- | --- |\n| Oracle | 85.07 | 85.47 | 85.08 | 85.28 |\n| Local | 81.30 | 80.30 | 73.98 | 76.63 |\n| FedAvg | 74.45 | 69.50 | 76.48 | 53.99 |\n| FedGNN | 81.51 | 70.10 | 70.63 | 56.91 |\n| FedSage | 72.97 | 57.97 | 77.52 | 55.48 |\n| FED-PUB (Ours) | 83.70 | 81.75 | 79.60 | 77.84 |\n\nThe above results bring us to the following points. In particular, due to the problem of missing edges, all the FL methods, which observe edges only within each subgraph, perform worse than the Oracle method. Also, the missing edge problem negatively affects the incompatible knowledge issue: since all client models are trained on partial subgraphs, which are parts of the larger global graph, the trained parameters in the client and the aggregated parameters in the server might not capture globally meaningful knowledge or knowledge that is helpful to the other clients, which explains why the Oracle model performs the best. \n", " We sincerely thank you for your constructive and helpful comments. We appreciate your positive comments, which praise the strengths of our work in terms of significance, originality, quality, and clarity. We initially address all your concerns below:\n\n---\n\n**Question 1:** Please provide graph statistics to investigate the diversity of your evaluation settings across the six datasets.\n\n**Answer:** We already provided the graph statistics, such as the numbers of nodes, edges, and classes, in Table 1 of the supplementary file. However, we have further analyzed additional graph statistics, namely the clustering coefficients and the heterogeneities of subgraphs, and here we provide them for the Cora dataset of the non-overlapping node scenario in the table below. \n\n| Dataset | 5 clients | 10 clients | 20 clients |\n| --- | --- | --- | --- |\n| # Classes | 7 | 7 | 7 |\n| # Nodes | 497 | 249 | 124 |\n| # Edges | 1,866 | 891 | 422 |\n| Clustering coefficient | 0.250 | 0.259 | 0.263 |\n| Heterogeneity | 0.590 | 0.606 | 0.665 |\n\nNote that the clustering coefficients, which measure how much nodes tend to cluster together, are calculated as the average of the local clustering coefficients [A] over all nodes. On the other hand, we calculate the heterogeneities by measuring the median Jensen-Shannon divergence of label distributions between all pairs of subgraphs. We will add those statistics of the real-world datasets to Table 1 of the supplementary file in the revision. \n\n[A] Watts and Strogatz. Collective dynamics of 'small-world' networks. Nature 1998.\n\n---\n\n**Question 2.1:** Please elaborate on your statements about the theory of network homophily and community detection. \n\n**Answer:** While we explain those two concepts and their connection in Lines 161-164, here we further clarify our statements. First, from the definition of network homophily [31], nodes with similar properties are more likely to connect to each other than dissimilar ones. Then, based on this definition, similar nodes are densely connected, which then forms a community according to the definition of a community [33, 9, 32] (i.e., the number of edges within the same community is larger than the number of edges estimated by the random model).
In this regard, for the subgraph FL problem that we target, we draw two implications from the graph theories above: it is important to capture the community structures to promote joint improvement among subgraphs having similar properties, and to handle the incompatible knowledge issues between subgraphs in different communities. Therefore, to summarize, the theoretical concepts of network homophily and network community motivate us to identify the challenges in subgraph FL, and, from this, we define our novel personalized subgraph FL problem represented in Equation (2).\n\n---\n\n**Question 2.2:** The claims on community detection and network homophily should be supported with additional theory, while the authors provide good intuition why their introduced setting and evaluation are valid.\n\n**Answer:** Thank you for your comment; however, we strongly believe that, in our work, we not only clearly introduce our ideas based on graph theories as explained above (see the answer to Question 2.1), but also sufficiently validate our ideas with various experimental results. Also note that, for deep networks, it is challenging to analyze the functional behavior of every deep layer and then derive a new theoretical justification. For this reason, many recent works on deep FL (e.g., [2, 40]) do not provide theoretical analyses, while they are well received due to their task-level ideas and sufficient empirical verifications. Thus, to summarize, we believe that we have already sufficiently motivated our subgraph FL tasks based on the theoretical concepts of network homophily and community, and that further analyzing the theoretical behavior of our deep FL methods goes beyond the scope of our work; we thus leave it as future work.
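As a small illustration of the two graph-theoretic concepts invoked above, the sketch below measures edge homophily and modularity on a synthetic two-block graph. It is purely expository: the `networkx` generator, graph sizes, and edge probabilities are assumptions chosen for demonstration, not the statistics used in the paper.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Synthetic two-block graph: dense within blocks, sparse across them.
G = nx.stochastic_block_model(sizes=[30, 30], p=[[0.2, 0.02], [0.02, 0.2]], seed=0)

# Recover each node's block from the generator's stored partition.
block = {v: i for i, nodes in enumerate(G.graph["partition"]) for v in nodes}

# Network homophily [31]: fraction of edges that join same-block nodes.
homophily = sum(block[u] == block[v] for u, v in G.edges()) / G.number_of_edges()

# Community structure [33, 9, 32]: positive modularity means within-community
# edges exceed the expectation under a random null model.
communities = greedy_modularity_communities(G)
print(f"edge homophily: {homophily:.2f}, modularity: {modularity(G, communities):.2f}")
```

On such a graph, both quantities come out high, mirroring the argument that homophilous connections induce community structure.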
Thus, from the fact that heterogeneity of subgraphs within the same community is extremely larger in the non-overlapping setting (0.384) compared to the overlapping setting (0.047), personalized weight aggregation might not be enough in disjoint subgraph FL problems, and, as shown in Figure 7, weight masking scheme is clearly required to selectively utilize and train only on the relevant parameters to each local task.\n\nFurthermore, besides the above advantage of weight masks on disjoint subgraph scenarios, parameter sparsifications with sparse masks can further yield computation and communication efficiencies as described in Lines 245-249 with experimental results in Figure 8. Note that efficiencies are key factors in FL (e.g., considering the case where we often do federate learning on edge devices), which is, however, getting less attention in graph FL yet we have explored them.\n\n---\n\n**Question 3.2:** Please provide insights for the specific choice of λ2.\n\n**Answer:** As described in Section B.3 of the supplementary file, we simply set λ2 as 0.001 which is the same as the λ1 value for all experiments. This is because we empirically observed this value to work well (i.e., preventing local divergence as shown in Figure 9) across different settings. However, we also varied λ2 in Figure 1 of the supplementary file, and the results show that the performance is not sensitive across different λ2 values.\n\n---\n\n**Question 4:** The authors should analyze FED-PUB on another graph partitioning setting presented in [43]. \n\n**Answer:** Thank you for your suggestion. However, there is a drawback when using the Louvain algorithm presented in [43], rather than using the METIS algorithm as ours, for subgraph FL scenarios. Specifically, since the Louvain algorithm cannot specify the number of graph partitions, the number of subgraphs on the CiteSeer dataset is 38, where three of them have less than ten nodes. Then, based on those 38 disjoint subgraphs, to generate the particular number of clients (e.g., 10), [43] randomly merges the different subgraphs without considering their graph properties. Therefore, even though each partitioned subgraph has its unique structural role/characteristic, the reconstructed 10 subgraphs from the original 38 subgraphs have mixed properties (i.e., two incompatible subgraphs could be merged), which is suboptimal. However, as described in Lines 265-266, the METIS algorithm that we used can specify the number of partitions, thus more appropriate for making the experimental settings for subgraph FL.\n\nHowever, as suggested, we have additionally conducted experiments with the Louvain graph partitioning algorithm [43], on Cora, CiteSeer, and PubMed datasets with the number of clients as 10, and then reported the results in the table below. Then, the results show that our FED-PUB consistently outperforms all the other baselines on another graph partitioning setting, thus the effectiveness of our FED-PUB becomes more obvious. 
\n\n| Model | Cora | CiteSeer | PubMed |\n| --- | --- | --- | --- |\n| Local | 78.56 ± 0.27 | 64.06 ± 0.09 | 84.07 ± 0.17 |\n| FedAvg | 71.83 ± 0.40 | 69.23 ± 0.71 | 82.47 ± 0.32 |\n| FedProx | 72.09 ± 0.29 | 67.66 ± 0.97 | 82.68 ± 0.34 |\n| FedPer | 80.13 ± 0.50 | 66.28 ± 1.22 | 85.02 ± 0.23 |\n| FedGNN | 76.59 ± 0.66 | 61.21 ± 1.46 | 82.67 ± 0.26 |\n| FedSage | 72.20 ± 0.60 | 68.40 ± 0.61 | 82.76 ± 0.09 |\n| GCFL | 78.55 ± 0.38 | 64.20 ± 0.31 | 84.62 ± 0.31 |\n| FED-PUB (Ours) | **82.68** ± 0.13 | **69.45** ± 0.75 | **86.20** ± 0.11 |", "**Question 5:** There is a gap between the provided real-world motivation and the artificially constructed client graphs. Please give some insight into how well the shown setting translates to real scenarios.\n\n**Answer:** Unfortunately, since real-world datasets for graph FL are not available due to privacy concerns, the pioneering work [43] and ours partition the given entire graph into different subgraphs with community detection algorithms. However, we believe that our experimental setting clearly reflects real-world scenarios. In particular, in realistic scenarios, similar clients (e.g., hospitals in the same sector) are likely to have similar nodes with many connections among them, while dissimilar clients (e.g., hospitals in different sectors) are likely to have nodes with opposite properties that are not connected to each other. Then, in this real-world example, subgraphs from similar clients are likely to belong to the same community, and, in our experiment, we allow each client to have a subgraph belonging to a particular community by dividing the global graph based on its community structures, not based on any arbitrary random structures. Therefore, our motivation (i.e., a real graph has a community structure where each client is associated with a particular community) aligns with our experimental setup (i.e., each client has a subgraph that belongs to a particular community).\n\n---\n\n**Question 6:** The authors lack the chance to discuss weaknesses in the paper's main text.\n\n**Answer:** Due to space constraints, we provide the limitations of our work in Section D of the supplementary file. If an additional page is allowed after acceptance, following your suggestion, we will discuss weaknesses in the main paper.\n\n---\n\n**Question 7:** Introduction to GNNs is relatively shallow.\n\n**Answer:** Thank you for pointing this out. Due to the space limits, we will add more explanations about GNNs in the next revision if additional space is allowed.\n\n---\n\n**Question 8:** The authors exaggerate at times, \"Seemingly impossible task\" (4.1), \"an even more serious issue\" (1.), and \"crucial drawback\".\n\n**Answer:** Thank you for pointing this out. Our original intention was to emphasize the importance of the challenges prevalent in subgraph FL tasks; however, since those phrases might feel exaggerated, we will tone them down accordingly in the revision.\n\n---\n\n**Question 9:** Cite the inspiration paper [17] earlier.\n\n**Answer:** Thank you for your suggestion. We will cite the inspiration paper [17] earlier (e.g., in the Introduction section) in the next revision. \n\n---\n\n**Question 10:** The results for FedSage [43] are worse compared to the original publication.\n\n**Answer:** The reported results in FedSage [43] are not comparable, as the datasets and evaluation setups are different between FedSage and ours.
First, as discussed in the answer to Question 4, FedSage uses the Louvain algorithm while we use the METIS algorithm to partition the global graph into different subgraphs, thus the training and test datasets are completely different. Furthermore, regarding the evaluation setup, FedSage [43] measures the performance on the entire graph (i.e., not the subgraphs) with the global model averaged over all client parameters. However, we measure the average performance on the individual subgraphs with their own personalized model parameters, as we deal with personalized scenarios. \n\n---\n\n**Question 11:** Regarding the functional embeddings on random graphs, what are the failure cases of this approach? \n\n**Answer:** In experiments, we have not observed clear failure cases of our graph functional embeddings. However, one worthwhile point to share is that the scaling hyperparameter τ should be set differently across overlapping and non-overlapping node scenarios, as described in Section B.3 of the supplementary file, since the scales of similarities in those two settings are different. Thus, we suspect that our tuned hyperparameter τ may not be optimal if we deal with totally different evaluation setups: neither non-overlapping nor overlapping. \n
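To make the role of the scaling hyperparameter τ explicit, the toy sketch below assumes the aggregation weights take a normalized-exponential (softmax) form over the functional similarities; the exact normalization in Equation (4) of the paper may differ, so this is an assumption for illustration only.

```python
import torch

def aggregation_weights(S: torch.Tensor, tau: float) -> torch.Tensor:
    """Row-normalized exponential of the similarity matrix: entry (i, j) is
    how much client i's personalized model draws on client j's weights."""
    return torch.softmax(tau * S, dim=1)

S = torch.tensor([[1.0, 0.9, 0.1],
                  [0.9, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])   # toy functional similarities, S(i, i) = 1

print(aggregation_weights(S, tau=0.0))   # uniform rows: FedAvg-style averaging
print(aggregation_weights(S, tau=50.0))  # near one-hot rows: purely local models
```

The two printed extremes match the behavior also discussed in the answer to Question 16 below: τ near 0 recovers equal-weight averaging, while a very large τ concentrates each row on the client's own (most similar) model.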
Also, returning to the table above: compared to the Feature scheme that uses existing node representations for calculating the functional embeddings, the SBM and ER random models show superiority in capturing similarities among subgraphs, which verifies that randomness helps obtain accurate, unbiased functional embeddings of the models.\n\n---\n\n**Question 13:** What is the effect of local graph size vs. heterogeneity? \n\n**Answer:** As the local graph size decreases (i.e., as the number of clients increases), the heterogeneity issue becomes more severe. To quantify the heterogeneity issue over varying local graph sizes, we first measure the distance of each subgraph to all the other subgraphs by calculating the Jensen-Shannon divergence of their label distributions, and then report the median divergence value between subgraphs for client counts of 3, 5, 10, and 20. Note that as the number of clients grows, the numbers of nodes and edges in each subgraph shrink, as shown in Table 1 of the supplementary file. \n\n| # Clients | Cora | CiteSeer | PubMed |\n| --- | --- | --- | --- |\n| 3 | 0.520 | 0.437 | 0.373 |\n| 5 | 0.590 | 0.517 | 0.362 | \n| 10 | 0.606 | 0.541 | 0.392 |\n| 20 | 0.665 | 0.568 | 0.424 |\n\nWe have conducted these analyses on the Cora, CiteSeer, and PubMed datasets of the non-overlapping node scenario and provided the results in the above table. As shown in the table, we can confirm that, when the number of clients increases (i.e., the numbers of nodes and edges per subgraph decrease), the heterogeneity across clients becomes more severe and problematic for personalized subgraph FL, and thus is an important issue to tackle.\n\n---\n\n**Question 14:** How well are the chosen graphs suited for the presented method?\n\n**Answer:** Our chosen graphs are suited for the presented task and methods in two respects. Before explaining the specific reasons, please note that, in this work, we aim to deal with the challenges of heterogeneity, where subgraphs from different communities have opposite properties, via our personalized weight aggregation and personalized parameter masking schemes. First, since the real-world graphs that we used for experimental comparison all have community structures (citation and product networks both contain communities of categories), they are suited for verifying the presented task and methods. Second, to construct the experimental settings of personalized subgraph FL, we partition the global graph with graph partitioning algorithms based on community detection. Therefore, the partitioned graphs also have community structures, and such community properties are suited for demonstrating the effectiveness of our methods.\n\n---\n\n**Question 15:** Can you quantify to what degree training runs faster using local masks μ_k?\n\n**Answer:** Theoretically, training becomes faster in proportion to the number of masked parameters. In other words, if we mask 50% of the parameters, then training will be two times faster than full-parameter training in terms of FLOPs. However, as described in Lines 247-248, implementing this sparsity in neural network training requires hardware that supports zero-skipping operations, and the actual training speed depends on such hardware performance. Analyzing such hardware performance goes beyond the scope of our work, so we leave it as future work. 
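To make the source of these theoretical savings concrete, here is a minimal sketch of a masked local update; the function name and shapes are hypothetical, and, as noted above, the proportional speedup is only realized on hardware or kernels that actually skip the zeroed entries.

```python
import torch

def masked_step(theta, mu, grad, lr=0.01):
    # mu is a binary {0, 1} mask: masked-out entries receive no gradient update
    # and are zeroed so they contribute nothing downstream. With zero-skipping
    # kernels, their multiply-accumulates could be skipped entirely, which is
    # where the ~2x FLOP saving at 50% sparsity would come from.
    theta = theta - lr * grad * mu
    return theta * mu

# toy usage: mask roughly half of a 4x4 parameter block
theta = torch.randn(4, 4)
mu = (torch.rand(4, 4) > 0.5).float()
theta = masked_step(theta, mu, grad=torch.randn(4, 4))
```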
\n\n---\n\n**Question 16:** What is the influence of the scaling parameter τ (Eq.4)?\n\n**Answer:** The τ value in Equation 4 controls how much weight each model receives from the models of other similar or dissimilar subgraphs. In the extreme cases, when we set τ to 0, the weight aggregation scheme is the same as in FedAvg, since we aggregate all model parameters with equal weights; when we set τ to $\infty$, each model receives no weights from the other clients and trains only with its own parameters (i.e., the Local model). Thus, to share weights broadly across subgraphs, τ should be set lower; to localize each model with less parameter sharing, τ should be set higher. \n\n---\n\n**Question 17:** Why this specific choice of $S(i,j)=\frac{h_i \cdot h_j}{\lVert h_i \rVert \lVert h_j \rVert}$ (Eq.3)?\n\n**Answer:** This is because cosine similarity is widely used to calculate the similarity between vectors in an embedding space. \n\n---\n\n**Question 18:** How are communication costs in Figure 8 computed?\n\n**Answer:** As described in Lines 324-327, we measure communication costs as the sum of the number of active parameters (neurons) transmitted from client to server and from server to client. Regarding the client-to-server cost of our FED-PUB, we count the active neurons in both the sparsified weight $\theta_k$ and the sparse mask $\mu_k$; regarding the server-to-client cost, we count only the transmission cost of the sparsified weight $\bar{\theta}_{k}$ obtained from the personalized weight aggregation. Thus, since our FED-PUB transmits only the sparsified weights instead of the full weights, it is more communication-efficient than FedAvg. \n\n---\n\n**Question 19:** What does the coloring indicate for Fig.6a \"Missing edges\"?\n\n**Answer:** If there are many missing edges between two subgraphs, the blue color for those two subgraphs becomes darker. Specifically, by comparing the numbers of edges in two subgraphs before and after graph partitioning with the METIS algorithm (described in Lines 265-270), we can measure the number of missing edges between them, which we represent as color in Figure 6 (i.e., the more missing edges, the darker the blue). \n\n---\n\n**Question 3.1:** Why not directly use local node label distributions to calculate the similarities among clients?\n\n**Answer:** Using label distributions stored in the private local subgraphs may violate the privacy constraint in FL. Thus, in this work, we do not use such local information when calculating the similarities between clients. However, as shown in Figure 6, the functional similarities of subgraphs calculated by our FED-PUB are close to the similarities calculated from the label distributions, which verifies that ours can capture the label-distribution similarities among subgraphs without using their local labels.\n\n---\n\n**Question 3.2:** A performance comparison with other similarity methods (label similarity, gradient similarity, model similarity) will also be very helpful.\n\n**Answer:** Please note that, in Figure 3, we already compared the effectiveness and efficiency of parameter, gradient, and functional similarities, and showed that parameter and gradient similarities are neither effective nor efficient compared to our functional similarity. 
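As a concrete companion to Eq. (3) and Eq. (4) discussed above, the following hedged sketch turns per-client functional embeddings into personalized aggregation weights. We assume a softmax-style normalization with temperature τ, which matches the limiting behavior described for Question 16 (τ = 0 recovers equal FedAvg-style weights; large τ collapses to the Local model, since each client is most similar to itself); the exact normalization in the paper may differ.

```python
import torch

def personalized_weights(embeddings, tau):
    # embeddings: [J, d] tensor, one functional embedding per client.
    # Returns a [J, J] row-stochastic matrix: row i weights every client's
    # parameters when building client i's personalized model.
    h = torch.nn.functional.normalize(embeddings, dim=1)
    S = h @ h.t()                         # Eq. (3): pairwise cosine similarity
    return torch.softmax(tau * S, dim=1)  # assumed Eq. (4)-style normalization

def aggregate_for_client(i, params, weights):
    # params: [J, P] flattened client parameters; weighted average for client i.
    return (weights[i].unsqueeze(1) * params).sum(dim=0)
```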
Also, as we answered for the previous question (Question 3.1) above, we already showed in Figure 6 that the subgraph similarities from our functional embeddings are close to their label similarities.\n\nHowever, as suggested, we have additionally conducted experiments on the parameter, gradient, and label similarities, on the Cora dataset of the overlapping node scenario with 30 clients, and report the results at 20, 40, 60, and 80 epochs in the table below. \n\n| Model | 20 | 40 | 60 | 80 |\n| --- | --- | --- | --- | --- | \n| FedAvg | 29.94 | 32.69 | 47.84 | 52.42 |\n| Parameter | 29.94 | 35.89 | 47.03 | 52.28 |\n| Gradient | 33.93 | 51.09 | 52.77 | 58.14 |\n| Label | 65.97 | 74.31 | 76.50 | 76.82 |\n| Function (FED-PUB) | 67.82 | 73.51 | 74.66 | 75.90 |\n\nAs shown in the table above, the models that use parameters and gradients for calculating the similarities between subgraphs perform similarly to FedAvg and worse than our functional and label similarity schemes. Moreover, even though the label similarity model uses privacy-sensitive local information (i.e., the label distributions of every client), the performance of our FED-PUB with functional embeddings is on par with that of the label model. Therefore, along with the results in Figure 6, this comparison between similarity schemes further verifies the effectiveness of our functional embedding scheme in capturing the similarities among subgraphs. \n\n---\n\n**Question 4:** The non-overlapping node scenario is more challenging since it is more heterogeneous. Why does FedAvg (Table 2) in non-overlapping have better performance than FedAvg (Table 1) in overlapping?\n\n**Answer:** The performances in Table 1 for the overlapping node scenario and Table 2 for the non-overlapping node scenario are not comparable, since the subgraphs used for training and evaluation differ between the experimental settings, as described in Lines 266-270 of the main paper, Section B of the supplementary file, and Table 1 of the supplementary file. However, in general, when the number of clients increases, the knowledge collapse issue of FedAvg becomes more problematic. Therefore, since the overlapping node scenarios have larger client numbers than the non-overlapping node scenarios in Table 1 and Table 2 (i.e., 10, 30, and 50 clients versus 5, 10, and 20 clients), FedAvg underperforms in the overlapping node settings relative to the non-overlapping node settings.\n\n---\n\n**Question 5:** In Appendix B.3, for small datasets, namely Cora, CiteSeer, and PubMed, the experiments set the number of local training epochs as 1, which is equal to distributed training. Why does FedAvg perform poorly in this setting? Is it due to missing edges?\n\n**Answer:** Yes, it is due to the problem of missing edges, which further aggravates the incompatible knowledge issue. To see how much the missing edges degrade performance in subgraph FL, we first train a FedAvg model on the connected global graph and then evaluate it on the disjoint subgraphs over all the clients, on the Cora dataset of both non-overlapping and overlapping node scenarios with varying client numbers. Note that this setting (i.e., Oracle) is an unfair comparison to all the other FL methods, since the Oracle model can observe the missing edges during training, while all the other methods cannot. 
\n\n| Model | NonOverlapping-5 | NonOverlapping-20 | Overlapping-10 | Overlapping-50 |\n| --- | --- | --- | --- | --- |\n| Oracle | 85.07 | 85.47 | 85.08 | 85.28 |\n| Local | 81.30 | 80.30 | 73.98 | 76.63 |\n| FedAvg | 74.45 | 69.50 | 76.48 | 53.99 |\n| FedGNN | 81.51 | 70.10 | 70.63 | 56.91 |\n| FedSage | 72.97 | 57.97 | 77.52 | 55.48 | \n| FED-PUB (Ours) | 83.70 | 81.75 | 79.60 | 77.84 |\n\nAs shown in the table above, the Oracle model outperforms all the other methods, which brings us to the following points. First, due to the problem of missing edges, all the FL methods, which observe edges only within each subgraph, perform worse than the Oracle method. Second, the missing edge problem aggravates the incompatible knowledge issue: since all client models are trained on partial subgraphs of the larger global graph, the trained parameters in the clients and the aggregated parameters in the server might not capture globally meaningful knowledge, or knowledge that is helpful to the other clients, which explains why FedAvg performs poorly in this subgraph FL setting.\n\n---\n\n**Question 1.3:** I wonder whether the missing edges between nodes or the too small model size cause incompatibility between subgraphs.\n\n**Answer:** Yes, the problem of missing edges is one of the factors that cause the knowledge incompatibility issue, as explained before (see the previous answer to Question 1.2). Also, as you mentioned, increasing the model size may alleviate the knowledge incompatibility issue to some degree. However, this is largely suboptimal, since such a large model may suffer from overfitting and from large computation and communication overheads. Moreover, our FED-PUB with personalized modules still outperforms the global aggregation model regardless of the model size.\n\nSpecifically, we have varied the dimensionality of the hidden layers for FedAvg and our FED-PUB in the range of {64, 128, 256, 512, 1024, 2048} on the Cora dataset of the overlapping node scenario with 30 clients, and report the results in the table below. The results show that, when we increase the hidden dimensionality from 64 to 256, the performance of FedAvg, which aggregates all local models into a global model, improves. However, further increasing the hidden dimensionality does not yield additional improvements for FedAvg, and at the largest hidden dimensionality it suffers a performance drop due to overfitting. Note that our FED-PUB significantly outperforms FedAvg regardless of the hidden dimensionality, which verifies that simply increasing the model capacity does not effectively solve the knowledge collapse issue, whereas our personalized subgraph FL framework with personalized weight aggregation and parameter masking effectively handles such challenges. \n\n| Model | 64 | 128 | 256 | 512 | 1024 | 2048 |\n| --- | --- | --- | --- | --- | --- | --- |\n| FedAvg | 50.75 | 53.99 | 61.13 | 60.99 | 61.26 | 59.12 |\n| FED-PUB (Ours) | 74.64 | 75.40 | 74.86 | 74.96 | 74.60 | 73.84 |\n\n---\n\n**Question 2:** The proposed method is not validated in cases where the nodes and edges of the whole graph are uniformly distributed across clients. 
Due to the complex distribution of graph data in practice, the performance of the method in such cases where there are no obvious communities should also be considered.\n\n**Answer:** Please note that uniform partitions of graphs are rather unrealistic, which is why we used graph partitioning algorithms (e.g., the METIS algorithm). For example, when we partition the entire graphs of the Cora, CiteSeer, and PubMed datasets into 10 different subgraphs uniformly at random, the number of nodes per subgraph is larger than the number of edges (e.g., some nodes do not have any edges), as shown in the table below, which is uncommon in practice. \n\n| Dataset | # Nodes | # Edges |\n| --- | --- | --- |\n| Cora | 248 | 97.4 |\n| CiteSeer | 212 | 72.0 |\n| PubMed | 1971 | 884.6 | \n\nHowever, as suggested, we have additionally conducted experiments on the random split setting with 10 different clients on the CiteSeer dataset, and provide the results in the table below. As shown in the table, while the gap between the baselines and our model is reduced compared to the non-overlapping and overlapping scenarios, since there is no specific community structure in this random setting, our FED-PUB still significantly outperforms all the other baselines.\n\n| Model | Random with # clients 10 |\n| --- | --- | \n| Local | 44.27 ± 1.05 | \n| FedAvg | 60.84 ± 0.80 | \n| FedProx | 59.38 ± 1.66 | \n| FedPer | 60.04 ± 0.93 | \n| FedGNN | 54.64 ± 1.67 | \n| FedSage | 61.03 ± 0.11 | \n| GCFL | 53.15 ± 1.82 | \n| FED-PUB (Ours) | **63.63** ± 0.86 | \n\n---\n\n**Question 3:** Why not compare with FedSage+ [43]?\n\n**Answer:** We apologize for the confusion. As described in Section B.2 (Baselines and Our Model) of the supplementary file, we do compare with FedSage+ [43]; we refer to this FedSage+ model as FedSage in our work. We will correct this in the next revision.\n\n---\n\n**Question 2.1:** The relationship between the weight masking method and the research problem is unclear. The authors claim that the weight masking mechanism is to select the relevant parameters for each subgraph. However, it seems like a general solution for personalized FL under heterogeneity and is less relevant to the target subgraph FL problem. \n\n**Answer:** As you mentioned, our weight masking scheme may also be helpful for general FL. However, it is more helpful and relevant to our subgraph FL problem, since **the heterogeneity arising in personalized subgraph FL is more severe** than in general FL scenarios: subgraphs in graph FL may be **completely disjoint**, while in general FL each client is likely to train on shifted but similar data distributions. Also, please see the results in Figure 7 (ablation study), which show that the weight masking scheme largely improves performance on subgraph FL tasks when all the subgraphs are disjoint (i.e., there are extreme distribution shifts between subgraphs), with a much larger gain than in the overlapping node scenario; this confirms that weight masking is a necessary design choice for subgraph FL. \n\n---\n\n**Question 2.2:** Although weight masking brings some performance improvement, how it benefits the node classification problem is unknown.\n\n**Answer:** To further quantify how the weight masking scheme benefits the node classification task, we have additionally measured how much distributional shift exists among the subgraphs; a minimal sketch of the kind of measurement involved is given below, with the details following. 
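The sketch below shows the kind of computation involved; the helper names are illustrative, and SciPy's `jensenshannon` convention (a square-rooted, bounded distance) may differ from the 0-2 scale quoted in the next paragraph, so the exact normalization there should be taken from the paper rather than from this sketch.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def label_distribution(labels, num_classes):
    # Normalized label histogram of one client's subgraph.
    counts = np.bincount(labels, minlength=num_classes)
    return counts / counts.sum()

def median_pairwise_divergence(label_dists):
    # Jensen-Shannon divergence between every pair of clients' label
    # distributions; SciPy returns the JS *distance* (a square root),
    # so we square it, then report the median over all pairs.
    J = len(label_dists)
    vals = [jensenshannon(label_dists[i], label_dists[j]) ** 2
            for i in range(J) for j in range(i + 1, J)]
    return float(np.median(vals))
```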
In particular, we have measured the label differences between subgraphs with the Jensen-Shannon divergence, whose minimum and maximum values are 0 and 2, respectively, on the Cora dataset with 20 different clients over the overlapping and non-overlapping scenarios. We observe that, for the non-overlapping node scenario, the distance (i.e., divergence value) among subgraphs within the same community is 0.384, while the distance among subgraphs belonging to different communities is 0.639. For the overlapping node scenario, the distance among subgraphs within the same community is 0.047, while the distance among subgraphs belonging to different communities is 0.528. Thus, given that **the heterogeneity of subgraphs within the same community is substantially larger in the non-overlapping setting (0.384)** than in the overlapping setting (0.047), personalized weight aggregation alone might not be enough in disjoint subgraph FL problems, and, as shown in Figure 7, **the weight masking scheme is clearly required** to selectively utilize and train only the parameters relevant to each local task.\n\n---\n\n**Question 3.1:** Even though the authors use an intuitive explanation and a tiny experiment on the synthetic dataset in Figure 1 to demonstrate the heterogeneity problem and further show the need for personalized subgraph FL, the motivation behind personalized subgraph FL is not convincing enough on real-world datasets. How serious is the heterogeneity problem on real-world datasets under the subgraph FL setting?\n\n**Answer:** Heterogeneity is highly problematic in real-world datasets of personalized subgraph FL, especially when there are many FL participants with different knowledge. Please note that we already demonstrated this issue experimentally in Table 1 and Table 2: due to the heterogeneity between subgraphs, global weight aggregation methods (e.g., FedAvg) perform poorly when the number of clients is large, since they collapse incompatible knowledge into one global model, while our FED-PUB does not. \n\nHowever, as suggested, we have further quantified the degree of heterogeneity by varying the number of clients. Specifically, for each subgraph, we first measure its distance to the other subgraphs via the Jensen-Shannon divergence of their label distributions, and then report the median divergence value between subgraphs. Thus, if the median divergence values are large, we can regard the datasets as heterogeneous enough to cause incompatible knowledge issues. We report such divergence values on the Cora, CiteSeer, and PubMed datasets of the non-overlapping node scenario with 3, 5, 10, and 20 clients, in the table below. 
Then, based on the results reported in the table, we can confirm that, when we increase the number of clients, the heterogeneity across clients becomes more severe and problematic for personalized subgraph FL, and thus is an important issue to tackle.\n\n| # Clients | Cora | CiteSeer | PubMed |\n| --- | --- | --- | --- |\n| 3 | 0.520 | 0.437 | 0.373 |\n| 5 | 0.590 | 0.517 | 0.362 | \n| 10 | 0.606 | 0.541 | 0.392 |\n| 20 | 0.665 | 0.568 | 0.424 |\n\n---\n\n**Question 3.2:** Regarding heterogeneity, what if the number of clients is much smaller than 10, which may show the significance of personalized subgraph FL across different heterogeneities?\n\n**Answer:** Following your suggestion, to see the performance of the models when the heterogeneity issue is less significant, we have conducted experiments in a setting where the number of clients is smaller than 10 (i.e., 3), on the Cora, CiteSeer, and PubMed datasets of the non-overlapping node scenario. As shown in the table below, compared to the results in Table 2 with 5, 10, and 20 clients, the performance gaps between our FED-PUB and the baselines are much reduced. However, we clearly observe that our FED-PUB meaningfully (i.e., statistically significantly) outperforms all the other baselines even when the number of clients is small, since there still exists incompatible knowledge across clients, which our FED-PUB effectively tackles with the personalized weight aggregation and local weight masking schemes.\n\n| Model | Cora | CiteSeer | PubMed |\n| --- | --- | --- | --- | \n| Local | 81.73 ± 0.44 | 68.16 ± 0.25 | 84.81 ± 0.40 |\n| FedAvg | 78.77 ± 0.13 | 69.34 ± 0.23 | 85.29 ± 0.20 |\n| FedProx | 78.91 ± 0.21 | 69.54 ± 0.27 | 85.59 ± 0.18 |\n| FedPer | 82.29 ± 0.13 | 69.80 ± 0.33 | 85.34 ± 0.16 |\n| FedGNN | 82.36 ± 0.62 | 67.79 ± 0.49 | 85.57 ± 0.13 |\n| FedSage | 77.79 ± 1.96 | 69.35 ± 0.12 | 85.63 ± 0.22 |\n| GCFL | 82.67 ± 0.74 | 68.85 ± 0.58 | 86.20 ± 0.15 |\n| FED-PUB (Ours) | **84.45** ± 0.23 | **70.66** ± 0.34 | **86.74** ± 0.16 |\n\n---\n\n**Question 4:** The capability of dealing with missing edges is over-claimed. Specifically, in Sec 4.1 (Line 230) and Sec 5.2 (Line 348), the authors claim that FED-PUB can handle the problem of missing edges in an implicit manner with experimental results in Figure 10, which is, however, not persuasive enough since FED-PUB (neighbor) has sub-optimal performance in Figure 10. \n\n**Answer:** This is a misinterpretation of the results in Figure 10, since our FED-PUB (Local) and (Neighbor) models **perform the best** among all (Local) and (Neighbor) models, respectively, which is sufficient evidence that FED-PUB effectively alleviates the missing edge problem (see the next paragraph for details). Moreover, the qualitative results in Figure 6 also support the missing edge claim. In particular, since our FED-PUB assigns high similarities to subgraphs with a large number of missing edges, as shown in Figure 6 (a) and (d), it can inherently handle the missing edge problem: it enables knowledge sharing across potentially connected, yet disconnected, subgraphs with missing edges. \n\nSpecifically, regarding the interpretation of the results in Figure 10, let us assume that we have two disjoint subgraphs with a large number of potential missing edges between them. 
Handling the missing edge problem then means that, despite their lack of explicit connections, the two subgraphs share meaningful knowledge with each other, which leads to the joint improvement of the models trained on them (i.e., the performances of the (Local) and (Neighbor) models are both high). Conversely, failure to handle the missing edge problem means that there is little or no information sharing between interconnected subgraphs with a large number of missing edges, and consequently the performances of both the (Local) and (Neighbor) models are low. Given this, since our FED-PUB outperforms all the other methods on both the (Local) and (Neighbor) measures in Figure 10, it is fair to say that our FED-PUB facilitates information sharing between relevant subgraphs, which in turn helps alleviate the missing edge problem. \n\n---\n\n**Question 5:** What is the meaning of J in Eq. (2)? If J is the number of subgraphs within a community, how to find a specific community for each subgraph during model training? And does that mean the aggregation is only executed within the subgraphs in a community?\n\n**Answer:** We apologize for the confusion. J is the total number of subgraphs (i.e., the total number of FL participants). Also, as described in Lines 218-221, we perform weight averaging of each model based on its functional similarity to all the other models, since totally ignoring the model weights from other communities may result in exploiting only the local objective while discarding globally useful weights. \n\n---\n\n**Question 6:** In Line 45, the authors claim that the algorithms in [40] and [43] may compromise data privacy. Is there any further evidence to prove that?\n\n**Answer:** Yes, there is evidence that [40] and [43] may compromise data privacy. Following the observation in [A], let us assume that sending local client data or representations to the server or to other clients constitutes a data privacy violation in federated learning. Under this assumption, [40] and [43] clearly violate the data privacy constraints during their local graph expansion processes (Lines 41-44), for the following reasons:\n\n* In [40], to augment similar nodes stored in other clients as neighboring nodes of the current client's subgraph, nodes (i.e., node features) are shared between clients. In particular, a client first sends its nodes to the server, and the server distributes similar nodes located in other clients to the client that sent its nodes, which violates the privacy constraints. Also, in Section 3.5 of [40], the authors themselves describe the potential data privacy issues of their method.\n\n* In [43], to generate the features of similar nodes stored in other clients and then augment the current client's subgraph with them, node representations are shared between subgraphs. More specifically, to generate realistic node features for subgraph expansion, [43] is trained to maximize the similarity between the node representations generated in a given client and the initial node representations in the other clients, which requires representation sharing and may thus violate data privacy. \n\n[A] Sun et al. Soteria: Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective. CVPR 2021.\n\n---\n\n**Question 7:** In the formulation of existing subgraph FL (Line 169) and personalized subgraph FL (Eq. (2)), what is the optimized parameter? 
It's better to highlight this under the \"min\" operation.\n\n**Answer:** Thank you for pointing it out, and we apologize for the confusion. The optimized parameter is $\theta$ for the existing subgraph FL formulation, and $\theta_i$ for our personalized subgraph FL. Also, after introducing the weight masking scheme in Section 4.2, we additionally optimize the local mask $\mu_i$ along with the existing $\theta_i$, as represented in Equation (5).\n\n---\n\n**Question 8:** Could you provide a theoretical analysis?\n\n**Answer:** Thank you for your comment. We have already motivated our personalized subgraph FL task with the theoretical concepts of network homophily and network community in Section 3. Specifically, by the definition of network homophily [31], nodes with similar properties are more likely to connect to each other than dissimilar ones. Based on this definition, similar nodes are densely connected and thus form a community by the definition of community [33, 9, 32] (i.e., the number of edges within the same community is larger than the number of edges expected under a random model). In this regard, for the subgraph FL problem that we target, the graph theories above imply two points: it is important to capture community structures to promote joint improvement among subgraphs with similar properties, and to handle the incompatible knowledge issue between subgraphs in different communities. To summarize, the theoretical concepts of network homophily and network community motivate us to identify the challenges of subgraph FL, from which we define our novel personalized subgraph FL problem represented in Equation (2). In other words, we introduce our ideas clearly based on graph theory, although we do not believe that rigorous proofs are required when introducing them. Furthermore, from the methodological perspective, it is challenging to analyze the functional behavior of every layer of a deep network and then derive a new theoretical justification. For this reason, many recent deep FL works (e.g., [2, 40]) do not provide theoretical analyses, yet are well received due to their task-level ideas and sufficient empirical verification. Thus, we believe that analyzing the theoretical behavior of our deep FL method goes beyond the scope of this work, and we leave it as future work.\n\nThe paper deals with Federated Learning (FL) for graphs. Specifically, the authors investigate FL for heterogeneous graph distributions that can arise when the local graphs belong to different communities within the larger global graph. To tackle the threat of knowledge collapse of the server's GNN model, the introduced method, FED-PUB, tries to identify interrelated local graphs through functional embeddings and aims for their joint improvement. A weighted averaging on the server side and locally learned weight masks achieve the combined local improvement. The authors evaluate FED-PUB on six different datasets. 
Strengths: \n\n- Combination of FL with the graph-theoretical concept of communities\n- An improved design to ensure data privacy and security via functional embeddings of random graphs\n- Providing a different perspective on the graph FL problem by aiming to improve local graph models\n\nWeaknesses/Open Aspects:\n\n- Please provide graph statistics to investigate the diversity of your evaluation settings across the six datasets\n- Please elaborate on the theory of network homophily and the specific statements you build your arguments on, cf. citation [31]\n- Please clarify the necessity of personalized sparse masks and quantify the drift to the local training distribution. Please provide insights for the specific choice of $\lambda_2$.\n- For a more transparent comparison, the authors should analyse FED-PUB on the graph partitioning presented in [43]\n\nMinor:\n\n- There is a gap between the provided real-world motivation and the artificially constructed client graphs. Please give some insight into how well the shown setting translates to real scenarios.\n\n### Originality\n\nThe topic of FL for graphs has been picked up only recently (most papers are from 2020 and 2021). Within this evolving research field, the authors introduce the challenge of appropriately dealing with heterogeneous graphs on the client side. The authors combine graph FL with the well-defined notion of community detection from graph theory to address heterogeneous local graphs.\n\n### Quality\n\n- The authors motivate their work well, and the shown evaluation fits their claims\n- The methods used are appropriate and clear.\n- The presented work is complete, with multiple references to the supplementary materials\n \nNegatives:\n\n- The claims on community detection and network homophily should be supported with additional theory. The authors, however, provide good intuition why their introduced setting and evaluation are valid.\n- The authors lack the chance to discuss weaknesses in the paper's main text.\n\n### Clarity\n\n- The paper is well written\n- The authors present their work in a well-structured manner\n- The text is mostly self-contained\n\nMinor:\n\n- Introduction to GNNs is relatively shallow\n- The authors exaggerate at times, \"Seemingly impossible task\" (4.1), \"an even more serious issue\" (1.), and \"crucial drawback\"\n- Cite the inspiration paper [17] earlier\n\n### Significance\n\n- Dealing with FL, this work is essential for areas where data is not publicly available due to privacy constraints.\n- The benchmark for overlapping and non-overlapping subgraphs is new and can inspire future evaluations for graph FL\n- The connection to graph theory (community) is valuable\n \nNegatives:\n\n- For a more transparent comparison, the authors should analyse FED-PUB on the graph partitioning presented in [43]. The results for FedSage are worse compared to the original publication. How does FED-PUB perform for the subgraph division from [43](<https://proceedings.neurips.cc/paper/2021/hash/34adeb8e3242824038aa65460a47c29e-Abstract.html>)?\n- The functional embeddings on random graphs: what are the failure cases of this approach? What if the graph distribution of these random graphs is off? SBM surely does not capture real-world graph properties. How does this translate?\n- What is the effect of local graph size vs. heterogeneity, and how well are the chosen graphs suited for/supporting the presented method?\n- Why is it not enough to average parameters according to similarity (Eq.4)? 
Please clarify the importance of local masks $\mu$ (4.2)?\n \nMinor:\n\n- Can you quantify to what degree training runs faster using local masks $\mu_k$?\n- What is the influence of the scaling parameter $\tau$ (Eq.4)?\n- Why this specific choice of $S(i,j)=\frac{h_i \cdot h_j}{\lVert h_i \rVert \lVert h_j \rVert}$ (Eq.3)?\n- How are communication costs computed, cf. 52?\n- What does the colouring indicate for Fig.6a \"Missing edges\"?\n\nThis paper introduces a personalized FL framework to train graph neural network models across subgraphs owned by different clients. The authors argue that the heterogeneity of subgraphs will cause unexpected knowledge collapse when using the same global model to fit all the data from different subgraphs. The authors first group subgraphs into different communities and then train personalized model parameters for the clients in each community by introducing adaptive aggregation weights and unique parameter masks. Experimental results support the advantage of the proposed method, FED-PUB, on six datasets. Strengths\n\tCompared with existing popular methods, the proposed framework achieves competitive results on six popular federated graph benchmarks in a more communication-efficient and privacy-preserving manner, without sharing features or embeddings of nodes in graphs. \n\tThe method to identify the similarity between subgraphs is novel: it uses input-based functional similarity instead of the inner product of model parameters (i.e., cosine similarity) to estimate the communities of user data. \n\tThe proposed method can potentially handle the missing edges problem by considering model aggregation only within the same community for each client, since the edges and neighbors are dense within a community.\n\nWeaknesses\n\tThis work tackles the heterogeneity of different subgraphs by mitigating incompatible knowledge collapse when using only one global model to fit all the data. However, the explanation for this incompatibility between different communities is not adequate. I wonder whether the too-small model size or the missing edges between nodes cause such incompatibility. Maybe direct model training on the whole dataset is needed as the oracle case to show such incompatibility.\n\tThe proposed method is not validated in the cases where the nodes and edges of the whole graph are uniformly distributed across clients. Due to the complex distribution of graph data in practice, the performance of the method in such cases where there are no obvious communities should also be considered. \n See Weakness.\nPlus, why not compare with FedSage+ [43]? See Weakness.\n\nThis paper proposes an algorithm for personalized FL subgraph learning, FED-PUB.\n\nThe server computes similarities among clients based on the outputs of the local GNNs using random graphs as inputs. It then uses them to perform weighted averaging for the models.\n\nEach client also learns a personalized sparse mask to select and update only part of the aggregated parameters.\n\nExperiments show the performance. Strengths:\n1. The weighted aggregation based on similarities of outputs of random graphs is interesting.\n2. The personalized sparse mask looks helpful.\n3. Code is provided.\n\nWeaknesses:\n1. The advantages of using outputs of random graphs are not convincing to me. The random graph based on SBM has too much randomness, especially with random node features drawn from a normal distribution.\n2. The code only provides fedavg and fedpub. \n 1. 
Why not directly use local node label distributions to calculate the similarities among clients? The authors could add a comparison with this method.\n\n2. The non-overlapping node scenario is more challenging since it is more heterogeneous. Why does FedAvg (Table 2) in non-overlapping have better performance than FedAvg (Table 1) in overlapping?\n\n3. In Appendix B.3, for small datasets, namely Cora, CiteSeer and PubMed, the experiments set the number of local training epochs as 1. This setting is equivalent to distributed training. Why does FedAvg perform poorly in this setting? Is it due to missing edges?\n\n\n\n The intuition of weighted aggregation by similarity has been widely adopted. But the mechanism of similarity calculation from a single random input is not convincing to me. A performance comparison with other similarity methods (label similarity, gradient similarity, model similarity) will also be very helpful. \n\nAiming to solve the data heterogeneity problem in subgraph federated learning (FL), this paper presents a novel research problem, namely personalised subgraph FL. Focusing on this personalised FL scenario, the authors propose a new learning method named FED-PUB. To train local models effectively, FED-PUB considers two key designs, i.e., functional embedding-based similarity estimation and adaptive parameter masking. Extensive experiments validate the effectiveness of FED-PUB. Pros:\n1. The research problem is interesting.\nThis paper is the first to propose studying the personalization problem of graph federated learning. The novel research problem is interesting and challenging, and may inspire future studies in the community.\n2. The experiments are extensive.\nThe authors conduct sufficient experiments to compare the performance with baseline methods and to examine the properties of the proposed method. The proposed method achieves outstanding results, which proves its effectiveness.\n\nCons:\n1. The novelty of the proposed method is limited.\nDespite some interesting designs in the proposed model, the core idea of this paper is not novel enough and is similar to some existing works. For instance, using shared data to calculate the similarity between local models has been proposed in [*1]. There is a branch of works that incorporate auxiliary data to assist the federated training procedure, such as [*2], [*3], and [*4].\n[*1] Makhija, Disha, et al. \"Architecture Agnostic Federated Learning for Neural Networks.\" arXiv preprint arXiv:2202.07757 (2022). (Accepted by ICML 2022)\n[*2] Lin, Tao, et al. \"Ensemble distillation for robust model fusion in federated learning.\" Advances in Neural Information Processing Systems 33 (2020): 2351-2363.\n[*3] Sattler, Felix, et al. \"FedAUX: Leveraging unlabeled auxiliary data in federated learning.\" IEEE Transactions on Neural Networks and Learning Systems (2021).\n[*4] Zhu, Zhuangdi, Junyuan Hong, and Jiayu Zhou. \"Data-free knowledge distillation for heterogeneous federated learning.\" International Conference on Machine Learning. PMLR, 2021.\n2. The relationship between the key design (i.e., weight masking) and the research problem is unclear.\nThe authors claim that the weight masking mechanism is to select the relevant parameters for each subgraph. However, it seems like a general solution for personalized FL under heterogeneity and is less relevant to the specific subgraph FL problem. Although it brings some performance improvement, how it benefits the node classification problem is unknown.\n3. 
The motivation behind personalized subgraph FL is not convincing enough.\nIn the introduction, the authors use an intuitive explanation (with Fig 1) and a tiny experiment on a synthetic dataset to demonstrate the heterogeneity problem and further show the need for personalized subgraph FL. However, how serious the heterogeneity problem is on real-world datasets under the subgraph FL setting is still unknown. The real-world experiments, to a certain extent, demonstrate the issue, but what if the number of clients is much smaller than 10? Maybe more comprehensive experiments would better show the significance of personalized FL.\n4. The capability of dealing with missing edges is over-claimed.\nIn Sec 4.1 (Line 230) and Sec 5.2 (Line 348), the authors claim that FED-PUB can handle the missing edges problem in an implicit manner. This is not persuasive enough. In Fig 10, we can still find that FED-PUB (neighbor) has sub-optimal performance, and the performance gap between (local) and (neighbor) is similar among different methods. In other words, the good performance in this setting cannot prove that the performance gain is caused by handling the missing edges problem. 1. What is the meaning of J in Eq. (2)? If J is the number of subgraphs within a community, how to find a specific community for each subgraph during model training? And does that mean the aggregation is only executed within the subgraphs in a community?\n2. In Line 45, the authors claim that the algorithms in [40] and [43] may compromise data privacy. Is there any further evidence to prove that?\n3. In the formulation of existing subgraph FL (Line 169) and personalized FL (Eq. (2)), what is the optimized parameter? It's better to highlight this under the \"min\" operation.\n4. Could you provide a theoretical analysis for the techniques proposed in this paper? The authors have discussed the limitations and potential negative social impacts in Appendix D.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ "1NmnBT7ogae", "Zjti7Vui6j_", "auZDhY46SWU", "GxIxs3n97uw", "ozIULvzpAm4", "T3POAOwTao", "CedWT0qAIju", "8VNyOq7wiI", "UI9L93T2Iu", "i3TQduj1KP", "SCVMP7IrQm6", "auZDhY46SWU", "5jKdm5NEJG", "i3TQduj1KP", "SCVMP7IrQm6", "auZDhY46SWU", "5jKdm5NEJG", "5jKdm5NEJG", "5jKdm5NEJG", "5jKdm5NEJG", "5jKdm5NEJG", "SCVMP7IrQm6", "SCVMP7IrQm6", "auZDhY46SWU", "i3TQduj1KP", "i3TQduj1KP", "i3TQduj1KP", "i3TQduj1KP", "nips_2022_VwRFJi9crEH", "nips_2022_VwRFJi9crEH", "nips_2022_VwRFJi9crEH", "nips_2022_VwRFJi9crEH" ]